Repository: opral/lix Branch: main Commit: a9023fb8ad90 Files: 493 Total size: 5.8 MB Directory structure: gitextract_5kejlpdk/ ├── .gitattributes ├── .gitignore ├── .infisical.json ├── .prettierignore ├── CONTRIBUTING.md ├── Cargo.toml ├── README.md ├── benchmarks/ │ ├── 10k-entities/ │ │ ├── Cargo.toml │ │ ├── README.md │ │ └── src/ │ │ ├── main.rs │ │ ├── sqlite_backend.rs │ │ └── wasmtime_runtime.rs │ ├── engine2-json-pointer/ │ │ ├── Cargo.toml │ │ ├── README.md │ │ └── src/ │ │ ├── main.rs │ │ └── sqlite_backend.rs │ └── git-compare/ │ ├── Cargo.toml │ ├── README.md │ └── src/ │ └── main.rs ├── blog/ │ ├── 001-introducing-lix/ │ │ └── index.md │ ├── 002-modeling-a-company-as-a-repository/ │ │ └── index.md │ ├── 003-february-2026-update/ │ │ └── index.md │ ├── 004-march-2026-update/ │ │ └── index.md │ ├── 005-april-2026-update/ │ │ └── index.md │ ├── authors.json │ └── table_of_contents.json ├── cla-signatures.json ├── docs/ │ ├── api-reference.md │ ├── backend.md │ ├── comparison-to-git.md │ ├── getting-started.md │ ├── history.md │ ├── lix-for-ai-agents.md │ ├── persistence.md │ ├── schemas.md │ ├── sql-functions.md │ ├── surfaces.md │ ├── table_of_contents.json │ ├── versions.md │ └── what-is-lix.md ├── nx.json ├── optimization_log6_crud.md ├── optimization_log7.md ├── optimization_log8.md ├── optimization_log9_sql2.md ├── package.json ├── packages/ │ ├── cli/ │ │ ├── Cargo.toml │ │ └── src/ │ │ ├── app/ │ │ │ ├── context.rs │ │ │ ├── mod.rs │ │ │ ├── run.rs │ │ │ └── welcome.rs │ │ ├── cli/ │ │ │ ├── exp.rs │ │ │ ├── init.rs │ │ │ ├── mod.rs │ │ │ ├── redo.rs │ │ │ ├── root.rs │ │ │ ├── sql.rs │ │ │ ├── undo.rs │ │ │ └── version.rs │ │ ├── commands/ │ │ │ ├── exp/ │ │ │ │ ├── git_replay.rs │ │ │ │ └── mod.rs │ │ │ ├── init.rs │ │ │ ├── mod.rs │ │ │ ├── redo.rs │ │ │ ├── sql/ │ │ │ │ ├── execute.rs │ │ │ │ └── mod.rs │ │ │ ├── undo.rs │ │ │ └── version/ │ │ │ ├── create.rs │ │ │ ├── merge.rs │ │ │ ├── mod.rs │ │ │ └── switch.rs │ │ ├── db/ │ │ │ └── mod.rs │ │ ├── error.rs │ │ ├── hints.rs │ │ ├── lib.rs │ │ ├── main.rs │ │ └── output/ │ │ └── mod.rs │ ├── engine/ │ │ ├── .gitignore │ │ ├── AGENTS.md │ │ ├── Cargo.toml │ │ ├── benches/ │ │ │ ├── fixtures/ │ │ │ │ └── pnpm-lock.fixture.json │ │ │ ├── json_pointer_crud/ │ │ │ │ └── main.rs │ │ │ ├── json_pointer_physical/ │ │ │ │ └── main.rs │ │ │ ├── optimization9_sql2/ │ │ │ │ ├── json_pointer.schema.json │ │ │ │ ├── main.rs │ │ │ │ └── pnpm-lock.fixture.json │ │ │ ├── physical_layout/ │ │ │ │ ├── backend_kv.rs │ │ │ │ ├── changelog.rs │ │ │ │ ├── json_store.rs │ │ │ │ ├── main.rs │ │ │ │ ├── tracked_state.rs │ │ │ │ └── workflow.rs │ │ │ ├── storage/ │ │ │ │ ├── README.md │ │ │ │ ├── backend.rs │ │ │ │ ├── binary_cas.rs │ │ │ │ ├── changelog.rs │ │ │ │ ├── commit_graph.rs │ │ │ │ ├── json_store.rs │ │ │ │ ├── main.rs │ │ │ │ ├── rocksdb_backend.rs │ │ │ │ ├── sqlite_backend.rs │ │ │ │ ├── storage_api.rs │ │ │ │ ├── tracked_state.rs │ │ │ │ └── untracked_state.rs │ │ │ └── transaction/ │ │ │ └── main.rs │ │ ├── src/ │ │ │ ├── backend/ │ │ │ │ ├── kv.rs │ │ │ │ ├── mod.rs │ │ │ │ ├── testing.rs │ │ │ │ └── types.rs │ │ │ ├── binary_cas/ │ │ │ │ ├── chunking.rs │ │ │ │ ├── codec.rs │ │ │ │ ├── context.rs │ │ │ │ ├── kv.rs │ │ │ │ ├── mod.rs │ │ │ │ └── types.rs │ │ │ ├── catalog/ │ │ │ │ ├── context.rs │ │ │ │ ├── mod.rs │ │ │ │ ├── schema.rs │ │ │ │ └── snapshot.rs │ │ │ ├── cel/ │ │ │ │ ├── context.rs │ │ │ │ ├── error.rs │ │ │ │ ├── mod.rs │ │ │ │ ├── provider.rs │ │ │ │ ├── runtime.rs │ │ │ │ └── value.rs │ │ │ ├── commit_graph/ 
│ │ │ │ ├── context.rs │ │ │ │ ├── mod.rs │ │ │ │ ├── types.rs │ │ │ │ └── walker.rs │ │ │ ├── commit_store/ │ │ │ │ ├── codec.rs │ │ │ │ ├── context.rs │ │ │ │ ├── materialization.rs │ │ │ │ ├── mod.rs │ │ │ │ ├── storage.rs │ │ │ │ └── types.rs │ │ │ ├── common/ │ │ │ │ ├── error.rs │ │ │ │ ├── fingerprint.rs │ │ │ │ ├── fs_path.rs │ │ │ │ ├── identity.rs │ │ │ │ ├── json_pointer.rs │ │ │ │ ├── metadata.rs │ │ │ │ ├── mod.rs │ │ │ │ ├── types.rs │ │ │ │ └── wire.rs │ │ │ ├── domain.rs │ │ │ ├── engine.rs │ │ │ ├── entity_identity.rs │ │ │ ├── functions/ │ │ │ │ ├── context.rs │ │ │ │ ├── deterministic.rs │ │ │ │ ├── mod.rs │ │ │ │ ├── provider.rs │ │ │ │ ├── state.rs │ │ │ │ └── types.rs │ │ │ ├── init.rs │ │ │ ├── json_store/ │ │ │ │ ├── compression.rs │ │ │ │ ├── context.rs │ │ │ │ ├── encoded.rs │ │ │ │ ├── mod.rs │ │ │ │ ├── store.rs │ │ │ │ └── types.rs │ │ │ ├── lib.rs │ │ │ ├── live_state/ │ │ │ │ ├── context.rs │ │ │ │ ├── mod.rs │ │ │ │ ├── overlay.rs │ │ │ │ ├── reader.rs │ │ │ │ ├── types.rs │ │ │ │ └── visibility.rs │ │ │ ├── plugin/ │ │ │ │ ├── archive.rs │ │ │ │ ├── component.rs │ │ │ │ ├── install.rs │ │ │ │ ├── manifest.rs │ │ │ │ ├── materializer.rs │ │ │ │ ├── mod.rs │ │ │ │ ├── plugin_manifest.json │ │ │ │ └── storage.rs │ │ │ ├── schema/ │ │ │ │ ├── annotations/ │ │ │ │ │ ├── defaults.rs │ │ │ │ │ └── mod.rs │ │ │ │ ├── builtin/ │ │ │ │ │ ├── lix_account.json │ │ │ │ │ ├── lix_active_account.json │ │ │ │ │ ├── lix_binary_blob_ref.json │ │ │ │ │ ├── lix_change.json │ │ │ │ │ ├── lix_change_author.json │ │ │ │ │ ├── lix_commit.json │ │ │ │ │ ├── lix_commit_edge.json │ │ │ │ │ ├── lix_directory_descriptor.json │ │ │ │ │ ├── lix_file_descriptor.json │ │ │ │ │ ├── lix_key_value.json │ │ │ │ │ ├── lix_label.json │ │ │ │ │ ├── lix_label_assignment.json │ │ │ │ │ ├── lix_registered_schema.json │ │ │ │ │ ├── lix_version_descriptor.json │ │ │ │ │ ├── lix_version_ref.json │ │ │ │ │ └── mod.rs │ │ │ │ ├── compatibility.rs │ │ │ │ ├── definition.json │ │ │ │ ├── definition.rs │ │ │ │ ├── key.rs │ │ │ │ ├── mod.rs │ │ │ │ ├── seed.rs │ │ │ │ └── tests.rs │ │ │ ├── session/ │ │ │ │ ├── context.rs │ │ │ │ ├── create_version.rs │ │ │ │ ├── execute.rs │ │ │ │ ├── merge/ │ │ │ │ │ ├── analysis.rs │ │ │ │ │ ├── apply.rs │ │ │ │ │ ├── conflicts.rs │ │ │ │ │ ├── mod.rs │ │ │ │ │ ├── stats.rs │ │ │ │ │ └── version.rs │ │ │ │ ├── mod.rs │ │ │ │ ├── optimization9_sql2_bench.rs │ │ │ │ └── switch_version.rs │ │ │ ├── sql2/ │ │ │ │ ├── change_provider.rs │ │ │ │ ├── classify.rs │ │ │ │ ├── context.rs │ │ │ │ ├── directory_history_provider.rs │ │ │ │ ├── directory_provider.rs │ │ │ │ ├── dml.rs │ │ │ │ ├── entity_history_provider.rs │ │ │ │ ├── entity_provider.rs │ │ │ │ ├── error.rs │ │ │ │ ├── execute.rs │ │ │ │ ├── file_history_provider.rs │ │ │ │ ├── file_provider.rs │ │ │ │ ├── filesystem_planner.rs │ │ │ │ ├── filesystem_predicates.rs │ │ │ │ ├── filesystem_visibility.rs │ │ │ │ ├── history_projection.rs │ │ │ │ ├── history_provider.rs │ │ │ │ ├── history_route.rs │ │ │ │ ├── lix_state_provider.rs │ │ │ │ ├── mod.rs │ │ │ │ ├── predicate_typecheck.rs │ │ │ │ ├── public_bind/ │ │ │ │ │ ├── assignment.rs │ │ │ │ │ ├── capability.rs │ │ │ │ │ ├── dml.rs │ │ │ │ │ ├── mod.rs │ │ │ │ │ └── table.rs │ │ │ │ ├── read_only.rs │ │ │ │ ├── record_batch.rs │ │ │ │ ├── result_metadata.rs │ │ │ │ ├── runtime.rs │ │ │ │ ├── session.rs │ │ │ │ ├── udfs/ │ │ │ │ │ ├── common.rs │ │ │ │ │ ├── lix_active_version_commit_id.rs │ │ │ │ │ ├── lix_empty_blob.rs │ │ │ │ │ ├── lix_json.rs │ │ │ │ │ ├── 
lix_json_get.rs │ │ │ │ │ ├── lix_json_get_text.rs │ │ │ │ │ ├── lix_text_decode.rs │ │ │ │ │ ├── lix_text_encode.rs │ │ │ │ │ ├── lix_timestamp.rs │ │ │ │ │ ├── lix_uuid_v7.rs │ │ │ │ │ ├── mod.rs │ │ │ │ │ └── public_call.rs │ │ │ │ ├── version_provider.rs │ │ │ │ ├── version_scope.rs │ │ │ │ └── write_normalization.rs │ │ │ ├── storage/ │ │ │ │ ├── context.rs │ │ │ │ ├── mod.rs │ │ │ │ ├── read_scope.rs │ │ │ │ └── types.rs │ │ │ ├── storage_bench.rs │ │ │ ├── test_support.rs │ │ │ ├── tracked_state/ │ │ │ │ ├── by_file_index.rs │ │ │ │ ├── codec.rs │ │ │ │ ├── context.rs │ │ │ │ ├── diff.rs │ │ │ │ ├── materialization.rs │ │ │ │ ├── materializer.rs │ │ │ │ ├── merge.rs │ │ │ │ ├── mod.rs │ │ │ │ ├── storage.rs │ │ │ │ ├── tree.rs │ │ │ │ └── types.rs │ │ │ ├── transaction/ │ │ │ │ ├── commit.rs │ │ │ │ ├── context.rs │ │ │ │ ├── live_state_overlay.rs │ │ │ │ ├── mod.rs │ │ │ │ ├── normalization.rs │ │ │ │ ├── prep.rs │ │ │ │ ├── schema_resolver.rs │ │ │ │ ├── staging.rs │ │ │ │ ├── types.rs │ │ │ │ └── validation.rs │ │ │ ├── untracked_state/ │ │ │ │ ├── codec.rs │ │ │ │ ├── context.rs │ │ │ │ ├── materialization.rs │ │ │ │ ├── mod.rs │ │ │ │ ├── storage.rs │ │ │ │ └── types.rs │ │ │ ├── version/ │ │ │ │ ├── context.rs │ │ │ │ ├── lifecycle.rs │ │ │ │ ├── mod.rs │ │ │ │ ├── refs.rs │ │ │ │ ├── stage_rows.rs │ │ │ │ └── types.rs │ │ │ └── wasm/ │ │ │ └── mod.rs │ │ ├── tests/ │ │ │ ├── branching.rs │ │ │ ├── code_structure.rs │ │ │ ├── commit_graph.rs │ │ │ ├── engine.rs │ │ │ ├── json_pointer_crud_storage.rs │ │ │ ├── sql/ │ │ │ │ ├── entity_history.rs │ │ │ │ ├── errors.rs │ │ │ │ ├── history_conformance.rs │ │ │ │ ├── lix_change.rs │ │ │ │ ├── lix_commit.rs │ │ │ │ ├── lix_directory.rs │ │ │ │ ├── lix_directory_history.rs │ │ │ │ ├── lix_file.rs │ │ │ │ ├── lix_file_history.rs │ │ │ │ ├── lix_json.rs │ │ │ │ ├── lix_key_value.rs │ │ │ │ ├── lix_label_assignment.rs │ │ │ │ ├── lix_registered_schema.rs │ │ │ │ ├── lix_state.rs │ │ │ │ ├── lix_state_history.rs │ │ │ │ ├── lix_version.rs │ │ │ │ ├── metadata.rs │ │ │ │ ├── read_only.rs │ │ │ │ └── udfs.rs │ │ │ ├── sql.rs │ │ │ ├── storage_accounting.rs │ │ │ ├── support/ │ │ │ │ ├── mod.rs │ │ │ │ └── simulation_test/ │ │ │ │ ├── engine/ │ │ │ │ │ ├── expect_same.rs │ │ │ │ │ ├── kv_backend.rs │ │ │ │ │ ├── macro_runtime.rs │ │ │ │ │ ├── mod.rs │ │ │ │ │ ├── mode.rs │ │ │ │ │ ├── rebuild_tracked_state.rs │ │ │ │ │ └── simulation.rs │ │ │ │ └── mod.rs │ │ │ ├── tmp_lix_key_value_amplification.rs │ │ │ └── transaction.rs │ │ └── wit/ │ │ └── lix-plugin.wit │ ├── js-kysely/ │ │ ├── .gitignore │ │ ├── package.json │ │ ├── src/ │ │ │ ├── create-lix-kysely.ts │ │ │ ├── eb-entity.ts │ │ │ ├── index.ts │ │ │ ├── qb.test-d.ts │ │ │ ├── qb.ts │ │ │ └── schema.ts │ │ ├── tests/ │ │ │ ├── eb-entity.test.ts │ │ │ └── transaction.test.ts │ │ ├── tsconfig.json │ │ ├── tsconfig.type-tests.json │ │ └── vitest.config.ts │ ├── js-sdk/ │ │ ├── .gitignore │ │ ├── Cargo.toml │ │ ├── README.md │ │ ├── SKILL.md │ │ ├── package.json │ │ ├── scripts/ │ │ │ ├── build.js │ │ │ ├── sync-builtin-schemas.js │ │ │ └── sync-engine-src.js │ │ ├── src/ │ │ │ ├── builtin-schemas.ts │ │ │ ├── engine-wasm/ │ │ │ │ ├── index.ts │ │ │ │ └── value.test.ts │ │ │ ├── index.ts │ │ │ ├── open-lix.test.ts │ │ │ ├── open-lix.ts │ │ │ ├── sqlite/ │ │ │ │ ├── better-sqlite3.d.ts │ │ │ │ ├── index.test.ts │ │ │ │ └── index.ts │ │ │ └── types.ts │ │ ├── tsconfig.json │ │ ├── vitest.config.ts │ │ └── wasm-bindgen.rs │ ├── plugin-json-v2/ │ │ ├── .gitignore │ │ ├── Cargo.toml │ │ ├── 
README.md │ │ ├── benches/ │ │ │ ├── apply_changes.rs │ │ │ ├── common/ │ │ │ │ └── mod.rs │ │ │ ├── detect_changes.rs │ │ │ └── roundtrip.rs │ │ ├── schema/ │ │ │ └── json_pointer.json │ │ ├── src/ │ │ │ └── lib.rs │ │ └── tests/ │ │ ├── apply_changes.rs │ │ ├── common/ │ │ │ └── mod.rs │ │ ├── detect_changes.rs │ │ ├── roundtrip.rs │ │ └── schema.rs │ ├── plugin-md-v2/ │ │ ├── .gitignore │ │ ├── Cargo.toml │ │ ├── README.md │ │ ├── benches/ │ │ │ ├── common/ │ │ │ │ └── mod.rs │ │ │ └── detect_changes.rs │ │ ├── manifest.json │ │ ├── schema/ │ │ │ ├── markdown_block.json │ │ │ └── markdown_document.json │ │ ├── src/ │ │ │ ├── apply_changes.rs │ │ │ ├── common.rs │ │ │ ├── detect_changes.rs │ │ │ ├── lib.rs │ │ │ └── schemas.rs │ │ └── tests/ │ │ ├── apply_changes.rs │ │ ├── common/ │ │ │ └── mod.rs │ │ ├── detect_changes.rs │ │ ├── roundtrip.rs │ │ └── schema.rs │ ├── react-utils/ │ │ ├── .oxlintrc.json │ │ ├── .prettierrc.json │ │ ├── LICENSE │ │ ├── README.md │ │ ├── package.json │ │ ├── src/ │ │ │ ├── hooks/ │ │ │ │ ├── use-lix.test.tsx │ │ │ │ ├── use-lix.ts │ │ │ │ ├── use-query.test.tsx │ │ │ │ └── use-query.ts │ │ │ ├── index.ts │ │ │ └── provider.tsx │ │ ├── test-setup.ts │ │ ├── tsconfig.json │ │ └── vitest.config.ts │ ├── rs-sdk/ │ │ ├── Cargo.toml │ │ ├── src/ │ │ │ ├── in_memory_backend.rs │ │ │ ├── lib.rs │ │ │ └── lix.rs │ │ └── tests/ │ │ └── e2e.rs │ ├── text-plugin/ │ │ ├── Cargo.toml │ │ ├── README.md │ │ ├── benches/ │ │ │ ├── apply_changes.rs │ │ │ ├── common/ │ │ │ │ └── mod.rs │ │ │ └── detect_changes.rs │ │ ├── manifest.json │ │ ├── schema/ │ │ │ ├── text_document.json │ │ │ └── text_line.json │ │ ├── src/ │ │ │ └── lib.rs │ │ └── tests/ │ │ ├── apply_changes.rs │ │ ├── common/ │ │ │ └── mod.rs │ │ ├── detect_changes.rs │ │ ├── roundtrip.rs │ │ └── schema.rs │ └── website/ │ ├── .gitignore │ ├── .vscode/ │ │ └── settings.json │ ├── HTML_DIFF_LIX_DEV_SEO_FOLLOWUP.md │ ├── README.md │ ├── content/ │ │ └── plugins/ │ │ └── index.md │ ├── package.json │ ├── public/ │ │ ├── _redirects │ │ ├── manifest.json │ │ └── robots.txt │ ├── scripts/ │ │ ├── plugin-readme-sync.test.ts │ │ ├── plugin-readme-sync.ts │ │ └── post-build-seo.js │ ├── src/ │ │ ├── blog/ │ │ │ ├── blogMetadata.ts │ │ │ └── og-image.ts │ │ ├── components/ │ │ │ ├── code-snippet.tsx │ │ │ ├── doc-code-snippet-element.tsx │ │ │ ├── docs-layout.tsx │ │ │ ├── docs-prev-next.tsx │ │ │ ├── footer.tsx │ │ │ ├── header.tsx │ │ │ ├── landing-page.tsx │ │ │ ├── markdown-page.interactive.js │ │ │ ├── markdown-page.style.css │ │ │ ├── markdown-page.tsx │ │ │ └── prev-next-nav.tsx │ │ ├── github-stars-cache.ts │ │ ├── lib/ │ │ │ ├── build-doc-map.test.ts │ │ │ ├── build-doc-map.ts │ │ │ ├── plugin-sidebar.ts │ │ │ ├── seo.test.ts │ │ │ └── seo.ts │ │ ├── router.tsx │ │ ├── routes/ │ │ │ ├── -seo-smoke.test.ts │ │ │ ├── __root.tsx │ │ │ ├── blog/ │ │ │ │ ├── $slug.tsx │ │ │ │ └── index.tsx │ │ │ ├── docs/ │ │ │ │ ├── $slugId.tsx │ │ │ │ ├── index.tsx │ │ │ │ └── redirects.json │ │ │ ├── guide/ │ │ │ │ ├── $slugId.tsx │ │ │ │ └── index.tsx │ │ │ ├── index.tsx │ │ │ ├── plugins/ │ │ │ │ ├── $pluginKey.tsx │ │ │ │ ├── index.tsx │ │ │ │ └── plugin.registry.json │ │ │ └── rfc/ │ │ │ ├── $slug.tsx │ │ │ └── index.tsx │ │ ├── ssg/ │ │ │ └── github-stars-plugin.ts │ │ ├── styles.css │ │ └── types/ │ │ └── lix-js-plugin-json.d.ts │ ├── tsconfig.json │ ├── vite.config.ts │ └── wrangler.json ├── pnpm-workspace.yaml ├── rfcs/ │ ├── 001-preprocess-writes/ │ │ └── index.md │ ├── 002-rewrite-in-rust/ │ │ └── index.md │ └── 
003-canonical-lix-value/ │ └── index.md └── skills/ └── cli/ └── SKILL.md ================================================ FILE CONTENTS ================================================ ================================================ FILE: .gitattributes ================================================ pnpm-lock.yaml merge=ours # automatically normalize line endings in text files to be line feed # https://github.com/opral/monorepo/pull/3340#issue-2782271138 * text=auto eol=lf ================================================ FILE: .gitignore ================================================ ### inlang ### # .devcontainer.json .pnpm-store # **/out examples/svelte/package-lock.json examples/sveltekit/package-lock.json /build /package .env* .dev.vars .nx # Benchmark reports and scratch databases benchmarks/engine2-json-pointer/output*/ packages/engine/benches/storage/output*/ # Playwright **/test-results/ **/playwright-report/ **/playwright/.cache/ packages/vscode-docs-replay/results/ # SEO – Generated sitemap inlang/**/sitemap.xml # Created by https://www.toptal.com/developers/gitignore/api/windows,macos,linux,node,visualstudiocode,intellij # Edit at https://www.toptal.com/developers/gitignore?templates=windows,macos,linux,node,visualstudiocode,intellij ### Intellij ### # Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider # Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839 # User-specific stuff .idea/**/workspace.xml .idea/**/tasks.xml .idea/**/usage.statistics.xml .idea/**/dictionaries .idea/**/shelf # AWS User-specific .idea/**/aws.xml # Generated files .idea/**/contentModel.xml # Sensitive or high-churn files .idea/**/dataSources/ .idea/**/dataSources.ids .idea/**/dataSources.local.xml .idea/**/sqlDataSources.xml .idea/**/dynamic.xml .idea/**/uiDesigner.xml .idea/**/dbnavigator.xml # Gradle .idea/**/gradle.xml .idea/**/libraries # Gradle and Maven with auto-import # When using Gradle or Maven with auto-import, you should exclude module files, # since they will be recreated, and may cause churn. Uncomment if using # auto-import. 
# .idea/artifacts # .idea/compiler.xml # .idea/jarRepositories.xml # .idea/modules.xml # .idea/*.iml # .idea/modules # *.iml # *.ipr # CMake cmake-build-*/ # Mongo Explorer plugin .idea/**/mongoSettings.xml # File-based project format *.iws # IntelliJ out/ # mpeltonen/sbt-idea plugin .idea_modules/ # JIRA plugin atlassian-ide-plugin.xml # Cursive Clojure plugin .idea/replstate.xml # SonarLint plugin .idea/sonarlint/ # Crashlytics plugin (for Android Studio and IntelliJ) com_crashlytics_export_strings.xml crashlytics.properties crashlytics-build.properties fabric.properties # Editor-based Rest Client .idea/httpRequests # Android studio 3.1+ serialized cache file .idea/caches/build_file_checksums.ser ### Intellij Patch ### # Comment Reason: https://github.com/joeblau/gitignore.io/issues/186#issuecomment-215987721 # *.iml # modules.xml # .idea/misc.xml # *.ipr # Sonarlint plugin # https://plugins.jetbrains.com/plugin/7973-sonarlint .idea/**/sonarlint/ # SonarQube Plugin # https://plugins.jetbrains.com/plugin/7238-sonarqube-community-plugin .idea/**/sonarIssues.xml # Markdown Navigator plugin # https://plugins.jetbrains.com/plugin/7896-markdown-navigator-enhanced .idea/**/markdown-navigator.xml .idea/**/markdown-navigator-enh.xml .idea/**/markdown-navigator/ # Cache file creation bug # See https://youtrack.jetbrains.com/issue/JBR-2257 .idea/$CACHE_FILE$ # CodeStream plugin # https://plugins.jetbrains.com/plugin/12206-codestream .idea/codestream.xml # Azure Toolkit for IntelliJ plugin # https://plugins.jetbrains.com/plugin/8053-azure-toolkit-for-intellij .idea/**/azureSettings.xml ### Linux ### *~ # temporary files which can be created if a process still has a handle open of a deleted file .fuse_hidden* # KDE directory preferences .directory # Linux trash folder which might appear on any partition or disk .Trash-* # .nfs files are created when an open file is removed but is still being accessed .nfs* ### macOS ### # General .DS_Store .AppleDouble .LSOverride # Icon must end with two \r Icon # Thumbnails ._* # Files that might appear in the root of a volume .DocumentRevisions-V100 .fseventsd .Spotlight-V100 .TemporaryItems .Trashes .VolumeIcon.icns .com.apple.timemachine.donotpresent # Directories potentially created on remote AFP share .AppleDB .AppleDesktop Network Trash Folder Temporary Items .apdisk ### macOS Patch ### # iCloud generated files *.icloud ### Node ### # Logs logs *.log npm-debug.log* yarn-debug.log* yarn-error.log* lerna-debug.log* .pnpm-debug.log* # Diagnostic reports (https://nodejs.org/api/report.html) report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json # Runtime data pids *.pid *.seed *.pid.lock # Directory for instrumented libs generated by jscoverage/JSCover lib-cov # Coverage directory used by tools like istanbul coverage *.lcov # nyc test coverage .nyc_output # Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files) .grunt # Bower dependency directory (https://bower.io/) bower_components # node-waf configuration .lock-wscript # Compiled binary addons (https://nodejs.org/api/addons.html) build/Release # Dependency directories node_modules/ jspm_packages/ # Snowpack dependency directory (https://snowpack.dev/) web_modules/ # TypeScript cache *.tsbuildinfo # Optional npm cache directory .npm # Optional eslint cache .eslintcache # Optional stylelint cache .stylelintcache # Microbundle cache .rpt2_cache/ .rts2_cache_cjs/ .rts2_cache_es/ .rts2_cache_umd/ # Optional REPL history .node_repl_history # Output of 'npm pack' *.tgz # Yarn Integrity file 
.yarn-integrity # dotenv environment variable files .env .env.development.local .env.test.local .env.production.local .env.local # parcel-bundler cache (https://parceljs.org/) .cache .parcel-cache # Next.js build output .next out # Nuxt.js build / generate output .nuxt dist # Gatsby files .cache/ # Comment in the public line in if your project uses Gatsby and not Next.js # https://nextjs.org/blog/next-9-1#public-directory-support # public # vuepress build output .vuepress/dist # vuepress v2.x temp and cache directory .temp # Docusaurus cache and generated files .docusaurus # Serverless directories .serverless/ # FuseBox cache .fusebox/ # DynamoDB Local files .dynamodb/ # TernJS port file .tern-port # Stores Visual Studio Code versions used for testing Visual Studio Code extension (Sherlock)s .vscode-test # yarn v2 .yarn/cache .yarn/unplugged .yarn/build-state.yml .yarn/install-state.gz .pnp.* ### Node Patch ### # Serverless Webpack directories .webpack/ # Optional stylelint cache # SvelteKit build / generate output .svelte-kit ### VisualStudioCode ### .vscode/* !.vscode/settings.json !.vscode/tasks.json !.vscode/launch.json !.vscode/extensions.json !.vscode/*.code-snippets # Local History for Visual Studio Code .history/ # Built Visual Studio Code Extensions *.vsix ### VisualStudioCode Patch ### # Ignore all local history of files .history .ionide ### Windows ### # Windows thumbnail cache files Thumbs.db Thumbs.db:encryptable ehthumbs.db ehthumbs_vista.db # Dump file *.stackdump # Folder config file [Dd]esktop.ini # Recycle Bin used on file shares $RECYCLE.BIN/ # Windows Installer files *.cab *.msi *.msix *.msm *.msp # Windows shortcuts *.lnk # End of https://www.toptal.com/developers/gitignore/api/windows,macos,linux,node,visualstudiocode,intellij inlang/packages/paraglide/paraglide-sveltekit/example/build inlang/packages/paraglide/paraglide-solidstart/example/.solid *.h.ts.mjs **/vite.config.ts.timestamp-* **/vite.config.js.timestamp-* # Fink version.json inlang/packages/editor/version.json # Lix website build packages/lix-website/build # gitea test instance data lix/packages/gitea # VitePress cache packages/lix-docs/docs/.vitepress/cache packages/lix-docs/docs/.vitepress/dist artifact/* packages/engine/artifact/* target # Built plugin archive artifacts packages/*/*.lixplugin ================================================ FILE: .infisical.json ================================================ { "workspaceId": "6e0353e4-b0b0-4c6d-a338-38f09cfafa22", "defaultEnvironment": "", "gitBranchToEnvironmentMapping": null } ================================================ FILE: .prettierignore ================================================ ## adding the copied sources from the markdown plugin to be able to see changes since copy.. packages/md-app/src/components/editor/plugins/markdown-plate-fork/** packages/md-app/src/components/editor/plugins/*.tsx packages/md-app/src/components/editor/plugins/*.ts # also exclude ui packages/md-app/src/components/plate-ui/*.tsx packages/md-app/src/components/plate-ui/*.ts packages/md-app/src/components/editor/plugins/markdown/fixtures/*.md ================================================ FILE: CONTRIBUTING.md ================================================ # Contributing ## Prerequisites - [Node.js](https://nodejs.org/en/) (v20 or higher) - [pnpm](https://pnpm.io/) (v8 or higher) > [!INFO] > If you are developing on Windows, you need to use [WSL](https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux). ## Development 1. Clone the repository 2. 
run `pnpm i` in the root of the repo
3. run `pnpm --filter <package>... build` to build the dependencies of the package you want to work on
4. run `pnpm --filter <package> dev|test|...` to run the commands of the package you work on

### Example

> [!INFO]
> You need to run the build for the dependencies of the package via the three dots `...` at least once.

[Here](https://pnpm.io/filtering#--filter-package_name-1) is the pnpm documentation for filtering.

1. `pnpm i`
2. `pnpm --filter @lix-js/sdk... build`
3. `pnpm --filter @lix-js/sdk test`

## Opening a PR

1. run `pnpm run ci` to run all tests and checks
2. run `npx changeset` to write a changelog and trigger a version bump. Watch this Loom video to see how to use changesets: https://www.loom.com/share/1c5467ae3a5243d79040fc3eb5aa12d6

================================================ FILE: Cargo.toml ================================================

[workspace]
resolver = "2"
members = [
    "benchmarks/git-compare",
    "benchmarks/10k-entities",
    "benchmarks/engine2-json-pointer",
    "packages/engine",
    "packages/js-sdk",
    "packages/text-plugin",
    "packages/rs-sdk",
    "packages/plugin-md-v2",
    "packages/cli",
]
exclude = ["packages/plugin-json-v2"]

[profile.test]
debug = 1

[profile.bench]
debug = true
strip = false

================================================ FILE: README.md ================================================

# Lix

Embeddable version control system


> [!NOTE]
>
> **Lix is in alpha** · [Follow progress to v1.0 →](https://github.com/opral/lix/issues/374)

---

Lix is an **embeddable version control system for files of any format** (DOCX, XLSX, CAD, PDF, JSON) with semantic, per-entity diffs. Branching, merging, and an immutable change history, exposed as SQL, all in-process. Use it inside a contract editor, a feature-flag service, an artifact registry, an AI-agent platform, a versioned filesystem, or a domain-specific CLI.

> Lix is to version control what DuckDB is to analytics: an embeddable engine with pluggable support for file formats.

- **It's just a library.** `npm install`, import, run. No daemon, no protocol, no remote.
- **Semantic per-entity diffs.** XLSX cells, DOCX clauses, CAD parts. Not line-by-line text.
- **History is SQL.** Diffs, blame, and audit are direct queries against `lix_change`.

The entity foundation ships today. A plugin API is on the [roadmap](#roadmap); once it lands, anyone can author a plugin that turns a file format (DOCX, XLSX, CAD, PDF, anything else) into entities.

[How does Lix compare to Git? →](https://lix.dev/docs/comparison-to-git)

## Getting started

JavaScript · Python · Rust · Go

```bash
npm install @lix-js/sdk
```

```ts
import { openLix } from "@lix-js/sdk";

const lix = await openLix(); // in-memory by default; pass a backend for persistence

// Register a schema for a tracked entity
await lix.execute(
  "INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))",
  [
    JSON.stringify({
      "x-lix-key": "task",
      "x-lix-version": "1",
      "x-lix-primary-key": ["/id"],
      type: "object",
      required: ["id", "title"],
      properties: {
        id: { type: "string" },
        title: { type: "string" },
      },
      additionalProperties: false,
    }),
  ],
);

// Write rows like any SQL table
await lix.execute(
  "INSERT INTO task (id, title) VALUES ($1, $2)",
  ["t1", "Ship v1"],
);

// Every change is journaled; query it with SQL
const changes = await lix.execute(
  "SELECT entity_id, schema_key, snapshot_content FROM lix_change",
);
```

## Semantic change (delta) tracking

Unlike Git's line-based diffs, Lix understands file structure through plugins. Lix sees `price: 10 → 12` or `cell B4: pending → shipped`, not "line 4 changed" or "binary files differ".

### JSON file example

**Before:**

```json
{"theme":"light","notifications":true,"language":"en"}
```

**After:**

```json
{"theme":"dark","notifications":true,"language":"en"}
```

**Git sees:**

```diff
-{"theme":"light","notifications":true,"language":"en"}
+{"theme":"dark","notifications":true,"language":"en"}
```

**Lix sees:**

```diff
property theme:
- light
+ dark
```

### Excel file example

The same approach works for binary formats. With an XLSX plugin, Lix shows cell-level changes:

**Before:**

```diff
| order_id | product  | status  |
| -------- | -------- | ------- |
| 1001     | Widget A | shipped |
| 1002     | Widget B | pending |
```

**After:**

```diff
| order_id | product  | status  |
| -------- | -------- | ------- |
| 1001     | Widget A | shipped |
| 1002     | Widget B | shipped |
```

**Git sees:**

```diff
-Binary files differ
```

**Lix sees:**

```diff
order_id 1002 status:
- pending
+ shipped
```

## How Lix Works

Lix uses a SQL database as its query engine and persistence layer. Virtual tables like `file` and `file_history` are exposed on top:

```sql
SELECT * FROM file_history
WHERE path = '/orders.xlsx'
ORDER BY created_at DESC;
```

When a file is written, a plugin parses it and detects entity-level changes. These changes (deltas) are stored in the database, enabling branching, merging, and audit trails.

```
┌─────────────────────────────────────────────────┐
│                       Lix                       │
│                                                 │
│ ┌────────────┐ ┌──────────┐ ┌─────────┐ ┌─────┐ │
│ │ Filesystem │ │ Branches │ │ History │ │ ... │ │
│ └────────────┘ └──────────┘ └─────────┘ └─────┘ │
└────────────────────────┬────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────┐
│                  SQL database                   │
│            (SQLite, Postgres, etc.)
│ └─────────────────────────────────────────────────┘ ``` [Read more about Lix architecture →](https://lix.dev/docs/architecture) ## Roadmap - [x] Core API ( = Result; #[derive(Parser, Debug)] #[command( name = "10k-entities-benchmark", about = "Benchmark file-write vs direct-entity-write paths for a 10k-prop JSON document" )] struct Args { #[arg(long, default_value_t = DEFAULT_PROPS)] props: usize, #[arg(long, default_value_t = DEFAULT_WARMUPS)] warmups: usize, #[arg(long, default_value_t = DEFAULT_ITERATIONS)] iterations: usize, #[arg(long, default_value = DEFAULT_OUTPUT_DIR)] output_dir: PathBuf, } #[derive(Debug, Clone, Copy)] enum BenchmarkCaseKind { FileWriteJson, DirectEntityWrites, } impl BenchmarkCaseKind { fn id(self) -> &'static str { match self { Self::FileWriteJson => "file_write_json_10k_props", Self::DirectEntityWrites => "direct_entity_writes_10k", } } fn title(self) -> &'static str { match self { Self::FileWriteJson => "File Write JSON With 10k Props", Self::DirectEntityWrites => "Direct Entity Writes 10k", } } fn timed_operation(self) -> &'static str { match self { Self::FileWriteJson => { "INSERT INTO lix_file for one 10k-prop JSON payload inside a buffered write transaction, then commit" } Self::DirectEntityWrites => { "UPDATE the root json_pointer row and INSERT 10k property json_pointer rows inside a buffered write transaction, then commit" } } } fn notes(self) -> Vec<&'static str> { match self { Self::FileWriteJson => vec![ "This is the real file-write path with plugin detect-changes enabled.", "The timed write is one INSERT INTO lix_file statement.", "The semantic layer derives json_pointer rows during commit.", "This case includes plugin detect-changes cost plus direct semantic row commit cost.", ], Self::DirectEntityWrites => vec![ "This isolates direct semantic writes through the engine without detect-changes.", "Outside the timer, the benchmark inserts an empty {} JSON file to establish the file descriptor and root entity.", "Inside the timer, it updates the root json_pointer row and inserts the 10k property rows through chunked lix_state statements.", "This case still includes normal commit, live-state rebuild, and file-cache refresh work for direct entity writes.", "The report records whether lix_file matched the expected payload after commit, but row-count verification is the hard invariant for this case.", ], } } fn timed_sql(self) -> &'static str { match self { Self::FileWriteJson => "INSERT INTO lix_file (id, path, data) VALUES (?1, ?2, ?3)", Self::DirectEntityWrites => { "UPDATE lix_state root row; INSERT INTO lix_state (...) VALUES (... x chunk_size), repeated until props rows are written" } } } fn verification(self) -> &'static str { match self { Self::FileWriteJson => { "Verify committed json_pointer row count for the file and verify lix_file JSON matches the input payload." } Self::DirectEntityWrites => { "Verify committed json_pointer row count for the file and record whether lix_file JSON matched the expected 10k-prop payload." 
} } } fn setup_outside_timer(self) -> Vec<&'static str> { match self { Self::FileWriteJson => vec![ "Build plugin-json-v2 wasm.", "Create a fresh SQLite database.", "Boot the engine and install the JSON plugin.", ], Self::DirectEntityWrites => vec![ "Build plugin-json-v2 wasm.", "Create a fresh SQLite database.", "Boot the engine and install the JSON plugin.", "Insert an empty {} JSON file so direct state writes target an existing JSON file.", "Load the committed root json_pointer entity id for that file.", ], } } } #[derive(Debug, Serialize)] struct Report { generated_at_unix_ms: u128, benchmark: BenchmarkMetadata, shared_setup: SharedSetupReport, cases: Vec, comparison: ComparisonSummary, } #[derive(Debug, Serialize)] struct BenchmarkMetadata { name: &'static str, notes: Vec<&'static str>, } #[derive(Debug, Serialize)] struct SharedSetupReport { props: usize, input_bytes: usize, direct_property_rows: usize, expected_state_rows_after_commit: u64, plugin_key: &'static str, schema_key: &'static str, plugin_wasm_path: String, sqlite_mode: &'static str, } #[derive(Debug, Serialize)] struct CaseReport { case_id: &'static str, title: &'static str, timed_operation: &'static str, notes: Vec<&'static str>, setup: CaseSetupReport, warmups: Vec, samples: Vec, timing_ms: TimingSummary, } #[derive(Debug, Serialize)] struct CaseSetupReport { timed_rows: usize, timed_sql: &'static str, setup_outside_timer: Vec<&'static str>, verification: &'static str, } #[derive(Debug, Clone, Serialize)] struct RunSample { index: usize, write_ms: f64, commit_ms: f64, total_ms: f64, committed_state_rows: u64, file_matches_expected: bool, } #[derive(Debug, Serialize)] struct TimingSummary { sample_count: usize, write: PhaseSummary, commit: PhaseSummary, total: PhaseSummary, } #[derive(Debug, Serialize)] struct PhaseSummary { mean_ms: f64, median_ms: f64, min_ms: f64, max_ms: f64, } #[derive(Debug, Serialize)] struct ComparisonSummary { file_write_total_mean_ms: f64, direct_entity_total_mean_ms: f64, file_write_minus_direct_entity_total_mean_ms: f64, file_write_commit_mean_ms: f64, direct_entity_commit_mean_ms: f64, file_write_minus_direct_entity_commit_mean_ms: f64, file_write_write_mean_ms: f64, direct_entity_write_mean_ms: f64, file_write_minus_direct_entity_write_mean_ms: f64, file_write_to_direct_entity_total_ratio: f64, } struct TempSqlitePath { path: PathBuf, } impl TempSqlitePath { fn new(label: &str) -> Self { Self { path: temp_sqlite_path(label), } } fn path(&self) -> &Path { &self.path } } impl Drop for TempSqlitePath { fn drop(&mut self) { for suffix in ["", "-wal", "-shm", "-journal"] { let _ = std::fs::remove_file(format!("{}{}", self.path.display(), suffix)); } } } fn main() { let runtime = tokio::runtime::Builder::new_current_thread() .enable_all() .build() .expect("tokio runtime should initialize"); if let Err(error) = runtime.block_on(run(Args::parse())) { eprintln!("error: {error}"); std::process::exit(1); } } async fn run(args: Args) -> BenchResult<()> { if args.props == 0 { return Err("--props must be greater than 0".to_string()); } if args.iterations == 0 { return Err("--iterations must be greater than 0".to_string()); } fs::create_dir_all(&args.output_dir).map_err(io_err)?; let repo_root = repo_root()?; let plugin_wasm_path = build_plugin_json_v2_wasm(&repo_root)?; let plugin_wasm_bytes = fs::read(&plugin_wasm_path).map_err(io_err)?; let plugin_archive = build_plugin_archive(&plugin_wasm_bytes)?; let payload = build_flat_json_payload(args.props)?; let expected_state_rows_after_commit = (args.props + 
1) as u64; let wasm_runtime: Arc = Arc::new(wasmtime_runtime::TestWasmtimeRuntime::new().map_err(lix_err)?); let file_write_case = run_case( BenchmarkCaseKind::FileWriteJson, &args, Arc::clone(&wasm_runtime), &plugin_archive, &payload, expected_state_rows_after_commit, ) .await?; let direct_entity_case = run_case( BenchmarkCaseKind::DirectEntityWrites, &args, Arc::clone(&wasm_runtime), &plugin_archive, &payload, expected_state_rows_after_commit, ) .await?; let comparison = build_comparison_summary(&file_write_case, &direct_entity_case)?; let report = Report { generated_at_unix_ms: now_unix_ms()?, benchmark: BenchmarkMetadata { name: "10k-entities-json-file-vs-direct-state", notes: vec![ "Both cases use a fresh file-backed SQLite database per run.", "Plugin wasm build, engine init, plugin install, and database setup are outside the timer.", "Each case reports write_ms, commit_ms, and total_ms separately.", "The goal is to separate file/plugin detect overhead from direct 10k entity write overhead.", ], }, shared_setup: SharedSetupReport { props: args.props, input_bytes: payload.len(), direct_property_rows: args.props, expected_state_rows_after_commit, plugin_key: PLUGIN_KEY, schema_key: PLUGIN_SCHEMA_KEY, plugin_wasm_path: plugin_wasm_path.display().to_string(), sqlite_mode: "fresh file-backed SQLite database per run", }, cases: vec![file_write_case, direct_entity_case], comparison, }; let report_json_path = args.output_dir.join("report.json"); let report_markdown_path = args.output_dir.join("report.md"); fs::write( &report_json_path, serde_json::to_vec_pretty(&report).map_err(serde_err)?, ) .map_err(io_err)?; fs::write(&report_markdown_path, render_markdown_report(&report)).map_err(io_err)?; print_summary(&report, &report_json_path, &report_markdown_path); Ok(()) } async fn run_case( kind: BenchmarkCaseKind, args: &Args, wasm_runtime: Arc, plugin_archive: &[u8], payload: &[u8], expected_state_rows_after_commit: u64, ) -> BenchResult { let mut warmups = Vec::with_capacity(args.warmups); for index in 0..args.warmups { warmups.push( run_sample( kind, index, Arc::clone(&wasm_runtime), plugin_archive, payload, expected_state_rows_after_commit, ) .await?, ); } let mut samples = Vec::with_capacity(args.iterations); for index in 0..args.iterations { samples.push( run_sample( kind, index, Arc::clone(&wasm_runtime), plugin_archive, payload, expected_state_rows_after_commit, ) .await?, ); } Ok(CaseReport { case_id: kind.id(), title: kind.title(), timed_operation: kind.timed_operation(), notes: kind.notes(), setup: CaseSetupReport { timed_rows: match kind { BenchmarkCaseKind::FileWriteJson => 1, BenchmarkCaseKind::DirectEntityWrites => args.props + 1, }, timed_sql: kind.timed_sql(), setup_outside_timer: kind.setup_outside_timer(), verification: kind.verification(), }, warmups, samples: samples.clone(), timing_ms: summarize_timings(&samples)?, }) } async fn run_sample( kind: BenchmarkCaseKind, index: usize, wasm_runtime: Arc, plugin_archive: &[u8], payload: &[u8], expected_state_rows_after_commit: u64, ) -> BenchResult { match kind { BenchmarkCaseKind::FileWriteJson => { run_file_write_sample( index, wasm_runtime, plugin_archive, payload, expected_state_rows_after_commit, ) .await } BenchmarkCaseKind::DirectEntityWrites => { run_direct_entity_write_sample( index, wasm_runtime, plugin_archive, payload, expected_state_rows_after_commit, ) .await } } } async fn run_file_write_sample( index: usize, wasm_runtime: Arc, plugin_archive: &[u8], payload: &[u8], expected_state_rows_after_commit: u64, ) -> 
BenchResult { let sqlite_path = TempSqlitePath::new(&format!("10k-entities-file-write-{index}")); let session = open_prepared_session(sqlite_path.path(), wasm_runtime, plugin_archive).await?; let file_id = format!("json-file-write-{index}"); let file_path = format!("/{file_id}.json"); let active_version_id = session.active_version_id(); let mut transaction = Some( session .begin_transaction_with_options(ExecuteOptions::default()) .await .map_err(lix_err)?, ); let started_at = Instant::now(); let write_started_at = Instant::now(); let write_result = { let transaction = transaction .as_mut() .expect("transaction should be available during write phase"); transaction .execute( "INSERT INTO lix_file (id, path, data) VALUES (?1, ?2, ?3)", &[ Value::Text(file_id.clone()), Value::Text(file_path), Value::Blob(payload.to_vec()), ], ) .await .map_err(lix_err) }; if let Err(error) = write_result { if let Some(transaction) = transaction.take() { let _ = transaction.rollback().await; } return Err(error); } let write_ms = write_started_at.elapsed().as_secs_f64() * 1000.0; let commit_started_at = Instant::now(); transaction .take() .expect("transaction should be available for commit") .commit() .await .map_err(lix_err)?; let commit_ms = commit_started_at.elapsed().as_secs_f64() * 1000.0; let total_ms = started_at.elapsed().as_secs_f64() * 1000.0; finish_sample( index, &session, &file_id, &active_version_id, payload, expected_state_rows_after_commit, true, write_ms, commit_ms, total_ms, ) .await } async fn run_direct_entity_write_sample( index: usize, wasm_runtime: Arc, plugin_archive: &[u8], payload: &[u8], expected_state_rows_after_commit: u64, ) -> BenchResult { let sqlite_path = TempSqlitePath::new(&format!("10k-entities-direct-state-{index}")); let session = open_prepared_session(sqlite_path.path(), wasm_runtime, plugin_archive).await?; let file_id = format!("json-direct-state-{index}"); let file_path = format!("/{file_id}.json"); let active_version_id = session.active_version_id(); bootstrap_empty_json_file(&session, &file_id, &file_path).await?; let root_entity_id = load_root_json_pointer_entity_id(&session, &file_id, &active_version_id).await?; let direct_write_sql_batches = build_direct_entity_write_sql_batches( &file_id, &root_entity_id, payload, DIRECT_ENTITY_WRITE_CHUNK_SIZE, )?; let mut transaction = Some( session .begin_transaction_with_options(ExecuteOptions::default()) .await .map_err(lix_err)?, ); let started_at = Instant::now(); let write_started_at = Instant::now(); let write_result = { let transaction = transaction .as_mut() .expect("transaction should be available during write phase"); let mut result = Ok(()); for sql in &direct_write_sql_batches { if let Err(error) = transaction.execute(sql, &[]).await.map_err(lix_err) { result = Err(error); break; } } result }; if let Err(error) = write_result { if let Some(transaction) = transaction.take() { let _ = transaction.rollback().await; } return Err(error); } let write_ms = write_started_at.elapsed().as_secs_f64() * 1000.0; let commit_started_at = Instant::now(); transaction .take() .expect("transaction should be available for commit") .commit() .await .map_err(lix_err)?; let commit_ms = commit_started_at.elapsed().as_secs_f64() * 1000.0; let total_ms = started_at.elapsed().as_secs_f64() * 1000.0; finish_sample( index, &session, &file_id, &active_version_id, payload, expected_state_rows_after_commit, false, write_ms, commit_ms, total_ms, ) .await } async fn finish_sample( index: usize, session: &Session, file_id: &str, active_version_id: 
&str, expected_payload: &[u8], expected_state_rows_after_commit: u64, enforce_file_match: bool, write_ms: f64, commit_ms: f64, total_ms: f64, ) -> BenchResult { let committed_state_rows = scalar_count( session, "SELECT COUNT(*) \ FROM lix_state_by_version \ WHERE file_id = ?1 \ AND version_id = ?2 \ AND schema_key = ?3 \ AND snapshot_content IS NOT NULL", &[ Value::Text(file_id.to_string()), Value::Text(active_version_id.to_string()), Value::Text(PLUGIN_SCHEMA_KEY.to_string()), ], ) .await?; if committed_state_rows != expected_state_rows_after_commit { return Err(format!( "expected {expected_state_rows_after_commit} committed json_pointer rows for '{file_id}', got {committed_state_rows}" )); } let file_matches_expected = match verify_file_json_matches(session, file_id, expected_payload).await { Ok(()) => true, Err(error) if !enforce_file_match => { let _ = error; false } Err(error) => return Err(error), }; Ok(RunSample { index, write_ms, commit_ms, total_ms, committed_state_rows, file_matches_expected, }) } async fn open_prepared_session( sqlite_path: &Path, wasm_runtime: Arc, plugin_archive: &[u8], ) -> BenchResult { let backend = sqlite_backend::BenchSqliteBackend::file_backed(sqlite_path).map_err(lix_err)?; let mut boot_args = BootArgs::new(Box::new(backend), wasm_runtime); boot_args.access_to_internal = true; let engine = Arc::new(boot(boot_args)); engine.initialize().await.map_err(lix_err)?; let session = engine.open_session().await.map_err(lix_err)?; session .install_plugin(plugin_archive) .await .map_err(lix_err)?; Ok(session) } async fn bootstrap_empty_json_file( session: &Session, file_id: &str, file_path: &str, ) -> BenchResult<()> { session .execute( "INSERT INTO lix_file (id, path, data) VALUES (?1, ?2, ?3)", &[ Value::Text(file_id.to_string()), Value::Text(file_path.to_string()), Value::Blob(b"{}".to_vec()), ], ) .await .map_err(lix_err)?; Ok(()) } async fn load_root_json_pointer_entity_id( session: &Session, file_id: &str, active_version_id: &str, ) -> BenchResult { let result = session .execute( "SELECT entity_id \ FROM lix_state_by_version \ WHERE file_id = ?1 \ AND version_id = ?2 \ AND schema_key = ?3 \ AND snapshot_content IS NOT NULL \ ORDER BY entity_id ASC \ LIMIT 1", &[ Value::Text(file_id.to_string()), Value::Text(active_version_id.to_string()), Value::Text(PLUGIN_SCHEMA_KEY.to_string()), ], ) .await .map_err(lix_err)?; let value = result .statements .first() .and_then(|statement| statement.rows.first()) .and_then(|row| row.first()) .ok_or_else(|| format!("query returned no root json_pointer row for '{file_id}'"))?; match value { Value::Text(text) => Ok(text.clone()), other => Err(format!( "expected text entity_id for root json_pointer row of '{file_id}', got {other:?}" )), } } fn build_direct_entity_write_sql_batches( file_id: &str, root_entity_id: &str, payload: &[u8], chunk_size: usize, ) -> BenchResult> { if chunk_size == 0 { return Err("direct entity write chunk size must be greater than 0".to_string()); } let expected_json: serde_json::Value = serde_json::from_slice(payload).map_err(serde_err)?; let object = expected_json .as_object() .ok_or_else(|| "expected generated payload to be a JSON object".to_string())?; let root_snapshot_content = serde_json::json!({ "path": root_entity_id, "value": expected_json, }); let root_snapshot_content = serde_json::to_string(&root_snapshot_content).map_err(serde_err)?; let root_entity_id_json = serde_json::to_string(&serde_json::json!([root_entity_id])).map_err(serde_err)?; let mut statements = vec![format!( "UPDATE lix_state 
\ SET snapshot_content = '{}' \ WHERE entity_id = lix_json('{}') \ AND file_id = '{}' \ AND schema_key = '{}' \ AND plugin_key = '{}'", escape_sql_string(&root_snapshot_content), escape_sql_string(&root_entity_id_json), escape_sql_string(file_id), PLUGIN_SCHEMA_KEY, PLUGIN_KEY, )]; let entries = object .iter() .map(|(key, value)| -> BenchResult { let entity_id = format!("/{}", escape_json_pointer_segment(key)); let snapshot_content = serde_json::json!({ "path": entity_id, "value": value, }); let snapshot_content = serde_json::to_string(&snapshot_content).map_err(serde_err)?; Ok(format!( "('{}', '{}', '{}', '{}', '{}')", escape_sql_string(&entity_id), escape_sql_string(file_id), PLUGIN_SCHEMA_KEY, PLUGIN_KEY, escape_sql_string(&snapshot_content), )) }) .collect::>>()?; for chunk in entries.chunks(chunk_size) { statements.push(format!( "INSERT INTO lix_state (entity_id, file_id, schema_key, plugin_key, snapshot_content) VALUES {}", chunk.join(", ") )); } Ok(statements) } async fn verify_file_json_matches( session: &Session, file_id: &str, expected_payload: &[u8], ) -> BenchResult<()> { let result = session .execute( "SELECT data FROM lix_file WHERE id = ?1 LIMIT 1", &[Value::Text(file_id.to_string())], ) .await .map_err(lix_err)?; let value = result .statements .first() .and_then(|statement| statement.rows.first()) .and_then(|row| row.first()) .ok_or_else(|| format!("query returned no file data row for '{file_id}'"))?; let actual_bytes = match value { Value::Blob(bytes) => bytes.clone(), other => { return Err(format!( "expected blob data from lix_file for '{file_id}', got {other:?}" )); } }; let actual_json: serde_json::Value = serde_json::from_slice(&actual_bytes).map_err(serde_err)?; let expected_json: serde_json::Value = serde_json::from_slice(expected_payload).map_err(serde_err)?; if actual_json != expected_json { return Err(format!( "lix_file JSON for '{file_id}' did not match expected payload" )); } Ok(()) } fn build_plugin_archive(plugin_wasm_bytes: &[u8]) -> BenchResult> { let options = SimpleFileOptions::default().compression_method(CompressionMethod::Stored); let mut writer = ZipWriter::new(Cursor::new(Vec::new())); writer .start_file("manifest.json", options) .map_err(io_err)?; writer .write_all(PLUGIN_ARCHIVE_MANIFEST_JSON.as_bytes()) .map_err(io_err)?; writer.start_file("plugin.wasm", options).map_err(io_err)?; writer.write_all(plugin_wasm_bytes).map_err(io_err)?; writer .start_file("schema/json_pointer.json", options) .map_err(io_err)?; writer .write_all(JSON_POINTER_SCHEMA_JSON.as_bytes()) .map_err(io_err)?; writer .finish() .map(|cursor| cursor.into_inner()) .map_err(io_err) } async fn scalar_count(session: &Session, sql: &str, params: &[Value]) -> BenchResult { let result = session.execute(sql, params).await.map_err(lix_err)?; let value = result .statements .first() .and_then(|statement| statement.rows.first()) .and_then(|row| row.first()) .ok_or_else(|| format!("query returned no scalar value: {sql}"))?; match value { Value::Integer(number) => { if *number < 0 { Err(format!("query returned negative count {number}: {sql}")) } else { Ok(*number as u64) } } other => Err(format!( "query returned non-integer scalar {other:?}: {sql}" )), } } fn summarize_timings(samples: &[RunSample]) -> BenchResult { if samples.is_empty() { return Err("cannot summarize empty samples".to_string()); } Ok(TimingSummary { sample_count: samples.len(), write: summarize_phase(samples.iter().map(|sample| sample.write_ms).collect())?, commit: summarize_phase(samples.iter().map(|sample| 
sample.commit_ms).collect())?, total: summarize_phase(samples.iter().map(|sample| sample.total_ms).collect())?, }) } fn summarize_phase(mut values: Vec) -> BenchResult { if values.is_empty() { return Err("cannot summarize empty timing phase".to_string()); } values.sort_by(|left, right| left.partial_cmp(right).unwrap_or(std::cmp::Ordering::Equal)); let sum = values.iter().sum::(); let median_ms = if values.len() % 2 == 0 { let upper = values.len() / 2; (values[upper - 1] + values[upper]) / 2.0 } else { values[values.len() / 2] }; Ok(PhaseSummary { mean_ms: sum / values.len() as f64, median_ms, min_ms: values[0], max_ms: values[values.len() - 1], }) } fn build_comparison_summary( file_write_case: &CaseReport, direct_entity_case: &CaseReport, ) -> BenchResult { let file_write_total_mean_ms = file_write_case.timing_ms.total.mean_ms; let direct_entity_total_mean_ms = direct_entity_case.timing_ms.total.mean_ms; let ratio = if direct_entity_total_mean_ms == 0.0 { return Err("cannot compare cases: direct-entity total mean is zero".to_string()); } else { file_write_total_mean_ms / direct_entity_total_mean_ms }; Ok(ComparisonSummary { file_write_total_mean_ms, direct_entity_total_mean_ms, file_write_minus_direct_entity_total_mean_ms: file_write_total_mean_ms - direct_entity_total_mean_ms, file_write_commit_mean_ms: file_write_case.timing_ms.commit.mean_ms, direct_entity_commit_mean_ms: direct_entity_case.timing_ms.commit.mean_ms, file_write_minus_direct_entity_commit_mean_ms: file_write_case.timing_ms.commit.mean_ms - direct_entity_case.timing_ms.commit.mean_ms, file_write_write_mean_ms: file_write_case.timing_ms.write.mean_ms, direct_entity_write_mean_ms: direct_entity_case.timing_ms.write.mean_ms, file_write_minus_direct_entity_write_mean_ms: file_write_case.timing_ms.write.mean_ms - direct_entity_case.timing_ms.write.mean_ms, file_write_to_direct_entity_total_ratio: ratio, }) } fn build_flat_json_payload(props: usize) -> BenchResult> { let mut root = serde_json::Map::new(); for index in 0..props { root.insert( format!("prop_{index:05}"), serde_json::Value::String(format!("value_{index:05}")), ); } serde_json::to_vec(&serde_json::Value::Object(root)).map_err(serde_err) } fn build_plugin_json_v2_wasm(repo_root: &Path) -> BenchResult { let manifest_path = repo_root.join("packages/plugin-json-v2/Cargo.toml"); let wasm_path = repo_root.join("packages/plugin-json-v2/target/wasm32-wasip2/release/plugin_json_v2.wasm"); let build = || { Command::new("cargo") .arg("build") .arg("--manifest-path") .arg(&manifest_path) .arg("--target") .arg("wasm32-wasip2") .arg("--release") .output() .map_err(io_err) }; let output = build()?; if !output.status.success() { let stderr = String::from_utf8_lossy(&output.stderr); if stderr.contains("wasm32-wasip2") && (stderr.contains("target may not be installed") || stderr.contains("can't find crate for `core`")) { let rustup = Command::new("rustup") .arg("target") .arg("add") .arg("wasm32-wasip2") .output() .map_err(io_err)?; if !rustup.status.success() { return Err(format!( "rustup target add wasm32-wasip2 failed:\n{}", String::from_utf8_lossy(&rustup.stderr) )); } let retry = build()?; if !retry.status.success() { return Err(format!( "cargo build for plugin_json_v2 failed after installing wasm32-wasip2:\n{}", String::from_utf8_lossy(&retry.stderr) )); } } else { return Err(format!( "cargo build for plugin_json_v2 failed:\n{}", String::from_utf8_lossy(&output.stderr) )); } } if !wasm_path.exists() { return Err(format!( "plugin wasm build succeeded but output was missing at 
{}", wasm_path.display() )); } Ok(wasm_path) } fn render_markdown_report(report: &Report) -> String { let case_sections = report .cases .iter() .map(render_case_markdown) .collect::>() .join("\n\n"); format!( "# 10k Entities Benchmark Comparison\n\n\ - Props: {}\n\ - Input bytes: {}\n\ - Direct property rows inside timed direct-write case: {}\n\ - Expected committed json_pointer rows after each case: {}\n\ - Plugin key: `{}`\n\ - Schema key: `{}`\n\ - SQLite mode: `{}`\n\ - Plugin wasm: `{}`\n\n\ ## Comparison\n\n\ | metric | file write | direct entities | delta |\n\ | --- | ---: | ---: | ---: |\n\ | write mean ms | {:.3} | {:.3} | {:.3} |\n\ | commit mean ms | {:.3} | {:.3} | {:.3} |\n\ | total mean ms | {:.3} | {:.3} | {:.3} |\n\ | total ratio | {:.3}x | 1.000x | {:.3}x |\n\n\ {}\n", report.shared_setup.props, report.shared_setup.input_bytes, report.shared_setup.direct_property_rows, report.shared_setup.expected_state_rows_after_commit, report.shared_setup.plugin_key, report.shared_setup.schema_key, report.shared_setup.sqlite_mode, report.shared_setup.plugin_wasm_path, report.comparison.file_write_write_mean_ms, report.comparison.direct_entity_write_mean_ms, report .comparison .file_write_minus_direct_entity_write_mean_ms, report.comparison.file_write_commit_mean_ms, report.comparison.direct_entity_commit_mean_ms, report .comparison .file_write_minus_direct_entity_commit_mean_ms, report.comparison.file_write_total_mean_ms, report.comparison.direct_entity_total_mean_ms, report .comparison .file_write_minus_direct_entity_total_mean_ms, report.comparison.file_write_to_direct_entity_total_ratio, report.comparison.file_write_to_direct_entity_total_ratio, case_sections, ) } fn render_case_markdown(case: &CaseReport) -> String { let sample_rows = case .samples .iter() .map(|sample| { format!( "| {} | {:.3} | {:.3} | {:.3} | {} | {} |", sample.index, sample.write_ms, sample.commit_ms, sample.total_ms, sample.committed_state_rows, sample.file_matches_expected ) }) .collect::>() .join("\n"); let notes = case .notes .iter() .map(|note| format!("- {note}")) .collect::>() .join("\n"); let setup_notes = case .setup .setup_outside_timer .iter() .map(|note| format!("- {note}")) .collect::>() .join("\n"); format!( "## {}\n\n\ Timed operation: {}\n\n\ {}\n\n\ Setup outside timer:\n\ {}\n\n\ - Timed rows: {}\n\ - Timed SQL: `{}`\n\ - Verification: {}\n\n\ ### Timing\n\n\ | phase | mean ms | median ms | min ms | max ms |\n\ | --- | ---: | ---: | ---: | ---: |\n\ | write | {:.3} | {:.3} | {:.3} | {:.3} |\n\ | commit | {:.3} | {:.3} | {:.3} | {:.3} |\n\ | total | {:.3} | {:.3} | {:.3} | {:.3} |\n\n\ ### Samples\n\n\ | run | write ms | commit ms | total ms | committed state rows | file matches expected |\n\ | --- | ---: | ---: | ---: | ---: | --- |\n\ {}\n", case.title, case.timed_operation, notes, setup_notes, case.setup.timed_rows, case.setup.timed_sql, case.setup.verification, case.timing_ms.write.mean_ms, case.timing_ms.write.median_ms, case.timing_ms.write.min_ms, case.timing_ms.write.max_ms, case.timing_ms.commit.mean_ms, case.timing_ms.commit.median_ms, case.timing_ms.commit.min_ms, case.timing_ms.commit.max_ms, case.timing_ms.total.mean_ms, case.timing_ms.total.median_ms, case.timing_ms.total.min_ms, case.timing_ms.total.max_ms, sample_rows, ) } fn print_summary(report: &Report, report_json_path: &Path, report_markdown_path: &Path) { println!("10k entities benchmark comparison"); println!( "props={} input_bytes={} expected_state_rows_after_commit={}", report.shared_setup.props, 
report.shared_setup.input_bytes, report.shared_setup.expected_state_rows_after_commit ); for case in &report.cases { println!("case={} title={}", case.case_id, case.title); println!( "write_ms mean={:.3} median={:.3} min={:.3} max={:.3}", case.timing_ms.write.mean_ms, case.timing_ms.write.median_ms, case.timing_ms.write.min_ms, case.timing_ms.write.max_ms, ); println!( "commit_ms mean={:.3} median={:.3} min={:.3} max={:.3}", case.timing_ms.commit.mean_ms, case.timing_ms.commit.median_ms, case.timing_ms.commit.min_ms, case.timing_ms.commit.max_ms, ); println!( "total_ms mean={:.3} median={:.3} min={:.3} max={:.3} samples={}", case.timing_ms.total.mean_ms, case.timing_ms.total.median_ms, case.timing_ms.total.min_ms, case.timing_ms.total.max_ms, case.timing_ms.sample_count, ); } println!( "comparison total_mean_delta_ms={:.3} total_ratio={:.3}x", report .comparison .file_write_minus_direct_entity_total_mean_ms, report.comparison.file_write_to_direct_entity_total_ratio, ); println!("report_json={}", report_json_path.display()); println!("report_markdown={}", report_markdown_path.display()); } fn repo_root() -> BenchResult { Path::new(env!("CARGO_MANIFEST_DIR")) .join("../..") .canonicalize() .map_err(io_err) } fn temp_sqlite_path(label: &str) -> PathBuf { let nanos = SystemTime::now() .duration_since(UNIX_EPOCH) .expect("system time should be after unix epoch") .as_nanos(); std::env::temp_dir().join(format!("lix-{label}-{nanos}.sqlite")) } fn now_unix_ms() -> BenchResult { Ok(SystemTime::now() .duration_since(UNIX_EPOCH) .map_err(io_err)? .as_millis()) } fn escape_sql_string(value: &str) -> String { value.replace('\'', "''") } fn escape_json_pointer_segment(segment: &str) -> String { segment.replace('~', "~0").replace('/', "~1") } fn io_err(error: impl std::fmt::Display) -> String { error.to_string() } fn serde_err(error: impl std::fmt::Display) -> String { error.to_string() } fn lix_err(error: LixError) -> String { format!("{}: {}", error.code, error.description) } ================================================ FILE: benchmarks/10k-entities/src/sqlite_backend.rs ================================================ use std::path::Path; use std::str::FromStr; use std::sync::Arc; use lix_engine::{ collapse_prepared_batch_for_dialect, LixBackend, LixBackendTransaction, LixError, PreparedBatch, QueryResult, SqlDialect, TransactionMode, Value, }; use sqlx::sqlite::{SqliteConnectOptions, SqlitePoolOptions}; use sqlx::{Column, Executor, Row, TypeInfo, ValueRef}; use tokio::sync::OnceCell; #[derive(Clone)] pub struct BenchSqliteBackend { inner: Arc, } struct BenchSqliteBackendInner { filename: String, pool: OnceCell, } struct BenchSqliteTransaction { conn: sqlx::pool::PoolConnection, mode: TransactionMode, } impl BenchSqliteBackend { pub fn file_backed(path: &Path) -> Result { if let Some(parent) = path.parent() { std::fs::create_dir_all(parent).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: format!( "failed to create sqlite benchmark directory {}: {error}", parent.display() ), hint: None, })?; } Ok(Self { inner: Arc::new(BenchSqliteBackendInner { filename: path.display().to_string(), pool: OnceCell::const_new(), }), }) } async fn pool(&self) -> Result<&sqlx::SqlitePool, LixError> { self.inner .pool .get_or_try_init(|| async { let conn = if self.inner.filename == ":memory:" { "sqlite::memory:".to_string() } else if self.inner.filename.starts_with("sqlite:") || self.inner.filename.starts_with("file:") { self.inner.filename.clone() } else { format!("sqlite://{}", 
self.inner.filename) }; let options = SqliteConnectOptions::from_str(&conn) .map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: error.to_string(), hint: None, })? .create_if_missing(true) .foreign_keys(true) .busy_timeout(std::time::Duration::from_secs(30)); SqlitePoolOptions::new() .max_connections(1) .connect_with(options) .await .map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: error.to_string(), hint: None, }) }) .await } } #[async_trait::async_trait(?Send)] impl LixBackend for BenchSqliteBackend { fn dialect(&self) -> SqlDialect { SqlDialect::Sqlite } async fn execute(&self, sql: &str, params: &[Value]) -> Result { let mut transaction = self.begin_transaction(TransactionMode::Deferred).await?; let result = transaction.execute(sql, params).await; match result { Ok(result) => { transaction.commit().await?; Ok(result) } Err(error) => { let _ = transaction.rollback().await; Err(error) } } } async fn begin_transaction( &self, mode: TransactionMode, ) -> Result, LixError> { let pool = self.pool().await?; let mut conn = pool.acquire().await.map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: error.to_string(), hint: None, })?; sqlx::query(match mode { TransactionMode::Read | TransactionMode::Deferred => "BEGIN", TransactionMode::Write => "BEGIN IMMEDIATE", }) .execute(&mut *conn) .await .map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: error.to_string(), hint: None, })?; Ok(Box::new(BenchSqliteTransaction { conn, mode })) } async fn begin_savepoint( &self, _name: &str, ) -> Result, LixError> { self.begin_transaction(TransactionMode::Write).await } } #[async_trait::async_trait(?Send)] impl LixBackendTransaction for BenchSqliteTransaction { fn dialect(&self) -> SqlDialect { SqlDialect::Sqlite } fn mode(&self) -> TransactionMode { self.mode } async fn execute(&mut self, sql: &str, params: &[Value]) -> Result { execute_query_with_connection(&mut self.conn, sql, params).await } async fn execute_batch(&mut self, batch: &PreparedBatch) -> Result { let collapsed = collapse_prepared_batch_for_dialect(batch, self.dialect())?; if collapsed.sql.trim().is_empty() { return Ok(QueryResult { rows: Vec::new(), columns: Vec::new(), }); } self.conn .execute(collapsed.sql.as_str()) .await .map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: error.to_string(), hint: None, })?; Ok(QueryResult { rows: Vec::new(), columns: Vec::new(), }) } async fn commit(mut self: Box) -> Result<(), LixError> { sqlx::query("COMMIT") .execute(&mut *self.conn) .await .map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: error.to_string(), hint: None, })?; Ok(()) } async fn rollback(mut self: Box) -> Result<(), LixError> { sqlx::query("ROLLBACK") .execute(&mut *self.conn) .await .map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: error.to_string(), hint: None, })?; Ok(()) } } async fn execute_query_with_connection( conn: &mut sqlx::pool::PoolConnection, sql: &str, params: &[Value], ) -> Result { let mut query = sqlx::query(sql); for param in params { query = bind_param_sqlite(query, param); } let rows = query .fetch_all(&mut **conn) .await .map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: error.to_string(), hint: None, })?; let columns = rows .first() .map(|row| { row.columns() .iter() .map(|column| column.name().to_string()) .collect::>() }) .unwrap_or_default(); let mut result_rows = 
Vec::with_capacity(rows.len()); for row in rows { let mut out = Vec::with_capacity(row.columns().len()); for index in 0..row.columns().len() { out.push(map_sqlite_value(&row, index)?); } result_rows.push(out); } Ok(QueryResult { rows: result_rows, columns, }) } fn bind_param_sqlite<'q>( query: sqlx::query::Query<'q, sqlx::Sqlite, sqlx::sqlite::SqliteArguments<'q>>, param: &Value, ) -> sqlx::query::Query<'q, sqlx::Sqlite, sqlx::sqlite::SqliteArguments<'q>> { match param { Value::Null => query.bind::>(None), Value::Boolean(value) => query.bind(*value), Value::Integer(value) => query.bind(*value), Value::Real(value) => query.bind(*value), Value::Text(value) => query.bind(value.clone()), Value::Blob(value) => query.bind(value.clone()), Value::Json(value) => query.bind(value.to_string()), } } fn map_sqlite_value(row: &sqlx::sqlite::SqliteRow, index: usize) -> Result { let raw = row.try_get_raw(index).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: error.to_string(), hint: None, })?; if raw.is_null() { return Ok(Value::Null); } match raw.type_info().name() { "INTEGER" => row.try_get::(index).map(Value::Integer), "REAL" => row.try_get::(index).map(Value::Real), "TEXT" => row.try_get::(index).map(Value::Text), "BLOB" => row.try_get::, _>(index).map(Value::Blob), _ => row .try_get::(index) .map(Value::Text) .or_else(|_| row.try_get::(index).map(Value::Integer)) .or_else(|_| row.try_get::(index).map(Value::Real)) .or_else(|_| row.try_get::, _>(index).map(Value::Blob)), } .map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: error.to_string(), hint: None, }) } ================================================ FILE: benchmarks/10k-entities/src/wasmtime_runtime.rs ================================================ use std::collections::HashMap; use std::hash::{DefaultHasher, Hash, Hasher}; use std::sync::{Arc, Mutex}; use async_trait::async_trait; use lix_engine::wasm::{WasmComponentInstance, WasmLimits, WasmRuntime}; use lix_engine::{CanonicalJson, LixError}; use wasmtime::component::{Component, Linker, ResourceTable}; use wasmtime::{Config, Engine, Store}; use wasmtime_wasi::{IoView, WasiCtx, WasiCtxBuilder, WasiView}; mod plugin_bindings { wasmtime::component::bindgen!({ path: "../../packages/engine/wit", world: "plugin", }); } #[derive(Debug, serde::Deserialize)] struct WirePluginFile { id: String, path: String, data: Vec, } #[derive(Debug, serde::Deserialize)] struct WireDetectChangesRequest { before: Option, after: WirePluginFile, state_context: Option, } #[derive(Debug, serde::Deserialize)] struct WireDetectStateContext { active_state: Option>, } #[derive(Debug, serde::Deserialize)] struct WireActiveStateRow { entity_id: String, schema_key: Option, snapshot_content: Option, file_id: Option, plugin_key: Option, version_id: Option, change_id: Option, metadata: Option, created_at: Option, updated_at: Option, } #[derive(Debug, serde::Deserialize)] struct WirePluginEntityChange { entity_id: String, schema_key: String, snapshot_content: Option, } #[derive(Debug, serde::Deserialize)] struct WireApplyChangesRequest { file: WirePluginFile, changes: Vec, } #[derive(Debug, serde::Serialize)] struct WirePluginEntityChangeOutput { entity_id: String, schema_key: String, snapshot_content: Option, } pub struct TestWasmtimeRuntime { engine: Engine, component_cache: Mutex>>, } impl TestWasmtimeRuntime { pub fn new() -> Result { let mut config = Config::new(); config.wasm_component_model(true); config.async_support(false); config.consume_fuel(true); let engine 
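// Descriptive note (added): `consume_fuel(true)` above means every Store created in
// `call()` must receive a fuel budget before an export is invoked; this runtime sets
// it to `u64::MAX`, so fuel accounting is enabled but effectively unlimited.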
= Engine::new(&config).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: format!("Failed to initialize wasmtime engine: {error}"), hint: None, })?; Ok(Self { engine, component_cache: Mutex::new(HashMap::new()), }) } } #[derive(Clone, PartialEq, Eq, Hash)] struct ComponentCacheKey { wasm_fingerprint: u64, wasm_len: usize, } impl ComponentCacheKey { fn from_bytes(bytes: &[u8]) -> Self { Self { wasm_fingerprint: wasm_fingerprint(bytes), wasm_len: bytes.len(), } } } struct TestWasmtimeInstance { engine: Engine, component: Arc, } struct WasiState { table: ResourceTable, ctx: WasiCtx, } impl IoView for WasiState { fn table(&mut self) -> &mut ResourceTable { &mut self.table } } impl WasiView for WasiState { fn ctx(&mut self) -> &mut WasiCtx { &mut self.ctx } } #[async_trait(?Send)] impl WasmRuntime for TestWasmtimeRuntime { async fn init_component( &self, bytes: Vec, _limits: WasmLimits, ) -> Result, LixError> { let cache_key = ComponentCacheKey::from_bytes(&bytes); if let Some(component) = self .component_cache .lock() .expect("component cache mutex poisoned") .get(&cache_key) .cloned() { return Ok(Arc::new(TestWasmtimeInstance { engine: self.engine.clone(), component, })); } let compiled = Arc::new( Component::new(&self.engine, &bytes).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: format!("Failed to compile wasm component: {error}"), hint: None, })?, ); let component = { let mut cache = self .component_cache .lock() .expect("component cache mutex poisoned"); cache .entry(cache_key) .or_insert_with(|| compiled.clone()) .clone() }; Ok(Arc::new(TestWasmtimeInstance { engine: self.engine.clone(), component, })) } } #[async_trait(?Send)] impl WasmComponentInstance for TestWasmtimeInstance { async fn call(&self, export: &str, input: &[u8]) -> Result, LixError> { let mut store = Store::new( &self.engine, WasiState { table: ResourceTable::new(), ctx: WasiCtxBuilder::new().build(), }, ); store.set_fuel(u64::MAX).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: format!("Failed to configure wasm fuel: {error}"), hint: None, })?; let mut linker = Linker::new(&self.engine); wasmtime_wasi::add_to_linker_sync(&mut linker).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: format!("Failed to add wasi imports to linker: {error}"), hint: None, })?; let bindings = plugin_bindings::Plugin::instantiate(&mut store, self.component.as_ref(), &linker) .map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: format!("Failed to instantiate wasm component: {error}"), hint: None, })?; match export { "detect-changes" | "api#detect-changes" => { let request: WireDetectChangesRequest = serde_json::from_slice(input).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: format!( "Failed to decode detect-changes request payload: {error}" ), hint: None, })?; let before = request.before.map(wire_file_to_binding); let after = wire_file_to_binding(request.after); let state_context = request.state_context.map(wire_state_context_to_binding); let result = bindings .lix_plugin_api() .call_detect_changes( &mut store, before.as_ref(), &after, state_context.as_ref(), ) .map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: format!("Wasm call failed for export '{export}': {error}"), hint: None, })?; match result { Ok(changes) => { let wire = changes .into_iter() .map(binding_change_to_wire) .collect::, _>>()?; serde_json::to_vec(&wire).map_err(|error| 
LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: format!( "Failed to encode detect-changes response payload: {error}" ), hint: None, }) } Err(error) => Err(map_plugin_error(error)), } } "apply-changes" | "api#apply-changes" => { let request: WireApplyChangesRequest = serde_json::from_slice(input).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: format!( "Failed to decode apply-changes request payload: {error}" ), hint: None, })?; let file = wire_file_to_binding(request.file); let changes = request .changes .into_iter() .map(wire_change_to_binding) .collect::>(); let result = bindings .lix_plugin_api() .call_apply_changes(&mut store, &file, &changes) .map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: format!("Wasm call failed for export '{export}': {error}"), hint: None, })?; match result { Ok(output) => Ok(output), Err(error) => Err(map_plugin_error(error)), } } other => Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: format!("Unsupported export '{other}' for TestWasmtimeRuntime"), hint: None, }), } } } fn wasm_fingerprint(bytes: &[u8]) -> u64 { let mut hasher = DefaultHasher::new(); bytes.hash(&mut hasher); hasher.finish() } fn wire_file_to_binding(file: WirePluginFile) -> plugin_bindings::exports::lix::plugin::api::File { plugin_bindings::exports::lix::plugin::api::File { id: file.id, path: file.path, data: file.data, } } fn wire_change_to_binding( change: WirePluginEntityChange, ) -> plugin_bindings::exports::lix::plugin::api::EntityChange { plugin_bindings::exports::lix::plugin::api::EntityChange { entity_id: change.entity_id, schema_key: change.schema_key, snapshot_content: change.snapshot_content.map(Into::into), } } fn wire_state_context_to_binding( context: WireDetectStateContext, ) -> plugin_bindings::exports::lix::plugin::api::DetectStateContext { plugin_bindings::exports::lix::plugin::api::DetectStateContext { active_state: context.active_state.map(|rows| { rows.into_iter() .map(wire_active_state_row_to_binding) .collect::>() }), } } fn wire_active_state_row_to_binding( row: WireActiveStateRow, ) -> plugin_bindings::exports::lix::plugin::api::ActiveStateRow { plugin_bindings::exports::lix::plugin::api::ActiveStateRow { entity_id: row.entity_id, schema_key: row.schema_key, snapshot_content: row.snapshot_content.map(Into::into), file_id: row.file_id, plugin_key: row.plugin_key, version_id: row.version_id, change_id: row.change_id, metadata: row.metadata.map(Into::into), created_at: row.created_at, updated_at: row.updated_at, } } fn binding_change_to_wire( change: plugin_bindings::exports::lix::plugin::api::EntityChange, ) -> Result { Ok(WirePluginEntityChangeOutput { entity_id: change.entity_id, schema_key: change.schema_key, snapshot_content: change .snapshot_content .map(CanonicalJson::from_text) .transpose()?, }) } fn map_plugin_error(error: plugin_bindings::exports::lix::plugin::api::PluginError) -> LixError { match error { plugin_bindings::exports::lix::plugin::api::PluginError::InvalidInput(message) => { LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: format!("Plugin invalid-input error: {message}"), hint: None, } } plugin_bindings::exports::lix::plugin::api::PluginError::Internal(message) => LixError { code: "LIX_ERROR_UNKNOWN".to_string(), description: format!("Plugin internal error: {message}"), hint: None, }, } } ================================================ FILE: benchmarks/engine2-json-pointer/Cargo.toml ================================================ 
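# Standalone benchmark crate (publish = false); run it with
# `cargo run --release -p engine2_json_pointer_benchmark` as shown in the README below.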
[package] name = "engine2_json_pointer_benchmark" version = "0.1.0" edition = "2021" publish = false [dependencies] async-trait = "0.1" clap = { version = "4.5.31", features = ["derive"] } lix_rs_sdk = { path = "../../packages/rs-sdk" } rusqlite = { version = "0.32", features = ["bundled"] } serde = { version = "1", features = ["derive"] } serde_json = "1" tokio = { version = "1", features = ["rt"] } ================================================ FILE: benchmarks/engine2-json-pointer/README.md ================================================ # Engine2 JSON Pointer Benchmark This benchmark exercises engine2 end to end on a fresh on-disk SQLite-backed KV store. The first case measures direct insertion of `json_pointer` semantic rows through `lix_state`: - initialize engine2 storage - open the generated main version - register `packages/plugin-json-v2/schema/json_pointer.json` - insert `N` JSON pointer rows in chunked SQL statements - verify the committed row count through the normal SQL surface ## Usage ```bash cargo run --release -p engine2_json_pointer_benchmark -- \ --rows 10000 \ --warmups 1 \ --iterations 5 \ --output-dir artifact/benchmarks/engine2-json-pointer ``` Fast CI smoke: ```bash cargo run --release -p engine2_json_pointer_benchmark -- \ --rows 10000 \ --warmups 0 \ --iterations 1 \ --output-dir artifact/benchmarks/engine2-json-pointer ``` ================================================ FILE: benchmarks/engine2-json-pointer/src/main.rs ================================================ use clap::Parser; use lix_rs_sdk::{open_lix, ExecuteResult, Lix, LixError, OpenLixOptions, Value}; use serde::Serialize; use std::fs; use std::path::PathBuf; use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH}; use tokio::runtime::Builder; mod sqlite_backend; use sqlite_backend::Engine2SqliteBackend; const DEFAULT_OUTPUT_DIR: &str = "artifact/benchmarks/engine2-json-pointer"; const DEFAULT_ROWS: usize = 10_000; const DEFAULT_WARMUPS: usize = 1; const DEFAULT_ITERATIONS: usize = 5; const DEFAULT_CHUNK_SIZE: usize = 500; const JSON_POINTER_SCHEMA_JSON: &str = include_str!("../../../packages/plugin-json-v2/schema/json_pointer.json"); type BenchResult = Result; #[derive(Parser, Debug)] #[command( name = "engine2-json-pointer-benchmark", about = "Benchmark engine2 json_pointer writes on an on-disk SQLite KV backend" )] struct Args { #[arg(long, default_value_t = DEFAULT_ROWS)] rows: usize, #[arg(long, default_value_t = DEFAULT_WARMUPS)] warmups: usize, #[arg(long, default_value_t = DEFAULT_ITERATIONS)] iterations: usize, #[arg(long, default_value_t = DEFAULT_CHUNK_SIZE)] chunk_size: usize, #[arg(long, default_value = DEFAULT_OUTPUT_DIR)] output_dir: PathBuf, #[arg(long)] keep_databases: bool, } #[derive(Debug, Serialize)] struct Report { generated_at_unix_ms: u128, benchmark: &'static str, rows: usize, chunk_size: usize, warmups: Vec, samples: Vec, timing_ms: TimingSummary, } #[derive(Debug, Clone, Serialize)] struct RunSample { index: usize, sqlite_path: String, insert_ms: f64, verify_ms: f64, total_ms: f64, committed_rows: usize, } #[derive(Debug, Serialize)] struct TimingSummary { sample_count: usize, insert: PhaseSummary, verify: PhaseSummary, total: PhaseSummary, } #[derive(Debug, Serialize)] struct PhaseSummary { mean_ms: f64, median_ms: f64, min_ms: f64, max_ms: f64, } fn main() { if let Err(error) = run() { eprintln!("{error}"); std::process::exit(1); } } fn run() -> BenchResult<()> { let args = Args::parse(); fs::create_dir_all(&args.output_dir).map_err(|error| { format!( "failed to 
create output directory {}: {error}", args.output_dir.display() ) })?; let runtime = Builder::new_current_thread() .enable_all() .build() .map_err(|error| format!("failed to create tokio runtime: {error}"))?; let mut warmups = Vec::new(); for index in 0..args.warmups { warmups.push(runtime.block_on(run_insert_case(&args, "warmup", index))?); } let mut samples = Vec::new(); for index in 0..args.iterations { samples.push(runtime.block_on(run_insert_case(&args, "sample", index))?); } let report = Report { generated_at_unix_ms: unix_ms(), benchmark: "engine2_json_pointer_insert", rows: args.rows, chunk_size: args.chunk_size, timing_ms: summarize_samples(&samples), warmups, samples, }; let json_path = args.output_dir.join("report.json"); let md_path = args.output_dir.join("report.md"); fs::write( &json_path, serde_json::to_string_pretty(&report) .map_err(|error| format!("failed to serialize report: {error}"))?, ) .map_err(|error| format!("failed to write {}: {error}", json_path.display()))?; fs::write(&md_path, render_markdown_report(&report)) .map_err(|error| format!("failed to write {}: {error}", md_path.display()))?; println!("wrote {}", json_path.display()); println!("wrote {}", md_path.display()); println!( "insert_{}: mean {:.2}ms, median {:.2}ms", args.rows, report.timing_ms.insert.mean_ms, report.timing_ms.insert.median_ms ); Ok(()) } async fn run_insert_case(args: &Args, label: &str, index: usize) -> BenchResult { let db_path = args .output_dir .join(format!("{label}-{index}-{}.sqlite", std::process::id())); let cleanup = CleanupDatabase { path: db_path.clone(), keep: args.keep_databases, }; cleanup.remove_existing()?; let backend = Engine2SqliteBackend::file_backed(&db_path).map_err(display_lix_error)?; let lix = open_lix(OpenLixOptions { backend: Some(Box::new(backend)), }) .await .map_err(display_lix_error)?; ensure_benchmark_file_descriptor(&lix).await?; register_json_pointer_schema(&lix).await?; let started = Instant::now(); let insert_started = Instant::now(); for sql in build_insert_batches(args.rows, args.chunk_size)? { let result = lix.execute(&sql, &[]).await.map_err(display_lix_error)?; let ExecuteResult::AffectedRows(affected_rows) = result else { return Err("json pointer insert should return affected rows".to_string()); }; if affected_rows == 0 { return Err("json pointer insert unexpectedly affected zero rows".to_string()); } } let insert_elapsed = insert_started.elapsed(); let verify_started = Instant::now(); let committed_rows = count_json_pointer_rows(&lix).await?; let verify_elapsed = verify_started.elapsed(); if committed_rows != args.rows { return Err(format!( "committed json_pointer row count mismatch: expected {}, got {committed_rows}", args.rows )); } let total_elapsed = started.elapsed(); let sample = RunSample { index, sqlite_path: db_path.display().to_string(), insert_ms: millis(insert_elapsed), verify_ms: millis(verify_elapsed), total_ms: millis(total_elapsed), committed_rows, }; drop(cleanup); Ok(sample) } async fn register_json_pointer_schema(lix: &Lix) -> BenchResult<()> { let schema = sql_string(JSON_POINTER_SCHEMA_JSON); let sql = format!( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (lix_json('{schema}'), true, true)" ); match lix.execute(&sql, &[]).await.map_err(display_lix_error)? 
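// Descriptive note (added): registering the schema must report exactly one affected
// row; any other result aborts the benchmark run with an error.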
{ ExecuteResult::AffectedRows(1) => Ok(()), other => Err(format!( "schema registration returned unexpected result: {other:?}" )), } } async fn ensure_benchmark_file_descriptor(lix: &Lix) -> BenchResult<()> { let snapshot = serde_json::json!({ "id": "bench.json", "directory_id": null, "name": "bench", "extension": "json", "hidden": false }); let sql = format!( "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, global, untracked\ ) VALUES (\ 'bench.json', 'lix_file_descriptor', NULL, lix_json('{}'), false, false\ )", sql_string(&snapshot.to_string()) ); match lix.execute(&sql, &[]).await.map_err(display_lix_error)? { ExecuteResult::AffectedRows(1) => Ok(()), other => Err(format!( "file descriptor insert returned unexpected result: {other:?}" )), } } fn build_insert_batches(row_count: usize, chunk_size: usize) -> BenchResult> { if chunk_size == 0 { return Err("chunk_size must be greater than zero".to_string()); } let mut batches = Vec::new(); let mut next = 0; while next < row_count { let end = (next + chunk_size).min(row_count); let mut sql = String::from( "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, global, untracked\ ) VALUES ", ); for index in next..end { if index > next { sql.push(','); } let pointer = format!("/prop_{index}"); let snapshot = serde_json::json!({ "path": pointer, "value": { "index": index, "label": format!("value-{index}") } }); sql.push_str(&format!( "('{}','json_pointer','bench.json',lix_json('{}'),false,false)", sql_string(&pointer), sql_string(&snapshot.to_string()) )); } batches.push(sql); next = end; } Ok(batches) } async fn count_json_pointer_rows(lix: &Lix) -> BenchResult { let result = lix .execute( "SELECT COUNT(*) \ FROM lix_state \ WHERE schema_key = 'json_pointer' \ AND file_id = 'bench.json' \ AND snapshot_content IS NOT NULL", &[], ) .await .map_err(display_lix_error)?; let ExecuteResult::Rows(rows) = result else { return Err("COUNT query should return rows".to_string()); }; let Some(row) = rows.rows().first() else { return Err("COUNT query returned no rows".to_string()); }; match row.values().first() { Some(Value::Integer(value)) => { usize::try_from(*value).map_err(|_| format!("COUNT returned negative value: {value}")) } other => Err(format!("COUNT returned unexpected value: {other:?}")), } } fn summarize_samples(samples: &[RunSample]) -> TimingSummary { TimingSummary { sample_count: samples.len(), insert: summarize_phase(samples.iter().map(|sample| sample.insert_ms).collect()), verify: summarize_phase(samples.iter().map(|sample| sample.verify_ms).collect()), total: summarize_phase(samples.iter().map(|sample| sample.total_ms).collect()), } } fn summarize_phase(mut values: Vec) -> PhaseSummary { if values.is_empty() { return PhaseSummary { mean_ms: 0.0, median_ms: 0.0, min_ms: 0.0, max_ms: 0.0, }; } values.sort_by(|left, right| left.total_cmp(right)); let sum = values.iter().sum::(); let midpoint = values.len() / 2; let median = if values.len() % 2 == 0 { (values[midpoint - 1] + values[midpoint]) / 2.0 } else { values[midpoint] }; PhaseSummary { mean_ms: sum / values.len() as f64, median_ms: median, min_ms: values[0], max_ms: values[values.len() - 1], } } fn render_markdown_report(report: &Report) -> String { format!( "# Engine2 JSON Pointer Benchmark\n\n\ - Rows: `{}`\n\ - Chunk size: `{}`\n\ - Samples: `{}`\n\n\ | Phase | Mean ms | Median ms | Min ms | Max ms |\n\ | --- | ---: | ---: | ---: | ---: |\n\ | Insert | {:.2} | {:.2} | {:.2} | {:.2} |\n\ | Verify | {:.2} | {:.2} | {:.2} | {:.2} |\n\ | 
Total | {:.2} | {:.2} | {:.2} | {:.2} |\n", report.rows, report.chunk_size, report.timing_ms.sample_count, report.timing_ms.insert.mean_ms, report.timing_ms.insert.median_ms, report.timing_ms.insert.min_ms, report.timing_ms.insert.max_ms, report.timing_ms.verify.mean_ms, report.timing_ms.verify.median_ms, report.timing_ms.verify.min_ms, report.timing_ms.verify.max_ms, report.timing_ms.total.mean_ms, report.timing_ms.total.median_ms, report.timing_ms.total.min_ms, report.timing_ms.total.max_ms, ) } fn sql_string(value: &str) -> String { value.replace('\'', "''") } fn display_lix_error(error: LixError) -> String { format!("{}: {}", error.code, error.description) } fn millis(duration: Duration) -> f64 { duration.as_secs_f64() * 1000.0 } fn unix_ms() -> u128 { SystemTime::now() .duration_since(UNIX_EPOCH) .map(|duration| duration.as_millis()) .unwrap_or_default() } struct CleanupDatabase { path: PathBuf, keep: bool, } impl CleanupDatabase { fn remove_existing(&self) -> BenchResult<()> { for path in self.paths() { if path.exists() { fs::remove_file(&path) .map_err(|error| format!("failed to remove {}: {error}", path.display()))?; } } Ok(()) } fn paths(&self) -> Vec { ["", "-wal", "-shm", "-journal"] .into_iter() .map(|suffix| PathBuf::from(format!("{}{}", self.path.display(), suffix))) .collect() } } impl Drop for CleanupDatabase { fn drop(&mut self) { if self.keep { return; } for path in self.paths() { let _ = fs::remove_file(path); } } } ================================================ FILE: benchmarks/engine2-json-pointer/src/sqlite_backend.rs ================================================ use async_trait::async_trait; use lix_rs_sdk::{ KvPair, KvScanRange, LixBackend, LixBackendTransaction, LixError, TransactionBeginMode, }; use rusqlite::{params, Connection, OptionalExtension}; use std::path::Path; use std::sync::{Arc, Mutex, MutexGuard}; const KV_TABLE: &str = "lix_engine2_kv"; #[derive(Clone)] pub struct Engine2SqliteBackend { conn: Arc>, } pub struct Engine2SqliteTransaction { conn: Arc>, finalized: bool, mode: TransactionBeginMode, } impl Engine2SqliteBackend { pub fn file_backed(path: &Path) -> Result { if let Some(parent) = path.parent() { std::fs::create_dir_all(parent).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!( "failed to create sqlite benchmark directory {}: {error}", parent.display() ), ) })?; } let conn = Connection::open(path).map_err(sqlite_error)?; configure_connection(&conn)?; ensure_kv_table(&conn)?; Ok(Self { conn: Arc::new(Mutex::new(conn)), }) } fn lock_conn(&self) -> Result, LixError> { self.conn .lock() .map_err(|_| LixError::new("LIX_ERROR_UNKNOWN", "sqlite benchmark mutex poisoned")) } } #[async_trait] impl LixBackend for Engine2SqliteBackend { async fn begin_transaction( &self, mode: TransactionBeginMode, ) -> Result, LixError> { { let conn = self.lock_conn()?; conn.execute_batch(match mode { TransactionBeginMode::Read | TransactionBeginMode::Deferred => "BEGIN TRANSACTION", TransactionBeginMode::Write => "BEGIN IMMEDIATE", }) .map_err(sqlite_error)?; } Ok(Box::new(Engine2SqliteTransaction { conn: Arc::clone(&self.conn), finalized: false, mode, })) } async fn kv_get(&self, namespace: &str, key: &[u8]) -> Result>, LixError> { let conn = self.lock_conn()?; kv_get_with_connection(&conn, namespace, key) } async fn kv_scan( &self, namespace: &str, range: KvScanRange, limit: Option, ) -> Result, LixError> { let conn = self.lock_conn()?; kv_scan_with_connection(&conn, namespace, &range, limit) } } #[async_trait] impl LixBackendTransaction for 
Engine2SqliteTransaction { fn mode(&self) -> TransactionBeginMode { self.mode } async fn kv_get(&mut self, namespace: &str, key: &[u8]) -> Result>, LixError> { let conn = self.lock_conn()?; kv_get_with_connection(&conn, namespace, key) } async fn kv_scan( &mut self, namespace: &str, range: KvScanRange, limit: Option, ) -> Result, LixError> { let conn = self.lock_conn()?; kv_scan_with_connection(&conn, namespace, &range, limit) } async fn kv_put(&mut self, namespace: &str, key: &[u8], value: &[u8]) -> Result<(), LixError> { let conn = self.lock_conn()?; conn.execute( &format!( "INSERT INTO {KV_TABLE} (namespace, key, value) VALUES (?1, ?2, ?3) \ ON CONFLICT(namespace, key) DO UPDATE SET value = excluded.value" ), params![namespace, key, value], ) .map_err(sqlite_error)?; Ok(()) } async fn kv_delete(&mut self, namespace: &str, key: &[u8]) -> Result<(), LixError> { let conn = self.lock_conn()?; conn.execute( &format!("DELETE FROM {KV_TABLE} WHERE namespace = ?1 AND key = ?2"), params![namespace, key], ) .map_err(sqlite_error)?; Ok(()) } async fn commit(mut self: Box) -> Result<(), LixError> { self.lock_conn()? .execute_batch("COMMIT") .map_err(sqlite_error)?; self.finalized = true; Ok(()) } async fn rollback(mut self: Box) -> Result<(), LixError> { self.lock_conn()? .execute_batch("ROLLBACK") .map_err(sqlite_error)?; self.finalized = true; Ok(()) } } impl Engine2SqliteTransaction { fn lock_conn(&self) -> Result, LixError> { self.conn .lock() .map_err(|_| LixError::new("LIX_ERROR_UNKNOWN", "sqlite benchmark mutex poisoned")) } } impl Drop for Engine2SqliteTransaction { fn drop(&mut self) { if self.finalized || std::thread::panicking() { return; } if let Ok(conn) = self.conn.lock() { let _ = conn.execute_batch("ROLLBACK"); } } } fn configure_connection(conn: &Connection) -> Result<(), LixError> { conn.execute_batch( "PRAGMA journal_mode = WAL;\ PRAGMA synchronous = NORMAL;\ PRAGMA temp_store = MEMORY;", ) .map_err(sqlite_error)?; Ok(()) } fn ensure_kv_table(conn: &Connection) -> Result<(), LixError> { conn.execute_batch(&format!( "CREATE TABLE IF NOT EXISTS {KV_TABLE} (\ namespace TEXT NOT NULL,\ key BLOB NOT NULL,\ value BLOB NOT NULL,\ PRIMARY KEY(namespace, key)\ ) WITHOUT ROWID" )) .map_err(sqlite_error)?; Ok(()) } fn kv_get_with_connection( conn: &Connection, namespace: &str, key: &[u8], ) -> Result>, LixError> { conn.query_row( &format!("SELECT value FROM {KV_TABLE} WHERE namespace = ?1 AND key = ?2"), params![namespace, key], |row| row.get::<_, Vec>(0), ) .optional() .map_err(sqlite_error) } fn kv_scan_with_connection( conn: &Connection, namespace: &str, range: &KvScanRange, limit: Option, ) -> Result, LixError> { let mut pairs = match range { KvScanRange::Prefix(prefix) => { let mut stmt = conn .prepare(&format!( "SELECT key, value FROM {KV_TABLE} WHERE namespace = ?1 ORDER BY key" )) .map_err(sqlite_error)?; let rows = stmt .query_map(params![namespace], |row| { Ok((row.get::<_, Vec>(0)?, row.get::<_, Vec>(1)?)) }) .map_err(sqlite_error)?; collect_matching_rows(rows, |key| key.starts_with(prefix))? } KvScanRange::Range { start, end } => { let mut stmt = conn .prepare(&format!( "SELECT key, value FROM {KV_TABLE} \ WHERE namespace = ?1 AND key >= ?2 AND key < ?3 \ ORDER BY key" )) .map_err(sqlite_error)?; let rows = stmt .query_map(params![namespace, start, end], |row| { Ok((row.get::<_, Vec>(0)?, row.get::<_, Vec>(1)?)) }) .map_err(sqlite_error)?; collect_matching_rows(rows, |_| true)? 
} }; if let Some(limit) = limit { pairs.truncate(limit); } Ok(pairs) } fn collect_matching_rows( rows: rusqlite::MappedRows< '_, impl FnMut(&rusqlite::Row<'_>) -> rusqlite::Result<(Vec, Vec)>, >, mut matches: F, ) -> Result, LixError> where F: FnMut(&[u8]) -> bool, { let mut pairs = Vec::new(); for row in rows { let (key, value) = row.map_err(sqlite_error)?; if matches(&key) { pairs.push(KvPair::new(key, value)); } } Ok(pairs) } fn sqlite_error(error: rusqlite::Error) -> LixError { LixError::new( "LIX_ERROR_UNKNOWN", format!("sqlite benchmark error: {error}"), ) } ================================================ FILE: benchmarks/git-compare/Cargo.toml ================================================ [package] name = "git_compare_benchmark" version = "0.1.0" edition = "2021" publish = false [dependencies] clap = { version = "4.5.31", features = ["derive"] } lix_engine = { path = "../../packages/engine" } lix_rs_sdk = { path = "../../packages/rs-sdk" } pollster = "0.4" serde = { version = "1", features = ["derive"] } serde_json = "1" ================================================ FILE: benchmarks/git-compare/README.md ================================================ # Git Compare Benchmark This benchmark answers a narrower question than `exp git-replay`: - a repo already exists - a user changes files - the user finalizes one commit - how long do `write` and `commit` take for Git vs Lix? It cuts replay noise by: - selecting real first-parent commits from a production repo as workloads - building Git and Lix parent-state templates outside the timed section - timing only `apply workload` and `finalize commit` - interleaving Git and Lix runs - verifying the final Git tree and final Lix `lix_file` state after each trial ## What It Measures For each selected workload commit: - `write_ms` - Git: apply the commit's file mutations into a clean checkout - Lix: apply equivalent `lix_file` mutations inside an open transaction - `commit_ms` - Git: `git add -A` + `git commit` - Lix: `COMMIT` - `total_ms` - end-to-end write + commit ## Usage ```bash cargo run --release -p git_compare_benchmark -- \ --repo-path /Users/samuel/git-repos/paraglide-js \ --output-dir artifact/benchmarks/git-compare/paraglide-js \ --max-workloads 5 \ --runs 5 \ --warmups 1 \ --force ``` With the benchmark-tuned SQLite settings: ```bash cargo run --release -p git_compare_benchmark -- \ --repo-path /Users/samuel/git-repos/paraglide-js \ --output-dir artifact/benchmarks/git-compare/paraglide-js-tuned \ --sqlite-benchmark-tuned \ --max-workloads 5 \ --runs 5 \ --warmups 1 \ --force ``` Reports are written to: - `report.json` - `report.md` inside the chosen output directory. ## Notes - The current seed mode is hybrid on purpose: - Git uses a local parent checkout so the baseline tree is exact. - Lix seeds a fresh DB from the parent tree snapshot outside the timer. - Lix path seeding percent-encodes Git path characters that `lix_file` does not currently accept raw, so the benchmark still exercises the same file set even when the repo contains paths like `+layout.svelte` or `[locale]`. - Workloads are filtered to regular-file content changes. Mode-only or symlink-heavy commits are skipped because `lix_file` currently benchmarks `path + data`, not full Git file mode semantics. 
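## Reading the Report

If you only need the headline numbers, `report.json` can be queried directly instead of opening `report.md`. A minimal sketch, assuming `jq` is installed and the output directory from the usage example above; the field paths mirror the `Report` and `OverallReport` structs serialized by `src/main.rs`:

```bash
# Overall medians, matching the "Overall Median" table in report.md
jq '{git_total_p50_ms: .overall.git.total_ms.p50_ms,
     lix_total_p50_ms: .overall.lix.total_ms.p50_ms,
     lix_pct_less_time: .overall.total_pct_less_time_for_lix}' \
  artifact/benchmarks/git-compare/paraglide-js/report.json

# Per-workload median totals
jq -r '.workloads[] | "\(.commit_sha[0:12])  git=\(.git.total_ms.p50_ms)ms  lix=\(.lix.total_ms.p50_ms)ms"' \
  artifact/benchmarks/git-compare/paraglide-js/report.json
```

The same numbers appear in the overall and per-workload tables of `report.md`.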
================================================ FILE: benchmarks/git-compare/src/main.rs ================================================ use clap::Parser; use lix_engine::{ boot as boot_engine, BootArgs as EngineConfig, ExecuteOptions, Session, SessionTransaction, Value, }; use lix_rs_sdk::{SqliteBackend, WasmRuntime, WasmtimeRuntime}; use serde::Serialize; use std::collections::{BTreeMap, BTreeSet, HashMap}; use std::fs; use std::path::{Path, PathBuf}; use std::process::{Command, Stdio}; use std::sync::Arc; use std::time::Instant; #[cfg(unix)] use std::os::unix::fs::PermissionsExt; const NULL_OID: &str = "0000000000000000000000000000000000000000"; type DynError = Box; type DynResult = Result; #[derive(Parser, Debug, Clone)] #[command(about = "Benchmark write+commit latency for Git vs Lix on real repo workloads")] struct Args { #[arg(long)] repo_path: PathBuf, #[arg(long, default_value = "HEAD")] head_ref: String, #[arg(long = "commit-sha")] commit_shas: Vec, #[arg(long, default_value = "artifact/benchmarks/git-compare")] output_dir: PathBuf, #[arg(long, default_value_t = 5)] max_workloads: usize, #[arg(long, default_value_t = 200)] scan_commits: usize, #[arg(long, default_value_t = 5)] runs: usize, #[arg(long, default_value_t = 1)] warmups: usize, #[arg(long, default_value_t = 1)] min_changed_paths: usize, #[arg(long, default_value_t = 25)] max_changed_paths: usize, #[arg(long)] skip_verify: bool, #[arg(long)] keep_temp: bool, #[arg(long)] force: bool, } #[derive(Clone)] struct CommitInfo { sha: String, parents: Vec, subject: String, } #[derive(Clone)] struct PatchSet { changes: Vec, blobs: HashMap>, } #[derive(Clone)] struct RawChange { status: char, old_mode: String, new_mode: String, old_oid: String, new_oid: String, old_path: Option, new_path: Option, } #[derive(Clone)] enum OperationKind { Add, Modify, Delete, Rename, Copy, } #[derive(Clone)] struct FileOperation { kind: OperationKind, old_path: Option, new_path: Option, new_bytes: Option>, new_executable: bool, } #[derive(Clone)] struct Workload { commit_sha: String, parent_sha: String, subject: String, changed_paths: usize, child_tree_sha: String, operations: Vec, expected_files: BTreeMap>, } #[derive(Clone)] struct LixTemplate { seed_rows: Vec, path_to_id: BTreeMap, } #[derive(Clone)] struct LixSeedRow { id: String, path: String, data: Vec, } #[derive(Clone)] struct PreparedWorkload { workload: Workload, git_template_dir: PathBuf, lix_template: LixTemplate, } #[derive(Serialize)] struct Report { repo_path: String, head_ref: String, head_commit: String, config: ConfigReport, workload_selection: WorkloadSelectionReport, template_seed: TemplateSeedReport, workloads: Vec, overall: OverallReport, } #[derive(Serialize)] struct ConfigReport { runs: usize, warmups: usize, verify_state: bool, min_changed_paths: usize, max_changed_paths: usize, max_workloads: usize, scan_commits: usize, } #[derive(Serialize)] struct WorkloadSelectionReport { selected_count: usize, skipped: Vec, } #[derive(Serialize)] struct SkippedCandidate { commit_sha: String, subject: String, reason: String, } #[derive(Serialize)] struct TemplateSeedReport { mode: &'static str, } #[derive(Serialize)] struct WorkloadReport { commit_sha: String, parent_sha: String, subject: String, changed_paths: usize, child_tree_sha: String, git: MetricReport, lix: MetricReport, total_ratio_lix_over_git: f64, total_pct_less_time_for_lix: f64, trials: Vec, } #[derive(Serialize)] struct OverallReport { git: MetricReport, lix: MetricReport, total_ratio_lix_over_git: f64, 
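// Descriptive note (added): both summary fields are derived from the p50 totals —
// the ratio above is lix_p50 / git_p50, and the percentage below is (1 - ratio) * 100,
// so a positive value means Lix took less total time than Git.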
total_pct_less_time_for_lix: f64, } #[derive(Serialize, Clone)] struct MetricReport { write_ms: SummaryStats, commit_ms: SummaryStats, total_ms: SummaryStats, } #[derive(Serialize, Clone, Default)] struct SummaryStats { samples: usize, min_ms: f64, p50_ms: f64, p95_ms: f64, mean_ms: f64, max_ms: f64, } #[derive(Serialize, Clone)] struct TrialResult { workload_commit_sha: String, system: &'static str, iteration: usize, warmup: bool, write_ms: f64, commit_ms: f64, total_ms: f64, verified: bool, } fn main() { if let Err(error) = run_with_large_stack(real_main) { eprintln!("{error}"); std::process::exit(1); } } fn run_with_large_stack(f: F) -> DynResult<()> where F: FnOnce() -> DynResult<()> + Send + 'static, { let handle = std::thread::Builder::new() .name("git-compare-benchmark".to_string()) .stack_size(32 * 1024 * 1024) .spawn(f)?; match handle.join() { Ok(result) => result, Err(_) => Err("benchmark thread panicked".into()), } } fn real_main() -> DynResult<()> { let args = Args::parse(); validate_args(&args)?; let repo_path = fs::canonicalize(&args.repo_path)?; ensure_git_repo(&repo_path)?; prepare_output_dir(&args.output_dir, args.force)?; let tmp_root = args.output_dir.join("tmp"); fs::create_dir_all(&tmp_root)?; let head_commit = rev_parse_commit(&repo_path, &args.head_ref)?; let (workloads, skipped) = select_workloads(&repo_path, &args, &head_commit)?; let prepared = prepare_workloads(&repo_path, &args, &tmp_root, &workloads)?; let mut workload_reports = Vec::with_capacity(prepared.workloads.len()); let mut all_trials = Vec::new(); println!( "[git-compare] selected {} workloads from {}", prepared.workloads.len(), repo_path.display() ); for prepared_workload in &prepared.workloads { println!( "[git-compare] workload {} {} ({} changed paths)", &prepared_workload.workload.commit_sha[..12], prepared_workload.workload.subject, prepared_workload.workload.changed_paths ); let trials = run_workload_trials( &repo_path, &args, &tmp_root, prepared_workload, Arc::clone(&prepared.wasm_runtime), )?; let git_trials = filtered_trials(&trials, "git"); let lix_trials = filtered_trials(&trials, "lix"); let git_report = build_metric_report(&git_trials); let lix_report = build_metric_report(&lix_trials); let ratio = safe_ratio(lix_report.total_ms.p50_ms, git_report.total_ms.p50_ms); let pct_less = pct_less_time(lix_report.total_ms.p50_ms, git_report.total_ms.p50_ms); workload_reports.push(WorkloadReport { commit_sha: prepared_workload.workload.commit_sha.clone(), parent_sha: prepared_workload.workload.parent_sha.clone(), subject: prepared_workload.workload.subject.clone(), changed_paths: prepared_workload.workload.changed_paths, child_tree_sha: prepared_workload.workload.child_tree_sha.clone(), git: git_report, lix: lix_report, total_ratio_lix_over_git: ratio, total_pct_less_time_for_lix: pct_less, trials: trials.clone(), }); all_trials.extend(trials); } let overall_git = build_metric_report(&filtered_trials(&all_trials, "git")); let overall_lix = build_metric_report(&filtered_trials(&all_trials, "lix")); let report = Report { repo_path: repo_path.display().to_string(), head_ref: args.head_ref.clone(), head_commit, config: ConfigReport { runs: args.runs, warmups: args.warmups, verify_state: !args.skip_verify, min_changed_paths: args.min_changed_paths, max_changed_paths: args.max_changed_paths, max_workloads: args.max_workloads, scan_commits: args.scan_commits, }, workload_selection: WorkloadSelectionReport { selected_count: workload_reports.len(), skipped, }, template_seed: TemplateSeedReport { mode: 
"git-parent-checkout + lix-parent-snapshot", }, workloads: workload_reports, overall: OverallReport { git: overall_git.clone(), lix: overall_lix.clone(), total_ratio_lix_over_git: safe_ratio( overall_lix.total_ms.p50_ms, overall_git.total_ms.p50_ms, ), total_pct_less_time_for_lix: pct_less_time( overall_lix.total_ms.p50_ms, overall_git.total_ms.p50_ms, ), }, }; let json_path = args.output_dir.join("report.json"); let markdown_path = args.output_dir.join("report.md"); fs::write( &json_path, format!("{}\n", serde_json::to_string_pretty(&report)?), )?; fs::write(&markdown_path, render_markdown_report(&report))?; println!( "[git-compare] overall median total: git {:.2}ms, lix {:.2}ms, lix {:.2}% less time", report.overall.git.total_ms.p50_ms, report.overall.lix.total_ms.p50_ms, report.overall.total_pct_less_time_for_lix ); println!("[git-compare] json: {}", json_path.display()); println!("[git-compare] markdown: {}", markdown_path.display()); if !args.keep_temp { let _ = fs::remove_dir_all(&tmp_root); } Ok(()) } struct PreparedBenchmark { workloads: Vec, wasm_runtime: Arc, } fn validate_args(args: &Args) -> DynResult<()> { if args.max_workloads == 0 { return Err("--max-workloads must be >= 1".into()); } if args.runs == 0 { return Err("--runs must be >= 1".into()); } if args.min_changed_paths == 0 { return Err("--min-changed-paths must be >= 1".into()); } if args.min_changed_paths > args.max_changed_paths { return Err("--min-changed-paths must be <= --max-changed-paths".into()); } Ok(()) } fn ensure_git_repo(repo_path: &Path) -> DynResult<()> { run_git_text(repo_path, ["rev-parse", "--git-dir"])?; Ok(()) } fn prepare_output_dir(path: &Path, force: bool) -> DynResult<()> { if path.exists() { if !force { return Err(format!( "output dir already exists: {} (pass --force to overwrite)", path.display() ) .into()); } fs::remove_dir_all(path)?; } fs::create_dir_all(path)?; Ok(()) } fn select_workloads( repo_path: &Path, args: &Args, head_commit: &str, ) -> DynResult<(Vec, Vec)> { let commit_infos = if args.commit_shas.is_empty() { list_first_parent_commit_info(repo_path, &args.head_ref, Some(args.scan_commits))? 
} else { let mut commits = Vec::with_capacity(args.commit_shas.len()); for commit_sha in &args.commit_shas { commits.push(read_commit_info(repo_path, commit_sha)?); } commits }; let mut selected = Vec::new(); let mut skipped = Vec::new(); for commit in commit_infos { if selected.len() >= args.max_workloads { break; } if commit.sha == head_commit && commit.parents.is_empty() { skipped.push(SkippedCandidate { commit_sha: commit.sha, subject: commit.subject, reason: "root commit is not a useful user write+commit workload".to_string(), }); continue; } if commit.parents.len() != 1 { skipped.push(SkippedCandidate { commit_sha: commit.sha, subject: commit.subject, reason: "merge commit skipped as a timed workload".to_string(), }); continue; } let patch_set = read_commit_patch_set(repo_path, &commit.sha)?; if patch_set.changes.len() < args.min_changed_paths { skipped.push(SkippedCandidate { commit_sha: commit.sha, subject: commit.subject, reason: format!( "changed path count {} below minimum {}", patch_set.changes.len(), args.min_changed_paths ), }); continue; } if patch_set.changes.len() > args.max_changed_paths { skipped.push(SkippedCandidate { commit_sha: commit.sha, subject: commit.subject, reason: format!( "changed path count {} above maximum {}", patch_set.changes.len(), args.max_changed_paths ), }); continue; } if let Some(reason) = first_unsupported_change_reason(&patch_set.changes) { skipped.push(SkippedCandidate { commit_sha: commit.sha, subject: commit.subject, reason, }); continue; } let operations = compile_operations(&patch_set)?; let expected_files = normalize_snapshot_for_lix(&read_tree_snapshot(repo_path, &commit.sha)?); let child_tree_sha = rev_parse_tree(repo_path, &commit.sha)?; selected.push(Workload { commit_sha: commit.sha, parent_sha: commit.parents[0].clone(), subject: commit.subject, changed_paths: operations.len(), child_tree_sha, operations, expected_files, }); } if selected.is_empty() { return Err("no benchmark workloads selected; widen scan or changed-path filters".into()); } Ok((selected, skipped)) } fn prepare_workloads( repo_path: &Path, _args: &Args, tmp_root: &Path, workloads: &[Workload], ) -> DynResult { let wasm_runtime: Arc = Arc::new(WasmtimeRuntime::new()?); let git_templates_dir = tmp_root.join("git-templates"); fs::create_dir_all(&git_templates_dir)?; let mut prepared_workloads = Vec::with_capacity(workloads.len()); for workload in workloads { let parent_files = read_tree_snapshot(repo_path, &workload.parent_sha)?; let git_template_dir = git_templates_dir.join(&workload.commit_sha); create_git_checkout_template(repo_path, &git_template_dir, &workload.parent_sha)?; let lix_template = create_lix_snapshot_template(&parent_files)?; prepared_workloads.push(PreparedWorkload { workload: workload.clone(), git_template_dir, lix_template, }); } Ok(PreparedBenchmark { workloads: prepared_workloads, wasm_runtime, }) } fn run_workload_trials( repo_path: &Path, args: &Args, tmp_root: &Path, workload: &PreparedWorkload, wasm_runtime: Arc, ) -> DynResult> { let git_trial_root = tmp_root .join("git-runs") .join(&workload.workload.commit_sha); let lix_trial_root = tmp_root .join("lix-runs") .join(&workload.workload.commit_sha); fs::create_dir_all(&git_trial_root)?; fs::create_dir_all(&lix_trial_root)?; let total_iterations = args.warmups + args.runs; let mut trials = Vec::with_capacity(total_iterations * 2); for iteration in 0..total_iterations { let warmup = iteration < args.warmups; let order = if iteration % 2 == 0 { ["git", "lix"] } else { ["lix", "git"] }; for system 
in order { let trial = match system { "git" => run_git_trial( &git_trial_root, iteration, warmup, workload, !args.skip_verify, )?, "lix" => run_lix_trial( repo_path, &lix_trial_root, iteration, warmup, workload, Arc::clone(&wasm_runtime), !args.skip_verify, )?, _ => unreachable!(), }; trials.push(trial); } } Ok(trials) } fn run_git_trial( trial_root: &Path, iteration: usize, warmup: bool, workload: &PreparedWorkload, verify_state: bool, ) -> DynResult { let repo_dir = trial_root.join(format!("trial-{iteration}")); if repo_dir.exists() { fs::remove_dir_all(&repo_dir)?; } copy_directory(&workload.git_template_dir, &repo_dir)?; let write_started = Instant::now(); apply_operations_to_git(&repo_dir, &workload.workload.operations)?; let write_ms = elapsed_ms(write_started); let commit_started = Instant::now(); let commit_message = format!("bench {}", &workload.workload.commit_sha[..12]); run_git_text(&repo_dir, ["add", "-A"])?; run_git_text( &repo_dir, [ "-c", "core.hooksPath=/dev/null", "-c", "commit.gpgSign=false", "commit", "-q", "--allow-empty", "-m", &commit_message, ], )?; let commit_ms = elapsed_ms(commit_started); let verified = if verify_state { let actual_tree = run_git_text(&repo_dir, ["rev-parse", "HEAD^{tree}"])?; let actual_tree = actual_tree.trim(); if actual_tree != workload.workload.child_tree_sha { return Err(format!( "git trial tree mismatch for {}: expected {}, got {}", workload.workload.commit_sha, workload.workload.child_tree_sha, actual_tree ) .into()); } true } else { false }; fs::remove_dir_all(&repo_dir)?; Ok(TrialResult { workload_commit_sha: workload.workload.commit_sha.clone(), system: "git", iteration, warmup, write_ms, commit_ms, total_ms: write_ms + commit_ms, verified, }) } fn run_lix_trial( _repo_path: &Path, trial_root: &Path, iteration: usize, warmup: bool, workload: &PreparedWorkload, wasm_runtime: Arc, verify_state: bool, ) -> DynResult { let db_path = trial_root.join(format!("trial-{iteration}.lix")); if db_path.exists() { fs::remove_file(&db_path)?; } let session = create_initialized_session(&db_path, wasm_runtime)?; if !workload.lix_template.seed_rows.is_empty() { let seed_rows = workload.lix_template.seed_rows.clone(); pollster::block_on(session.transaction(ExecuteOptions::default(), |tx| { Box::pin(async move { for row in seed_rows { tx.execute( "INSERT INTO lix_file (id, path, data) VALUES (?1, ?2, ?3)", &[ Value::Text(row.id), Value::Text(row.path), Value::Blob(row.data), ], ) .await?; } Ok(()) }) }))?; } let mut path_to_id = workload.lix_template.path_to_id.clone(); let mut next_file_id = next_file_id_from_map(&path_to_id); let mut transaction = pollster::block_on(session.begin_transaction_with_options(ExecuteOptions::default()))?; let write_started = Instant::now(); for operation in &workload.workload.operations { execute_engine_operation( &mut transaction, operation, &mut path_to_id, &mut next_file_id, )?; } let write_ms = elapsed_ms(write_started); let commit_started = Instant::now(); pollster::block_on(transaction.commit())?; let commit_ms = elapsed_ms(commit_started); let verified = if verify_state { verify_session_state(&session, &workload.workload.expected_files)?; true } else { false }; drop(session); let _ = fs::remove_file(&db_path); let _ = fs::remove_file(format!("{}-journal", db_path.display())); let _ = fs::remove_file(format!("{}-wal", db_path.display())); let _ = fs::remove_file(format!("{}-shm", db_path.display())); Ok(TrialResult { workload_commit_sha: workload.workload.commit_sha.clone(), system: "lix", iteration, warmup, write_ms, 
commit_ms, total_ms: write_ms + commit_ms, verified, }) } fn create_git_checkout_template( repo_path: &Path, template_dir: &Path, parent_sha: &str, ) -> DynResult<()> { if template_dir.exists() { fs::remove_dir_all(template_dir)?; } run_command( "git", [ "clone", "--local", "--quiet", repo_path.to_str().ok_or("invalid repo path")?, template_dir.to_str().ok_or("invalid template path")?, ], None, None, )?; run_git_text(template_dir, ["checkout", "--quiet", parent_sha])?; run_git_text(template_dir, ["config", "user.email", "bench@example.com"])?; run_git_text(template_dir, ["config", "user.name", "git-compare-bench"])?; run_git_text(template_dir, ["config", "core.hooksPath", "/dev/null"])?; run_git_text(template_dir, ["config", "commit.gpgSign", "false"])?; run_git_text(template_dir, ["config", "gc.auto", "0"])?; run_git_text(template_dir, ["config", "maintenance.auto", "false"])?; run_git_text(template_dir, ["config", "gc.autoDetach", "false"])?; Ok(()) } fn create_lix_snapshot_template( parent_files: &BTreeMap>, ) -> DynResult { let mut path_to_id = BTreeMap::new(); let mut next_file_id = 1_u64; let mut seed_rows = Vec::with_capacity(parent_files.len()); for (path, bytes) in parent_files { let file_id = allocate_file_id(&mut next_file_id); let lix_path = to_lix_path(path); path_to_id.insert(lix_path.clone(), file_id.clone()); seed_rows.push(LixSeedRow { id: file_id, path: lix_path, data: bytes.clone(), }); } Ok(LixTemplate { seed_rows, path_to_id, }) } fn apply_operations_to_git(repo_dir: &Path, operations: &[FileOperation]) -> DynResult<()> { for operation in operations { match operation.kind { OperationKind::Add | OperationKind::Copy | OperationKind::Modify => { let path = repo_dir.join( operation .new_path .as_ref() .ok_or("missing new path for git write")?, ); if let Some(parent) = path.parent() { fs::create_dir_all(parent)?; } fs::write( &path, operation .new_bytes .as_ref() .ok_or("missing bytes for git write")?, )?; set_executable_if_needed(&path, operation.new_executable)?; } OperationKind::Rename => { if let Some(old_path) = &operation.old_path { let old_full = repo_dir.join(old_path); if old_full.exists() { fs::remove_file(&old_full)?; } } let new_full = repo_dir.join( operation .new_path .as_ref() .ok_or("missing new path for rename")?, ); if let Some(parent) = new_full.parent() { fs::create_dir_all(parent)?; } fs::write( &new_full, operation .new_bytes .as_ref() .ok_or("missing bytes for rename")?, )?; set_executable_if_needed(&new_full, operation.new_executable)?; } OperationKind::Delete => { let path = repo_dir.join( operation .old_path .as_ref() .ok_or("missing old path for delete")?, ); if path.exists() { fs::remove_file(path)?; } } } } Ok(()) } fn set_executable_if_needed(path: &Path, executable: bool) -> DynResult<()> { #[cfg(unix)] { let mode = if executable { 0o755 } else { 0o644 }; let mut permissions = fs::metadata(path)?.permissions(); permissions.set_mode(mode); fs::set_permissions(path, permissions)?; } #[cfg(not(unix))] let _ = (path, executable); Ok(()) } fn execute_engine_operation( transaction: &mut SessionTransaction<'_>, operation: &FileOperation, path_to_id: &mut BTreeMap, next_file_id: &mut u64, ) -> DynResult<()> { match operation.kind { OperationKind::Add | OperationKind::Copy => { let path = to_lix_path( operation .new_path .as_ref() .ok_or("missing new path for Lix insert")?, ); let file_id = allocate_file_id(next_file_id); pollster::block_on( transaction.execute( "INSERT INTO lix_file (id, path, data) VALUES (?1, ?2, ?3)", &[ 
Value::Text(file_id.clone()), Value::Text(path.clone()), Value::Blob( operation .new_bytes .as_ref() .ok_or("missing bytes for Lix insert")? .clone(), ), ], ), )?; path_to_id.insert(path.clone(), file_id); } OperationKind::Modify => { let path = to_lix_path( operation .new_path .as_ref() .ok_or("missing path for Lix update")?, ); let file_id = path_to_id .get(&path) .cloned() .ok_or_else(|| format!("missing file id for modified path {path}"))?; pollster::block_on( transaction.execute( "UPDATE lix_file SET data = ?1 WHERE id = ?2", &[ Value::Blob( operation .new_bytes .as_ref() .ok_or("missing bytes for Lix update")? .clone(), ), Value::Text(file_id), ], ), )?; } OperationKind::Rename => { let old_path = to_lix_path( operation .old_path .as_ref() .ok_or("missing old path for Lix rename")?, ); let new_path = to_lix_path( operation .new_path .as_ref() .ok_or("missing new path for Lix rename")?, ); let file_id = path_to_id .remove(&old_path) .ok_or_else(|| format!("missing file id for renamed path {old_path}"))?; pollster::block_on( transaction.execute( "UPDATE lix_file SET path = ?1, data = ?2 WHERE id = ?3", &[ Value::Text(new_path.clone()), Value::Blob( operation .new_bytes .as_ref() .ok_or("missing bytes for Lix rename")? .clone(), ), Value::Text(file_id.clone()), ], ), )?; path_to_id.insert(new_path.clone(), file_id); } OperationKind::Delete => { let old_path = to_lix_path( operation .old_path .as_ref() .ok_or("missing old path for Lix delete")?, ); let file_id = path_to_id .remove(&old_path) .ok_or_else(|| format!("missing file id for deleted path {old_path}"))?; pollster::block_on(transaction.execute( "DELETE FROM lix_file WHERE id = ?1", &[Value::Text(file_id)], ))?; } } Ok(()) } fn verify_session_state( session: &Session, expected_files: &BTreeMap>, ) -> DynResult<()> { let result = pollster::block_on(session.execute("SELECT path, data FROM lix_file ORDER BY path", &[]))?; let mut actual = BTreeMap::new(); for row in &result.statements[0].rows { let path = expect_text(&row[0])?; let bytes = value_as_bytes(&row[1])?; actual.insert(path, bytes); } if &actual != expected_files { return Err(format!( "Lix state verification failed: expected {} files, got {} files", expected_files.len(), actual.len() ) .into()); } Ok(()) } fn create_initialized_session( path: &Path, wasm_runtime: Arc, ) -> DynResult { if path.exists() { fs::remove_file(path)?; } let init_backend = SqliteBackend::from_path(path)?; let engine = Arc::new(boot_engine(EngineConfig::new( Box::new(init_backend), Arc::clone(&wasm_runtime), ))); let _ = pollster::block_on(engine.initialize_if_needed())?; pollster::block_on(engine.open_existing())?; Ok(pollster::block_on(engine.open_session())?) 
} fn expect_text(value: &Value) -> DynResult { match value { Value::Text(text) => Ok(text.clone()), other => Err(format!("expected text value, got {other:?}").into()), } } fn value_as_bytes(value: &Value) -> DynResult> { match value { Value::Blob(bytes) => Ok(bytes.clone()), Value::Text(text) => Ok(text.as_bytes().to_vec()), other => Err(format!("expected blob/text value, got {other:?}").into()), } } fn next_file_id_from_map(path_to_id: &BTreeMap) -> u64 { path_to_id .values() .filter_map(|id| id.strip_prefix("bench-file-")) .filter_map(|tail| tail.parse::().ok()) .max() .unwrap_or(0) + 1 } fn allocate_file_id(next_file_id: &mut u64) -> String { let file_id = format!("bench-file-{next_file_id}"); *next_file_id += 1; file_id } fn filtered_trials(trials: &[TrialResult], system: &str) -> Vec { trials .iter() .filter(|trial| trial.system == system && !trial.warmup) .cloned() .collect() } fn build_metric_report(trials: &[TrialResult]) -> MetricReport { MetricReport { write_ms: summarize(trials.iter().map(|trial| trial.write_ms).collect()), commit_ms: summarize(trials.iter().map(|trial| trial.commit_ms).collect()), total_ms: summarize(trials.iter().map(|trial| trial.total_ms).collect()), } } fn summarize(mut values: Vec) -> SummaryStats { if values.is_empty() { return SummaryStats::default(); } values.sort_by(|left, right| left.partial_cmp(right).unwrap()); let samples = values.len(); let sum: f64 = values.iter().sum(); SummaryStats { samples, min_ms: values[0], p50_ms: percentile(&values, 0.50), p95_ms: percentile(&values, 0.95), mean_ms: sum / samples as f64, max_ms: values[samples - 1], } } fn percentile(sorted_values: &[f64], percentile: f64) -> f64 { if sorted_values.is_empty() { return 0.0; } let rank = percentile * (sorted_values.len().saturating_sub(1)) as f64; let lower = rank.floor() as usize; let upper = rank.ceil() as usize; if lower == upper { return sorted_values[lower]; } let weight = rank - lower as f64; sorted_values[lower] * (1.0 - weight) + sorted_values[upper] * weight } fn safe_ratio(numerator: f64, denominator: f64) -> f64 { if denominator == 0.0 { 0.0 } else { numerator / denominator } } fn pct_less_time(lix_ms: f64, git_ms: f64) -> f64 { if git_ms == 0.0 { 0.0 } else { (1.0 - (lix_ms / git_ms)) * 100.0 } } fn render_markdown_report(report: &Report) -> String { let mut output = String::new(); output.push_str("# Git Compare Benchmark\n\n"); output.push_str(&format!( "Repo: `{}` \nHead: `{}` (`{}`)\n\n", report.repo_path, report.head_ref, report.head_commit )); output.push_str("## Setup\n\n"); output.push_str(&format!( "- workloads: `{}`\n- runs per system: `{}`\n- warmups: `{}`\n- verification: `{}`\n\n", report.workload_selection.selected_count, report.config.runs, report.config.warmups, report.config.verify_state, )); output.push_str("## Overall Median\n\n"); output.push_str("| system | write ms | commit ms | total ms | p95 total ms |\n"); output.push_str("| --- | ---: | ---: | ---: | ---: |\n"); output.push_str(&format!( "| git | {:.2} | {:.2} | {:.2} | {:.2} |\n", report.overall.git.write_ms.p50_ms, report.overall.git.commit_ms.p50_ms, report.overall.git.total_ms.p50_ms, report.overall.git.total_ms.p95_ms )); output.push_str(&format!( "| lix | {:.2} | {:.2} | {:.2} | {:.2} |\n\n", report.overall.lix.write_ms.p50_ms, report.overall.lix.commit_ms.p50_ms, report.overall.lix.total_ms.p50_ms, report.overall.lix.total_ms.p95_ms )); output.push_str(&format!( "Lix median total time was `{:.2}%` less than Git on this benchmark (`{:.2}x` Lix/Git).\n\n", 
report.overall.total_pct_less_time_for_lix, report.overall.total_ratio_lix_over_git )); output.push_str("## Workloads\n\n"); output.push_str("| commit | changed paths | git total ms | lix total ms | lix less time |\n"); output.push_str("| --- | ---: | ---: | ---: | ---: |\n"); for workload in &report.workloads { output.push_str(&format!( "| `{}` | {} | {:.2} | {:.2} | {:.2}% |\n", &workload.commit_sha[..12], workload.changed_paths, workload.git.total_ms.p50_ms, workload.lix.total_ms.p50_ms, workload.total_pct_less_time_for_lix )); } output.push_str("\n## Notes\n\n"); output.push_str(&format!( "- template seed mode: `{}`\n- skipped candidate commits during workload selection: `{}`\n", report.template_seed.mode, report.workload_selection.skipped.len() )); output } fn list_first_parent_commit_info( repo_path: &Path, reference: &str, limit: Option, ) -> DynResult> { let mut args = vec![ "log".to_string(), "--first-parent".to_string(), "--format=%H%x1f%P%x1f%s%x1e".to_string(), ]; if let Some(limit) = limit { args.push("-n".to_string()); args.push(limit.to_string()); } args.push(reference.to_string()); let output = run_git_text(repo_path, args.iter().map(String::as_str))?; let mut commits = Vec::new(); for record in output.split('\x1e') { let trimmed = record.trim(); if trimmed.is_empty() { continue; } let mut parts = trimmed.split('\x1f'); let sha = parts.next().unwrap_or_default().trim().to_string(); let parent_part = parts.next().unwrap_or_default().trim(); let subject = parts.next().unwrap_or_default().trim().to_string(); commits.push(CommitInfo { sha, parents: if parent_part.is_empty() { Vec::new() } else { parent_part .split_whitespace() .map(ToString::to_string) .collect() }, subject, }); } Ok(commits) } fn read_commit_info(repo_path: &Path, reference: &str) -> DynResult { let sha = rev_parse_commit(repo_path, reference)?; let output = run_git_text(repo_path, ["log", "-1", "--format=%P%x1f%s", &sha])?; let trimmed = output.trim(); let mut parts = trimmed.split('\x1f'); let parent_part = parts.next().unwrap_or_default().trim(); let subject = parts.next().unwrap_or_default().trim().to_string(); Ok(CommitInfo { sha, parents: if parent_part.is_empty() { Vec::new() } else { parent_part .split_whitespace() .map(ToString::to_string) .collect() }, subject, }) } fn rev_parse_commit(repo_path: &Path, reference: &str) -> DynResult { Ok(run_git_text( repo_path, ["rev-parse", "--verify", &format!("{reference}^{{commit}}")], )? .trim() .to_string()) } fn rev_parse_tree(repo_path: &Path, commit_sha: &str) -> DynResult { Ok( run_git_text(repo_path, ["rev-parse", &format!("{commit_sha}^{{tree}}")])? 
.trim() .to_string(), ) } fn read_commit_patch_set(repo_path: &Path, commit_sha: &str) -> DynResult { let raw = run_git_bytes( repo_path, [ "diff-tree", "--root", "--raw", "-r", "-z", "-m", "--first-parent", "--find-renames", "--no-commit-id", commit_sha, ], None, )?; let changes = parse_raw_diff_tree(&raw)?; let wanted_blob_ids = collect_wanted_blob_ids(&changes); let blobs = read_blobs(repo_path, &wanted_blob_ids)?; Ok(PatchSet { changes, blobs }) } fn parse_raw_diff_tree(raw: &[u8]) -> DynResult> { if raw.is_empty() { return Ok(Vec::new()); } let tokens = raw .split(|byte| *byte == 0) .filter(|token| !token.is_empty()) .collect::>(); let mut changes = Vec::new(); let mut index = 0; while index < tokens.len() { let header = std::str::from_utf8(tokens[index])?; index += 1; if !header.starts_with(':') { continue; } let fields = header[1..].split(' ').collect::>(); if fields.len() < 5 { continue; } let status_token = fields[4]; let status = status_token.chars().next().unwrap_or('M'); let first_path = std::str::from_utf8(tokens.get(index).ok_or("missing diff-tree path")?)?.to_string(); index += 1; if status == 'R' || status == 'C' { let second_path = std::str::from_utf8(tokens.get(index).ok_or("missing rename target path")?)? .to_string(); index += 1; changes.push(RawChange { status, old_mode: fields[0].to_string(), new_mode: fields[1].to_string(), old_oid: fields[2].to_string(), new_oid: fields[3].to_string(), old_path: Some(first_path), new_path: Some(second_path), }); continue; } changes.push(RawChange { status, old_mode: fields[0].to_string(), new_mode: fields[1].to_string(), old_oid: fields[2].to_string(), new_oid: fields[3].to_string(), old_path: if status == 'A' { None } else { Some(first_path.clone()) }, new_path: if status == 'D' { None } else { Some(first_path) }, }); } Ok(changes) } fn collect_wanted_blob_ids(changes: &[RawChange]) -> Vec { let mut ids = BTreeSet::new(); for change in changes { if change.new_path.is_some() && is_regular_blob_mode(&change.new_mode) && change.new_oid != NULL_OID { ids.insert(change.new_oid.clone()); } } ids.into_iter().collect() } fn read_tree_snapshot(repo_path: &Path, commit_sha: &str) -> DynResult>> { let raw = run_git_bytes( repo_path, ["ls-tree", "-r", "-z", "--full-tree", commit_sha], None, )?; let mut path_by_oid = BTreeMap::new(); for token in raw .split(|byte| *byte == 0) .filter(|token| !token.is_empty()) { let entry = std::str::from_utf8(token)?; let (header, path) = entry.split_once('\t').ok_or("invalid ls-tree entry")?; let fields = header.split_whitespace().collect::>(); if fields.len() != 3 { continue; } let mode = fields[0]; let object_type = fields[1]; let oid = fields[2]; if object_type != "blob" || !is_regular_blob_mode(mode) { continue; } path_by_oid.insert(path.to_string(), oid.to_string()); } let blob_ids = path_by_oid.values().cloned().collect::>(); let blobs = read_blobs(repo_path, &blob_ids)?; let mut files = BTreeMap::new(); for (path, oid) in path_by_oid { let bytes = blobs .get(&oid) .cloned() .ok_or_else(|| format!("missing blob {oid} for path {path}"))?; files.insert(path, bytes); } Ok(files) } fn compile_operations(patch_set: &PatchSet) -> DynResult> { let mut operations = Vec::with_capacity(patch_set.changes.len()); for change in &patch_set.changes { let new_bytes = if change.new_path.is_some() && is_regular_blob_mode(&change.new_mode) { Some( patch_set .blobs .get(&change.new_oid) .cloned() .ok_or_else(|| format!("missing blob bytes for {}", change.new_oid))?, ) } else { None }; let kind = match change.status { 'A' 
=> OperationKind::Add, 'M' => OperationKind::Modify, 'D' => OperationKind::Delete, 'R' => OperationKind::Rename, 'C' => OperationKind::Copy, other => { return Err(format!("unsupported diff status '{other}'").into()); } }; operations.push(FileOperation { kind, old_path: change.old_path.clone(), new_path: change.new_path.clone(), new_bytes, new_executable: change.new_mode == "100755", }); } Ok(operations) } fn normalize_snapshot_for_lix(files: &BTreeMap>) -> BTreeMap> { files .iter() .map(|(path, bytes)| (to_lix_path(path), bytes.clone())) .collect() } fn to_lix_path(path: &str) -> String { let trimmed = path.trim_start_matches('/'); let segments = trimmed .split('/') .filter(|segment| !segment.is_empty()) .map(encode_lix_path_segment) .collect::>(); format!("/{}", segments.join("/")) } fn encode_lix_path_segment(segment: &str) -> String { let mut encoded = String::new(); for byte in segment.as_bytes() { let ch = *byte as char; let allowed = ch.is_ascii_alphanumeric() || matches!(ch, '.' | '_' | '~' | '-'); if allowed { encoded.push(ch); } else { encoded.push_str(&format!("%{:02X}", byte)); } } encoded } fn first_unsupported_change_reason(changes: &[RawChange]) -> Option { changes.iter().find_map(unsupported_change_reason) } fn unsupported_change_reason(change: &RawChange) -> Option { match change.status { 'A' => { if !is_regular_blob_mode(&change.new_mode) { Some(format!( "added path {:?} uses unsupported mode {}", change.new_path, change.new_mode )) } else { None } } 'M' => { if !is_regular_blob_mode(&change.old_mode) || !is_regular_blob_mode(&change.new_mode) { return Some(format!( "modified path {:?} uses unsupported mode {} -> {}", change.new_path, change.old_mode, change.new_mode )); } if change.old_path == change.new_path && change.old_oid == change.new_oid && change.old_mode != change.new_mode { return Some(format!( "mode-only change on {:?} is not represented by lix_file", change.new_path )); } None } 'D' => { if !is_regular_blob_mode(&change.old_mode) { Some(format!( "deleted path {:?} uses unsupported mode {}", change.old_path, change.old_mode )) } else { None } } 'R' | 'C' => { if !is_regular_blob_mode(&change.old_mode) || !is_regular_blob_mode(&change.new_mode) { Some(format!( "rename/copy {:?} -> {:?} uses unsupported mode {} -> {}", change.old_path, change.new_path, change.old_mode, change.new_mode )) } else { None } } other => Some(format!("unsupported diff status '{other}'")), } } fn is_regular_blob_mode(mode: &str) -> bool { mode == "100644" || mode == "100755" } fn read_blobs(repo_path: &Path, blob_ids: &[String]) -> DynResult>> { if blob_ids.is_empty() { return Ok(HashMap::new()); } let input = format!("{}\n", blob_ids.join("\n")).into_bytes(); let output = run_git_bytes(repo_path, ["cat-file", "--batch"], Some(input))?; let mut blobs = HashMap::with_capacity(blob_ids.len()); let mut offset = 0usize; while offset < output.len() { let line_end = output[offset..] 
.iter() .position(|byte| *byte == b'\n') .map(|index| offset + index) .ok_or("invalid cat-file batch output")?; let header = std::str::from_utf8(&output[offset..line_end])?; offset = line_end + 1; let header_fields = header.split_whitespace().collect::>(); if header_fields.len() != 3 { return Err(format!("invalid cat-file header: {header}").into()); } let oid = header_fields[0].to_string(); let object_type = header_fields[1]; let size: usize = header_fields[2].parse()?; if object_type != "blob" { return Err(format!("expected blob for {oid}, got {object_type}").into()); } let body_end = offset + size; if body_end > output.len() { return Err(format!("truncated blob body for {oid}").into()); } blobs.insert(oid, output[offset..body_end].to_vec()); offset = body_end + 1; } Ok(blobs) } fn run_git_text(repo_path: &Path, args: I) -> DynResult where I: IntoIterator, S: AsRef, { let args_vec = args .into_iter() .map(|arg| arg.as_ref().to_string()) .collect::>(); let output = run_command( "git", args_vec.iter().map(String::as_str), Some(repo_path), None, )?; Ok(String::from_utf8(output)?) } fn run_git_bytes(repo_path: &Path, args: I, stdin: Option>) -> DynResult> where I: IntoIterator, S: AsRef, { let args_vec = args .into_iter() .map(|arg| arg.as_ref().to_string()) .collect::>(); run_command( "git", args_vec.iter().map(String::as_str), Some(repo_path), stdin, ) } fn run_command( program: &str, args: I, cwd: Option<&Path>, stdin: Option>, ) -> DynResult> where I: IntoIterator, S: AsRef, { let args_vec = args .into_iter() .map(|arg| arg.as_ref().to_string()) .collect::>(); let mut command = Command::new(program); command.args(&args_vec); if let Some(cwd) = cwd { command.current_dir(cwd); } if stdin.is_some() { command.stdin(Stdio::piped()); } command.stdout(Stdio::piped()); command.stderr(Stdio::piped()); let mut child = command.spawn()?; if let Some(stdin_bytes) = stdin { use std::io::Write; let mut child_stdin = child.stdin.take().ok_or("missing child stdin")?; child_stdin.write_all(&stdin_bytes)?; } let output = child.wait_with_output()?; if !output.status.success() { let stderr = String::from_utf8_lossy(&output.stderr); return Err(format!( "command failed: {} {}\n{}", program, args_vec.join(" "), stderr.trim() ) .into()); } Ok(output.stdout) } fn copy_directory(source: &Path, destination: &Path) -> DynResult<()> { if destination.exists() { fs::remove_dir_all(destination)?; } run_command( "cp", [ "-R", source.to_str().ok_or("invalid source path")?, destination.to_str().ok_or("invalid destination path")?, ], None, None, )?; Ok(()) } fn elapsed_ms(started: Instant) -> f64 { started.elapsed().as_secs_f64() * 1000.0 } ================================================ FILE: blog/001-introducing-lix/index.md ================================================ --- date: "2026-01-20" og:description: "Lix is a version control system you import as a library. It records semantic changes to enable diffs, reviews, rollback, and querying of edits." --- # Introducing Lix: An embeddable version control system Lix is an **embeddable version control system** that can be imported as a library. Use lix, for example, to enable human-in-the-loop workflows for AI agents like diffs and reviews. - **It's just a library** — Lix is a library you import. 
Get branching, diff, rollback in your existing stack - **Tracks semantic changes** — diffs, blame, and history are queryable via SQL - **Approval workflows for agents** — agents propose changes in isolated versions, humans review and merge ![AI agent changes need to be visible and controllable](./ai-agents-guardrails.png) > [!TIP] > Lix does not replace Git. [Read how Lix compares to Git →](https://lix.dev/docs/comparison-to-git) ## Semantic change tracking Lix doesn't track line-by-line text changes. It tracks **semantic changes** at the entity level via plugins. A plugin parses a format (or a piece of app state) into structured entities. Then Lix stores **what changed** — not just which bytes differ. **Before:** ```json {"theme":"light","notifications":true,"language":"en"} ``` **After:** ```json {"theme":"dark","notifications":true,"language":"en"} ``` **Git tracks:** ```diff -{"theme":"light","notifications":true,"language":"en"} +{"theme":"dark","notifications":true,"language":"en"} ``` **Lix tracks:** ```diff property theme: - light + dark ``` ### Excel file example With an XLSX plugin (not shipped yet), Lix can show a cell-level diff like: This is exactly the kind of semantic surface plugins define: cells vs formulas vs styling. **Before:** | order_id | product | status | | -------- | -------- | ------- | | 1001 | Widget A | shipped | | 1002 | Widget B | pending | **After:** | order_id | product | status | | -------- | -------- | ------- | | 1001 | Widget A | shipped | | 1002 | Widget B | shipped | **Git tracks:** ```diff -Binary files differ ``` **Lix tracks:** ```diff order_id 1002 status: - pending + shipped ``` The same approach extends to any other format your product cares about — **as long as there’s a plugin** that can interpret it. ## How does Lix work? Lix is **change-first**: it stores semantic changes as queryable data, not snapshots. That means audit trails, rollbacks, and “blame” become simple queries: ```sql SELECT * FROM state_history WHERE entity_id = 'settings.theme' ORDER BY depth ASC; ``` Lix uses existing SQL databases as both **query engine** and **persistence layer**. Plugins parse files (including binary formats) into "meaningful changes" e.g. cells, properties, whitespace, etc. Lix stores those changes as rows in virtual tables like `file`, `file_history`, and `state_history`. Why this matters: - **Doesn't reinvent databases** — durability, ACID, and recovery come from proven SQL engines. - **SQL API for changes** — query diffs, history, and audit trails directly. - **Portable** — runs on SQLite, Postgres, or other SQL databases. ``` ┌─────────────────────────────────────────────────┐ │ Lix │ │ │ │ ┌────────────┐ ┌──────────┐ ┌─────────┐ ┌─────┐ │ │ │ Filesystem │ │ Branches │ │ History │ │ ... │ │ │ └────────────┘ └──────────┘ └─────────┘ └─────┘ │ └────────────────────────┬────────────────────────┘ │ ▼ ┌─────────────────────────────────────────────────┐ │ SQL database │ │ (SQLite, Postgres, etc.) │ └─────────────────────────────────────────────────┘ ``` This means: no separate infrastructure to manage, and no “special” datastore just for version control. ## Plugins (format support) Lix’s format support depends on plugins. 
Here’s the current status: | Format | Plugin | Status | | ------ | ------ | ------ | | JSON | `@lix-js/plugin-json` | Stable | | CSV | `@lix-js/plugin-csv` | Stable | | Markdown | `@lix-js/plugin-md` | Beta | | ProseMirror | `@lix-js/plugin-prosemirror` | Stable | **Building your own plugin:** take an off-the-shelf parser for your format, map it to Lix’s entity/change schema, and you get semantic diffs + history for that format. [Plugin documentation →](https://lix.dev/docs/plugins) ## Why did we build Lix? Lix was developed alongside [inlang](https://inlang.com), open-source localization infrastructure. We needed version control **as a library**, not as an external tool. Git's architecture didn't fit: we needed database semantics (transactions, ACID), queryable history, and semantic diffing. [Read more →](https://samuelstroschein.com/blog/git-limitations) The result is Lix, now at over [90k weekly downloads on NPM](https://www.npmjs.com/package/@lix-js/sdk). ![Weekly npm downloads](./npm-downloads.png) ## Getting started

JavaScript · Python · Rust · Go

```bash npm install @lix-js/sdk ``` ```ts import { openLix, selectWorkingDiff } from "@lix-js/sdk"; const lix = await openLix({ environment: new InMemorySQLite() }); await lix.db.insertInto("file").values({ path: "/hello.txt", data: ... }).execute(); const diff = await selectWorkingDiff({ lix }).selectAll().execute(); ``` ## What's next The next version of Lix will be refactored to be purely "preprocessor" based. This makes Lix easier to embed anywhere and enables: - **Fast writes** ([RFC 001](/rfc/001-preprocess-writes)) - **Any SQL database** (SQLite, Postgres, Turso, MySQL) - **SDKs for Python, Rust, Go** ([RFC 002](/rfc/002-rewrite-in-rust)) ``` ┌────────────────┐ SELECT * FROM ... │ Lix Engine │ SELECT * FROM ... ───────────────────▶ │ (Rust) │ ───────────────────▶ Database └────────────────┘ ``` ### Join the community - ⭐ [Star the lix repo on GitHub](https://github.com/opral/lix) - 💬 [Chat on Discord](https://discord.gg/gdMPPWy57R) ================================================ FILE: blog/002-modeling-a-company-as-a-repository/index.md ================================================ --- date: "2026-02-23" og:description: "Modeling a company as a filesystem is promising for AI agents, but binary files break the model. Lix turns binary formats into structured data agents can read and write." og:image: "./cover.jpg" og:image:alt: "Abstract illustration for Your Company should be a Repository for AI agents" --- # Your Company should be a Repository for AI agents The idea of modeling a company as a filesystem for maximum agent efficiency is gaining traction on X (Twitter). For example, [Eli Mernit](https://x.com/mernit/status/2021324284875153544) wrote that agents get better context if a company is modeled as files ("Your company is a filesystem"). The problem is that modeling a company as a filesystem doesn't work today because most files are binary formats that agents can't work with effectively. [Anvisha Pai](https://x.com/anvishapai/status/2022062725354967551) pointed that out in her response post "Your company is not a filesystem". But what if a system existed that turns binary files into structured data agents can read and write? ![Twitter discussion between Eli Mernit and Anvisha](./twitter-discussion-cards.webp) ## The case for the filesystem The "company as a filesystem because of agents" argument is compelling for two reasons: 1. Agents get full context. When company data lives in files, agents can inspect and reason across systems without brittle app integrations. 2. No third-party API restrictions. Tools like Codex and Claude Code feel powerful because they can use direct filesystem primitives (`grep`, shell commands, scripts) instead of being constrained by third-party APIs. ![Example structure for modeling a company as a filesystem](./mernit-filesystem-example.jpg) ## But the filesystem is not enough A plain filesystem alone doesn't let agents work effectively: 1. Most file formats are not agent-friendly. Documents, spreadsheets, presentations, etc. are binary formats. Agents can parse some formats, but there is no universal semantic layer that enables round-trip editing. 2. Many files cannot be converted into text. A common workaround is to convert binary files to text. But visual and structural media (for example CAD, PCB, or layered design files) lose critical information when reduced to text. That makes review and verification harder, and review is the real bottleneck now that AI agents are on the rise.
![Visual formats are not fully representable as plain text](./anvisha-visual-formats.jpg) ## A system that understands binary files A system that turns binary files into structured data agents can read and write to would enable modeling a company as filesystem. The implementation can be simple. Parse binary files into their schemas. After all, most binary files are structured data under the hood. For example, a docx file is a collection of paragraphs, tables, images, etc. All of those can be expressed as JSON that an agent can understand. ```text ┌─────────────────┐ ┌───────────────────────┐ │ contract.docx │────┬──► │ { type: "paragraph" } │ └─────────────────┘ ├──► │ { type: "table" } │ └──► │ { type: "image" } │ ┌─────────────────┐ ├───────────────────────┤ │ design.psd │────┬──► │ { type: "layer" } │ └─────────────────┘ └──► │ { type: "mask" } │ ├───────────────────────┤ ┌─────────────────┐ │ │ │ budget.xlsx │────┬──► │ { type: "row" } │ └─────────────────┘ └──► │ { type: "formula" } │ └───────────────────────┘ ▲ │ ▼ ┌──────────────┐ │ Agent │ │ read/write │ └──────────────┘ ``` ## Lix is that system A system that turns binary files into structured JSON agents can understand already exists; it's called **Lix**. Lix is a "universal" version control system. "Universal" because it can track changes in binary files by parsing files into JSON schemas. Otherwise, tracking changes in those binary files would not be possible. Lix also solves the problem of opaque binary files agents are now running into. Lix is in alpha, but you can already check out the repository on GitHub. [Lix on GitHub](https://github.com/opral/lix) ![Lix GitHub repository screenshot](./lix-github.jpg) ================================================ FILE: blog/003-february-2026-update/index.md ================================================ --- date: "2026-03-04" og:description: "The Rust rewrite is complete. 33x faster file writes, lix was trending on HackerNews, and what's next in March." og:image: "./cover.png" og:image:alt: "February 2026 update cover showing the Lix Rust rewrite milestone" --- # February 2026 Update: Rust Rewrite Complete **TL;DR** - 33x faster file writes - GitHub stars grew from 70 to over 500 - Real workload and AX (user) testing in March ## The Rust rewrite is complete [RFC 001](https://lix.dev/rfc/001-preprocess-writes) and [RFC 002](https://lix.dev/rfc/002-rewrite-in-rust) have been implemented in February, with two strong outcomes: ### 33x faster file writes The rewrite significantly improves heavy write paths, with the largest gain on realistic plugin-based JSON file inserts (**33x median, ~40x p95**). | Benchmark | `v0.5` | `next` | Speedup | | --------------------------------- | --------- | --------- | ---------- | | State single-row insert | 17.43 ms | 14.85 ms | 1.17x | | State 10-row insert | 57.33 ms | 46.53 ms | 1.23x | | State 100-row insert | 460.27 ms | 193.30 ms | **2.38x** | | JSON file insert (120 properties) | 889.81 ms | 26.90 ms | **33.08x** | ### Controlling the query planner The new architecture unlocks previously impossible optimizations. The SQL database is merely used as a storage and query execution layer. v0.5 and below could not optimize beyond what the vtable API of the database provides. Every write triggered per-row callbacks that crossed the JS-WASM boundary with ~10-25 internal SQL queries each. In SQLite's case, even batching mutations was not optimizable. 
Lix now intercepts and rewrites queries before they hit SQLite, batching what used to be per-row vtable callbacks into single bulk operations. For more information read [RFC 001](https://lix.dev/rfc/001-preprocess-writes). ```plain v0.5 next ────── ──── ┌───────┐ ┌───────┐ │ Query │ │ Query │ └───┬───┘ └───┬───┘ │ │ ▼ ▼ ┌──────────────┐ ┌─────────────┐ │ SQL Database │ │ Lix │ └──────┬───────┘ └──────┬──────┘ │ │ ▼ ▼ ┌───────┐ ┌──────────────┐ │ Lix │ │ SQL Database │ └───────┘ └──────────────┘ ``` ## GitHub stars and HackerNews Lix was trending on HackerNews in late January. The outcome was an instant jump in GitHub stars and inbound requests to try out lix. Most inbound interest is around AI agents operating on non-code files and formats Git can't handle well (Excel, XML, SSIS packages). [https://news.ycombinator.com/item?id=46713387](https://news.ycombinator.com/item?id=46713387) ![GitHub stars growth](./github-stars.png) ![HackerNews trending](./hackernews.png) ## What's next in March People want to test lix. The major use case is AI agents that operate on non-code files (.docx, .pdf, etc.). We have two remaining things to do: ### 1. Real workload testing and bug fixing Real production workloads will surface performance issues and bugs that should be simple to solve with the completed refactor. After all, we control the query planner now. ### 2. AX (agent experience) testing and API iteration AX testing? Yes. That's a fundamental shift in 2026. The old ways of discussing APIs and conducting user interviews are no longer needed. Ask an agent to do a task, then follow up with "What friction points did you run into?" and fix the friction points. ![AX testing](./ax-testing.png) ================================================ FILE: blog/004-march-2026-update/index.md ================================================ --- date: "2026-04-03" og:description: "500 real commits replayed with no corruption bugs. Without the semantic layer, Lix is ~8x faster than Git, but semantic writes still bottleneck on write amplification." og:image: "./cover.svg" og:image:alt: "Lix March 2026 Update: 500 commits with zero corruption, blob commit in 5ms, semantic writes need fixing" --- # March 2026 Update: No Corruption Bugs, 8x Faster Than Git, Semantic Writes Still Too Slow ![Lix March 2026 Update](./cover.svg) **TL;DR** - Workload testing worked: 500 real commits replayed with no state corruption bugs - Semantic writes still hit a write-amplification bottleneck on large files (500ms+) - Without the semantic layer, the file-write-plus-commit workflow is ~8x faster than Git - April goal: sub 100ms for 10k entity inserts ## Workload testing [Last month](/blog/february-2026-update) we set out to do real workload testing in March to reveal performance bottlenecks and bugs that prevent production usage of lix. The test replays 500 real commits from the [paraglide-js](https://github.com/opral/paraglide-js) repo. For each commit, it sets up the "before" state outside the timer, applies the same file changes, and measures how long Lix takes to commit. The simulated scenario: "I edited some files, now I'm committing." Three findings came out of this. ### Finding 1: It works The best result from the workload replay is that it worked. Replaying 500 real commits did not reveal state corruption bugs. That matters more than the benchmark number because correctness is the prerequisite for everything else.
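As a rough illustration of the replay protocol described above, here is a sketch of the per-commit measurement loop. It is not the actual benchmark harness (that lives in `benchmarks/git-compare`); `setupParentState`, `applyOperations`, `commit`, and `verifyState` are placeholder names for the untimed setup, the timed file writes, the timed commit, and the untimed correctness check.

```ts
// Placeholder hooks; the real harness wires these to Lix and Git.
type Session = unknown;
declare function setupParentState(commitSha: string): Promise<Session>;
declare function applyOperations(session: Session, commitSha: string): Promise<void>;
declare function commit(session: Session): Promise<void>;
declare function verifyState(session: Session, commitSha: string): Promise<void>;

type Trial = { writeMs: number; commitMs: number; totalMs: number };

// One trial per real commit: the parent state is prepared outside the timer,
// only the file writes and the commit are measured.
async function replayCommit(commitSha: string): Promise<Trial> {
  const session = await setupParentState(commitSha); // untimed "before" state

  const writeStart = performance.now();
  await applyOperations(session, commitSha); // apply that commit's file changes
  const writeMs = performance.now() - writeStart;

  const commitStart = performance.now();
  await commit(session); // "I edited some files, now I'm committing"
  const commitMs = performance.now() - commitStart;

  // Untimed: the final state must match the commit's tree (this is what
  // made the "no corruption bugs" finding possible).
  await verifyState(session, commitSha);

  return { writeMs, commitMs, totalMs: writeMs + commitMs };
}
```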
### Finding 2: Semantic writes still bottleneck on write amplification > [!NOTE] > **Refresher: What is the semantic layer?** > > Lix parses files into structured entities like paragraphs, tables, images so it can diff, merge, and sync at that level instead of treating files as opaque blobs. > > ``` > contract.docx > ↓ > paragraphs / tables / images > ↓ > diff / merge / history on those units > ``` The bottleneck is write amplification. A single file write fans out into many entity rows. Inserting a file with 10k entities means the engine has to process 10k entity rows. On the current path, semantic writes are multi-second operations. Any interaction above 100ms stops feeling instantaneous, so this needs to come down. ``` contract.docx Lix engine SQL database ┌──────────────────┐ ┌─────────────────────┐ ┌──────────────┐ │ Paragraph 1 │ │ process 10,000 │ │ │ │ Paragraph 2 │ │ entity rows │ │ INSERT row 1 │ │ Paragraph 3 │────►│ │──────►│ INSERT row 2 │ │ Table 1 │ │ validate, transform,│ │ ... │ │ Row 1 │ │ detect changes │ │ INSERT row │ │ Row 2 │ │ │ │ 10,000 │ │ Image 1 │ │ 💥 too slow │ │ │ │ ... │ └─────────────────────┘ └──────────────┘ │ Paragraph 4,291 │ └──────────────────┘ 1 file write N entities to process N SQL row inserts ``` The engine is not fast enough to handle these large batches. The goal for April is to get 10k entity inserts under 100ms. ### Finding 3: Without the semantic layer, the file-write-plus-commit workflow is ~8x faster than Git Unexpected good news. Without the semantic layer (treating files as blobs), Lix completes the same file-write-plus-commit workload in ~5 ms where Git takes ~39 ms.[^1] [^1]: Measured on a MacBook Pro M5 Pro (18-core), SQLite in WAL mode. | Phase | Git | Lix | | ----------- | ---------- | --------- | | File writes | ~0.2 ms | ~3.6 ms | | Commit | ~39 ms | ~1 ms | | **Total** | **~39 ms** | **~5 ms** | The difference comes down to architecture. Lix applies mutations inside an open SQLite transaction. Committing is closing that transaction (~1 ms). The comparison runs `git add -A` followed by `git commit`, which scans the working tree, updates the index, and writes tree and commit objects. This is encouraging, but it's the blob layer only. The semantic layer is what makes Lix useful for non-code files, and that's where the work is. ### Why not skip the semantic layer entirely? If Lix is already fast without the semantic layer, why not just store blobs and diff on the fly? This is really a source-of-truth decision, not a storage decision. Lix can keep both a blob and semantic state, but only one can be authoritative: ``` Option A: Blob is source of truth, diffs computed on the fly ┌──────────────┐ ┌──────────────┐ │ contract.docx│──────►│ re-parse │──────► diffs (computed every time) │ (blob) │ │ on every op │ └──────────────┘ └──────────────┘ Option B: Diffs are source of truth, blob derived on demand ┌──────────────┐ ┌──────────────┐ │ diffs │──────►│ serialize │──────► contract.docx (derived) │ (stored) │ │ on demand │ └──────────────┘ └──────────────┘ ``` If both are independently writable, they can drift. Git gets away with blob-first storage because its default diff and merge model is line-oriented and works well for ordinary text. For smaller structured text files like JSON, re-parsing on demand can still be acceptable. 
But as files grow, the cost per operation grows with them: | File type | Size | Rebuild cost per operation | | ------------------- | --------- | -------------------------- | | `.js` source file | ~0.005 MB | trivial | | Large JSON config | ~0.5 MB | acceptable | | `.docx` with images | ~5 MB | slow | | `.xlsx` spreadsheet | 5-20 MB | 💥 too slow | OOXML files like `.docx` and `.xlsx` are ZIP packages made of many XML parts, so rebuilding semantic state from the blob on every merge, history read, or sync means repeatedly paying unzip, parse, and tree-diff costs. A cache avoids repeated rebuilds, but now there are two representations to keep consistent — every write path must update both, and bugs in that synchronization are silent data corruption. So Lix makes semantic state canonical and materializes the blob on demand when someone actually needs the file bytes. The tradeoff is that blob writes pay an upfront parsing cost — which is the write-amplification bottleneck we're now fixing. Long term, most app and agent writes should bypass blob parsing entirely. They will write entities directly, so the hot path avoids both blob parsing and blob serialization. That means the semantic layer must be fast. ## Prolly trees for cheap versioning Solving write speed alone isn't enough — storage also needs to scale across versions. Without content deduplication, creating a new version means duplicating all entity data. A 10k-entity Word document across 5 versions = 50k rows stored. ``` Without deduplication: version: main version: draft ┌──────────────────┐ ┌──────────────────┐ │ 10,000 entities │ │ 10,000 entities │ ← full copy └──────────────────┘ └──────────────────┘ 💥 10,000 rows 💥 10,000 rows (copied) ``` [Prolly trees](https://docs.dolthub.com/architecture/storage-engine/prolly-tree) are the most promising fit for this. Entities are grouped into chunks with boundaries determined by content hashes. If one paragraph changes, only the chunk containing that paragraph is new. The rest is shared across versions. ``` With Prolly trees: version: main version: draft (original) (paragraph 3 edited) ┌──────────────────┐ ┌──────────────────┐ │ Paragraph 1 │ │ Paragraph 1 │ │ Paragraph 2 │ │ Paragraph 2 │ │ Paragraph 3 │ │ Paragraph 3 ✎ │ │ Table 1 │ │ Table 1 │ │ ... │ │ ... │ │ Paragraph 4,291 │ │ Paragraph 4,291 │ └──────────────────┘ └──────────────────┘ │ │ ▼ ▼ ┌──────────────┐ ┌──────────────┐ │ chunk A ──┼────────────────────┼── chunk A │ ← shared │ chunk B │ │ chunk B' │ ← different (contains edited paragraph 3) │ chunk C ──┼────────────────────┼── chunk C │ ← shared │ chunk D ──┼────────────────────┼── chunk D │ ← shared └──────────────┘ └──────────────┘ ✅ Creating a version = pointing to the same chunks ✅ Only changed chunks are stored separately ``` ## What's next in April **Goal: Make Lix ready for people to try out.** March proved the blob path works. April is about closing the gap so the semantic layer is fast enough and correct enough for real use. 1. **10k entity inserts under 100 ms.** SQLite can insert 10k rows in under 10 ms. That gives us ~90 ms of headroom to work with. 2. **Prolly trees for cheap branching.** Without content deduplication, every branch copies all entity data. Prolly trees share unchanged chunks across versions, so branching a 10k-entity document is nearly free. 3. **Workload testing with the semantic layer on.** March proved the blob path doesn't corrupt state across 500 real commits. April repeats that test with semantic writes enabled. 
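To make the chunk-boundary idea from the prolly-tree section concrete, here is a minimal sketch of content-defined chunking over entity rows. It is illustrative only and not the Lix implementation: the `Entity` shape, the `hashEntity` helper, and the boundary mask are assumptions chosen for the example.

```ts
import { createHash } from "node:crypto";

type Entity = { id: string; snapshot: string };

// Hash one entity; the hash drives both boundary selection and chunk identity.
function hashEntity(entity: Entity): Buffer {
  return createHash("sha256").update(entity.id).update(entity.snapshot).digest();
}

// Cut a chunk boundary when the entity hash matches a probability mask.
// Requiring the last 4 bits to be zero yields chunks of ~16 entities on average.
const BOUNDARY_MASK = 0x0f;

type Chunk = { chunkHash: string; entities: Entity[] };

// A chunk is addressed by the hash of its contents, so an identical chunk
// in two versions resolves to the same stored object.
function sealChunk(entities: Entity[]): Chunk {
  const hash = createHash("sha256");
  for (const entity of entities) hash.update(hashEntity(entity));
  return { chunkHash: hash.digest("hex"), entities };
}

function chunkEntities(entities: Entity[]): Chunk[] {
  const chunks: Chunk[] = [];
  let current: Entity[] = [];
  for (const entity of entities) {
    current.push(entity);
    const hash = hashEntity(entity);
    if ((hash[hash.length - 1] & BOUNDARY_MASK) === 0) {
      chunks.push(sealChunk(current));
      current = [];
    }
  }
  if (current.length > 0) chunks.push(sealChunk(current));
  return chunks;
}
```

Because boundaries depend only on entity content, editing one paragraph replaces a single chunk while the neighboring chunks keep the same hash, which is what lets two versions share most of their storage.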
================================================ FILE: blog/005-april-2026-update/index.md ================================================ --- date: "2026-05-11" og:description: "The new DataFusion path runs the core Lix MVP flow. April did not hit the 10k inserts target, but it clarified why Lix needs control from incoming query down to storage." og:image: "./cover.svg" og:image:alt: "Lix April 2026 Update cover showing DataFusion planning queries, Lix owning the storage abstraction, and SQLite, RocksDB, S3/R2, and OPFS as backends" --- # April 2026 Update: Adopting DataFusion ![Lix April 2026 Update](./cover.svg) **TL;DR** - Benchmarking exposed that SQLite gives too little control over Lix's versioned storage model to keep improving incrementally. - Decision: move query execution to DataFusion while keeping SQLite as a possible physical storage backend. - May goal: Release `v0.6` MVP with focus on CRUD with branching and merging on the optimized semantic write path that the file API will use next. ## What works now The important April result is that the core API works on the new path. The shape is the MVP API: ```ts import { openLix } from "@lix-js/sdk"; import { createBetterSqlite3Backend } from "@lix-js/sdk/sqlite"; const lix = await openLix({ backend: createBetterSqlite3Backend({ path: "app.lix" }), // Later: swap this for a RocksDB/S3/OPFS backend // without changing the Lix API below. }); await lix.createVersion({ name: "draft" }); await lix.execute("INSERT INTO markdown_paragraph (id, text) VALUES ($1, $2)", [ "paragraph_1", "Ship CRUD MVP", ]); await lix.switchVersion({ name: "main" }); await lix.mergeVersion({ source: "draft" }); ``` The exact API names might still change. The important part is that the flow works: - open a Lix - create a version - write entities with CRUD operations - switch versions - merge a version That is the product surface for the MVP. Files are not in the `v0.6` MVP on purpose. A file write fans out into entity writes. A Word document, JSON file, or spreadsheet save can become thousands of inserts. That means the file API can only be as fast as the entity layer underneath it. The 10k inserts benchmark measures that layer. Most apps and agents should write entities directly anyway. They should update a paragraph, cell, or property, not re-serialize a whole document. The file API comes after CRUD because it is built on the same semantic write path. The first preview is published on npm: ```bash npm install @lix-js/sdk@0.6.0-preview.2 ``` [`@lix-js/sdk@0.6.0-preview.2`](https://www.npmjs.com/package/@lix-js/sdk/v/0.6.0-preview.2) is not the final `v0.6` MVP yet. It is the preview that proves the new path can be installed and tested. ## April goal [Last month](/blog/march-2026-update) we found the next bottleneck: semantic writes. The blob path was already fast. The semantic path was not. Writing one file can fan out into thousands of entities, and the April goal was to get **10k entity inserts under 100ms**. The number is not random. A semantic file is not one row: - a Word document becomes paragraphs, tables, comments, images, and relationships - a JSON file becomes hundreds or thousands of properties - a spreadsheet becomes cells, formulas, sheets, and metadata 10k inserts is the first useful proxy for "real file, real structure." 100ms is the interaction budget. Below that, the write still feels instant. Above that, Lix becomes something users and agents wait on. We did not hit the benchmark in April. 
We are not publishing a final April number because the benchmark target moved to the new DataFusion path. Optimizing the old SQLite-centered path further would measure the architecture we are replacing. The problem was not one slow query. The SQLite-centered path kept pushing Lix concepts like version roots, inherited rows, tombstones, and file projections into SQLite tables and views. Each optimization fixed one path, but the next feature needed another translation layer. ## Finding: too little control The recurring problem has been architecture confidence. Lix should ship an MVP and improve from there. But that only works if the architecture can be improved incrementally. In February, we wrote that the Rust rewrite gave Lix control over the query planner. That wording was too broad. Lix controlled the query before SQLite saw it. Lix could parse and rewrite SQL, batch operations, and avoid many vtable callbacks. April showed that this is not enough. SQLite still owns the final query planner and storage model. Lix can rewrite queries before SQLite sees them, but the result still has to fit into SQLite tables, indexes, views, and vtables. The 10k inserts work made the missing control clear. Lix needs control from the incoming query all the way down to raw storage. Every write touches current state, history, branch visibility, file projections, and later merge inputs. Those choices depend on the physical shape of the data. ```plain February / March architecture ┌───────────┐ │ SQL query │ └─────┬─────┘ │ ▼ ┌────────────────────────┐ │ Lix SQL parser/rewrite │ ← Lix controls this └─────┬──────────────────┘ │ ▼ ┌──────────────────────┐ │ SQLite query planner │ ← SQLite still controls this └─────┬────────────────┘ │ ▼ ┌───────────────────────────────┐ │ SQLite tables/views/vtables │ ← Lix concepts squeezed here └─────┬─────────────────────────┘ │ ▼ ┌────────────────┐ │ SQLite storage │ └────────────────┘ ``` ## Decision: adopt DataFusion DataFusion is an Apache Arrow SQL query engine. It gives Lix SQL parsing, planning, and execution while letting Lix provide the logic underneath. The decision is not "SQLite bad, custom database good." Reusing a query engine is still the right idea. The mistake would be building one from scratch when DataFusion exists. That is the control Lix needs: from incoming query, through `lix_state`, versions, history, branch visibility, merge inputs, and file projections, down to the raw storage backend. SQLite does not go away. It can still be the physical storage backend. The change is that SQLite no longer defines the query and storage shape of Lix state. ```plain DataFusion-centered architecture ┌───────────┐ │ SQL query │ └─────┬─────┘ │ ▼ ┌─────────────────────────┐ │ DataFusion query engine │ ← Lix controls query execution └─────┬───────────────────┘ │ ▼ ┌──────────────────────────────────┐ │ Lix logic + storage abstraction │ ← Lix controls this └─────┬────────────────────────────┘ │ ▼ ┌──────────────────────────────────┐ │ SQLite · RocksDB · S3/R2 · OPFS │ ← physical storage └──────────────────────────────────┘ ``` Lix does not need to invent physical storage. Existing systems should still handle durability, transactions, files, pages, object storage, and the other hard parts of persistence. The prolly-tree direction from March is now part of this storage abstraction work: make branching cheap by sharing unchanged state, while keeping CRUD operations fast enough for the MVP. This also changes the portability story. 
Earlier posts framed portability as "any SQL database." With DataFusion, portability moves one layer down: any backend that can satisfy Lix's storage abstraction. Postgres can still be a backend later, but not because Lix delegates SQL execution to Postgres. ## What happened to March's goals March had three April goals: 1. 10k entity inserts under 100ms 2. prolly trees for cheap branching 3. workload testing with the semantic layer on The first goal moved to May on the DataFusion path. Prolly trees moved into the broader physical storage abstraction work. The semantic workload replay should happen after the `v0.6` path is fast enough to be the path we intend to ship. ## What's next in May May goal: turn the preview into the Lix `v0.6` MVP. The acceptance criteria: 1. CRUD operations work through the new DataFusion path. 2. Branching and merging work on that path. 3. 10k semantic inserts are under 100ms. 4. The Lix physical storage abstraction is no more than 1.5x slower than a direct SQLite storage + query baseline for the same workload. The 1.5x number is the guardrail for the storage abstraction. It is not the final product latency target. It checks that the abstraction itself is not the bottleneck. If storage is close to SQLite's baseline, Lix can ship the MVP and keep optimizing query/runtime logic above it incrementally. Files follow after CRUD because file writes fan out into the same entity writes. Everything else is secondary. ================================================ FILE: blog/authors.json ================================================ { "samuelstroschein": { "name": "Samuel Stroschein", "avatar": "https://avatars.githubusercontent.com/u/35429197?v=4", "twitter": "https://x.com/samuelstroschei", "github": "https://github.com/samuelstroschein" } } ================================================ FILE: blog/table_of_contents.json ================================================ [ { "path": "./005-april-2026-update/index.md", "slug": "april-2026-update", "authors": ["samuelstroschein"] }, { "path": "./004-march-2026-update/index.md", "slug": "march-2026-update", "authors": ["samuelstroschein"] }, { "path": "./003-february-2026-update/index.md", "slug": "february-2026-update", "authors": ["samuelstroschein"] }, { "path": "./002-modeling-a-company-as-a-repository/index.md", "slug": "modeling-a-company-as-a-repository", "authors": ["samuelstroschein"] }, { "path": "./001-introducing-lix/index.md", "slug": "introducing-lix", "authors": ["samuelstroschein"] } ] ================================================ FILE: cla-signatures.json ================================================ { "signedContributors": [ { "name": "janfjohannes", "id": 110794494, "comment_id": 1711859828, "created_at": "2023-09-08T15:36:26Z", "repoId": 394757291, "pullRequestNo": 1319 }, { "name": "MaxKless", "id": 34165455, "comment_id": 1714026516, "created_at": "2023-09-11T14:39:53Z", "repoId": 394757291, "pullRequestNo": 1325 }, { "name": "felixhaeberle", "id": 34959078, "comment_id": 1717809210, "created_at": "2023-09-13T14:59:55Z", "repoId": 394757291, "pullRequestNo": 1339 }, { "name": "samuelstroschein", "id": 35429197, "comment_id": 1719038132, "created_at": "2023-09-14T08:55:18Z", "repoId": 394757291, "pullRequestNo": 1339 }, { "name": "NiklasBuchfink", "id": 59048346, "comment_id": 1719232555, "created_at": "2023-09-14T10:59:32Z", "repoId": 394757291, "pullRequestNo": 1347 }, { "name": "floriandwt", "id": 92092993, "comment_id": 1719439744, "created_at": "2023-09-14T13:20:15Z", "repoId": 
394757291, "pullRequestNo": 1339 }, { "name": "NilsJacobsen", "id": 58360188, "comment_id": 1727541380, "created_at": "2023-09-20T11:30:29Z", "repoId": 394757291, "pullRequestNo": 1385 }, { "name": "misa1515", "id": 61636045, "comment_id": 1728275039, "created_at": "2023-09-20T18:59:25Z", "repoId": 394757291, "pullRequestNo": 1388 }, { "name": "BRGustavoRibeiro", "id": 34517016, "comment_id": 1728633275, "created_at": "2023-09-21T01:33:29Z", "repoId": 394757291, "pullRequestNo": 1390 }, { "name": "jannesblobel", "id": 72493222, "comment_id": 1729390653, "created_at": "2023-09-21T11:36:20Z", "repoId": 394757291, "pullRequestNo": 1393 }, { "name": "hecker", "id": 23746655, "comment_id": 1736918216, "created_at": "2023-09-27T08:14:41Z", "repoId": 394757291, "pullRequestNo": 1408 }, { "name": "openscript", "id": 1105080, "comment_id": 1738661818, "created_at": "2023-09-28T07:57:14Z", "repoId": 394757291, "pullRequestNo": 1412 }, { "name": "martin-lysk", "id": 113943358, "comment_id": 1772895783, "created_at": "2023-10-20T14:54:50Z", "repoId": 394757291, "pullRequestNo": 1504 }, { "name": "sunxyw", "id": 31698606, "comment_id": 1784985693, "created_at": "2023-10-30T11:25:07Z", "repoId": 394757291, "pullRequestNo": 1533 }, { "name": "ZerdoX-x", "id": 49815452, "comment_id": 1787270801, "created_at": "2023-10-31T13:55:59Z", "repoId": 394757291, "pullRequestNo": 1549 }, { "name": "WarningImHack3r", "id": 43064022, "comment_id": 1802507427, "created_at": "2023-11-08T19:21:15Z", "repoId": 394757291, "pullRequestNo": 1615 }, { "name": "albbus-stack", "id": 57916483, "comment_id": 1804883805, "created_at": "2023-11-10T00:24:59Z", "repoId": 394757291, "pullRequestNo": 1620 }, { "name": "JLAcostaEC", "id": 61467132, "comment_id": 1806107356, "created_at": "2023-11-10T17:13:33Z", "repoId": 394757291, "pullRequestNo": 1623 }, { "name": "rishi-raj-jain", "id": 46300090, "comment_id": 1810487483, "created_at": "2023-11-14T15:44:12Z", "repoId": 394757291, "pullRequestNo": 1638 }, { "name": "DanikVitek", "id": 25585136, "comment_id": 1811255169, "created_at": "2023-11-14T20:49:37Z", "repoId": 394757291, "pullRequestNo": 1640 }, { "name": "Min2who", "id": 127925465, "comment_id": 1813826899, "created_at": "2023-11-16T05:47:48Z", "repoId": 394757291, "pullRequestNo": 1643 }, { "name": "LorisSigrist", "id": 43482866, "comment_id": 1819259247, "created_at": "2023-11-20T15:15:25Z", "repoId": 394757291, "pullRequestNo": 1659 }, { "name": "KraXen72", "id": 21956756, "comment_id": 1825537784, "created_at": "2023-11-24T11:28:53Z", "repoId": 394757291, "pullRequestNo": 1732 }, { "name": "AdamTmHun", "id": 61880960, "comment_id": 1826420619, "created_at": "2023-11-25T21:10:20Z", "repoId": 394757291, "pullRequestNo": 1745 }, { "name": "KTibow", "id": 10727862, "comment_id": 1826423449, "created_at": "2023-11-25T21:27:57Z", "repoId": 394757291, "pullRequestNo": 1746 }, { "name": "thetarnav", "id": 24491503, "comment_id": 1833456333, "created_at": "2023-11-30T10:08:16Z", "repoId": 394757291, "pullRequestNo": 1785 }, { "name": "TajAlasfiyaa", "id": 87016999, "comment_id": 1856385866, "created_at": "2023-12-14T18:35:58Z", "repoId": 394757291, "pullRequestNo": 1893 }, { "name": "tomas-correia", "id": 20492365, "comment_id": 1862914722, "created_at": "2023-12-19T14:52:47Z", "repoId": 394757291, "pullRequestNo": 1919 }, { "name": "Gernii", "id": 54741529, "comment_id": 1863028528, "created_at": "2023-12-19T15:55:04Z", "repoId": 394757291, "pullRequestNo": 1921 }, { "name": "mr-islam", "id": 17675428, "comment_id": 1871469307, 
"created_at": "2023-12-28T20:26:39Z", "repoId": 394757291, "pullRequestNo": 1955 }, { "name": "jldec", "id": 849592, "comment_id": 1894298346, "created_at": "2024-01-16T18:28:34Z", "repoId": 394757291, "pullRequestNo": 2040 }, { "name": "oscard0m", "id": 2574275, "comment_id": 1895458003, "created_at": "2024-01-17T09:52:08Z", "repoId": 394757291, "pullRequestNo": 2047 }, { "name": "leonardoRocchini", "id": 62795461, "comment_id": 1924359871, "created_at": "2024-02-02T17:29:39Z", "repoId": 394757291, "pullRequestNo": 2169 }, { "name": "vytenisstaugaitis", "id": 30520456, "comment_id": 1925687967, "created_at": "2024-02-04T10:33:03Z", "repoId": 394757291, "pullRequestNo": 2172 }, { "name": "leonardsimonse", "id": 94551625, "comment_id": 1934074917, "created_at": "2024-02-08T12:59:12Z", "repoId": 394757291, "pullRequestNo": 2202 }, { "name": "mquandalle", "id": 1730702, "comment_id": 1987999010, "created_at": "2024-03-11T09:45:21Z", "repoId": 394757291, "pullRequestNo": 2354 }, { "name": "s24407-pj", "id": 92219340, "comment_id": 1996876750, "created_at": "2024-03-14T08:40:59Z", "repoId": 394757291, "pullRequestNo": 2382 }, { "name": "kevinccbsg", "id": 12685053, "comment_id": 2018688746, "created_at": "2024-03-25T18:55:10Z", "repoId": 394757291, "pullRequestNo": 2457 }, { "name": "revosw", "id": 19785016, "comment_id": 2034031770, "created_at": "2024-04-03T09:26:50Z", "repoId": 394757291, "pullRequestNo": 2501 }, { "name": "TheOnlyTails", "id": 65342367, "comment_id": 2045143036, "created_at": "2024-04-09T13:08:09Z", "repoId": 394757291, "pullRequestNo": 2539 }, { "name": "park-jemin", "id": 59681283, "comment_id": 2079694848, "created_at": "2024-04-26T16:15:12Z", "repoId": 394757291, "pullRequestNo": 2668 }, { "name": "NurbekGithub", "id": 24915724, "comment_id": 2091100922, "created_at": "2024-05-02T17:16:00Z", "repoId": 394757291, "pullRequestNo": 2694 }, { "name": "muhammedaksam", "id": 27314049, "comment_id": 2106263594, "created_at": "2024-05-12T14:21:20Z", "repoId": 394757291, "pullRequestNo": 2752 }, { "name": "nirtamir2", "id": 16452789, "comment_id": 2106333137, "created_at": "2024-05-12T18:10:07Z", "repoId": 394757291, "pullRequestNo": 2753 }, { "name": "altruity", "id": 937917, "comment_id": 2111954202, "created_at": "2024-05-15T09:01:15Z", "repoId": 394757291, "pullRequestNo": 2782 }, { "name": "LukasHechenberger", "id": 5802656, "comment_id": 2127453209, "created_at": "2024-05-23T15:41:43Z", "repoId": 394757291, "pullRequestNo": 2812 }, { "name": "Amerlander", "id": 3764089, "comment_id": 2142378616, "created_at": "2024-05-31T14:32:46Z", "repoId": 394757291, "pullRequestNo": 2861 }, { "name": "MrTwixxy", "id": 64733980, "comment_id": 2247457208, "created_at": "2024-07-24T10:02:38Z", "repoId": 394757291, "pullRequestNo": 3025 }, { "name": "jonathanschoonbroodt", "id": 33702771, "comment_id": 2263078256, "created_at": "2024-08-01T13:39:18Z", "repoId": 394757291, "pullRequestNo": 3038 }, { "name": "azezsan", "id": 79533966, "comment_id": 2272796191, "created_at": "2024-08-07T07:22:31Z", "repoId": 394757291, "pullRequestNo": 3047 }, { "name": "AlanBreck", "id": 1199820, "comment_id": 2276935342, "created_at": "2024-08-09T00:21:50Z", "repoId": 394757291, "pullRequestNo": 3053 }, { "name": "Unsleeping", "id": 45426001, "comment_id": 2294948180, "created_at": "2024-08-17T19:14:59Z", "repoId": 394757291, "pullRequestNo": 3021 }, { "name": "emma-sg", "id": 5727389, "comment_id": 2372439059, "created_at": "2024-09-24T21:40:59Z", "repoId": 394757291, "pullRequestNo": 3149 }, { "name": 
"axel-rock", "id": 3433205, "comment_id": 2396828867, "created_at": "2024-10-07T12:44:45Z", "repoId": 394757291, "pullRequestNo": 3155 }, { "name": "benmccann", "id": 322311, "comment_id": 2407495621, "created_at": "2024-10-11T14:08:21Z", "repoId": 394757291, "pullRequestNo": 3159 }, { "name": "Venmit", "id": 185773680, "comment_id": 2426988669, "created_at": "2024-10-21T15:18:34Z", "repoId": 394757291, "pullRequestNo": 3183 }, { "name": "alikia2x", "id": 87868889, "comment_id": 2438671259, "created_at": "2024-10-25T19:45:33Z", "repoId": 394757291, "pullRequestNo": 3187 }, { "name": "tconroy", "id": 1609336, "comment_id": 2439717856, "created_at": "2024-10-26T19:50:43Z", "repoId": 394757291, "pullRequestNo": 3188 }, { "name": "pikpok", "id": 1003568, "comment_id": 2443984192, "created_at": "2024-10-29T11:44:22Z", "repoId": 394757291, "pullRequestNo": 3194 }, { "name": "gerardmarquinarubio", "id": 106877422, "comment_id": 2453587495, "created_at": "2024-11-03T21:45:21Z", "repoId": 394757291, "pullRequestNo": 3199 }, { "name": "SrGeneroso", "id": 5541794, "comment_id": 2466403868, "created_at": "2024-11-09T18:30:55Z", "repoId": 394757291, "pullRequestNo": 3204 }, { "name": "SrGeneroso", "id": 5541794, "comment_id": 2466404296, "created_at": "2024-11-09T18:32:21Z", "repoId": 394757291, "pullRequestNo": 3204 }, { "name": "hyp3rflow", "id": 49385012, "comment_id": 2467805886, "created_at": "2024-11-11T10:29:29Z", "repoId": 394757291, "pullRequestNo": 3205 }, { "name": "half2me", "id": 6759894, "comment_id": 2476088199, "created_at": "2024-11-14T11:20:14Z", "repoId": 394757291, "pullRequestNo": 3210 }, { "name": "TazorDE", "id": 30119708, "comment_id": 2485839915, "created_at": "2024-11-19T14:15:54Z", "repoId": 394757291, "pullRequestNo": 3224 }, { "name": "jacoblukewood", "id": 1590014, "comment_id": 2509661561, "created_at": "2024-12-01T09:48:18Z", "repoId": 394757291, "pullRequestNo": 3243 }, { "name": "IhsenBouallegue", "id": 48621967, "comment_id": 2515181762, "created_at": "2024-12-03T17:32:04Z", "repoId": 394757291, "pullRequestNo": 3248 }, { "name": "onyedikachi-david", "id": 51977119, "comment_id": 2534334589, "created_at": "2024-12-11T07:49:27Z", "repoId": 394757291, "pullRequestNo": 3260 }, { "name": "tbjers", "id": 1117052, "comment_id": 2563886502, "created_at": "2024-12-27T17:16:04Z", "repoId": 394757291, "pullRequestNo": 3306 }, { "name": "aboqasem", "id": 62098043, "comment_id": 2585142579, "created_at": "2025-01-11T08:15:37Z", "repoId": 394757291, "pullRequestNo": 3339 }, { "name": "Secreto31126", "id": 46955459, "comment_id": 2585573586, "created_at": "2025-01-12T03:51:15Z", "repoId": 394757291, "pullRequestNo": 3340 }, { "name": "dmsynge", "id": 19330240, "comment_id": 2603651874, "created_at": "2025-01-21T04:51:57Z", "repoId": 394757291, "pullRequestNo": 3363 }, { "name": "ampcpmgp", "id": 13173632, "comment_id": 2606229755, "created_at": "2025-01-22T03:51:55Z", "repoId": 394757291, "pullRequestNo": 3362 }, { "name": "pzerelles", "id": 66033561, "comment_id": 2608198066, "created_at": "2025-01-22T20:27:11Z", "repoId": 394757291, "pullRequestNo": 3365 }, { "name": "oskar-gmerek", "id": 53402105, "comment_id": 2614005746, "created_at": "2025-01-25T15:38:28Z", "repoId": 394757291, "pullRequestNo": 3373 }, { "name": "dallyh", "id": 6968534, "comment_id": 2614578709, "created_at": "2025-01-26T20:26:12Z", "repoId": 394757291, "pullRequestNo": 3374 }, { "name": "shivan-s", "id": 51132467, "comment_id": 2645818987, "created_at": "2025-02-08T16:23:21Z", "repoId": 394757291, 
"pullRequestNo": 3382 }, { "name": "Carlos-err406", "id": 81443707, "comment_id": 2646615258, "created_at": "2025-02-09T21:47:31Z", "repoId": 394757291, "pullRequestNo": 3387 }, { "name": "filips-alpe", "id": 2479702, "comment_id": 2649080317, "created_at": "2025-02-10T19:48:59Z", "repoId": 394757291, "pullRequestNo": 3386 }, { "name": "huynhducduy", "id": 12293622, "comment_id": 2657104436, "created_at": "2025-02-13T16:17:58Z", "repoId": 394757291, "pullRequestNo": 3396 }, { "name": "miikakokkonen", "id": 14804847, "comment_id": 2659389385, "created_at": "2025-02-14T13:50:39Z", "repoId": 394757291, "pullRequestNo": 3399 }, { "name": "juliomuhlbauer", "id": 53458125, "comment_id": 2679937770, "created_at": "2025-02-24T23:34:03Z", "repoId": 394757291, "pullRequestNo": 3430 }, { "name": "dvdzara", "id": 116791973, "comment_id": 2689095808, "created_at": "2025-02-27T20:57:28Z", "repoId": 394757291, "pullRequestNo": 3452 }, { "name": "axekan", "id": 50769262, "comment_id": 2727306184, "created_at": "2025-03-16T09:56:11Z", "repoId": 394757291, "pullRequestNo": 3507 }, { "name": "sialex-net", "id": 91857463, "comment_id": 2729038711, "created_at": "2025-03-17T10:46:24Z", "repoId": 394757291, "pullRequestNo": 3512 }, { "name": "tecoad", "id": 2627749, "comment_id": 2731020924, "created_at": "2025-03-17T21:54:14Z", "repoId": 394757291, "pullRequestNo": 3513 }, { "name": "nukosuke", "id": 17716649, "comment_id": 2739073011, "created_at": "2025-03-20T04:04:08Z", "repoId": 394757291, "pullRequestNo": 3521 }, { "name": "aloker", "id": 140714, "comment_id": 2741744604, "created_at": "2025-03-20T21:47:43Z", "repoId": 394757291, "pullRequestNo": 3525 }, { "name": "MathiasWP", "id": 48158184, "comment_id": 2742659697, "created_at": "2025-03-21T08:25:21Z", "repoId": 394757291, "pullRequestNo": 3522 }, { "name": "seriousm4x", "id": 23456686, "comment_id": 2748770077, "created_at": "2025-03-24T16:44:59Z", "repoId": 394757291, "pullRequestNo": 3532 }, { "name": "ooopus", "id": 107778929, "comment_id": 2751384467, "created_at": "2025-03-25T14:06:30Z", "repoId": 394757291, "pullRequestNo": 3534 }, { "name": "adrian-budau", "id": 1350273, "comment_id": 2755095949, "created_at": "2025-03-26T16:54:44Z", "repoId": 394757291, "pullRequestNo": 3538 }, { "name": "vbatoufflet", "id": 598433, "comment_id": 2761156262, "created_at": "2025-03-28T11:57:32Z", "repoId": 394757291, "pullRequestNo": 3543 }, { "name": "yverek", "id": 6050728, "comment_id": 2776472258, "created_at": "2025-04-03T17:24:03Z", "repoId": 394757291, "pullRequestNo": 3553 }, { "name": "fetsorn", "id": 12858105, "comment_id": 2838020902, "created_at": "2025-04-29T09:04:52Z", "repoId": 394757291, "pullRequestNo": 3574 }, { "name": "derian-cordoba", "id": 74283575, "comment_id": 2848240171, "created_at": "2025-05-02T22:53:16Z", "repoId": 394757291, "pullRequestNo": 3579 }, { "name": "jezikk", "id": 7671531, "comment_id": 2859345652, "created_at": "2025-05-07T16:58:16Z", "repoId": 394757291, "pullRequestNo": 3582 }, { "name": "philippviereck", "id": 105976309, "comment_id": 2859794456, "created_at": "2025-05-07T18:24:55Z", "repoId": 394757291, "pullRequestNo": 3583 }, { "name": "alexbehl", "id": 38441444, "comment_id": 2909851461, "created_at": "2025-05-26T13:58:15Z", "repoId": 394757291, "pullRequestNo": 3588 }, { "name": "Le0Developer", "id": 40232557, "comment_id": 2912434681, "created_at": "2025-05-27T13:02:13Z", "repoId": 394757291, "pullRequestNo": 3589 }, { "name": "HokkaidoInu", "id": 78092452, "comment_id": 2920317405, "created_at": 
"2025-05-29T19:04:05Z", "repoId": 394757291, "pullRequestNo": 3590 }, { "name": "akkie", "id": 307006, "comment_id": 2925713735, "created_at": "2025-05-31T20:49:00Z", "repoId": 394757291, "pullRequestNo": 3593 }, { "name": "shivan-eyespace", "id": 129010893, "comment_id": 2956763825, "created_at": "2025-06-09T19:28:18Z", "repoId": 394757291, "pullRequestNo": 3598 }, { "name": "PlusA2M", "id": 18495330, "comment_id": 3094609678, "created_at": "2025-07-20T15:41:48Z", "repoId": 394757291, "pullRequestNo": 3654 }, { "name": "GauBen", "id": 48261497, "comment_id": 3191439665, "created_at": "2025-08-15T12:56:01Z", "repoId": 394757291, "pullRequestNo": 3674 }, { "name": "uiolee", "id": 22849383, "comment_id": 3209343168, "created_at": "2025-08-21T07:27:25Z", "repoId": 394757291, "pullRequestNo": 3677 }, { "name": "selimhex", "id": 42006922, "comment_id": 3218307152, "created_at": "2025-08-24T19:01:39Z", "repoId": 394757291, "pullRequestNo": 3681 }, { "name": "MaJoel01", "id": 64578696, "comment_id": 3325360213, "created_at": "2025-09-23T20:01:39Z", "repoId": 394757291, "pullRequestNo": 3703 }, { "name": "cocoliliace", "id": 38874004, "comment_id": 3394310433, "created_at": "2025-10-12T12:30:17Z", "repoId": 394757291, "pullRequestNo": 3716 }, { "name": "stonith404", "id": 58886915, "comment_id": 3455668671, "created_at": "2025-10-28T10:16:50Z", "repoId": 394757291, "pullRequestNo": 3721 }, { "name": "mehmetozguldev", "id": 91568457, "comment_id": 3560530255, "created_at": "2025-11-20T23:06:06Z", "repoId": 394757291, "pullRequestNo": 3755 }, { "name": "sallustfire", "id": 565618, "comment_id": 3609533828, "created_at": "2025-12-04T01:29:04Z", "repoId": 394757291, "pullRequestNo": 3782 } ] } ================================================ FILE: docs/api-reference.md ================================================ --- description: Reference for the @lix-js/sdk public API: openLix, execute, version and merge methods, result shapes, and the built-in SQL tables and functions. --- # API Reference ## `openLix(options?)` ```ts function openLix(options?: { backend?: LixBackend }): Promise; ``` Open a Lix instance. With no `backend`, returns an in-memory Lix. See [Persistence](./persistence.md). Returns a `Lix` with the following methods. ## `Lix` ### `execute(sql, params?)` ```ts lix.execute(sql: string, params?: LixRuntimeValue[]): Promise; ``` Run one DataFusion SQL statement. Use numbered placeholders (`$1`, `$2`); bare `?` is rejected. Use `lix_json($1)` when binding a JSON-typed parameter. ```ts type ExecuteResult = { columns: string[]; rows: Row[]; rowsAffected: number; notices: { code: string; message: string; hint?: string }[]; }; ``` `SELECT` populates `columns` and `rows`. `INSERT` / `UPDATE` / `DELETE` set `rowsAffected` and usually return `rows: []`. 
### `Row` ```ts class Row { columns: string[]; value(name): Value; // typed accessor tryValue(name): Value | undefined; valueAt(index): Value; get(name): LixNativeValue; // plain JS tryGet(name): LixNativeValue | undefined; getAt(index): LixNativeValue; toObject(): Record<string, LixNativeValue>; toValueMap(): Record<string, Value>; } ``` Use `value(name)` for a `Value` with typed accessors: | Method | Returns | For | | --- | --- | --- | | `asText()` | `string \| undefined` | text columns | | `asBoolean()` | `boolean \| undefined` | booleans | | `asInteger()` | `number \| undefined` | integers | | `asReal()` | `number \| undefined` | decimals | | `asJson()` | `JsonValue \| undefined` | JSON / objects / arrays | | `asBlob()` | `Uint8Array \| undefined` | bytes | Accessors return `undefined` when the cell kind doesn't match. Branch on `value.kind` (`"null" | "boolean" | "integer" | "real" | "text" | "json" | "blob"`) for polymorphic columns. `row.toObject()` is the convenience shortcut to a plain JS object. ### `activeVersionId()` ```ts lix.activeVersionId(): Promise<string>; ``` Returns the id of the currently active version. Capture this on startup instead of hard-coding `"main"`. ### `createVersion(options)` ```ts lix.createVersion(options: { name: string; id?: string; fromCommitId?: string; }): Promise<{ id: string; name: string; hidden: boolean }>; ``` Create a new version. Pass `fromCommitId` to fork from a specific commit; otherwise it forks from the active version's head. ### `switchVersion(options)` ```ts lix.switchVersion(options: { versionId: string }): Promise<void>; ``` Make the given version the active one for this Lix instance. Subsequent SQL goes against it. ### `mergeVersionPreview(options)` ```ts lix.mergeVersionPreview(options: { sourceVersionId: string }): Promise<{ outcome: "alreadyUpToDate" | "fastForward" | "mergeCommitted"; targetVersionId: string; sourceVersionId: string; baseCommitId: string; targetHeadCommitId: string; sourceHeadCommitId: string; changeStats: { total: number; added: number; modified: number; removed: number }; conflicts: MergeConflict[]; }>; ``` Reports the same merge decision as `mergeVersion()` without touching state. Returns row-level `conflicts`. The target is always the active version; switch first if you want a different target. ### `mergeVersion(options)` ```ts lix.mergeVersion(options: { sourceVersionId: string }): Promise<{ outcome: "alreadyUpToDate" | "fastForward" | "mergeCommitted"; targetVersionId: string; sourceVersionId: string; baseCommitId: string; createdMergeCommitId: string | null; changeStats: { total: number; added: number; modified: number; removed: number }; }>; ``` Throws a `LixError` on conflicts. Wrap in `try/catch` whenever conflicts are possible. ### `close()` ```ts lix.close(): Promise<void>; ``` Always close in scripts and tests. ## Built-in tables | Table | Purpose | | --- | --- | | `lix_registered_schema` | App schemas (and built-ins). Insert into `value` to register. See [Schemas](./schemas.md). | | `lix_change` | Immutable global change journal. Columns: `id`, `entity_id`, `schema_key`, `schema_version`, `file_id`, `metadata`, `snapshot_content`, `created_at`. No version filter; `lix_change` is global. | | `lix_state` / `lix_state_by_version` / `lix_state_history` | Schema-agnostic JSON state. Active version, cross-version, and time-travel respectively. See [SQL Surfaces](./surfaces.md). | | `lix_version` | Writable version surface: `id`, `name`, `hidden`, `commit_id`.
| | `lix_file` / `lix_file_by_version` / `lix_file_history` | Versioned files (with `data` bytes), cross-version reads/writes, and history. | | `lix_directory` / `lix_directory_by_version` / `lix_directory_history` | Directory tree, cross-version, and history. | Every registered schema `X` produces three typed surfaces: - `X`: the active-version view, used for plain `INSERT`/`SELECT`/`UPDATE`/`DELETE`. - `X_by_version`: cross-version view with `lixcol_version_id`. See [Versions & Merging](./versions.md). - `X_history`: typed time-travel through this schema's history with `lixcol_start_commit_id`, `lixcol_depth`, `lixcol_observed_commit_id`. For the full grid of state / per-entity / file / directory surfaces and how they compose, see [SQL Surfaces](./surfaces.md). ## Built-in SQL functions | Function | What it does | | --- | --- | | `lix_active_version_commit_id()` | Commit id at the active version's tip. Use to scope `_history` queries (the planner rejects subqueries on `start_commit_id`). | | `lix_json(text)` | Parse JSON text into a JSON-typed value. Use when binding JSON parameters. | | `lix_json_get(json, path...)` | Project a JSON-typed value out of a JSON column. | | `lix_json_get_text(json, path...)` | Project a value out of a JSON column as text. | | `lix_uuid_v7()` | Generate a UUIDv7 string. | | `lix_timestamp()` | Current ISO-8601 timestamp string. | | `lix_text_decode(blob[, encoding])` | Decode a `BLOB` to text (default `utf-8`). | | `lix_text_encode(text[, encoding])` | Encode text to a `BLOB`. | | `lix_empty_blob()` | Zero-byte `BLOB` literal. | See [SQL Functions](./sql-functions.md) for examples and signatures. ## Errors `mergeVersion()` and write paths throw `LixError`. `notices` on `ExecuteResult` carry non-fatal codes with `code`, `message`, and an optional `hint`. ## SQL dialect Lix runs on a DataFusion-backed engine. SQL is mostly Postgres-compatible. SQLite-specific catalog tables (`sqlite_master`, etc.) are not available; use `lix_registered_schema` and `lix_version` instead. ================================================ FILE: docs/backend.md ================================================ --- description: Lix's storage is pluggable. Implement the LixBackend interface (a synchronous, transactional, namespaced key-value store) and Lix runs on top of it. --- # Backends Lix's engine is independent of where the bytes live. Storage is exposed through a single interface, `LixBackend`, that any transactional key-value store can implement. Open a Lix with a different backend and the rest of the API (`openLix`, `execute`, `createVersion`, `mergeVersion`, …) is unchanged. ## What ships today | Backend | Module | Use for | | ------------------------------ | ------------------------------- | ------------------------------------ | | In-memory | default (no `backend` argument) | tests, demos, ephemeral work | | SQLite file (`better-sqlite3`) | `@lix-js/sdk/sqlite` | persistent, single-process Node apps | ```ts import { openLix } from "@lix-js/sdk"; import { createBetterSqlite3Backend } from "@lix-js/sdk/sqlite"; const lix = await openLix({ backend: createBetterSqlite3Backend({ path: "/var/data/app.lix" }), }); ``` Anything beyond these two is not shipped by the Lix team. Implement the `LixBackend` interface yourself and pass it to `openLix({ backend })`. This page is the contract. ## Sync today, async on the roadmap > The current `LixBackend` contract is **synchronous**. All methods return values directly, not promises. 
The JS SDK runs the engine inside WebAssembly and calls backend methods through synchronous wasm imports. That makes synchronous JS bindings the natural fit (`better-sqlite3` is sync; an in-memory `Map` is sync; native sync KV bindings work). Async-only Node libraries (`pg`, the AWS S3 SDK, IndexedDB, Cloudflare Durable Objects' storage) cannot drive the contract directly today. Practical paths today: - **Synchronous bindings.** `better-sqlite3`, in-memory data structures, sync OPFS access (`createSyncAccessHandle`), Neon-binding RocksDB, `node:sqlite` in newer Node versions. - **Sync-over-async bridges.** Worker threads with `Atomics.wait`, `deasync`, or similar approaches. These add operational complexity and are best avoided for production workloads. An async backend variant (where methods return `Promise`) is on the roadmap so Postgres, IndexedDB, S3, and Durable Objects become first-class. Until then, treat the substrate list below as guidance for what *will* fit, not what's possible from the JS SDK today. ## The full TypeScript contract These are the actual exported types from `@lix-js/sdk`: ```ts type LixBackend = { beginReadTransaction(): LixBackendReadTransaction; beginWriteTransaction(): LixBackendWriteTransaction; close?(): void; }; type LixBackendReadTransaction = { getValues(request: BackendKvGetRequest): BackendKvValueBatch; existsMany(request: BackendKvGetRequest): BackendKvExistsBatch; scanKeys(request: BackendKvScanRequest): BackendKvKeyPage; scanValues(request: BackendKvScanRequest): BackendKvValuePage; scanEntries(request: BackendKvScanRequest): BackendKvEntryPage; rollback(): void; }; type LixBackendWriteTransaction = LixBackendReadTransaction & { writeKvBatch(batch: BackendKvWriteBatch): BackendKvWriteStats; commit(): void; }; // ── Scan ranges ──────────────────────────────────────────────────────────── type BackendKvScanRange = | { kind: "prefix"; prefix: Uint8Array } | { kind: "range"; start: Uint8Array; end: Uint8Array }; // ── Get / exists ─────────────────────────────────────────────────────────── type BackendKvGetRequest = { groups: BackendKvGetGroup[]; }; type BackendKvGetGroup = { namespace: string; keys: Uint8Array[]; }; type BackendKvValueBatch = { groups: BackendKvValueGroup[]; }; type BackendKvValueGroup = { namespace: string; values: Array<Uint8Array | null>; // null = key not present }; type BackendKvExistsBatch = { groups: BackendKvExistsGroup[]; }; type BackendKvExistsGroup = { namespace: string; exists: boolean[]; }; // ── Scan ─────────────────────────────────────────────────────────────────── type BackendKvScanRequest = { namespace: string; range: BackendKvScanRange; after?: Uint8Array | null; // exclusive cursor; returns keys strictly greater limit: number; }; type BackendKvKeyPage = { keys: Uint8Array[]; resumeAfter?: Uint8Array | null; }; type BackendKvValuePage = { values: Uint8Array[]; resumeAfter?: Uint8Array | null; }; type BackendKvEntryPage = { keys: Uint8Array[]; values: Uint8Array[]; resumeAfter?: Uint8Array | null; }; // ── Write ────────────────────────────────────────────────────────────────── type BackendKvWriteBatch = { groups: BackendKvWriteGroup[]; }; type BackendKvWriteGroup = { namespace: string; puts: BackendKvPut[]; deletes: Uint8Array[]; }; type BackendKvPut = { key: Uint8Array; value: Uint8Array; }; type BackendKvWriteStats = { puts: number; deletes: number; bytesWritten: number; }; ``` ### Operations | Method | Purpose | | ----------------------------------------- |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `getValues` | Batch fetch values by exact key, grouped by namespace. Missing keys come back as `null` in the same position. | | `existsMany` | Same request shape as `getValues`, returns booleans. Used when Lix only needs to know whether a key is present. | | `scanKeys` / `scanValues` / `scanEntries` | Range or prefix scan within one namespace, with `limit` and a resumable `after` cursor. | | `writeKvBatch` | Atomic batch of `puts` and `deletes`, grouped by namespace. Either all of it lands or none of it does. Within a single batch, Lix does not put + delete the same key; the engine never produces such a batch. | | `commit` / `rollback` | Transaction control. After either, the transaction object is finished; do not call further methods on it. | | `close()` / `destroy()` (on the backend) | Lifecycle. `close()` releases handles without affecting durability. `destroy()` (optional, not in the type signature above for backends that don't own their target) removes the entire storage target: file plus WAL/SHM, the OPFS target, the schema, the bucket. | ### Scan semantics - **Order.** Keys come back in ascending lexicographic order on bytes. - **Range.** Half-open: `start <= key < end`. - **Prefix.** Equivalent to `range = { start: prefix, end: incrementLastByteWithCarry(prefix) }`. - **Cursor.** `after` is **exclusive**: the next page returns keys strictly greater than `after`. `resumeAfter` is the last returned key; pass it back as `after` for the next page. `null` `resumeAfter` means no more pages. ### Namespaces Every batch operation is grouped by `namespace: string`. Treat namespaces as logical tables; implementations typically map them to separate column families, prefixes, tables, or buckets. The engine creates namespaces lazily as it writes; backends that require upfront declaration (IndexedDB) need a known namespace list (see below). ## Required guarantees 1. **Atomic write batches.** `writeKvBatch` either applies all puts/deletes across all namespaces, or none of them. A partial failure must roll back the batch. 2. **Read isolation within a transaction.** A read transaction sees a consistent snapshot for its lifetime; concurrent commits do not bleed in. 3. **Read-your-writes within a write transaction.** Reads after a put in the same write transaction see the new value; reads after a delete see `null`. 4. **Durable commits.** When `commit()` returns on a write transaction, the changes survive process restart (for persistent backends). 5. **Byte-ordered scans.** Keys come back in ascending lexicographic order of bytes. Stable pagination: the same `after` cursor returns the same next page if no writes happened in between. ## Concurrency model - **One write transaction at a time.** The engine serializes write transactions itself; you don't need to queue them. A backend may still want a process-wide lock for safety. - **Read transactions are concurrent with writes.** Multiple read transactions can be open while a write transaction is in flight. Reads must see the snapshot from when they were opened, not the in-progress write. - **Transactions are short.** The engine doesn't hold transactions across user awaits; treat `beginReadTransaction()` → operations → `commit()`/`rollback()` as a tight sequence. 
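As a concrete starting point, here is a sketch of an in-memory backend against the contract above. It is not the shipped in-memory backend: it assumes the `Backend*` types are importable from the package root, skips `close()` and durability, and clones the whole store per transaction to get snapshot isolation, atomic commits, and read-your-writes with minimal code. Keys are hex-encoded so plain string ordering matches byte-lexicographic order.

```ts
import type { LixBackend, LixBackendWriteTransaction, BackendKvScanRequest } from "@lix-js/sdk";

// Lexicographic order on hex strings equals byte-lexicographic order on the keys.
const toHex = (key: Uint8Array) => Array.from(key, (b) => b.toString(16).padStart(2, "0")).join("");
const fromHex = (hex: string) =>
  new Uint8Array((hex.match(/.{2}/g) ?? []).map((pair) => parseInt(pair, 16)));

type Store = Map<string, Map<string, Uint8Array>>; // namespace -> hex key -> value
const cloneStore = (store: Store): Store =>
  new Map([...store].map(([ns, kv]) => [ns, new Map(kv)]));

export function createInMemoryBackend(): LixBackend {
  let store: Store = new Map();

  const makeTransaction = (
    snapshot: Store,
    onCommit?: (next: Store) => void,
  ): LixBackendWriteTransaction => {
    const kvOf = (namespace: string) => snapshot.get(namespace) ?? new Map<string, Uint8Array>();

    const scan = (request: BackendKvScanRequest) => {
      const kv = kvOf(request.namespace);
      const after = request.after ? toHex(request.after) : null;
      const inRange = (hexKey: string) =>
        request.range.kind === "prefix"
          ? hexKey.startsWith(toHex(request.range.prefix))
          : toHex(request.range.start) <= hexKey && hexKey < toHex(request.range.end);
      const hexKeys = [...kv.keys()]
        .filter((k) => inRange(k) && (after === null || k > after))
        .sort() // plain string sort = ascending byte order (see toHex)
        .slice(0, request.limit);
      return {
        keys: hexKeys.map(fromHex),
        values: hexKeys.map((k) => kv.get(k)!),
        resumeAfter: hexKeys.length === request.limit ? fromHex(hexKeys.at(-1)!) : null,
      };
    };

    return {
      getValues: (request) => ({
        groups: request.groups.map((group) => ({
          namespace: group.namespace,
          values: group.keys.map((k) => kvOf(group.namespace).get(toHex(k)) ?? null),
        })),
      }),
      existsMany: (request) => ({
        groups: request.groups.map((group) => ({
          namespace: group.namespace,
          exists: group.keys.map((k) => kvOf(group.namespace).has(toHex(k))),
        })),
      }),
      scanKeys: (request) => {
        const { keys, resumeAfter } = scan(request);
        return { keys, resumeAfter };
      },
      scanValues: (request) => {
        const { values, resumeAfter } = scan(request);
        return { values, resumeAfter };
      },
      scanEntries: scan,
      // Writes only touch this transaction's snapshot. commit() swaps the
      // snapshot in atomically; rollback() just discards it.
      writeKvBatch: (batch) => {
        let puts = 0, deletes = 0, bytesWritten = 0;
        for (const group of batch.groups) {
          const kv = snapshot.get(group.namespace) ?? new Map<string, Uint8Array>();
          snapshot.set(group.namespace, kv);
          for (const put of group.puts) {
            kv.set(toHex(put.key), put.value);
            puts += 1;
            bytesWritten += put.key.length + put.value.length;
          }
          for (const key of group.deletes) {
            if (kv.delete(toHex(key))) deletes += 1;
          }
        }
        return { puts, deletes, bytesWritten };
      },
      commit: () => onCommit?.(snapshot),
      rollback: () => {},
    };
  };

  return {
    beginReadTransaction: () => makeTransaction(cloneStore(store)),
    beginWriteTransaction: () =>
      makeTransaction(cloneStore(store), (next) => {
        store = next;
      }),
  };
}
```

Pass it to `openLix({ backend: createInMemoryBackend() })`. The per-transaction clone is O(store size), which is fine for tests; a real backend replaces it with proper snapshots, and the conformance checks under "Testing your backend" below are the right way to validate anything more serious.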
## Implementation notes by storage type The contract is small enough that **any transactional KV store with a synchronous binding can host Lix today**. The substrates below are good fits in principle; ones marked async-only require either a sync-over-async bridge or the upcoming async backend variant. **Synchronous, ready today.** `better-sqlite3` (shipping), `node:sqlite` (Node 22+, sync), in-memory `Map`, OPFS via `createSyncAccessHandle` (web workers only), Neon/NAPI bindings to RocksDB or LMDB that expose sync APIs. **Relational (Postgres, MySQL, SQLite elsewhere)** (*async-only Node bindings*). One table per namespace, or a shared `(namespace, key)` PK table. Wrap each Lix transaction in a SQL transaction. Use repeatable-read isolation for reads, serializable or `SELECT ... FOR UPDATE` for writes. Postgres `bytea` matches Lix's byte-ordered scan requirement. **Object storage (S3, R2, GCS)** (*async-only*, and not natively transactional). Coordinate writes via a manifest object plus conditional PUT (`If-Match`). For atomic multi-key batches: stage chunks → upload → swap the manifest pointer in one CAS. **Cloudflare** (*async-only*). D1 fits the relational pattern. Durable Objects give you a single-writer mailbox per object, a natural fit for a per-tenant Lix. Cloudflare KV is eventually consistent without transactions; not enough on its own. **Browser** (*async-only* for IndexedDB, *sync if used in a worker* for OPFS). IndexedDB needs object stores declared at `onupgradeneeded`, so the namespace set must be known up front. The auto-commit-on-event-loop trap means buffered-write strategies are the only safe path. **Embedded KV (RocksDB, LMDB, sled)** (*fit varies by binding*). The closest-shaped substrates; map namespaces to column families or key prefixes. Native ranged iterators map directly to `scanKeys`. Sync via Neon binding or N-API works today; async-only bindings will need the future async backend. **Distributed KV (DynamoDB, FoundationDB, TiKV)** (*async-only* in JS). Native transactional semantics. Redis with `MULTI`/`EXEC` is workable for single-instance setups, but its weak isolation makes multi-writer risky. ## Testing your backend A conformance test suite is the right way to validate an implementation: - Round-trip puts and gets within and across namespaces. - **Atomicity.** A batch with one rejected write leaves everything unchanged. - **Isolation.** A read transaction opened before a write commits does not see the writer's changes. - **Read-your-writes.** A write transaction reads the values it just wrote (and not values from concurrent writers). - **Scan ordering.** Keys come back byte-lex; the same `after` cursor yields the same next page absent writes. - **Durability.** Close and reopen; committed data is still there. Run the same suite against the in-memory and `better-sqlite3` backends as a baseline. ## Why this design The engine that implements branches, merge, schemas, change journals, and SQL queries is one piece of code. The storage is another. Keeping the contract small (synchronous, namespaced, transactional KV) is what makes it tractable to put Lix on a SQLite file today and on Postgres, S3, or Durable Objects once the async variant lands, without forking the engine. This is the same shape DuckDB takes with its readers: one engine, many places to read bytes from. Lix extends it to writes too. ================================================ FILE: docs/comparison-to-git.md ================================================ --- description: Git versions text files line-by-line.
Lix versions any file format (DOCX, XLSX, CAD, etc.) semantically per entity. --- # Comparison to Git > **Git versions text files line-by-line. Lix versions any file format, semantically per entity.** Use Git for source code: text in a working tree, edited by developers, reviewed via pull requests. Use Lix when the artifacts you're versioning are anything else (DOCX, XLSX, CAD, PDF, structured app data) and the diff needs to be semantic to be useful. | | Git | Lix | | :--------------- | :----------------- | :------------------------------------ | | Where it runs | Separate process | In-process, as a library | | What it versions | Text files | Any file format, plus structured data | | Diff model | Line-by-line text | Per-entity semantic | | History | `git log` | `SELECT * FROM lix_change` | | Driven by | Developer at a CLI | Code: app, service, agent, CLI | Both can coexist: Git for source code, Lix for the files and data your product, service, or tool versions at runtime. ## Snapshots vs changes Git stores snapshots and computes text diffs between them. That works for code, where lines are the unit of change. For spreadsheets, documents, CAD, and PDFs, the line-based diff doesn't surface meaningful changes, which is exactly the kind of file where end users want version control. Lix stores changes as data, parsed into entities by format-specific plugins (XLSX → cells, DOCX → clauses, CAD → parts). The plugin API itself is on the [roadmap](https://github.com/opral/lix#roadmap); once it lands, plugins are written by the people who know each format. Product- and tool-level questions become direct queries: - Which cells / clauses / parts changed? - Who or what made this edit? - What would happen if we merged this version? That's why Lix's history surface is a SQL table, not a `git log` parser. See [Change History](./history.md). ## What this looks like ### JSON **Before:** ```json { "theme": "light", "notifications": true, "language": "en" } ``` **After:** ```json { "theme": "dark", "notifications": true, "language": "en" } ``` **Git sees:** ```diff -{ "theme": "light", "notifications": true, "language": "en" } +{ "theme": "dark", "notifications": true, "language": "en" } ``` **Lix sees:** ```diff property theme: - light + dark ``` ### Excel **Before:** | order_id | product | status | | -------- | -------- | ------- | | 1001 | Widget A | shipped | | 1002 | Widget B | pending | **After:** | order_id | product | status | | -------- | -------- | ------- | | 1001 | Widget A | shipped | | 1002 | Widget B | shipped | **Git sees:** ```diff -Binary files differ ``` **Lix sees:** ```diff order_id 1002 status: - pending + shipped ``` ================================================ FILE: docs/getting-started.md ================================================ --- description: Install Lix, open an in-memory repository, register a schema, write rows, and inspect a change in under 30 lines of JavaScript. --- # Getting Started This walks through opening Lix, registering a schema, writing a row, isolating a change in a separate version, previewing the merge, and merging. ## Install ```bash npm install @lix-js/sdk ``` `openLix()` with no arguments opens an in-memory Lix, enough for tests and demos. For persistent storage see [Persistence](./persistence.md). ## Open Lix ```ts import { openLix } from "@lix-js/sdk"; const lix = await openLix(); ``` ## Register a schema Lix stores application state as typed entities. 
Register a schema once, then read and write through the generated SQL table named after `x-lix-key`. ```ts await lix.execute( "INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))", [ JSON.stringify({ $schema: "https://json-schema.org/draft/2020-12/schema", "x-lix-key": "task", "x-lix-version": "1", "x-lix-primary-key": ["/id"], type: "object", required: ["id", "title", "done"], properties: { id: { type: "string" }, title: { type: "string" }, done: { type: "boolean" }, }, additionalProperties: false, }), ], ); ``` `lix_json($1)` parses the JSON text into the JSON-typed `value` column. Schema details (the `x-lix-*` fields, primary keys, uniqueness) are covered in [Schemas](./schemas.md). ## Write and read state ```ts await lix.execute("INSERT INTO task (id, title, done) VALUES ($1, $2, $3)", [ "task-1", "Review agent changes", false, ]); const result = await lix.execute( "SELECT id, title, done FROM task WHERE id = $1", ["task-1"], ); const row = result.rows[0]!; console.log(row.value("title").asText(), row.value("done").asBoolean()); ``` `execute()` returns `{ columns, rows, rowsAffected, notices }`. Use `row.value(name).asText() | .asBoolean() | .asInteger() | .asJson()` for typed access, or `row.toObject()` for a plain JS object. See [API Reference](./api-reference.md). ## Isolate a change in a version A version is an isolated line of state. Create one for the change, switch into it, and edit: ```ts const main = await lix.activeVersionId(); const draft = await lix.createVersion({ name: "Agent draft" }); await lix.switchVersion({ versionId: draft.id }); await lix.execute("UPDATE task SET done = $1 WHERE id = $2", [true, "task-1"]); await lix.switchVersion({ versionId: main }); ``` The active version is now `main` again, and `task-1` is still `done = false` here. The draft change is isolated until you merge. ## Preview and merge ```ts const preview = await lix.mergeVersionPreview({ sourceVersionId: draft.id }); console.log(preview.outcome, preview.changeStats); // fastForward { total: 1, added: 0, modified: 1, removed: 0 } if (preview.conflicts.length === 0) { await lix.mergeVersion({ sourceVersionId: draft.id }); } ``` `mergeVersionPreview()` reports the same merge decision as `mergeVersion()` without advancing refs. It returns the per-row conflict list when both sides changed the same entity. See [Versions & Merging](./versions.md). ## The loop 1. Open Lix. 2. Register schemas for the entities you want to version. 3. Write and read through generated tables. 4. Create versions for isolated work. 5. Preview, then merge or discard. 6. Query [`lix_change`](./history.md) for audit and undo. ================================================ FILE: docs/history.md ================================================ --- description: Lix journals every change. Query lix_change for global per-entity history, lix_state_history for what's reachable from a version, and _by_version for current per-version state. --- # Change History Lix gives you three SQL surfaces for history. Pick the one that matches the question you're asking. For the full grid of state, version, and history surfaces see [SQL Surfaces](./surfaces.md). | Surface | What you ask it | | --- | --- | | `lix_change` | "What happened to this entity, ever?" Global, immutable journal of every write across every schema and version. | | `lix_state_history` | "What did this version see?" State walked back from a commit, with `depth` for time-travel. | | `_by_version` | "What's in this version right now?" Current rows in each version. 
Documented in [Versions & Merging](./versions.md). | Versions don't filter `lix_change` directly; `lix_change` is the raw write log, and versions are pointers in the commit graph. To scope history to a version, use `lix_state_history` with the version's `commit_id`. ## `lix_change` columns | Column | What it is | | ------------------ | ------------------------------------------------------------------------------------------------------- | | `id` | Unique change id. | | `entity_id` | Primary key of the changed row. For composite keys, an encoded form (`pk:v1:`). | | `schema_key` | Which schema (`x-lix-key`). | | `schema_version` | Schema contract version at the time of the change. | | `file_id` | The file the change belongs to, or `null` for entity-only changes. | | `metadata` | JSON metadata attached to the change. | | `snapshot_content` | JSON snapshot of the row after the change, or `null` for deletions (tombstones). | | `created_at` | ISO timestamp. | Read JSON cells with `row.value("snapshot_content").asJson()` or `row.get("snapshot_content")`. Don't `JSON.parse` it as text, and handle `null` for tombstones. ## `lix_state_history` columns | Column | What it is | | -------------------- | --------------------------------------------------------------------------------------- | | `entity_id` | Primary key of the row. | | `schema_key` | Which schema. | | `file_id` | The file the row belongs to, or `null`. | | `snapshot_content` | JSON snapshot at this depth. | | `metadata` | JSON metadata. | | `schema_version` | Schema contract version. | | `change_id` | The `lix_change.id` that produced this state. | | `observed_commit_id` | The commit where this state was recorded. | | `commit_created_at` | When the commit was created. | | `start_commit_id` | The commit the walk started from (typically the version's tip, `lix_version.commit_id`). | | `depth` | `0` = current state at `start_commit_id`. Higher values walk back through history. | ## Recipes ### Per-entity history (across all versions) ```sql SELECT created_at, snapshot_content FROM lix_change WHERE schema_key = $1 AND entity_id = $2 ORDER BY created_at; ``` ### Latest activity for a schema ```sql SELECT created_at, entity_id, snapshot_content FROM lix_change WHERE schema_key = $1 ORDER BY created_at DESC LIMIT 20; ``` ### What's in this version right now Use the schema's `_by_version` surface (see [Versions & Merging](./versions.md)): ```sql SELECT entity_id, snapshot_content FROM acme_section_by_version WHERE lixcol_version_id = $1; ``` ### What did this version see, walked back through history ```sql SELECT entity_id, schema_key, snapshot_content, depth, observed_commit_id FROM lix_state_history WHERE start_commit_id = lix_active_version_commit_id() AND depth >= 0 ORDER BY depth, schema_key, entity_id; ``` `depth = 0` is the current state of that version. Higher depths walk back through earlier commits. Filter by `schema_key` or `entity_id` to narrow. ### Diff one entity between two versions ```sql SELECT v.id AS version_id, v.name, s.snapshot_content FROM acme_section_by_version s JOIN lix_version v ON v.id = s.lixcol_version_id WHERE s.id = $1 AND s.lixcol_version_id IN ($2, $3); ``` Compare the two `snapshot_content` JSON values field-by-field in your code to render a per-field diff. 
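A sketch of that comparison in application code (`versionA` / `versionB` are the two version ids; the diff shape is whatever your UI wants):

```ts
// Run the query above for entity "s1" across two versions.
const result = await lix.execute(
  `SELECT v.id AS version_id, v.name, s.snapshot_content
     FROM acme_section_by_version s
     JOIN lix_version v ON v.id = s.lixcol_version_id
    WHERE s.id = $1 AND s.lixcol_version_id IN ($2, $3)`,
  ["s1", versionA, versionB],
);

type Snapshot = Record<string, unknown> | null;
const [left, right] = result.rows.map(
  (row) => row.value("snapshot_content").asJson() as Snapshot,
);

// One entry per field whose value differs between the two versions.
const fields = new Set([...Object.keys(left ?? {}), ...Object.keys(right ?? {})]);
const diff = [...fields]
  .filter((f) => JSON.stringify(left?.[f]) !== JSON.stringify(right?.[f]))
  .map((f) => ({ field: f, before: left?.[f], after: right?.[f] }));
```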
### Undo the last change to an entity ```ts const prev = await lix.execute( `SELECT snapshot_content FROM lix_change WHERE schema_key = $1 AND entity_id = $2 AND snapshot_content IS NOT NULL ORDER BY created_at DESC LIMIT 1 OFFSET 1`, ["acme_section", "s1"], ); const snapshot = prev.rows[0]?.value("snapshot_content").asJson(); // then UPDATE acme_section with the snapshot fields ``` The `snapshot_content IS NOT NULL` filter skips tombstones (deletions). ## Tombstones A deletion produces a `lix_change` row with `snapshot_content = null`. Branch on null when rendering or replaying history. ================================================ FILE: docs/lix-for-ai-agents.md ================================================ --- description: Route agent writes through Lix to get isolated workspaces, previewable changes, and approve-or-discard review for every agent task. --- # Lix for AI Agents Agent review is one of Lix's headline use cases, but the same primitives ([Versions](./versions.md), [Change History](./history.md)) power any product where end users review proposed changes. If you're building knowledge-work tools, the patterns here apply to humans drafting changes too. Agents make fast, useful, and sometimes wrong changes. Lix gives each agent task its own isolated version of state so a human or a policy can review it before it lands. ## The pattern 1. Create a version for the agent task. 2. Switch the agent's writes into that version. 3. Run the agent. All writes are isolated. 4. Preview the merge: `changeStats` for the count, `conflicts` for collisions. 5. Approve, request changes, or discard. ```ts const main = await lix.activeVersionId(); const task = await lix.createVersion({ name: "Agent task 123" }); await lix.switchVersion({ versionId: task.id }); // run the agent; every lix.execute is now isolated to `task` await lix.switchVersion({ versionId: main }); const preview = await lix.mergeVersionPreview({ sourceVersionId: task.id }); if (preview.conflicts.length === 0) { await lix.mergeVersion({ sourceVersionId: task.id }); } ``` ## Why versions matter for agents - Run multiple agents in parallel without stepping on each other. - Compare proposed outcomes side by side. - Keep the main state stable while work is in progress. - Discard a bad attempt with no manual cleanup. ## Showing the work The point of routing agent writes through Lix is that you can ask SQL what the agent did: ```sql SELECT entity_id, schema_key, snapshot_content, depth, observed_commit_id FROM lix_state_history WHERE start_commit_id = lix_active_version_commit_id() AND depth >= 0 ORDER BY depth, schema_key, entity_id; ``` This is the data your review UI renders. See [Change History](./history.md) for more recipes (per-entity history, who-changed-what, diffs between versions). ## Conflicts Merge is per-entity today: two versions editing different rows merge cleanly; two versions editing the same row produce a `sameEntityChanged` conflict. Wrap `mergeVersion()` and handle the conflict in your review flow. Don't reshape your schemas around this. Conflict semantics are an active roadmap item; design entities for how your code reads them, not around today's merge granularity. See [Versions & Merging](./versions.md#dont-shape-entities-around-merge). ## Next - [Getting Started](./getting-started.md): the basic loop. - [Versions & Merging](./versions.md): preview shape, conflicts, side-by-side reads. - [Change History](./history.md): the SQL surface for review and undo. 
================================================ FILE: docs/persistence.md ================================================ --- description: Open Lix in memory for tests, or persist to a .lix SQLite file via the better-sqlite3 backend. For other storage targets, implement the backend interface. --- # Persistence `openLix()` with no arguments opens an in-memory Lix that vanishes when the process exits. For anything that should survive a restart, pass a backend. ## In-memory (tests, demos) ```ts import { openLix } from "@lix-js/sdk"; const lix = await openLix(); // ... use it ... await lix.close(); ``` ## SQLite file (Node.js) Persist a Lix as a single `.lix` file using the `better-sqlite3` backend. Install `better-sqlite3` as a peer dependency: ```bash npm install @lix-js/sdk better-sqlite3 ``` ```ts import { openLix } from "@lix-js/sdk"; import { createBetterSqlite3Backend } from "@lix-js/sdk/sqlite"; const lix = await openLix({ backend: createBetterSqlite3Backend({ path: "/var/data/app.lix" }), }); // ... use it ... await lix.close(); ``` Reopening the same path resumes existing state. Don't open the file with raw SQLite tools; Lix manages its own schema and transactions. For tests, point at a temp directory so each run is isolated: ```ts import { mkdtempSync } from "node:fs"; import { tmpdir } from "node:os"; import path from "node:path"; const dir = mkdtempSync(path.join(tmpdir(), "lix-")); const lix = await openLix({ backend: createBetterSqlite3Backend({ path: path.join(dir, "demo.lix") }), }); ``` ## Closing Always `await lix.close()` in scripts and tests. Long-lived servers can hold a single Lix instance for the lifetime of the process. ## Other storage targets Postgres, S3, Cloudflare D1 / Durable Objects, IndexedDB, OPFS, RocksDB (anything transactional and key-value-shaped) are not shipped by the Lix team. The storage interface is public and small enough to implement yourself. See [Backends](./backend.md) for the contract. ================================================ FILE: docs/schemas.md ================================================ --- description: Define the entity types Lix tracks for you. The x-lix-* JSON Schema extensions control the SQL table name, primary keys, uniqueness, and foreign keys. --- # Schemas Schemas describe the entities Lix tracks. You declare each entity type as a JSON Schema with a few `x-lix-*` extensions, and Lix exposes a SQL table for it. Schemas are also the foundation file-format plugins build on: a plugin parses a file format (XLSX, DOCX, CAD, …) into entities described by a schema. Today you register schemas yourself; once the plugin API lands, plugin authors register theirs. > [!NOTE] > **For agents.** Lix is self-documenting. When operating against a Lix repository, query `lix_registered_schema` to discover every schema currently in effect (including Lix's own internal schemas `lix_*`) rather than relying on a snapshot of these docs. The schemas you read back are authoritative and current. 
> > ```sql > SELECT value FROM lix_registered_schema; > ``` ## Register a schema ```sql INSERT INTO lix_registered_schema (value) VALUES (lix_json('{ "$schema": "https://json-schema.org/draft/2020-12/schema", "x-lix-key": "acme_section", "x-lix-primary-key": ["/id"], "type": "object", "required": ["id", "title", "body"], "properties": { "id": { "type": "string" }, "title": { "type": "string" }, "body": { "type": "string" } }, "additionalProperties": false }')); ``` After registration, `acme_section` is a SQL table you can `INSERT`, `SELECT`, `UPDATE`, and `DELETE` against. A sibling table `acme_section_by_version` exposes the same rows across all versions (see [Versions & Merging](./versions.md)). ## The `x-lix-*` extensions | Field | Purpose | | -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | `x-lix-key` | Required. Becomes the SQL table name and the durable identity of the relation. Use stable, lowercase, prefixed keys: `acme_section`, not `section`. See [Prefix your schema keys](#prefix-your-schema-keys). | | `x-lix-primary-key` | Required for table-style INSERTs. Array of JSON Pointer paths into the entity. Column order is semantic. | | `x-lix-unique` | Optional. Array of unique constraints, each itself an array of JSON Pointer paths. | | `x-lix-foreign-keys` | Optional. Array of foreign keys to other registered schemas. See [Foreign keys](#foreign-keys). | Without `x-lix-primary-key` you'll get an error like `requires lixcol_entity_id because the schema has no x-lix-primary-key`. Schema identity is `x-lix-key` alone. There is no version field. Evolution is governed by the [amendment rules](#schema-amendment-rules). ### JSON Pointer paths Primary-key, unique, and foreign-key paths are [JSON Pointer](https://datatracker.ietf.org/doc/html/rfc6901) strings: leading slash, slash-separated segments, pointing into the entity. For most schemas this is just `["/id"]`, but it works for nested fields: ```ts "x-lix-primary-key": ["/owner/email"] ``` ### Composite primary keys and uniqueness ```ts "x-lix-primary-key": ["/order_id", "/line_no"], "x-lix-unique": [["/sku"], ["/order_id", "/sku"]], ``` Uniqueness is **not** inferred from JSON Schema metadata. If a non-primary-key field must be unique, declare it with `x-lix-unique`. ### Foreign keys Foreign keys reference another registered schema by `x-lix-key`: ```ts "x-lix-foreign-keys": [ { "properties": ["/author_id"], "references": { "schemaKey": "acme_author", "properties": ["/id"] } } ] ``` The reference is **identity-only**: there is no `schemaVersion` on the right-hand side. A foreign key points at a schema by its stable `x-lix-key` and trusts that the referenced schema evolves under the same compatibility rules described below. This keeps cross-plugin references sane: a markdown plugin can FK into an author plugin without tracking which revision of the author schema is currently registered. ### `additionalProperties: false` Always include `additionalProperties: false`. Lix validates writes against the schema, and accidental fields will fail fast instead of silently writing garbage. It's also required by the amendment rules below: schemas that don't set it cannot be safely amended. ## Schema amendment rules A registered schema's `x-lix-key` is the relation's durable identity. 
You can re-register the same `x-lix-key` to amend the schema, but Lix only accepts changes that keep existing data valid. The rules are mechanical: a diff of old vs new must satisfy every constraint below or the amendment is rejected. ### Why amendments must be backward compatible Lix is a version-controlled repository. Every change is immutable. Once a row has been written under a schema, that historical change cannot be rewritten. A Lix repository may hold years of changes spread across many versions and many authors' schemas, and all of it must remain readable. This makes retroactive schema migrations impossible. There is no point in time at which Lix could "convert all existing rows from the old shape to the new one"; the old rows are part of history, and history doesn't change. ``` schema grows forward (additive only) ──────────────► v1: {id, body} v2: {id, body, tag?} time ──●──────●──────●─────────●──────●──────────────► c1 c2 c3 c4 c5 └─ written under v1 ────┘└─ under v2 ─┘ │ └─ immutable; reading c1 must still succeed after the v1 → v2 amendment. ``` The only safe direction of evolution is therefore additive: a schema can grow in ways that leave existing rows valid, but it cannot tighten, rename, or remove anything that already exists. This is what the rules below enforce. If a schema author truly needs a breaking change, they mint a new `x-lix-key` (e.g. `md_block_v2`), leave the old key's data untouched in history, and write any plugin-level migration code at their own pace. Old data stays valid under the old key; new data lives under the new key. ### What you can change - **Add a new optional property.** It must not appear in `required`, and it must not be referenced by any existing primary-key, unique, or foreign-key constraint. Existing rows simply lack the field. - **Edit doc-only fields** anywhere in the schema: `description`, `title`, `$comment`, `deprecated`. These never affect storage or validation, so you can iterate on them freely. ### What you cannot change - **`x-lix-key`.** Renaming creates a new relation; it is not an amendment. - **`additionalProperties`.** Must remain `false`. - **Existing properties.** Type, default, format, nested schema, enum: all frozen. Once a property has shipped, its semantics are permanent. - **`required`.** The required set is frozen. Neither additions nor removals. - **Constraints (`x-lix-primary-key`, `x-lix-unique`, `x-lix-foreign-keys`).** Frozen. You can reorder list elements cosmetically (Lix normalizes the comparison), but you can't add, remove, or modify a constraint. Primary-key column order is semantic and cannot be reordered. - **Top-level keywords** like `type`, `examples`, `patternProperties`. Frozen. - **Nested object schemas.** A property whose `type` is `object` is frozen as a unit: you cannot add subproperties to it. Recursive schema evolution is intentionally a later, explicit feature. - **`x-lix-version`.** Rejected if present on either side. ### What to do when you really need a breaking change Mint a new `x-lix-key`. Ship `acme_section_v2` as a separate schema, write migration code in your plugin to move data from `acme_section` to `acme_section_v2`, and let the two coexist while consumers cut over. Foreign keys pointing at the old key keep working; new ones point at the new key. This is how protobuf, GraphQL, RDF, and OpenAPI all handle hard breaks: the new identity _is_ the version bump, and it cascades through references naturally. 
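For example, a sketch of that cutover through the SDK (the `acme_section_v2` shape and the copy loop are illustrative; Lix provides no built-in migration step):

```ts
// 1. Register the new relation under its own key. Breaking changes live here.
await lix.execute(
  "INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))",
  [
    JSON.stringify({
      $schema: "https://json-schema.org/draft/2020-12/schema",
      "x-lix-key": "acme_section_v2",
      "x-lix-primary-key": ["/id"],
      type: "object",
      required: ["id", "title", "blocks"], // breaking change: body -> blocks
      properties: {
        id: { type: "string" },
        title: { type: "string" },
        blocks: { type: "array", items: { type: "string" } },
      },
      additionalProperties: false,
    }),
  ],
);

// 2. Plugin-level migration at your own pace: copy old rows into the new relation.
const old = await lix.execute("SELECT id, title, body FROM acme_section");
for (const row of old.rows) {
  await lix.execute(
    "INSERT INTO acme_section_v2 (id, title, blocks) VALUES ($1, $2, lix_json($3))",
    // assumption: the array-typed column is JSON-typed and binds via lix_json()
    [row.get("id"), row.get("title"), JSON.stringify([row.get("body")])],
  );
}
```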
## Prefix your schema keys `x-lix-key` is the global identifier for an entity type inside a Lix instance. It's also the SQL table name. Pick a prefix tied to your app, plugin, or organization, and put every schema you own behind it: | Good | Bad | | :--------------------------- | :---------------- | | `acme_task`, `acme_section` | `task`, `section` | | `xlsx_cell`, `xlsx_sheet` | `cell`, `sheet` | | `figma_layer`, `figma_frame` | `layer`, `frame` | Why it matters: a single Lix can hold many files and many schemas at once. App-level entities, file-format plugins (XLSX, DOCX, CAD, …), and Lix's own internal schemas all share the `lix_registered_schema` namespace. An unprefixed `task` collides the moment a second source registers the same name. The `lix_*` prefix is reserved for Lix-internal schemas; don't use it for your own. Treat `x-lix-key` like a package name: lowercase, stable, namespaced. Once data is written, the key is permanent (see the amendment rules above). ## Best practices ### Don't store lifecycle timestamps You don't need `created_at` or `updated_at` on app schemas. Lix already records lifecycle in [`lix_change`](./history.md). Add timestamp fields only when they're domain data, like `due_at` or `published_at`. ### Inspecting registered schemas ```sql SELECT lixcol_entity_id, value FROM lix_registered_schema ORDER BY lixcol_entity_id; ``` ### Design for querying, not for merging Shape your entities the way your reads want them. Document blocks, spreadsheet cells, line items: model whatever's natural for the questions your code asks. Don't shrink rows just to avoid merge conflicts. Lix's conflict detection is row-level today (two versions editing different fields of the same row still conflict), but conflict semantics and resolution are an active roadmap item; designs that bend around today's limitation will look strange once that lands. See the [roadmap](https://github.com/opral/lix#roadmap). If two collaborators are likely to edit the same logical thing concurrently and your domain naturally splits it (a document into blocks, an invoice into line items), split it because the _data_ makes sense that way. Don't split a single record into ten just because a future merge might collide. ================================================ FILE: docs/sql-functions.md ================================================ --- description: Built-in scalar SQL functions provided by the Lix engine. Covers JSON parsing and projection, ID and timestamp generation, text/blob coercion, and the active-version commit id helper used to scope history queries. --- # SQL Functions Lix's DataFusion-backed engine registers a small set of scalar functions for use inside `lix.execute()`. They cover the gaps between standard SQL and Lix's own conventions: parsing JSON parameters, producing IDs and timestamps, coercing between text and bytes, and resolving the active version's commit id for history queries. ## At a glance | Function | Returns | Use for | | :-- | :-- | :-- | | `lix_active_version_commit_id()` | text | Scoping `_history` queries to the active version. | | `lix_json(text)` | JSON | Parse a JSON string parameter into a JSON-typed value. | | `lix_json_get(json, path...)` | JSON | Project a value out of a JSON column, preserving JSON type. | | `lix_json_get_text(json, path...)` | text | Project a value out of a JSON column as plain text. | | `lix_uuid_v7()` | text | Generate a UUIDv7 string. | | `lix_timestamp()` | text | Current ISO-8601 timestamp string. 
| | `lix_text_decode(blob[, encoding])` | text | Decode a `BLOB` to text (default `utf-8`). | | `lix_text_encode(text[, encoding])` | blob | Encode text into a `BLOB` (default `utf-8`). | | `lix_empty_blob()` | blob | Zero-byte `BLOB` literal. | All functions are scalar; call them anywhere a SQL expression is allowed. ## Version & history ### `lix_active_version_commit_id()` Returns the commit id at the tip of the **currently active** version, as resolved when the SQL statement was planned. History surfaces (`lix_state_history`, `_history`, `lix_file_history`, `lix_directory_history`) require a literal or bound-parameter equality on `start_commit_id` (or `lixcol_start_commit_id`). A correlated subquery against `lix_version` is rejected by the planner. `lix_active_version_commit_id()` is the canonical way to scope history to the active version in a single statement: ```sql -- Walk one entity's history from the active version's tip SELECT depth, observed_commit_id, snapshot_content FROM lix_state_history WHERE schema_key = 'task' AND entity_id = 't1' AND start_commit_id = lix_active_version_commit_id() ORDER BY depth; ``` For an arbitrary version, resolve the commit id with one query and pass it as a parameter: ```ts const { rows } = await lix.execute( "SELECT commit_id FROM lix_version WHERE id = $1", [versionId], ); const commitId = rows[0].value("commit_id").asText(); await lix.execute( `SELECT depth, snapshot_content FROM lix_state_history WHERE start_commit_id = $1 AND schema_key = $2 AND entity_id = $3 ORDER BY depth`, [commitId, "task", "t1"], ); ``` ## JSON ### `lix_json(text)` Parses a JSON string into a JSON-typed value. Use this when binding a JSON parameter, since DataFusion otherwise treats the bound value as plain text: ```ts await lix.execute( "INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))", [JSON.stringify(schema)], ); ``` ### `lix_json_get(json, path...)` Returns the value at a JSON path, **preserving JSON type** (objects, arrays, numbers, booleans, strings stay as JSON). Variadic path: pass each segment as a separate argument. ```sql SELECT lix_json_get(snapshot_content, 'tags') FROM lix_state WHERE schema_key = 'task'; -- returns ["urgent","draft"] as JSON ``` ### `lix_json_get_text(json, path...)` Same as `lix_json_get` but returns the value as plain text. Useful for filtering or display: ```sql SELECT entity_id FROM lix_state WHERE schema_key = 'task' AND lix_json_get_text(snapshot_content, 'priority') = 'high'; ``` Both return `NULL` if the path is missing or the underlying value is `null`. ## IDs & time ### `lix_uuid_v7()` Generates a fresh RFC 9562 UUIDv7 string. Useful in `INSERT` defaults and CEL `default` expressions in JSON Schema: ```sql INSERT INTO task (id, title, done) VALUES (lix_uuid_v7(), 'New task', false); ``` ### `lix_timestamp()` Returns the current time as an ISO-8601 string. ```sql INSERT INTO event (id, occurred_at) VALUES (lix_uuid_v7(), lix_timestamp()); ``` ## Text & bytes ### `lix_text_decode(blob[, encoding])` Decodes a `BLOB` to text. The optional second argument is the encoding name (`"utf-8"` is the default and currently the only supported encoding): ```sql SELECT lix_text_decode(data) FROM lix_file WHERE path = '/notes/readme.md'; ``` ### `lix_text_encode(text[, encoding])` Inverse of `lix_text_decode`. Encodes text into a `BLOB`: ```sql INSERT INTO lix_file (id, path, data) VALUES (lix_uuid_v7(), '/notes/hello.txt', lix_text_encode('hello world')); ``` ### `lix_empty_blob()` Returns a zero-length `BLOB`. 
Handy for creating an empty file: ```sql INSERT INTO lix_file (id, path, data) VALUES (lix_uuid_v7(), '/empty.bin', lix_empty_blob()); ``` ## Notes - Functions are pure scalars; they do not consume rows or take aggregates. - Bound parameters use `$1`, `$2`, … (not `?`); see [API Reference](./api-reference.md#executesql-params). - `lix_active_version_commit_id()`, `lix_uuid_v7()`, and `lix_timestamp()` reflect the engine's current view at planning/execution time and are stable across the rows of a single statement. ================================================ FILE: docs/surfaces.md ================================================ --- description: The SQL surfaces in Lix at a glance. State surfaces are JSON-shaped and schema-agnostic; per-entity, file, and directory surfaces are typed sugar over the same data. One grid, eleven tables. --- # SQL Surfaces Lix exposes the same underlying state through several SQL surfaces so you can query it the way that fits the question you're asking. Two ergonomic axes: - **Grain.** Typed columns for one schema vs. raw JSON across all schemas vs. file bytes. - **Scope.** The active version, all versions side-by-side, or history walked through commits. A third surface, `lix_change`, sits outside the grid as the immutable global change journal: every write across every schema and every version, ordered by `created_at`. ## The grid | | Active (current state) | Cross-version (side-by-side) | History (time-travel) | | :--------------------------- | :----------------------------- | :---------------------------------------- | :------------------------------------- | | **Per-entity, typed** | `<x-lix-key>` | `<x-lix-key>_by_version` | `<x-lix-key>_history` | | **State, raw JSON, all schemas** | `lix_state` | `lix_state_by_version` | `lix_state_history` | | **Files (bytes)** | `lix_file` | `lix_file_by_version` | `lix_file_history` | | **Directories** | `lix_directory` | `lix_directory_by_version` | `lix_directory_history` | Plus: `lix_change`, the global change journal (no version filter). Pick the row by what you're querying; pick the column by which version(s) and which time. Same data underneath, different ergonomics. ## State surfaces Schema-agnostic, JSON-shaped reads across every registered schema. | Surface | Use for | | :-- | :-- | | `lix_state` | Current state of every entity in the active version. | | `lix_state_by_version` | Same, but with a `version_id` column so you can read across versions. | | `lix_state_history` | State walked back through the commit graph from a given commit. | Common columns (`lix_state` and `lix_state_by_version`): `entity_id`, `schema_key`, `file_id`, `snapshot_content` (JSON), `metadata` (JSON), `schema_version`, `change_id`, `commit_id`. `lix_state_by_version` adds `version_id`. `lix_state_history` shares `entity_id`, `schema_key`, `file_id`, `snapshot_content`, `metadata`, `schema_version`, `change_id`, and instead of `commit_id` exposes `start_commit_id`, `observed_commit_id`, `commit_created_at`, and `depth` (commit-graph distance from `start_commit_id`; `0` is the freshest observation, higher values walk back, and intermediate commits that didn't touch the entity are skipped). > **History queries require a literal filter on `start_commit_id`.** A correlated subquery against `lix_version` is rejected by the planner. Use `lix_active_version_commit_id()` for the active version, or resolve the commit id with one query and pass it as a parameter. See [`lix_active_version_commit_id()`](./sql-functions.md#lix_active_version_commit_id).
```sql -- Every entity in the active version, raw JSON SELECT entity_id, schema_key, snapshot_content FROM lix_state; -- Same entity in two versions, side by side SELECT version_id, snapshot_content FROM lix_state_by_version WHERE schema_key = 'task' AND entity_id = 't1' AND version_id IN ($a, $b); -- Walk history of one entity from a version's tip SELECT depth, observed_commit_id, snapshot_content FROM lix_state_history WHERE schema_key = 'task' AND entity_id = 't1' AND start_commit_id = lix_active_version_commit_id() ORDER BY depth; ``` ## Per-entity sugar For each registered schema `X`, Lix generates three typed surfaces named after `x-lix-key`: | Surface | Use for | | :-- | :-- | | `X` | `INSERT` / `SELECT` / `UPDATE` / `DELETE` against the active version with typed columns. | | `X_by_version` | Read or write across versions; INSERTs require `lixcol_version_id`. | | `X_history` | Time-travel through one schema's history with typed columns. | Per-entity surfaces project user columns directly (`id`, `title`, `done`, …) plus `lixcol_*`-prefixed system columns. The set varies by scope: - `X` (active): `lixcol_change_id`, `lixcol_commit_id`, `lixcol_created_at`, `lixcol_updated_at`, plus bookkeeping. **No `lixcol_version_id`**; the active surface is implicitly the active version. - `X_by_version`: adds `lixcol_version_id`. INSERT/UPDATE require it. - `X_history`: `lixcol_start_commit_id`, `lixcol_observed_commit_id`, `lixcol_depth`, `lixcol_snapshot_content`, `lixcol_change_id` (no `lixcol_commit_id` here; commits in history are addressed via `lixcol_observed_commit_id`). Note the prefix asymmetry between grains: state surfaces use **bare** column names (`start_commit_id`, `depth`, `observed_commit_id`); per-entity, file, and directory surfaces wear `lixcol_` on the same columns. ```sql -- Current rows of one schema, typed columns SELECT id, title, done FROM task; -- Compare one entity across two versions, typed SELECT lixcol_version_id, title, done FROM task_by_version WHERE id = 't1' AND lixcol_version_id IN ($a, $b); -- History of one entity, typed SELECT lixcol_depth, title, done FROM task_history WHERE id = 't1' AND lixcol_start_commit_id = lix_active_version_commit_id() ORDER BY lixcol_depth; ``` When you need the typed columns, reach for the per-entity sugar. When you're querying across schemas, drop down to `lix_state*`. Same data either way. ## Files `lix_file` versions byte content alongside path metadata. Each file gets the same three views as a registered schema, plus a `data BLOB` column for bytes. | Surface | Use for | | :-- | :-- | | `lix_file` | Current files in the active version. Read bytes via `data`. | | `lix_file_by_version` | Read or write files across versions. | | `lix_file_history` | Walk previous versions of a file's bytes through the commit graph. | User columns: `id`, `path`, `directory_id`, `name`, `hidden`, `data`. System columns are `lixcol_*` (`lixcol_version_id` on `_by_version`; `lixcol_start_commit_id`, `lixcol_depth`, `lixcol_observed_commit_id` on `_history`).
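From TypeScript, a minimal write-and-read-back sketch, assuming an open `lix` instance (`lix_uuid_v7`, `lix_text_encode`, and `lix_text_decode` are covered in the SQL functions reference):

```ts
// Write a small text file; missing parent directories are auto-created.
await lix.execute(
  `INSERT INTO lix_file (id, path, data)
   VALUES (lix_uuid_v7(), '/notes/hello.txt', lix_text_encode($1))`,
  ["hello world"],
);

// Read the raw bytes back...
const { rows } = await lix.execute(
  "SELECT data FROM lix_file WHERE path = $1",
  ["/notes/hello.txt"],
);
const bytes = rows[0].value("data").asBlob();

// ...or let SQL decode them to text.
const { rows: decoded } = await lix.execute(
  "SELECT lix_text_decode(data) AS content FROM lix_file WHERE path = $1",
  ["/notes/hello.txt"],
);
const text = decoded[0].value("content").asText();
```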
```sql -- Current bytes of a file SELECT data FROM lix_file WHERE path = '/orders.xlsx'; -- Bytes of the same file in two versions SELECT lixcol_version_id, data FROM lix_file_by_version WHERE path = '/orders.xlsx' AND lixcol_version_id IN ($a, $b); -- Every previous version of a file's bytes SELECT lixcol_depth, lixcol_observed_commit_id, data FROM lix_file_history WHERE path = '/orders.xlsx' AND lixcol_start_commit_id = lix_active_version_commit_id() ORDER BY lixcol_depth; ``` Read `data` with `row.value("data").asBlob()`. ## Directories Same shape as files, minus the `data` column. | Surface | Use for | | :-- | :-- | | `lix_directory` | Current directories in the active version. | | `lix_directory_by_version` | Cross-version directory reads/writes. | | `lix_directory_history` | Directory history walked through commits. | User columns: `id`, `path`, `parent_id`, `name`, `hidden`. Same `lixcol_*` system columns as files. Directory paths must end with a trailing slash (`/data/`, not `/data`). Inserting a `lix_file` at `/a/b/c.txt` auto-creates `lix_directory` rows for `/a/` and `/a/b/` if they don't already exist; you only need to insert directories explicitly when you want them to exist before any file does. ```sql -- List children of a directory SELECT name, path FROM lix_directory WHERE parent_id = ( SELECT id FROM lix_directory WHERE path = '/data/' ); ``` ## `lix_change`: the global journal Outside the grid because it isn't scoped to a version: every write across every schema, every version, every file, in commit order. Columns: `id`, `entity_id`, `schema_key`, `schema_version`, `file_id`, `metadata`, `snapshot_content`, `created_at`. Use `lix_change` for cross-cutting questions where neither version nor schema scopes the answer: ```sql -- Last 20 application-level changes across the entire repo SELECT created_at, schema_key, entity_id, snapshot_content FROM lix_change WHERE schema_key NOT LIKE 'lix_%' ORDER BY created_at DESC LIMIT 20; ``` Without the `schema_key NOT LIKE 'lix_%'` filter the feed is dominated by Lix's own bookkeeping entities (`lix_commit`, `lix_binary_blob_ref`, `lix_file_descriptor`). Per-version history goes through the commit graph, not `lix_change` directly. See [Change History](./history.md). ## Naming conventions | Surface family | System column prefix | Version column | | :-- | :-- | :-- | | `lix_state*` | bare (no prefix) | `version_id` | | `X*`, `lix_file*`, `lix_directory*` | `lixcol_*` | `lixcol_version_id` | | `lix_change` | bare | (none, global) | State surfaces are projection-friendly raw views. Per-entity, file, and directory surfaces wear `lixcol_*` to keep your user columns (`id`, `title`, `path`, …) cleanly separated from Lix bookkeeping. ## Composition recap - One row in **`lix_change`** per write, ever. Global, version-blind, immutable. - **State surfaces** (`lix_state*`) project that journal as JSON snapshots, scoped by version (`_by_version`) or walked through commits (`_history`). - **Per-entity surfaces** (`X*`) and **file/directory surfaces** are typed projections of the same state, with user columns extracted into native SQL types. Reach for typed surfaces when you know the schema. Drop to `lix_state*` for cross-schema reads. Drop to `lix_change` for raw activity feeds. ================================================ FILE: docs/table_of_contents.json ================================================ { "Overview": [ { "path": "./what-is-lix.md", "label": "What is Lix?"
}, { "path": "./getting-started.md", "label": "Getting Started" }, { "path": "./lix-for-ai-agents.md", "label": "Lix for AI Agents" }, { "path": "./comparison-to-git.md", "label": "Comparison to Git" } ], "Concepts": [ { "path": "./schemas.md", "label": "Schemas" }, { "path": "./versions.md", "label": "Versions & Merging" }, { "path": "./history.md", "label": "Change History" }, { "path": "./surfaces.md", "label": "SQL Surfaces" } ], "Guides": [ { "path": "./persistence.md", "label": "Persistence" }, { "path": "./backend.md", "label": "Backends" } ], "Reference": [ { "path": "./api-reference.md", "label": "API Reference" }, { "path": "./sql-functions.md", "label": "SQL Functions" } ] } ================================================ FILE: docs/versions.md ================================================ --- description: Versions are isolated lines of state. Create them, switch into them, read across them with _by_version tables, and merge with conflict-aware preview. --- # Versions & Merging A **version** in Lix is what Git calls a branch: an isolated line of state that can diverge from main and be merged back. Lix uses "version" because product UIs don't say "branch." ## Create and switch ```ts const main = await lix.activeVersionId(); const draft = await lix.createVersion({ name: "Marketing edit" }); await lix.switchVersion({ versionId: draft.id }); // writes here are isolated to `draft` await lix.execute( "UPDATE acme_section SET title = $1 WHERE id = $2", ["Sharper launch copy", "s1"], ); await lix.switchVersion({ versionId: main }); ``` `createVersion()` returns `{ id, name, hidden }`. `switchVersion()` is per-Lix-instance state; it changes which version subsequent SQL goes against. Use names that match your callers' vocabulary. For an end-user product that's domain language: `"Marketing edit"`, `"Q3 pricing draft"`. For a CLI or infrastructure tool, developer terms like `"feature/x"` or `"staging"` are fine; Lix doesn't prescribe. ## Side-by-side reads with `_by_version` Every registered schema `X` gets a sibling table `X_by_version` with a `lixcol_version_id` column. (Files and directories have the same shape: `lix_file_by_version`, `lix_directory_by_version`. For the full surface map see [SQL Surfaces](./surfaces.md).) Use it to read or write across versions without switching: ```ts const sideBySide = await lix.execute( `SELECT v.name, s.title FROM acme_section_by_version s JOIN lix_version v ON v.id = s.lixcol_version_id WHERE s.id = $1 AND s.lixcol_version_id IN ($2, $3) ORDER BY v.name`, ["s1", main, draft.id], ); ``` Rules for `_by_version`: - `SELECT`: filter by `lixcol_version_id`, or omit the filter to scan all versions. - `INSERT`: must include `lixcol_version_id`. - `UPDATE` / `DELETE`: must include `lixcol_version_id` in the `WHERE` clause. - The plain (non-suffixed) table is the active-version view. Prefer `_by_version` for review UIs, sync, and any side-by-side rendering; it avoids the cost and risk of switching the active version. ## Preview a merge `mergeVersionPreview()` reports the same merge decision as `mergeVersion()` without touching state. 
```ts const preview = await lix.mergeVersionPreview({ sourceVersionId: draft.id }); // preview shape: // { // outcome: "alreadyUpToDate" | "fastForward" | "mergeCommitted", // targetVersionId, sourceVersionId, // baseCommitId, targetHeadCommitId, sourceHeadCommitId, // changeStats: { total, added, modified, removed }, // conflicts: MergeConflict[], // } ``` Outcomes: - `alreadyUpToDate`: source has no commits the target lacks. - `fastForward`: target advances to source without a merge commit. - `mergeCommitted`: a new merge commit will be created. `mergeVersion()` always merges into the **active** version. If you want a different target, switch to it first. ## Conflicts If both versions modified the same entity since their merge base, `mergeVersionPreview()` returns them in `conflicts`, and `mergeVersion()` throws a `LixError`. Each conflict has the shape: ```ts { kind: "sameEntityChanged", schemaKey: "acme_section", entityId: "s1", fileId: null, target: { kind: "added" | "modified" | "removed", beforeChangeId, afterChangeId }, source: { kind: "added" | "modified" | "removed", beforeChangeId, afterChangeId }, } ``` Conflict detection is row-level today, not field-level: two versions editing different fields of the same row still conflict. Conflict semantics and resolution are an active roadmap item (see [Roadmap](https://github.com/opral/lix#roadmap)). **Don't reshape your schemas to avoid this**; design entities around how your code reads them, not around today's merge granularity. Always wrap `mergeVersion()` when conflicts are possible: ```ts try { const result = await lix.mergeVersion({ sourceVersionId: draft.id }); console.log(result.outcome, result.changeStats.total); } catch (error) { // resolve conflicts in calling code, then retry } ``` ## Don't shape entities around merge It's tempting to split rows finely to dodge the row-level conflict rule. **Don't.** Schema design should follow how your code reads, writes, and joins data, not how today's merge engine resolves conflicts. Conflict semantics will improve; data models that work today should still work then. If a domain naturally splits (a document into blocks, an invoice into line items, a translation set into per-key messages), split it because the *reads* want it that way. If the natural shape is one row with several fields, write it that way and handle conflicts in calling code when they happen. See [Schemas](./schemas.md#design-for-querying-not-for-merging). ## Hiding and deleting versions `lix_version` is a writable system table. Hide a version from the active set without deleting it: ```ts await lix.execute("UPDATE lix_version SET hidden = true WHERE id = $1", [draft.id]); ``` Delete a version with SQL: ```ts await lix.execute("DELETE FROM lix_version WHERE id = $1", [draft.id]); ``` The engine refuses to delete the global version or the active version. ================================================ FILE: docs/what-is-lix.md ================================================ --- description: Lix is an embeddable version control system for files of any format. Diffs are semantic and per entity (which cells changed in a spreadsheet, which clauses moved in a contract), exposed as SQL, all in-process. --- # What is Lix? Lix is an **embeddable version control system for files of any format**. A spreadsheet diff tells you which cells changed. A contract diff tells you which clauses moved. A CAD diff tells you which parts changed. 
Lix diffs files **semantically, per entity**, across DOCX, XLSX, CAD, PDF, JSON, and any format with a parser plugin. Branches, merge, and an immutable change history, exposed as SQL, all running in-process inside your program. > Lix is to version control what DuckDB is to analytics: an embeddable engine with pluggable support for file formats. [See what a semantic diff looks like →](./comparison-to-git.md#what-this-looks-like) ```ts import { openLix } from "@lix-js/sdk"; const lix = await openLix(); // every change is journaled into lix_change, queryable as SQL ``` ## How it works Each file format is parsed into **entities**: cells in a spreadsheet, clauses in a document, parts in a CAD drawing. Lix versions those entities. Per-row branch, merge, and history fall out for free. **Status:** the entity foundation ships today. Register a JSON Schema, write rows through SQL, version structured data end-to-end. A plugin API for file formats is on the [roadmap](https://github.com/opral/lix#roadmap); once it lands, anyone can author a plugin that turns a format (XLSX, DOCX, CAD, PDF, anything else) into entities, and the same primitives apply. ## Three shapes The same `openLix()` powers three different shapes: **A library inside an end-user product.** Lawyers redlining a contract, analysts iterating on a forecast, engineers updating a BOM, designers exploring a layout: give them Git-like drafts, review, and rollback inside your product UI, no terminal in sight. **A library inside an AI agent platform.** Every agent task gets an isolated workspace; humans or policies review the diff and merge or discard. See [Lix for AI Agents](./lix-for-ai-agents.md). **The engine of an infrastructure product.** Build a versioned filesystem, an artifact or model registry, a configuration service, a Git-style branchable database, or a domain-specific CLI. Lix is the version-control core; you ship the surface. ## Why embed it Git's diff model is line-based on text, so it doesn't surface meaningful changes for binary or structured files (DOCX, XLSX, CAD). Git is also CLI-driven and operates outside your process, which makes it awkward for runtime data, programmatic edits, or end-user workflows that aren't a developer at a terminal. Lix is the opposite shape: - A **library** you import; call it from an app, a service, a CLI, or another database engine. - **Pluggable storage.** Run in-memory, persist to a `.lix` SQLite file, or implement the [backend interface](./backend.md) to put Lix on Postgres, S3, Cloudflare, IndexedDB, OPFS, or anything transactional and key-value-shaped. - **SQL** as the query interface, for application code, AI agents, and tools. - **ACID** transactions across files and entities. No daemon, no protocol, no remote. ## The change-first model Lix stores changes as data, not snapshots. One immutable journal across every entity, every version: ```sql -- What does this version see right now? SELECT entity_id, schema_key, snapshot_content FROM lix_state_history WHERE start_commit_id = lix_active_version_commit_id() AND depth = 0 ORDER BY schema_key, entity_id; ``` Whether the entity is a spreadsheet cell, a document clause, a CAD part, or an application row, the surface is the same. Diffs, undo, audit, blame, and attribution are all SQL. See [Change History](./history.md). 
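An audit trail, for example, is one query against the change journal. A sketch, assuming a registered `task` schema and an open `lix` instance:

```ts
// Every recorded change to one entity, newest first, straight from lix_change.
const { rows } = await lix.execute(
  `SELECT created_at, snapshot_content
     FROM lix_change
    WHERE schema_key = $1 AND entity_id = $2
    ORDER BY created_at DESC`,
  ["task", "t1"],
);
```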
## Examples of what Lix versions Once the plugin API lands and people start writing plugins: - DOCX contracts, with clause-level diffs and redlines - XLSX models, with cell-level history and conflict-aware merges - CAD drawings, with per-part revision tracking - PDFs and any other format behind a parser plugin Available today through the entity foundation: - Application state: tasks, line items, translations, CMS sections, model metadata, config keys - Anything you can describe with a JSON Schema ## Next - [Getting Started](./getting-started.md): install, register a schema, branch, merge. - [Comparison to Git](./comparison-to-git.md): when to reach for which. - [Lix for AI Agents](./lix-for-ai-agents.md): one shape, in depth. - [Schemas](./schemas.md), [Versions & Merging](./versions.md), [Change History](./history.md), [Persistence](./persistence.md), [API Reference](./api-reference.md). ================================================ FILE: nx.json ================================================ { "$schema": "./node_modules/nx/schemas/nx-schema.json", "tui": { "autoExit": true }, "namedInputs": { "default": ["{projectRoot}/**/*"], "publicEnv": [ { "runtime": "env | grep ^PUBLIC_" } ], "nodeVersion": [ { "runtime": "node --version" } ], "platform": [ { "runtime": " node -e 'console.log(process.platform)'" } ] }, "targetDefaults": { "production": { "dependsOn": ["^build"], "inputs": ["default", "^default", "publicEnv", "nodeVersion", "platform"] }, "build": { "dependsOn": ["^build"], "inputs": ["default", "^default", "publicEnv", "nodeVersion", "platform"], "cache": true }, "dev": { "dependsOn": ["^build"] }, "test": { "dependsOn": ["^build", "publicEnv", "nodeVersion", "platform"], "cache": true }, "lint": { "dependsOn": ["format"], "cache": true }, "format": { "cache": true } }, "useDaemonProcess": false, "__commentToken": "The token is supposed to be public", "nxCloudAccessToken": "ZjA2NzJhZGQtMTQ0NS00ODVlLTlmNzktYmQ5MWYwYTZmODhlfHJlYWQtd3JpdGU=" } ================================================ FILE: optimization_log6_crud.md ================================================ # Optimization Log 6: JSON Pointer CRUD Goal: make typed-table JSON pointer CRUD fast enough that Lix behaves like a normal embedded CRUD database for this workload. Target workload: ```text table: json_pointer columns: path TEXT primary-key shape, value JSON fixture: packages/engine/benches/fixtures/pnpm-lock.fixture.json rows: 1000 smoke rows from all JSON nodes, including containers query surface: INSERT INTO json_pointer (path, value) SELECT path, value FROM json_pointer SELECT path, value FROM json_pointer WHERE path = ? UPDATE json_pointer SET value = ... DELETE FROM json_pointer ``` No `lix_file` row is required for this scorecard. This is intentionally the plain CRUD path through a registered typed schema. ## Success Criteria Speed: ```text Lix with SQLite backend: <= 2.0x raw SQLite median Lix with RocksDB backend: <= 1.8x raw SQLite median ``` Storage: ```text Lix with SQLite backend: <= 2.0x raw SQLite bytes on disk Lix with RocksDB backend: <= 2.0x raw SQLite bytes on disk ``` The raw SQLite baseline uses the same fixture rows and an equivalent `json_pointer(path TEXT PRIMARY KEY, value TEXT) WITHOUT ROWID` table in a temp file. 
## Baseline Commands: ```sh cargo bench -p lix_engine --bench json_pointer_crud --features storage-benches cargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1 ``` Current smoke speed baseline: | operation | raw SQLite median | Lix SQLite target | Lix SQLite current | status | Lix RocksDB target | Lix RocksDB current | status | | ----------------------- | ----------------: | ----------------: | -----------------: | ------ | -----------------: | ------------------: | ------ | | `insert_all_nodes` | 3.753 ms | <= 7.506 ms | 374.32 ms | fail | <= 6.755 ms | 319.73 ms | fail | | `select_all_path_value` | 1.197 ms | <= 2.394 ms | 14.95 ms | fail | <= 2.155 ms | 11.51 ms | fail | | `select_by_pk_path` | 1.517 ms | <= 3.034 ms | 3.51 s | fail | <= 2.731 ms | 3.34 s | fail | | `update_all_values` | 1.414 ms | <= 2.828 ms | 35.59 ms | fail | <= 2.545 ms | 24.18 ms | fail | | `delete_all_nodes` | 1.178 ms | <= 2.356 ms | 2.64 s | fail | <= 2.120 ms | 1.83 s | fail | Current 1000-row storage baseline: | backend | bytes on disk | target | status | | ----------- | ------------: | -----------: | --------- | | raw SQLite | 1,692,456 | reference | reference | | Lix SQLite | 1,075,136 | <= 3,384,912 | pass | | Lix RocksDB | 993,888 | <= 3,384,912 | pass | Baseline interpretation: ```text Storage already passes comfortably for both Lix backends. CRUD speed does not pass any target yet. The loudest bottlenecks are repeated primary-key lookups and bulk delete, both measured in seconds for 1000 rows. Insert is also far outside the target, while full scan and bulk update are closer but still roughly 10-25x over the raw SQLite reference. ``` ## Optimization Order Work the scorecard in this order: 1. `select_by_pk_path` 2. `delete_all_nodes` 3. `insert_all_nodes` 4. `update_all_values` 5. `select_all_path_value` Rationale: ```text Primary-key reads reveal per-query planning/provider overhead. Bulk delete reveals write/delete transaction machinery. Insert is the main mutation hot path. Update and full scan are still failing, but their current numbers are closer to the target than PK reads and delete. ``` ## Entry Template Use one entry per kept optimization. ```text ## Optimization N: Commit: or uncommitted on Target operation: insert_all_nodes | select_all_path_value | select_by_pk_path | update_all_values | delete_all_nodes | storage Change: What changed? Why should this reduce CRUD overhead? What invariant is preserved? Results: Include raw SQLite, Lix SQLite, and Lix RocksDB rows for every impacted CRUD operation. Include 1000-row storage if the change can affect bytes on disk. Verification: Exact commands run. ``` ================================================ FILE: optimization_log7.md ================================================ # Optimization Log 7: Physical Layout for CRUD + Branch/Merge Goal: find the optimal physical storage layout for Lix's core tracked-state workflow as quickly as possible. This log uses JSON-pointer shaped data as the shared workload because it looks like real `plugin-json-v2` output: many small entities keyed by JSON pointer, including container nodes and leaves. ## Core Workflow The layout must prove itself across the operations Lix users actually compose: ```text CRUD: INSERT INTO json_pointer (path, value) SELECT path, value FROM json_pointer SELECT path, value FROM json_pointer WHERE path = ? UPDATE json_pointer SET value = ... 
DELETE FROM json_pointer Branching: create_version over an existing tracked state Merge / diff: merge_version after source-only edits merge_version after divergent target/source edits Storage: bytes on disk after insert bytes on disk after create_version bytes on disk after fast-forward merge bytes on disk after divergent merge ``` The purpose is not to win a single CRUD microbenchmark. The purpose is to learn which physical layout lets Lix cheaply answer the three core tracked-state questions: ```text What exists at this version? What changed between these versions? What is the current value for these exact entity identities? ``` ## Fixture ```text fixture: packages/engine/benches/fixtures/pnpm-lock.fixture.json source: checked-in JSON conversion of the repo pnpm-lock.yaml rows: all JSON nodes flattened to json_pointer rows smoke: first 1000 rows scale: first 10000 rows table: json_pointer identity: path value: JSON node value file_id: NULL ``` The fixture intentionally does not require a real `lix_file` row. The benchmark registers the `plugin-json-v2` `json_pointer` schema and treats Lix as the normal typed-table CRUD and versioned-state database. ## Scorecard Speed is measured for both backends: ```text Lix with SQLite backend Lix with RocksDB backend ``` Raw SQLite remains a reference for simple CRUD machine limits, but it is not the goal. Large gaps must be explained by Lix semantics or by an intentional layout tradeoff. Gaps caused by accidental scans, repeated delta decoding, unbatched point reads, or avoidable write amplification are optimization targets. Storage is measured on disk for the same 1000-row fixture and workflow stages. The initial guardrail is that Lix should stay compact while adding branching and merge metadata; storage growth should be structural and explainable. 
## Current Benchmark Surface Command: ```sh cargo bench -p lix_engine --bench json_pointer_crud --features storage-benches ``` Benchmark groups: ```text json_pointer_crud/raw_sqlite/baseline json_pointer_crud/raw_sqlite/smoke json_pointer_crud/raw_sqlite/scale json_pointer_crud/raw_storage_sqlite/baseline json_pointer_crud/raw_storage_sqlite/smoke json_pointer_crud/raw_storage_sqlite/scale json_pointer_crud/raw_storage_rocksdb/baseline json_pointer_crud/raw_storage_rocksdb/smoke json_pointer_crud/raw_storage_rocksdb/scale json_pointer_crud/lix_sqlite/baseline json_pointer_crud/lix_sqlite/smoke json_pointer_crud/lix_sqlite/scale json_pointer_crud/lix_rocksdb/baseline json_pointer_crud/lix_rocksdb/smoke json_pointer_crud/lix_rocksdb/scale ``` Raw Storage API timings: ```text write_root_all_rows/{100,1k,10k} get_many_exact_keys/{100,1k,10k} get_many_missing_keys/{100,1k,10k} exists_many_exact_keys/{100,1k,10k} scan_keys_only/{100,1k,10k} scan_headers_only/{100,1k,10k} scan_full_rows/{100,1k,10k} prefix_scan_schema/{100,1k,10k} prefix_scan_schema_file_null/{100,1k,10k} write_delta_10pct_updates/{100,1k,10k} write_tombstone_10pct_deletes/{100,1k,10k} changed_keys_update_10pct/{100,1k,10k} changed_keys_delta_chain_10x1pct/{100,1k,10k} materialize_delta_chain_10x1pct/{100,1k,10k} ``` E2E workflow timings: ```text insert_all_rows/{100,1k,10k} select_all_path_value/{100,1k,10k} select_one_by_pk/{100,1k,10k} update_all_values/{100,1k,10k} update_one_by_pk/{100,1k,10k} delete_all_rows/{100,1k,10k} delete_one_by_pk/{100,1k,10k} create_version/{100,1k,10k} merge_version_fast_forward_10pct_updates/{100,1k,10k} merge_version_divergent_10pct_updates/{100,1k,10k} ``` Storage command: ```sh cargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1 ``` Storage rows: ```text raw SQLite inserted Lix SQLite / inserted Lix SQLite / after create_version Lix SQLite / after fast-forward merge Lix SQLite / after divergent merge Lix RocksDB / inserted Lix RocksDB / after create_version Lix RocksDB / after fast-forward merge Lix RocksDB / after divergent merge ``` ## First Optimization Axis Optimize exact-key and changed-key access through live/tracked state. Rationale: ```text CRUD insert needs committed identity checks. SELECT ... WHERE path = ? needs exact-key lookup. UPDATE and DELETE need current-row lookup by identity. create_version should stay bounded over large tracked states. merge_version needs changed-key discovery, not full-state hydration. ``` The latest insert profile showed the hot path dominated by validation loading committed identity rows through scan/delta materialization: ```text validate_prepared_writes -> load_committed_constraint_row -> scan_committed_constraint_rows -> TrackedStateStoreReader::scan_rows_at_commit -> delta_commit_ids_since_projection_root -> load_delta_pack -> decode_delta_pack ``` That makes the first physical-layout question concrete: ```text Can the storage layout and reader APIs answer batched exact-key lookups and changed-key queries without broad scans or repeated delta-pack decoding? 
``` ## Baseline: 2026-05-10 Commands: ```sh cargo bench -p lix_engine --bench json_pointer_crud --features storage-benches -- 'json_pointer_crud/raw_storage_sqlite/baseline|json_pointer_crud/raw_storage_rocksdb/baseline|json_pointer_crud/lix_sqlite/baseline|json_pointer_crud/lix_rocksdb/baseline' cargo bench -p lix_engine --bench json_pointer_crud --features storage-benches -- 'json_pointer_crud/raw_sqlite/baseline|json_pointer_crud/raw_sqlite/smoke|json_pointer_crud/raw_storage_sqlite/smoke|json_pointer_crud/raw_storage_rocksdb/smoke|json_pointer_crud/lix_sqlite/smoke|json_pointer_crud/lix_rocksdb/smoke' cargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1 ``` Raw Storage API scoreboard: | operation | SQLite 100 | SQLite 1k | SQLite x | RocksDB 100 | RocksDB 1k | RocksDB x | | ---------------------------------- | ---------: | --------: | -------: | ----------: | ---------: | --------: | | `write_root_all_rows` | 3.1846 ms | 8.1773 ms | 2.57x | 3.8257 ms | 6.1633 ms | 1.61x | | `get_many_exact_keys` | 1.5000 ms | 4.6368 ms | 3.09x | 1.1740 ms | 3.5542 ms | 3.03x | | `get_many_missing_keys` | 0.8035 ms | 2.5875 ms | 3.22x | 0.7961 ms | 1.5991 ms | 2.01x | | `exists_many_exact_keys` | 1.4055 ms | 6.0771 ms | 4.32x | 1.2785 ms | 4.0307 ms | 3.15x | | `scan_keys_only` | 0.8837 ms | 3.9795 ms | 4.50x | 0.6067 ms | 2.0074 ms | 3.31x | | `scan_headers_only` | 0.9404 ms | 3.3325 ms | 3.54x | 0.6108 ms | 2.0698 ms | 3.39x | | `scan_full_rows` | 1.4135 ms | 5.9223 ms | 4.19x | 1.1202 ms | 3.3596 ms | 3.00x | | `prefix_scan_schema` | 1.3817 ms | 4.9885 ms | 3.61x | 1.0670 ms | 3.3352 ms | 3.13x | | `prefix_scan_schema_file_null` | 1.4228 ms | 4.6936 ms | 3.30x | 1.0670 ms | 3.6555 ms | 3.43x | | `write_delta_10pct_updates` | 0.9485 ms | 3.1167 ms | 3.29x | 0.5760 ms | 1.5234 ms | 2.64x | | `write_tombstone_10pct_deletes` | 0.9206 ms | 2.8465 ms | 3.09x | 0.5681 ms | 1.4518 ms | 2.55x | | `changed_keys_update_10pct` | 2.6219 ms | 72.513 ms | 27.66x | 2.1347 ms | 68.299 ms | 32.00x | | `changed_keys_delta_chain_10x1pct` | 1.5310 ms | 10.956 ms | 7.16x | 1.2778 ms | 8.9643 ms | 7.02x | | `materialize_delta_chain_10x1pct` | 1.1405 ms | 5.7006 ms | 5.00x | 0.9339 ms | 3.1625 ms | 3.39x | `exists_many_exact_keys` currently uses the tracked-state row-loading path as the semantic equivalent. It is a named scoreboard slot for a future lighter exists-only primitive. 
E2E workflow scoreboard: | axis | operation | raw SQLite 100 | raw SQLite 1k | raw x | Lix SQLite 100 | Lix SQLite 1k | Lix SQLite x | Lix RocksDB 100 | Lix RocksDB 1k | Lix RocksDB x | | ------------ | ------------------------------------------ | -------------: | ------------: | ----: | -------------: | ------------: | -----------: | --------------: | -------------: | ------------: | | CRUD | `insert_all_rows` | 1.4715 ms | 2.5578 ms | 1.74x | 21.690 ms | 382.34 ms | 17.63x | 19.807 ms | 317.34 ms | 16.02x | | CRUD | `select_all_path_value` | 0.8791 ms | 1.2311 ms | 1.40x | 5.8882 ms | 13.336 ms | 2.26x | 5.5689 ms | 11.019 ms | 1.98x | | CRUD | `select_one_by_pk` | 0.8001 ms | 1.1339 ms | 1.42x | 2.0720 ms | 6.1576 ms | 2.97x | 2.0085 ms | 3.8542 ms | 1.92x | | CRUD | `update_all_values` | 0.8417 ms | 1.4807 ms | 1.76x | 9.2526 ms | 30.266 ms | 3.27x | 8.2602 ms | 22.054 ms | 2.67x | | CRUD | `update_one_by_pk` | 0.8527 ms | 1.2591 ms | 1.48x | 4.4169 ms | 10.040 ms | 2.27x | 3.6020 ms | 7.3052 ms | 2.03x | | CRUD | `delete_all_rows` | 0.9204 ms | 1.2384 ms | 1.35x | 40.927 ms | 2.4630 s | 60.18x | 38.043 ms | 1.7949 s | 47.18x | | CRUD | `delete_one_by_pk` | 0.8174 ms | 1.2215 ms | 1.49x | 5.6983 ms | 12.400 ms | 2.18x | 4.3247 ms | 8.9218 ms | 2.06x | | Branch | `create_version` | n/a | n/a | n/a | 4.0152 ms | 8.0948 ms | 2.02x | 3.8455 ms | 6.1184 ms | 1.59x | | Merge / diff | `merge_version_fast_forward_10pct_updates` | n/a | n/a | n/a | 45.680 ms | 995.44 ms | 21.79x | 44.270 ms | 900.68 ms | 20.35x | | Merge / diff | `merge_version_divergent_10pct_updates` | n/a | n/a | n/a | 77.602 ms | 2.0777 s | 26.77x | 81.869 ms | 1.9656 s | 24.01x | `raw SQLite reference` applies only to plain CRUD over the equivalent `json_pointer(path TEXT PRIMARY KEY, value TEXT) WITHOUT ROWID` table. Branch and merge are Lix semantic operations, so they have no raw SQLite equivalent in this table. Storage scoreboard: | backend / workflow | 100 bytes | 100 bytes/row | 1k bytes | 1k bytes/row | bytes x | | -------------------------------------- | --------: | ------------: | --------: | -----------: | ------: | | raw SQLite / inserted | 936,584 | 9,365.8 | 1,692,456 | 1,692.5 | 1.81x | | Lix SQLite / inserted | 337,656 | 3,376.6 | 1,075,136 | 1,075.1 | 3.18x | | Lix SQLite / after create_version | 345,896 | 3,459.0 | 1,087,496 | 1,087.5 | 3.14x | | Lix SQLite / after fast-forward merge | 588,976 | 5,889.8 | 5,287,488 | 5,287.5 | 8.98x | | Lix SQLite / after divergent merge | 1,268,776 | 12,687.8 | 5,615,168 | 5,615.2 | 4.43x | | Lix RocksDB / inserted | 280,077 | 2,800.8 | 993,888 | 993.9 | 3.55x | | Lix RocksDB / after create_version | 281,943 | 2,819.4 | 995,754 | 995.8 | 3.53x | | Lix RocksDB / after fast-forward merge | 298,593 | 2,985.9 | 1,160,310 | 1,160.3 | 3.89x | | Lix RocksDB / after divergent merge | 337,030 | 3,370.3 | 1,528,244 | 1,528.2 | 4.53x | Baseline interpretation: ```text The Raw Storage API rows now separate layout capability from E2E machinery. Direct tracked-state `get_many` and full scan are low single-digit milliseconds, while changed-key discovery for 10% updates scales far worse than the scan/read primitives. The E2E CRUD rows show the current pressure from the typed-table surface: inserts are hundreds of milliseconds at 1000 rows and bulk deletes are seconds, with much steeper 100-to-1000 growth than raw SQLite. Single-row PK operations are measured as one row selected, updated, or deleted from a populated table. 
create_version is already bounded enough to use as a guardrail, but merge/diff is also seconds for only 10% changed rows over a 1000-row JSON-pointer state. Storage after plain insert is compact for both backends. create_version adds very little storage, which matches the desired branch shape. SQLite-backed Lix grows sharply after fast-forward/divergent merge, while RocksDB grows much more gradually. That backend split is a useful signal for the physical-layout work: merge/diff layout and checkpoint/packing policy need to be evaluated across both backends, not just through CRUD timings. ``` ## Entry Template Use one entry per kept layout or access-path change. Every kept optimization must run the full baseline + smoke scoreboard for raw storage, E2E workflows, and storage accounting. Do not record only the row that the optimization was expected to improve; the point of the log is to catch regressions and tradeoffs across the whole tracked-state workflow. ```text ## Optimization N: Commit: or uncommitted on Hypothesis: What physical layout or access-path change is being tested? Raw Storage API scoreboard: Include all raw storage rows for SQLite and RocksDB at 100 and 1k. E2E Workflow scoreboard: Include all CRUD, create_version, and merge_version rows at 100 and 1k. Include raw SQLite reference where the operation has one. Storage scoreboard: Include all workflow storage rows for raw SQLite, Lix SQLite, and Lix RocksDB. Decision: Keep, revert, or follow-up. ``` ## Optimization 1: Batched Committed State-FK Delete Validation Change: ```text Group committed state-surface FK delete checks by source schema/domain and scan the source rows once per group instead of once per tombstone. ``` Commands: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_crud -- 'json_pointer_crud/raw_sqlite/baseline|json_pointer_crud/raw_sqlite/smoke|json_pointer_crud/lix_sqlite/baseline|json_pointer_crud/lix_sqlite/smoke|json_pointer_crud/lix_rocksdb/baseline|json_pointer_crud/lix_rocksdb/smoke' cargo bench -p lix_engine --features storage-benches --bench json_pointer_crud -- 'json_pointer_crud/raw_storage_sqlite/baseline|json_pointer_crud/raw_storage_sqlite/smoke|json_pointer_crud/raw_storage_rocksdb/baseline|json_pointer_crud/raw_storage_rocksdb/smoke' cargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1 ``` Raw Storage API scoreboard: | operation | SQLite 100 | SQLite 1k | SQLite x | RocksDB 100 | RocksDB 1k | RocksDB x | | ---------------------------------- | ---------: | --------: | -------: | ----------: | ---------: | --------: | | `write_root_all_rows` | 2.6462 ms | 6.5703 ms | 2.48x | 3.5919 ms | 6.5048 ms | 1.81x | | `get_many_exact_keys` | 1.5251 ms | 4.5971 ms | 3.01x | 1.1383 ms | 3.9691 ms | 3.49x | | `get_many_missing_keys` | 877.1 us | 2.7680 ms | 3.16x | 521.3 us | 1.5815 ms | 3.03x | | `exists_many_exact_keys` | 1.6827 ms | 4.8231 ms | 2.87x | 1.1503 ms | 5.8704 ms | 5.10x | | `scan_keys_only` | 964.5 us | 3.4420 ms | 3.57x | 650.7 us | 3.4885 ms | 5.36x | | `scan_headers_only` | 875.2 us | 3.1618 ms | 3.61x | 651.9 us | 3.1302 ms | 4.80x | | `scan_full_rows` | 1.3692 ms | 5.3299 ms | 3.89x | 1.1031 ms | 5.5565 ms | 5.04x | | `prefix_scan_schema` | 1.3468 ms | 5.0666 ms | 3.76x | 1.1146 ms | 5.5296 ms | 4.96x | | `prefix_scan_schema_file_null` | 1.3722 ms | 8.2142 ms | 5.99x | 1.1041 ms | 4.8788 ms | 4.42x | | `write_delta_10pct_updates` | 1.0960 ms | 3.5204 ms | 3.21x | 584.8 us | 2.2813 ms | 
3.90x | | `write_tombstone_10pct_deletes` | 866.7 us | 3.2042 ms | 3.70x | 851.8 us | 1.4712 ms | 1.73x | | `changed_keys_update_10pct` | 2.4239 ms | 82.554 ms | 34.06x | 2.0819 ms | 64.202 ms | 30.84x | | `changed_keys_delta_chain_10x1pct` | 1.5810 ms | 12.378 ms | 7.83x | 1.2480 ms | 8.6607 ms | 6.94x | | `materialize_delta_chain_10x1pct` | 1.1211 ms | 6.4255 ms | 5.73x | 728.2 us | 2.7755 ms | 3.81x | E2E workflow scoreboard: | axis | operation | raw SQLite 100 | raw SQLite 1k | raw x | Lix SQLite 100 | Lix SQLite 1k | Lix SQLite x | Lix RocksDB 100 | Lix RocksDB 1k | Lix RocksDB x | | ------------ | ------------------------------------------ | -------------: | ------------: | ----: | -------------: | ------------: | -----------: | --------------: | -------------: | ------------: | | CRUD | `insert_all_rows` | 1.6556 ms | 2.8391 ms | 1.71x | 19.763 ms | 376.49 ms | 19.05x | 20.686 ms | 310.38 ms | 15.00x | | CRUD | `select_all_path_value` | 0.9165 ms | 1.5530 ms | 1.69x | 5.4193 ms | 12.881 ms | 2.38x | 7.0629 ms | 11.096 ms | 1.57x | | CRUD | `select_one_by_pk` | 0.8185 ms | 1.6028 ms | 1.96x | 2.0424 ms | 5.9130 ms | 2.90x | 2.2091 ms | 3.6054 ms | 1.63x | | CRUD | `update_all_values` | 0.9650 ms | 1.8194 ms | 1.89x | 7.9570 ms | 32.141 ms | 4.04x | 7.5921 ms | 21.915 ms | 2.89x | | CRUD | `update_one_by_pk` | 0.8041 ms | 1.1965 ms | 1.49x | 4.4163 ms | 10.185 ms | 2.31x | 3.5553 ms | 7.6278 ms | 2.15x | | CRUD | `delete_all_rows` | 0.8595 ms | 1.2117 ms | 1.41x | 8.3845 ms | 32.180 ms | 3.84x | 8.1399 ms | 23.674 ms | 2.91x | | CRUD | `delete_one_by_pk` | 0.7526 ms | 1.4439 ms | 1.92x | 3.5757 ms | 10.669 ms | 2.98x | 3.6691 ms | 8.2295 ms | 2.24x | | Branch | `create_version` | n/a | n/a | n/a | 3.4980 ms | 8.0771 ms | 2.31x | 4.5554 ms | 5.5557 ms | 1.22x | | Merge / diff | `merge_version_fast_forward_10pct_updates` | n/a | n/a | n/a | 47.167 ms | 987.99 ms | 20.95x | 44.720 ms | 953.19 ms | 21.31x | | Merge / diff | `merge_version_divergent_10pct_updates` | n/a | n/a | n/a | 77.340 ms | 2.0947 s | 27.08x | 110.32 ms | 2.7013 s | 24.49x | Storage scoreboard: | backend / workflow | 100 bytes | 100 bytes/row | 1k bytes | 1k bytes/row | bytes x | | -------------------------------------- | --------: | ------------: | --------: | -----------: | ------: | | raw SQLite / inserted | 936,584 | 9,365.8 | 1,692,456 | 1,692.5 | 1.81x | | Lix SQLite / inserted | 337,656 | 3,376.6 | 1,075,136 | 1,075.1 | 3.18x | | Lix SQLite / after create_version | 345,896 | 3,459.0 | 1,087,496 | 1,087.5 | 3.14x | | Lix SQLite / after fast-forward merge | 593,096 | 5,931.0 | 5,287,488 | 5,287.5 | 8.92x | | Lix SQLite / after divergent merge | 1,272,896 | 12,729.0 | 5,615,168 | 5,615.2 | 4.41x | | Lix RocksDB / inserted | 280,077 | 2,800.8 | 993,888 | 993.9 | 3.55x | | Lix RocksDB / after create_version | 281,943 | 2,819.4 | 995,754 | 995.8 | 3.53x | | Lix RocksDB / after fast-forward merge | 298,593 | 2,985.9 | 1,160,310 | 1,160.3 | 3.89x | | Lix RocksDB / after divergent merge | 337,030 | 3,370.3 | 1,528,244 | 1,528.2 | 4.53x | Result: ```text delete_all_rows/1k improved from 2.4630 s to 32.180 ms on Lix SQLite and from 1.7949 s to 23.674 ms on Lix RocksDB. The profile bottleneck moved away from repeated committed state-FK source scans; inserts and merge/diff remain the dominant physical-layout targets. 
``` ## Optimization 2: Batched Committed Insert Identity Validation Commit: uncommitted on current branch Hypothesis: ```text INSERT spends most of its time checking whether each staged identity already exists in committed live state. Batch those checks by exact domain/schema and scan the committed rows once per group instead of once per inserted row. ``` Change: ```text Build the pending staged identity set once, group committed insert identity checks by `(Domain, schema_key)`, scan committed rows once per group with the full entity-id batch, and test the returned rows in memory. ``` Raw Storage API scoreboard: | operation | SQLite 100 | SQLite 1k | SQLite x | RocksDB 100 | RocksDB 1k | RocksDB x | | ---------------------------------- | ---------: | --------: | -------: | ----------: | ---------: | --------: | | `write_root_all_rows` | 2.8488 ms | 6.8410 ms | 2.40x | 4.0790 ms | 6.9094 ms | 1.69x | | `get_many_exact_keys` | 1.5314 ms | 4.7728 ms | 3.12x | 1.3573 ms | 4.3056 ms | 3.17x | | `get_many_missing_keys` | 835.3 us | 2.5124 ms | 3.01x | 846.8 us | 1.6843 ms | 1.99x | | `exists_many_exact_keys` | 1.4762 ms | 4.7909 ms | 3.25x | 1.5038 ms | 4.3722 ms | 2.91x | | `scan_keys_only` | 976.9 us | 3.1554 ms | 3.23x | 881.8 us | 2.4408 ms | 2.77x | | `scan_headers_only` | 1.1819 ms | 3.7566 ms | 3.18x | 1.1618 ms | 2.4719 ms | 2.13x | | `scan_full_rows` | 2.0658 ms | 5.9674 ms | 2.89x | 1.3391 ms | 4.6009 ms | 3.44x | | `prefix_scan_schema` | 1.5120 ms | 5.4635 ms | 3.61x | 1.5115 ms | 3.8891 ms | 2.57x | | `prefix_scan_schema_file_null` | 1.6492 ms | 4.7101 ms | 2.86x | 1.6867 ms | 3.9062 ms | 2.32x | | `write_delta_10pct_updates` | 1.3705 ms | 3.2507 ms | 2.37x | 782.8 us | 1.9059 ms | 2.43x | | `write_tombstone_10pct_deletes` | 1.2939 ms | 3.0927 ms | 2.39x | 884.6 us | 1.8954 ms | 2.14x | | `changed_keys_update_10pct` | 2.8615 ms | 78.074 ms | 27.28x | 2.2180 ms | 71.918 ms | 32.42x | | `changed_keys_delta_chain_10x1pct` | 1.7743 ms | 13.054 ms | 7.36x | 1.5709 ms | 9.6579 ms | 6.15x | | `materialize_delta_chain_10x1pct` | 1.6561 ms | 6.2290 ms | 3.76x | 903.8 us | 3.2262 ms | 3.57x | E2E workflow scoreboard: | axis | operation | raw SQLite 100 | raw SQLite 1k | raw x | Lix SQLite 100 | Lix SQLite 1k | Lix SQLite x | Lix RocksDB 100 | Lix RocksDB 1k | Lix RocksDB x | | ------------ | ------------------------------------------ | -------------: | ------------: | ----: | -------------: | ------------: | -----------: | --------------: | -------------: | ------------: | | CRUD | `insert_all_rows` | 1.4727 ms | 2.9440 ms | 2.00x | 14.928 ms | 61.763 ms | 4.14x | 15.420 ms | 56.365 ms | 3.66x | | CRUD | `select_all_path_value` | 795.3 us | 1.5314 ms | 1.93x | 5.4509 ms | 13.638 ms | 2.50x | 5.2695 ms | 12.294 ms | 2.33x | | CRUD | `select_one_by_pk` | 778.4 us | 1.9265 ms | 2.47x | 2.0537 ms | 6.4010 ms | 3.12x | 2.2049 ms | 4.3261 ms | 1.96x | | CRUD | `update_all_values` | 829.4 us | 1.8155 ms | 2.19x | 8.4927 ms | 31.990 ms | 3.77x | 8.1311 ms | 22.828 ms | 2.81x | | CRUD | `update_one_by_pk` | 914.5 us | 1.4295 ms | 1.56x | 4.0395 ms | 10.550 ms | 2.61x | 4.2523 ms | 7.2002 ms | 1.69x | | CRUD | `delete_all_rows` | 874.4 us | 1.4128 ms | 1.62x | 8.9559 ms | 36.544 ms | 4.08x | 8.5542 ms | 26.154 ms | 3.06x | | CRUD | `delete_one_by_pk` | 871.7 us | 1.3901 ms | 1.59x | 3.9506 ms | 12.560 ms | 3.18x | 3.8824 ms | 8.3111 ms | 2.14x | | Branch | `create_version` | n/a | n/a | n/a | 3.8345 ms | 9.7628 ms | 2.55x | 3.6737 ms | 5.6426 ms | 1.54x | | Merge / diff | 
`merge_version_fast_forward_10pct_updates` | n/a | n/a | n/a | 50.797 ms | 1.2370 s | 24.35x | 41.834 ms | 962.60 ms | 23.01x | | Merge / diff | `merge_version_divergent_10pct_updates` | n/a | n/a | n/a | 80.102 ms | 2.4801 s | 30.96x | 81.443 ms | 1.9468 s | 23.90x | Storage scoreboard: | backend / workflow | 100 bytes | 100 bytes/row | 1k bytes | 1k bytes/row | bytes x | | -------------------------------------- | --------: | ------------: | --------: | -----------: | ------: | | raw SQLite / inserted | 936,584 | 9,365.8 | 1,692,456 | 1,692.5 | 1.81x | | Lix SQLite / inserted | 337,656 | 3,376.6 | 1,075,136 | 1,075.1 | 3.18x | | Lix SQLite / after create_version | 345,896 | 3,459.0 | 1,087,496 | 1,087.5 | 3.14x | | Lix SQLite / after fast-forward merge | 588,976 | 5,889.8 | 5,291,608 | 5,291.6 | 8.98x | | Lix SQLite / after divergent merge | 1,268,776 | 12,687.8 | 5,619,288 | 5,619.3 | 4.43x | | Lix RocksDB / inserted | 280,077 | 2,800.8 | 993,888 | 993.9 | 3.55x | | Lix RocksDB / after create_version | 281,943 | 2,819.4 | 995,754 | 995.8 | 3.53x | | Lix RocksDB / after fast-forward merge | 298,593 | 2,985.9 | 1,157,131 | 1,157.1 | 3.88x | | Lix RocksDB / after divergent merge | 337,030 | 3,370.3 | 1,528,244 | 1,528.2 | 4.53x | Result: ```text insert_all_rows/1k improved from 376.49 ms to 61.763 ms on Lix SQLite and from 310.38 ms to 56.365 ms on Lix RocksDB. Raw Storage API timings and storage accounting stay within expected run-to-run noise because the change is above the storage primitive layer. Merge/diff remains the dominant 1k workflow cost. ``` Decision: ```text Keep. This removes an accidental per-row committed-state lookup from bulk inserts without changing the validation semantics. ``` ================================================ FILE: optimization_log8.md ================================================ # Optimization Log 8: JSON Pointer Physical Layout Decision Log Goal: nail the physical layout Lix uses for tracked logic: `packages/engine/src/tracked_state`, `packages/engine/src/commit_store`, and the backend/storage APIs they require. Lix has not shipped. Optimize for the best-shaped physical API, storage layout, and abstraction boundaries now. Prefer clean refactors over bolt-on fixes, adapter layers, compatibility shims, or special cases. If a change keeps a backwards shim, the entry must explicitly call that out and justify why it is temporary. The preferred refactor mode is: ```text first make the storage shape correct; then let the Rust compiler reveal upstream code that must move to the new API. ``` It is acceptable for an intermediate refactor entry to leave the tree temporarily non-compiling if the entry is clearly marked as a physical-layout cutover step and the next step is compiler-driven migration. Do not hide old behavior behind adapter layers just to keep call sites compiling. The desired end state is good abstractions, not a faster pile of special-case paths. If the current abstraction is the bottleneck, replace it cleanly. North-star target: ```text Large logical write batches through the tracked-state/commit-store path should leave enough time budget for the logical layer above storage. ``` Physical storage budget: ```text For 1k-operation physical rows, Lix SQLite and Lix RocksDB should be <= 1.5x raw SQLite for equivalent writes, exact reads, and scans. Raw SQLite is not a bare-metal KV baseline: it still goes through SQL statement execution, cursor/seek machinery, and SELECT/INSERT/UPDATE/DELETE paths. 
Lix physical rows use direct storage access, so exceeding this budget means Lix is likely paying avoidable layout, packing, materialization, batching, or backend abstraction costs. For storage size, post-vacuum Lix bytes/row should be <= 2x post-vacuum raw SQLite bytes/row for equivalent tracked storage states. Extra bytes beyond that must be explained by durable tracked history, commit facts, merge/conflict facts, or retained delta structure before a size-sensitive change is kept. ``` This log is not for SQL-provider ergonomics. SQL and CRUD benchmarks may point at problems, but every kept optimization must be explained at the physical storage boundary: backend operations, commit packs, delta packs, projection materialization, changed-key discovery, exact reads, scans, batching, zero-copy/low-copy behavior, or bytes. Criterion output is evidence, not the whole argument. Treat noise carefully: prefer structural wins that also move timings, and reject changes that only win one noisy row while worsening the physical design. ## Current State ```text branch: physical-layout-manual head: 11ff3a2e date: 2026-05-10 status: uncommitted benchmark/log setup ``` Setup changes for this log: - Added `packages/engine/benches/json_pointer_physical/main.rs`. - Added the `json_pointer_physical` bench target to `packages/engine/Cargo.toml`. - Added a raw SQLite reference group inside the physical benchmark so the SQLite-relative budgets have a measured baseline. - Kept the existing JSON-pointer storage fixture test as the bytes-on-disk guardrail. ## Layout Scope In scope: ```text commit_store canonical commit/change physical layout tracked_state delta-pack layout tracked_state projection/root materialization policy tracked_state exact-key lookup tracked_state scan/projection behavior changed-key discovery for diff/merge backend get_many / exists_many / prefix scan / write batch APIs backend zero-copy or low-copy read/write boundaries backend transaction/write-batch semantics shared by SQLite and RocksDB bytes on disk after insert/version/merge workflows ``` Out of scope unless a physical benchmark proves otherwise: ```text SQL/provider routing DataFusion planning overhead per-statement UPDATE ergonomics application-level batching above tracked_state/commit_store ``` Rule: ```text If a hot E2E benchmark points through SQL first, map it to json_pointer_physical before optimizing. Do not make SQL-layer changes in this log unless the physical rows are already inside budget and the remaining time is clearly above storage. ``` Tracked logic is the product path and the default mode in Lix. Optimizations must make tracked logic faster; they must not avoid tracked machinery by moving workloads, benchmarks, fixtures, changed-key logic, commit-store logic, or tracked-state behavior into untracked code. ## Refactor Policy Allowed: ```text change the storage/backend API when the current API forces bad physical layout; add or reshape backend/storage APIs, including namespacing-oriented APIs, when the shape materially improves both SQLite and RocksDB; change tracked_state and commit_store layouts when the new layout is cleaner; break old call sites and let the compiler drive the migration; delete legacy abstractions that only exist to preserve pre-ship compatibility; replace one-off fixes with a shared abstraction when the problem is systemic; remove bolt-on fast paths once the clean abstraction covers the same behavior. 
``` Required when changing storage/backend APIs: ```text state the physical problem the old API caused; show how SQLite and RocksDB can both implement the new shape without hidden per-key loops or full-value hydration; show that both SQLite and RocksDB improve materially, or explain why the API change is still required for a later shared layout win; preserve transaction atomicity, durability, and hash/integrity checks; prefer batched, streaming, prefix/range, and projection-aware operations; avoid copy-heavy boundaries unless the entry explicitly measures and accepts the cost; explain how the layout can migrate again later without rewriting the whole logical layer. ``` Not allowed: ```text SQLite-only wins that silently regress RocksDB; RocksDB-only wins that silently regress SQLite; benchmark rewrites that change what is being measured; workarounds scoped only to the current hot row when the abstraction is wrong; bolt-on fast paths that leave the bad abstraction in place; adapter layers whose main purpose is avoiding the clean refactor; moving tracked logic, benchmarks, or benchmark workload into untracked paths; shifting cost out of tracked_state/commit_store to avoid tracked machinery; forcing full materialization to avoid designing the right index/layout; backwards shims unless the entry explicitly marks and justifies them. ``` ## Benchmark Surface Benchmark target: ```text packages/engine/benches/json_pointer_physical/main.rs ``` Command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- baseline cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- smoke ``` Groups: ```text json_pointer_physical/raw_sqlite/baseline json_pointer_physical/raw_sqlite/smoke json_pointer_physical/sqlite/baseline json_pointer_physical/sqlite/smoke json_pointer_physical/rocksdb/baseline json_pointer_physical/rocksdb/smoke ``` Rows: ```text write_root_all_rows/{100,1k} get_many_exact_keys/{100,1k} get_many_missing_keys/{100,1k} exists_many_exact_keys/{100,1k} scan_keys_only/{100,1k} scan_headers_only/{100,1k} scan_full_rows/{100,1k} prefix_scan_schema/{100,1k} prefix_scan_schema_file_null/{100,1k} write_delta_10pct_updates/{100,1k} write_tombstone_10pct_deletes/{100,1k} changed_keys_update_10pct/{100,1k} changed_keys_delta_chain_10x1pct/{100,1k} materialize_delta_chain_10x1pct/{100,1k} ``` Fixture: ```text source: packages/engine/benches/fixtures/pnpm-lock.fixture.json shape: flattened JSON nodes, including containers and leaves identity: JSON pointer path value: JSON node value file_id: NULL sizes: baseline = 100 rows smoke = 1,000 rows ``` Why this fixture: ```text It mirrors plugin-json-v2 output: many small entities, stable path identities, container rows, leaf rows, and realistic nested JSON values. ``` Benchmark-surface intent: ```text This benchmark surface should stabilize the physical layout before logical-layer optimization begins. New physical rows should be added only when logical work reveals a genuinely new tracked access pattern, not to move the goalposts for an existing optimization. ``` ## Raw SQLite Reference The raw SQLite group is independent of Lix. It answers: what does plain primary-key physical storage cost for the same flattened JSON-pointer rows? 
Shape: ```text database: tempfile SQLite table: json_pointer(path TEXT PRIMARY KEY, value TEXT) WITHOUT ROWID pragmas: journal_mode=WAL, synchronous=NORMAL, temp_store=MEMORY, foreign_keys=ON write rows: INSERT/UPDATE/DELETE by path in one transaction exact reads: prepared point lookups by path scans: ordered path/value scans over the table ``` The raw SQLite prefix-scan rows are a fixture-equivalent approximation: the fixture uses one schema and `file_id = NULL`, so schema/file scope maps to the whole table. Reference interpretation: ```text Rows near raw SQLite are close to backend speed. Rows above 1.5x raw SQLite are likely dominated by Lix packing, projection, materialization, hashing, diff semantics, or backend abstraction overhead. ``` ## Success Criteria Every kept optimization must name one primary axis: ```text write exact-read scan diff/changed-key delta-chain materialization storage-size backend API ``` The primary axis should improve materially. Non-target axes are guardrails. Every kept optimization must also name its physical shape: ```text canonical fact layout read index / projection layout delta-pack layout changed-key index backend batch/read/write API materialization policy copy/serialization boundary ``` An optimization is not kept merely because one Criterion row improves. It must be a better shape for the tracked storage system and must not create hidden costs such as unbatched IO, accidental full-value hydration, extra copies across the backend boundary, or backend-specific behavior that another supported backend cannot implement well. ### 1.5x SQLite Runtime Budget This is an envelope, not an average. Passing writes does not compensate for failing reads, and passing reads does not compensate for failing writes. Write rows: ```text compare json_pointer_physical/{sqlite,rocksdb}/smoke/write_root_all_rows/1k to json_pointer_physical/raw_sqlite/smoke/write_root_all_rows/1k compare json_pointer_physical/{sqlite,rocksdb}/smoke/write_delta_10pct_updates/1k to json_pointer_physical/raw_sqlite/smoke/write_delta_10pct_updates/1k compare json_pointer_physical/{sqlite,rocksdb}/smoke/write_tombstone_10pct_deletes/1k to json_pointer_physical/raw_sqlite/smoke/write_tombstone_10pct_deletes/1k ``` Exact-read rows: ```text compare json_pointer_physical/{sqlite,rocksdb}/smoke/get_many_exact_keys/1k to json_pointer_physical/raw_sqlite/smoke/get_many_exact_keys/1k compare json_pointer_physical/{sqlite,rocksdb}/smoke/get_many_missing_keys/1k to json_pointer_physical/raw_sqlite/smoke/get_many_missing_keys/1k compare json_pointer_physical/{sqlite,rocksdb}/smoke/exists_many_exact_keys/1k to json_pointer_physical/raw_sqlite/smoke/exists_many_exact_keys/1k ``` Scan rows: ```text compare json_pointer_physical/{sqlite,rocksdb}/smoke/scan_keys_only/1k to json_pointer_physical/raw_sqlite/smoke/scan_keys_only/1k compare json_pointer_physical/{sqlite,rocksdb}/smoke/scan_headers_only/1k to json_pointer_physical/raw_sqlite/smoke/scan_headers_only/1k compare json_pointer_physical/{sqlite,rocksdb}/smoke/scan_full_rows/1k to json_pointer_physical/raw_sqlite/smoke/scan_full_rows/1k compare json_pointer_physical/{sqlite,rocksdb}/smoke/prefix_scan_schema/1k to json_pointer_physical/raw_sqlite/smoke/prefix_scan_schema/1k compare json_pointer_physical/{sqlite,rocksdb}/smoke/prefix_scan_schema_file_null/1k to json_pointer_physical/raw_sqlite/smoke/prefix_scan_schema_file_null/1k ``` Changed-key and delta-chain rows do not have a clean raw SQLite equivalent. 
Judge them by scaling shape: ```text changed_keys_update_10pct: should scale with changed keys, not full state hydration. changed_keys_delta_chain_10x1pct: should scale with changed keys and chain depth, not repeated broad materialization of full state. materialize_delta_chain_10x1pct: should avoid repeatedly decoding unrelated delta-pack content. ``` ### Regression Budgets ```text <= 5% slower: treat as possible Criterion noise unless repeated or structurally explained. 5-15% slower: acceptable only with a clear primary-axis win, a structural explanation, and no crossed 1.5x runtime budget. > 15% slower: fail unless explicitly accepted as a layout tradeoff. No change may make an axis that passes the 1.5x runtime budget start failing it. ``` Storage guardrail: ```text Post-vacuum bytes after inserted/create_version/fast-forward/divergent merge should stay <= 2x post-vacuum raw SQLite bytes/row for equivalent tracked storage states. Extra bytes must remain explainable. A speedup that causes unexplained storage growth is not kept. ``` ## Storage Fixture Guardrail Command: ```sh cargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1 ``` Rows to report when a change can affect storage size: ```text raw SQLite / inserted Lix SQLite / inserted Lix SQLite / after create_version Lix SQLite / after fast-forward merge Lix SQLite / after divergent merge Lix RocksDB / inserted Lix RocksDB / after create_version Lix RocksDB / after fast-forward merge Lix RocksDB / after divergent merge ``` ## Agent Rules 1. Optimize physical layout and backend APIs, not SQL surface shape. 2. Prefer clean, compiler-driven refactors and good abstractions over bolt-on fixes, adapter layers, or backwards shims. If a shim is kept, flag it. 3. Optimize one primary axis at a time and report guardrails for the other axes. 4. Compare against raw SQLite where there is an equivalent row. 5. Report SQLite and RocksDB physical rows before keeping backend-sensitive changes. 6. Prefer explicit batched APIs over hidden loops of single-key operations. 7. Backend/storage API changes are allowed when they materially improve both SQLite and RocksDB, including namespacing-oriented APIs. 8. Do not improve one backend by silently regressing the other. 9. Do not change benchmark measurements to make a change look better. 10. Do not move tracked logic, fixtures, benchmarks, or benchmark workload into untracked paths. Optimize tracked logic itself. 11. Do not shift cost out of tracked_state/commit_store to bypass tracked machinery. 12. Do not keep bolt-on fast paths when a clean abstraction should replace the old shape. 13. Do not improve writes by forcing broad projection-root materialization unless the entry is explicitly a materialization-policy experiment. 14. Do not make key/header-only scans hydrate full JSON values. 15. Do not introduce avoidable copies at the backend boundary without measuring and justifying them. 16. Do not remove hash verification, transaction atomicity, or durability semantics to win a benchmark. 17. Document rejected experiments if they teach something about the cost model. 18. Append one compact entry per optimization. ## Baseline Date: 2026-05-10 Commit: uncommitted on `11ff3a2e` Change: added the `json_pointer_physical` benchmark target and raw SQLite physical reference group. 
### Verification ```sh cargo fmt -p lix_engine cargo check -p lix_engine --features storage-benches --benches cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- get_many_exact_keys/100 ``` Result: passed. Accepted baseline run: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- baseline cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- smoke cargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1 ``` Result: ```text passed ``` ### Raw SQLite / Lix Smoke Check Command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- get_many_exact_keys/100 ``` | backend | group | row | low | median | high | | ----------- | ------------------------------------------- | ------------------------- | --------: | --------: | --------: | | raw SQLite | `json_pointer_physical/raw_sqlite/baseline` | `get_many_exact_keys/100` | 879.06 us | 924.31 us | 1.0023 ms | | Lix SQLite | `json_pointer_physical/sqlite/baseline` | `get_many_exact_keys/100` | 1.2941 ms | 1.3683 ms | 1.4479 ms | | Lix RocksDB | `json_pointer_physical/rocksdb/baseline` | `get_many_exact_keys/100` | 1.0164 ms | 1.0507 ms | 1.0952 ms | Interpretation: ```text The benchmark wiring works and the raw SQLite reference group appears beside the Lix physical backends. At 100 rows, exact reads are near the runtime envelope for both backends. This is only a smoke check. It is not the accepted baseline for optimization. The accepted baseline must include the 1k smoke rows. ``` ### Required Baseline Command Before the first optimization entry, run: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- baseline cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- smoke cargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1 ``` ### Baseline Scoreboard The 1k smoke rows are the accepted optimization baseline. 
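Each budget row compares a Lix backend median against the raw SQLite median for the same row. A minimal sketch of how the ratio and status columns are derived (hypothetical helper, not part of the bench harness):

```rust
// Hypothetical helper showing how the ratio and status columns below are
// derived from Criterion medians; not part of the benchmark harness itself.
fn budget_status(raw_sqlite_median_ms: f64, lix_median_ms: f64) -> (f64, &'static str) {
    const BUDGET: f64 = 1.5; // the 1.5x raw SQLite runtime envelope
    let ratio = lix_median_ms / raw_sqlite_median_ms;
    (ratio, if ratio <= BUDGET { "pass" } else { "fail" })
}

fn main() {
    // get_many_exact_keys/1k medians from the scoreboard below.
    let raw_sqlite = 2.2859;
    let (sqlite_ratio, sqlite_status) = budget_status(raw_sqlite, 4.6055);
    let (rocksdb_ratio, rocksdb_status) = budget_status(raw_sqlite, 3.4668);
    println!("Lix SQLite  {sqlite_ratio:.2}x {sqlite_status}"); // 2.01x fail
    println!("Lix RocksDB {rocksdb_ratio:.2}x {rocksdb_status}"); // 1.52x fail
}
```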
#### 1.5x Runtime Budget Rows, 1k

| axis | row | raw SQLite median | Lix SQLite median | SQLite ratio | Lix RocksDB median | RocksDB ratio | status |
| ---------- | ---------------------------------- | ----------------: | ----------------: | -----------: | -----------------: | ------------: | ----------------------- |
| write | `write_root_all_rows/1k` | 2.4583 ms | 6.8347 ms | 2.78x | 6.1430 ms | 2.50x | SQLite and RocksDB fail |
| write | `write_delta_10pct_updates/1k` | 1.5396 ms | 2.6272 ms | 1.71x | 1.3950 ms | 0.91x | SQLite fail |
| write | `write_tombstone_10pct_deletes/1k` | 1.4156 ms | 2.4321 ms | 1.72x | 1.3632 ms | 0.96x | SQLite fail |
| exact-read | `get_many_exact_keys/1k` | 2.2859 ms | 4.6055 ms | 2.01x | 3.4668 ms | 1.52x | SQLite and RocksDB fail |
| exact-read | `get_many_missing_keys/1k` | 13.931 ms | 2.2822 ms | 0.16x | 1.4138 ms | 0.10x | pass |
| exact-read | `exists_many_exact_keys/1k` | 2.0545 ms | 4.6519 ms | 2.26x | 3.4720 ms | 1.69x | SQLite and RocksDB fail |
| scan | `scan_keys_only/1k` | 1.2374 ms | 3.2542 ms | 2.63x | 2.0822 ms | 1.68x | SQLite and RocksDB fail |
| scan | `scan_headers_only/1k` | 1.2378 ms | 3.0692 ms | 2.48x | 2.0012 ms | 1.62x | SQLite and RocksDB fail |
| scan | `scan_full_rows/1k` | 1.2920 ms | 4.3792 ms | 3.39x | 3.1884 ms | 2.47x | SQLite and RocksDB fail |
| scan | `prefix_scan_schema/1k` | 1.2514 ms | 4.4623 ms | 3.57x | 3.2190 ms | 2.57x | SQLite and RocksDB fail |
| scan | `prefix_scan_schema_file_null/1k` | 1.3817 ms | 4.3889 ms | 3.18x | 3.1497 ms | 2.28x | SQLite and RocksDB fail |

#### Diff / Materialization Shape Rows

| row | Lix SQLite median | Lix RocksDB median | expected shape | status |
| ------------------------------------- | ----------------: | -----------------: | ---------------------------------------- | ------- |
| `changed_keys_update_10pct/1k` | 68.399 ms | 67.192 ms | scales with changed keys | hotspot |
| `changed_keys_delta_chain_10x1pct/1k` | 10.401 ms | 8.7436 ms | scales with changed keys and chain depth | watch |
| `materialize_delta_chain_10x1pct/1k` | 5.7651 ms | 2.7741 ms | avoids unrelated delta-pack decoding | watch |

#### Storage Fixture

| backend / state | bytes on disk | bytes/row | status |
| -------------------------------------- | ------------: | --------: | ------------------------------------------------------- |
| raw SQLite / inserted | 1692456 | 1692.5 | baseline |
| Lix SQLite / inserted | 1075136 | 1075.1 | baseline |
| Lix SQLite / after create_version | 1087496 | 1087.5 | baseline |
| Lix SQLite / after fast-forward merge | 5287488 | 5287.5 | growth to explain before keeping size-sensitive changes |
| Lix SQLite / after divergent merge | 5615168 | 5615.2 | growth to explain before keeping size-sensitive changes |
| Lix RocksDB / inserted | 993900 | 993.9 | baseline |
| Lix RocksDB / after create_version | 995766 | 995.8 | baseline |
| Lix RocksDB / after fast-forward merge | 1157143 | 1157.1 | baseline |
| Lix RocksDB / after divergent merge | 1528256 | 1528.3 | baseline |

## Entries

Append kept wins and rejected experiments below this line.

## Entry Template

Copy this template for every optimization.
```text one kept win = one appended log entry + code changes measured by the entry ``` ## Optimization N: Commit: `` or `uncommitted on ` Target axis: ```text write | exact-read | scan | diff/changed-key | delta-chain materialization storage-size | backend API ``` Backend/API scope: ```text none | backend API plumbing | backend implementation | layout behavior | mixed ``` Physical shape: ```text canonical fact layout | read index / projection layout | delta-pack layout changed-key index | backend batch/read/write API | materialization policy copy/serialization boundary ``` Refactor stance: ```text clean cut | compiler-driven migration | temporary shim | local implementation only ``` Change: ```text What changed physically? What old shape/API is being removed? What invariant is preserved? Why should this help? Why is this a better whole-system abstraction than a workaround? Does this create or remove copies across the backend boundary? ``` ### Baseline Delta Compare against the log8 baseline and, if different, the immediately previous kept entry. #### 1.5x Runtime Budget Rows | axis | row | raw SQLite median | before median | after median | ratio after/raw | delta | status | | ---------- | ---------------------------------- | ----------------: | ------------: | -----------: | --------------: | ----: | ------ | | write | `write_root_all_rows/1k` | | | | | | | | write | `write_delta_10pct_updates/1k` | | | | | | | | write | `write_tombstone_10pct_deletes/1k` | | | | | | | | exact-read | `get_many_exact_keys/1k` | | | | | | | | exact-read | `get_many_missing_keys/1k` | | | | | | | | exact-read | `exists_many_exact_keys/1k` | | | | | | | | scan | `scan_keys_only/1k` | | | | | | | | scan | `scan_headers_only/1k` | | | | | | | | scan | `scan_full_rows/1k` | | | | | | | | scan | `prefix_scan_schema/1k` | | | | | | | | scan | `prefix_scan_schema_file_null/1k` | | | | | | | #### Diff / Materialization | row | before median | after median | delta | shape status | | ------------------------------------- | ------------: | -----------: | ----: | ------------ | | `changed_keys_update_10pct/1k` | | | | | | `changed_keys_delta_chain_10x1pct/1k` | | | | | | `materialize_delta_chain_10x1pct/1k` | | | | | #### Storage Storage fixture rows, required if bytes can change: | backend / state | before bytes | after bytes | delta | status | | -------------------------------------- | -----------: | ----------: | ----: | ------ | | raw SQLite / inserted | | | | | | Lix SQLite / inserted | | | | | | Lix SQLite / after create_version | | | | | | Lix SQLite / after fast-forward merge | | | | | | Lix SQLite / after divergent merge | | | | | | Lix RocksDB / inserted | | | | | | Lix RocksDB / after create_version | | | | | | Lix RocksDB / after fast-forward merge | | | | | | Lix RocksDB / after divergent merge | | | | | ### Unchanged Guardrails List guardrails that were not meaningfully impacted. Do not leave this blank. 
| guardrail | after value | status | | ------------------------------------------------- | ----------: | ------ | | physical write budget stays near backend speed | | | | physical write runtime <= 1.5x raw SQLite | | | | exact reads <= 1.5x raw SQLite | | | | scans <= 1.5x raw SQLite | | | | header-only scans do not hydrate full JSON values | | | | SQLite and RocksDB both reported | | | | storage growth explained | | | | post-vacuum storage <= 2x raw SQLite | | | | backend boundary copy cost explained | | | | tracked logic remains on the tracked path | | | | no workload shifted to untracked machinery | | | | no benchmark measurement changed | | | ### Interpretation ```text Keep/reject? Which axis improved? Which guardrail moved? Was the evidence structural, timing-based, or both? Is there a temporary shim? If yes, when should it be removed? What should the next agent try? ``` ## Optimization 1: tracked tombstone bit in projection value Commit: `uncommitted on 11ff3a2e` Target axis: ```text scan ``` Backend/API scope: ```text layout behavior ``` Physical shape: ```text read index / projection layout materialization policy copy/serialization boundary ``` Refactor stance: ```text clean cut ``` Change: ```text Tracked-state projection values now carry the durable tombstone bit directly. The bit is packed into the high bit of the existing value header byte, so the encoded value length stays unchanged. VALUE_VERSION is bumped to 5 without a backward decoder because Lix has not shipped. The old shape forced key/header-only scans to hydrate commit_store change packs just to learn whether a row was deleted. The new shape makes tracked_state scalar fields authoritative at the projection boundary; commit_store pack hydration is reserved for projections that need snapshot_content or metadata JSON refs. Tree scans are now physical-only: TrackedStateTreeScanRequest no longer carries tombstone visibility, and tracked scan limits are applied after delta overlay, materialization, and tombstone visibility. This matches the reference-system shape where delete/tombstone facts are carried through physical merge/scan stages and logical visibility/limit is applied above them. No backend API changed. SQLite and RocksDB both store the same byte-length value and benefit from avoiding unnecessary commit_pack reads for non-JSON projections. No tracked workload moved to untracked storage and no benchmark measurement changed. ``` ### Baseline Delta Compared against the log8 baseline. 
The full smoke run showed some noisy RocksDB scan intervals, so the RocksDB rows below use the targeted remeasure for the affected rows: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- smoke cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/rocksdb/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` #### 1.5x Runtime Budget Rows | axis | row | raw SQLite median | before SQLite | after SQLite | SQLite ratio | before RocksDB | after RocksDB | RocksDB ratio | status | | ---------- | ---------------------------------- | ----------------: | ------------: | -----------: | -----------: | -------------: | ------------: | ------------: | ---------------------------------------------------------- | | write | `write_root_all_rows/1k` | 2.4999 ms | 6.8347 ms | 6.5245 ms | 2.61x | 6.1430 ms | 5.6554 ms | 2.26x | still over budget, no structural regression | | write | `write_delta_10pct_updates/1k` | 1.3595 ms | 2.6272 ms | 3.3163 ms | 2.44x | 1.3950 ms | 1.4372 ms | 1.06x | SQLite noisy, RocksDB pass | | write | `write_tombstone_10pct_deletes/1k` | 1.3092 ms | 2.4321 ms | 3.1727 ms | 2.42x | 1.3632 ms | 1.4650 ms | 1.12x | SQLite noisy, RocksDB pass | | exact-read | `get_many_exact_keys/1k` | 2.1850 ms | 4.6055 ms | 4.4805 ms | 2.05x | 3.4668 ms | 3.6687 ms | 1.68x | still over budget | | exact-read | `get_many_missing_keys/1k` | 13.099 ms | 2.2822 ms | 2.2718 ms | 0.17x | 1.4138 ms | 1.9440 ms | 0.15x | pass | | exact-read | `exists_many_exact_keys/1k` | 2.2187 ms | 4.6519 ms | 4.5695 ms | 2.06x | 3.4720 ms | 5.5972 ms | 2.52x | RocksDB row noisy; semantic equivalent still uses get_many | | scan | `scan_keys_only/1k` | 1.1673 ms | 3.2542 ms | 2.4975 ms | 2.14x | 2.0822 ms | 1.4497 ms | 1.24x | primary win; RocksDB now in budget | | scan | `scan_headers_only/1k` | 1.3034 ms | 3.0692 ms | 3.0376 ms | 2.33x | 2.0012 ms | 1.8478 ms | 1.42x | RocksDB now in budget | | scan | `scan_full_rows/1k` | 1.2110 ms | 4.3792 ms | 4.7813 ms | 3.95x | 3.1884 ms | 3.2480 ms | 2.68x | still over budget | | scan | `prefix_scan_schema/1k` | 1.6941 ms | 4.4623 ms | 4.6607 ms | 2.75x | 3.2190 ms | 3.3677 ms | 1.99x | still over budget | | scan | `prefix_scan_schema_file_null/1k` | 1.2609 ms | 4.3889 ms | 4.8380 ms | 3.84x | 3.1497 ms | 3.3515 ms | 2.66x | still over budget | #### Diff / Materialization | row | before SQLite | after SQLite | before RocksDB | after RocksDB | shape status | | ------------------------------------- | ------------: | -----------: | -------------: | ------------: | --------------------------------------------------------- | | `changed_keys_update_10pct/1k` | 68.399 ms | 73.492 ms | 67.192 ms | 71.735 ms | still hotspot; movement within noisy structural guardrail | | `changed_keys_delta_chain_10x1pct/1k` | 10.401 ms | 11.167 ms | 8.7436 ms | 10.722 ms | watch | | `materialize_delta_chain_10x1pct/1k` | 5.7651 ms | 5.5134 ms | 2.7741 ms | 2.8888 ms | near neutral; value length is unchanged | #### Storage Storage fixture command: ```sh cargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1 ``` Result: passed. 
| backend / state | before bytes | after bytes | delta | status | | -------------------------------------- | -----------: | ----------: | ----: | --------------------------------------------- | | raw SQLite / inserted | 1692456 | 1692456 | 0 | unchanged | | Lix SQLite / inserted | 1075136 | 1075136 | 0 | unchanged | | Lix SQLite / after create_version | 1087496 | 1087496 | 0 | unchanged | | Lix SQLite / after fast-forward merge | 5287488 | 5291608 | +4120 | one SQLite page; acceptable page-layout noise | | Lix SQLite / after divergent merge | 5615168 | 5619288 | +4120 | one SQLite page; acceptable page-layout noise | | Lix RocksDB / inserted | 993900 | 993900 | 0 | unchanged | | Lix RocksDB / after create_version | 995766 | 995766 | 0 | unchanged | | Lix RocksDB / after fast-forward merge | 1157143 | 1157143 | 0 | unchanged | | Lix RocksDB / after divergent merge | 1528256 | 1528254 | -2 | unchanged | ### Unchanged Guardrails | guardrail | after value | status | | ------------------------------------------------- | ----------: | --------------------------------------------------------------------- | | physical write budget stays near backend speed | mixed | existing SQLite write budget failures remain | | physical write runtime <= 1.5x raw SQLite | mixed | RocksDB delta/tombstone pass; root writes still over | | exact reads <= 1.5x raw SQLite | mixed | missing reads pass; exact reads still over | | scans <= 1.5x raw SQLite | mixed | RocksDB keys/header pass; SQLite scans still over | | header-only scans do not hydrate full JSON values | yes | preserved and strengthened | | SQLite and RocksDB both reported | yes | full smoke plus RocksDB targeted rerun | | storage growth explained | yes | no value-length growth; only one SQLite page in merge states | | post-vacuum storage <= 2x raw SQLite | mixed | same pre-existing SQLite merge-state growth | | backend boundary copy cost explained | yes | no new backend copies; fewer commit_pack loads for scalar projections | | tracked logic remains on the tracked path | yes | no workload moved | | no workload shifted to untracked machinery | yes | unchanged | | no benchmark measurement changed | yes | benchmark untouched | ### Review Loop Reviewer pass 1: ```text HIGH: low-level tree matching filtered deleted delta entries before applying them over a materialized base root. Fixed by keeping tree matching physical and adding pending_tombstone_delta_hides_materialized_base_row. ``` Reviewer pass 2: ```text HIGH: none. MEDIUM: user limit could be applied before tombstone visibility. Fixed by not pushing tracked scan limits into TrackedStateTreeScanRequest and adding scan_limit_applies_after_tombstone_visibility. ``` Reviewer pass 3: ```text HIGH: by-file fast path still applied request.limit before visibility. Fixed by removing both by-file early-limit breaks and adding by_file_scan_limit_applies_after_tombstone_visibility. ``` Final reviewer pass: ```text HIGH: none. MEDIUM: none. LOW: none. 
``` Verification: ```sh cargo fmt -p lix_engine cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine tracked_state:: --features storage-benches cargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1 cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- smoke cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/rocksdb/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` All commands passed. ### Interpretation ```text Keep. Primary axis: scan, specifically key/header projections and tombstone visibility. Structural win: tombstone state now lives in the tracked projection value and non-JSON projections do not hydrate commit_store packs. Timing win: RocksDB scan_keys_only improved from 2.0822 ms to 1.4497 ms and scan_headers_only from 2.0012 ms to 1.8478 ms; SQLite scan_keys_only improved from 3.2542 ms to 2.4975 ms. Guardrails: encoded value length is unchanged, storage fixture passed, and no backend-specific API was introduced. Some full-smoke rows were noisy, so RocksDB scan/write guardrails were remeasured directly. Existing SQLite write, exact-read, full-row, and prefix-scan rows remain over the 1.5x budget. No temporary shim. Next optimization should attack the remaining scan/full-row and exact-read budget failures by adding a borrowed/header decode path for tracked-state leaf entries. The tombstone bit is now in the first value byte, so the next cut can filter visibility without allocating owned locators or full row values. ``` ## Optimization 2: Indexable Borrowed Leaf Nodes Date: 2026-05-10 Commit: this entry is committed with the optimization ### Change Changed tracked-state leaf node bytes from a sequential record stream to a v2 offset-table layout: ```text kind: u8 version: u8 entry_count: u32 entry_offsets: (entry_count + 1) * u32 payload: [key_len: u32, key, value_len: u32, value]* ``` The offset table lets exact reads binary-search leaf keys without first cloning every key/value pair in the leaf. Scans now borrow leaf entries out of the verified node byte buffer and decode only matching rows. Owned `decode_node` still exists for callers that need it, but it is built on the borrowed decoder. The leaf splitter now accounts for the exact v2 physical size: ```text leaf_size = 10 + entry_count * 12 + key_bytes + value_bytes entry_size = 12 + key_bytes + value_bytes ``` No backward compatibility shim was kept. Lix has not shipped, and this is a physical layout cutover. ### Benchmarks Focused command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|exists_many_exact_keys|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` Result: passed. 
| row | after median | criterion status | | ----------------------------------------- | -----------: | ---------------- | | `sqlite/get_many_exact_keys/1k` | 4.4327 ms | no change | | `sqlite/exists_many_exact_keys/1k` | 4.5704 ms | no change | | `sqlite/scan_keys_only/1k` | 2.7218 ms | no change | | `sqlite/scan_headers_only/1k` | 3.0616 ms | no change | | `sqlite/scan_full_rows/1k` | 4.4447 ms | no change | | `sqlite/prefix_scan_schema/1k` | 4.3002 ms | no change | | `sqlite/prefix_scan_schema_file_null/1k` | 4.2372 ms | no change | | `rocksdb/get_many_exact_keys/1k` | 3.5170 ms | no change | | `rocksdb/exists_many_exact_keys/1k` | 3.5438 ms | improved | | `rocksdb/scan_keys_only/1k` | 1.5767 ms | no change | | `rocksdb/scan_headers_only/1k` | 2.0217 ms | no change | | `rocksdb/scan_full_rows/1k` | 3.3787 ms | no change | | `rocksdb/prefix_scan_schema/1k` | 3.2941 ms | no change | | `rocksdb/prefix_scan_schema_file_null/1k` | 3.2749 ms | no change | ### Storage Storage fixture command: ```sh cargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1 ``` Result: passed. | backend / state | bytes | bytes/row | status | | -------------------------------------- | ------: | --------: | --------- | | raw SQLite / inserted | 1692456 | 1692.5 | unchanged | | Lix SQLite / inserted | 1075136 | 1075.1 | unchanged | | Lix SQLite / after create_version | 1087496 | 1087.5 | unchanged | | Lix SQLite / after fast-forward merge | 5287488 | 5287.5 | unchanged | | Lix SQLite / after divergent merge | 5615168 | 5615.2 | unchanged | | Lix RocksDB / inserted | 993900 | 993.9 | unchanged | | Lix RocksDB / after create_version | 995766 | 995.8 | unchanged | | Lix RocksDB / after fast-forward merge | 1157143 | 1157.1 | unchanged | | Lix RocksDB / after divergent merge | 1528256 | 1528.3 | unchanged | ### Review Loop Reviewer pass 1: ```text HIGH: none. MEDIUM: leaf chunk sizing still estimated the old sequential format. Fixed by including the v2 offset directory in estimate_leaf_chunk_size and by feeding physical entry bytes into boundary_trigger. LOW: add direct codec regression tests for v2 leaf bytes and malformed offset tables. Fixed with indexable offset-table, empty-leaf, and malformed-offset tests. ``` Reviewer pass 2: ```text HIGH: none. The previous sizing concern appears addressed, borrowed decode paths do not carry leaf borrows across recursive awaits, and v2 offset validation/tests are present. ``` ### Verification ```sh cargo fmt -p lix_engine cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine tracked_state:: --features storage-benches cargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1 cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|exists_many_exact_keys|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` All commands passed. ### Interpretation ```text Keep. Primary axis: exact reads and scan decode overhead. Structural win: leaves now have a pointer/offset directory, matching the page-local indexing pattern used by reference storage engines, and scan/get_many no longer clone every leaf entry before discovering the row they need. 
Timing: mostly neutral in Criterion, with a measured RocksDB exists_many_exact_keys improvement from 4.0071 ms in the pre-sizing run to 3.5438 ms after the final fix. SQLite exact reads remain over budget, so this is a necessary layout foundation rather than the final performance win. Guardrails: storage fixture stayed unchanged at the 1k guardrail, tracked logic stays on the tracked path, no workload moved to untracked machinery, and no benchmark measurement changed. Next optimization should use the v2 leaf layout to decode tracked value headers directly from borrowed value bytes for scan visibility and exists-style reads, then attack exact-read value decode/allocation costs that remain above the 1.5x SQLite target. ``` ## Optimization 3: Header-Only Visibility And Exists Reads Date: 2026-05-10 Commit: this entry is committed with the optimization ### Change Added a live-row `rows_exist_at_commit` path for tracked-state readers and a physical `TrackedStateTree::exists_many` traversal. The tree reuses the v2 leaf offset table from Optimization 2, binary-searches borrowed leaf keys, and reads only the matched value header to reject tombstones. Scan visibility now also reads the value header before full value decode. `decode_visible_value` parses the header once, skips hidden tombstones without decoding locator/timestamp strings, and continues decoding live rows from the same cursor. `TrackedStateTreeScanRequest` now carries `include_tombstones`; its default keeps physical/internal tree scans tombstone-inclusive, while serving scans copy the user-facing filter. Pending delta overlay semantics were preserved: when tombstones are excluded, a pending tombstone removes a matching materialized base row instead of being ignored. Diff scans explicitly include tombstones. No backward compatibility shim was kept. ### Benchmarks Focused command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|exists_many_exact_keys|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` Result: passed. 
| row | after median | criterion status | | ----------------------------------------- | -----------: | ------------------------------------------- | | `sqlite/get_many_exact_keys/1k` | 4.4035 ms | no change | | `sqlite/exists_many_exact_keys/1k` | 2.4097 ms | improved vs pre-change get/materialize path | | `sqlite/scan_keys_only/1k` | 2.4736 ms | no change | | `sqlite/scan_headers_only/1k` | 3.0070 ms | no change | | `sqlite/scan_full_rows/1k` | 4.1861 ms | no change | | `sqlite/prefix_scan_schema/1k` | 4.1514 ms | no change | | `sqlite/prefix_scan_schema_file_null/1k` | 4.1977 ms | no change | | `rocksdb/get_many_exact_keys/1k` | 3.4003 ms | no change | | `rocksdb/exists_many_exact_keys/1k` | 1.4389 ms | improved vs pre-change get/materialize path | | `rocksdb/scan_keys_only/1k` | 1.5966 ms | no change | | `rocksdb/scan_headers_only/1k` | 1.9876 ms | no change | | `rocksdb/scan_full_rows/1k` | 3.2413 ms | no change | | `rocksdb/prefix_scan_schema/1k` | 3.6050 ms | no change; noisy high interval | | `rocksdb/prefix_scan_schema_file_null/1k` | 3.3356 ms | no change | Final exists-only rerun after the tombstone semantic fix: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/exists_many_exact_keys/1k' ``` | row | final median | | ----------------------------------- | -----------: | | `sqlite/exists_many_exact_keys/1k` | 2.4097 ms | | `rocksdb/exists_many_exact_keys/1k` | 1.4389 ms | ### Storage Storage fixture command: ```sh cargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1 ``` Result: passed. | backend / state | bytes | status | | -------------------------------------- | ------: | ----------------------------------------------------------- | | raw SQLite / inserted | 1692456 | unchanged | | Lix SQLite / inserted | 1075136 | unchanged | | Lix SQLite / after create_version | 1087496 | unchanged | | Lix SQLite / after fast-forward merge | 5291608 | one SQLite page over the prior run; known page-layout noise | | Lix SQLite / after divergent merge | 5619288 | one SQLite page over the prior run; known page-layout noise | | Lix RocksDB / inserted | 993900 | unchanged | | Lix RocksDB / after create_version | 995766 | unchanged | | Lix RocksDB / after fast-forward merge | 1157143 | unchanged | | Lix RocksDB / after divergent merge | 1528256 | unchanged | ### Review Loop Reviewer pass 1: ```text HIGH: rows_exist_at_commit reported tombstones as existing. Fixed by checking the value header in tree.exists_many and by applying pending delta tombstones as false in projection_keys_exist_at_commit. ``` Reviewer pass 2: ```text HIGH: none. The fixed paths now return false for tombstones, pending delta tombstones clear existence, diff scans still include tombstones, and the benchmark uses the new existence API. 
``` ### Verification ```sh cargo fmt -p lix_engine cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine tracked_state:: --features storage-benches cargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1 cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|exists_many_exact_keys|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/exists_many_exact_keys/1k' ``` All commands passed. ### Interpretation ```text Keep. Primary axis: exists reads and tombstone visibility. Structural win: exists_many no longer piggybacks on full exact-row materialization, and visibility filtering can reject hidden tombstones from the value header before locator/string decode or commit_store materialization. Timing win: exists_many_exact_keys moves from the previous materializing path around 4.5 ms SQLite / 3.5 ms RocksDB to 2.4097 ms SQLite / 1.4389 ms RocksDB. The ordinary scan fixture is mostly live rows, so header visibility is neutral there rather than a tombstone-heavy win. Guardrails: storage shape is unchanged, hidden pending tombstones still remove base rows, diff keeps tombstones visible, tracked logic stays on the tracked path, and the benchmark row now measures the named exists API rather than a full materialized get. Next optimization should attack get_many_exact_keys itself: the exact-read path still decodes full locator/timestamp strings and materializes full rows even when the caller only needs the JSON payload, so the remaining budget is likely in value decode and commit/json materialization grouping. ``` ## Optimization 4: Store JSON Refs In Primary Tracked Values Date: 2026-05-10 Commit: this entry is committed with the optimization ### Change Changed the primary tracked-state value format from locator-only payload metadata to locator plus direct `snapshot_ref` / `metadata_ref` fields. `VALUE_VERSION` was bumped and no backward decode shim was kept. Before this cut, full materialization decoded tracked values, grouped commit store change-pack loads by `(source_commit_id, source_pack_id)`, decoded the referenced change just to recover its JSON refs, then grouped JSON loads. The tracked value is already the durable projection boundary, and both staging and root materialization already have the JSON refs at write time, so the extra commit-pack lookup was record-local metadata indirection. After this cut: - primary tracked values encode optional `snapshot_ref` and `metadata_ref`; - delta packs carry those refs too, so pending-delta reads can materialize payloads without commit-pack lookups; - by-file header-index values intentionally encode `None` refs so the secondary header index stays lean; - by-file scans that need payloads still fetch primary tracked values before materializing; - `materialize_index_entries` no longer takes `CommitStoreContext`. This follows the same physical principle as page/tuple formats in the reference systems: record-local metadata needed to materialize a tuple should live with the tuple/index entry, not require an unrelated side lookup. 
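A rough sketch of the shape change, with hypothetical struct and field names; the real encoding and the bumped `VALUE_VERSION` live in tracked_state:

```rust
// Hypothetical sketch of the value-shape change described above; the real field
// names, byte encoding, and bumped VALUE_VERSION in tracked_state differ.
#[derive(Debug)]
struct JsonRef(String); // content-addressed handle into the JSON store (assumed shape)

// Before this cut: the tracked value carried only the change locator, so
// recovering JSON refs meant loading and decoding the referenced commit-store pack.
#[allow(dead_code)]
struct TrackedValueBefore {
    change_locator: String,
}

// After this cut: optional payload refs are record-local, so exact/full
// materialization can group JSON loads straight from the projection value.
struct TrackedValueAfter {
    change_locator: String,
    snapshot_ref: Option<JsonRef>,
    metadata_ref: Option<JsonRef>,
}

fn main() {
    let primary = TrackedValueAfter {
        change_locator: "commit/pack/change".to_string(),
        snapshot_ref: Some(JsonRef("json-ref-abc".to_string())),
        metadata_ref: None,
    };
    // By-file header-index values intentionally encode None for both refs,
    // keeping the secondary index lean.
    println!("{:?} {:?}", primary.snapshot_ref, primary.metadata_ref);
    let _ = primary.change_locator;
}
```

The cost of this shape is the extra bytes per primary tracked value and delta pack recorded in the storage table below.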
### Benchmarks Standard focused command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|exists_many_exact_keys|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` Result: passed. Final medians: | row | after median | criterion status | | ----------------------------------------- | -----------: | ---------------------------------------------- | | `sqlite/get_many_exact_keys/1k` | 4.0589 ms | no change in final rerun; initial run improved | | `sqlite/exists_many_exact_keys/1k` | 2.5128 ms | no change | | `sqlite/scan_keys_only/1k` | 2.5838 ms | no change | | `sqlite/scan_headers_only/1k` | 2.5942 ms | no change in final rerun; initial run improved | | `sqlite/scan_full_rows/1k` | 3.8172 ms | no change | | `sqlite/prefix_scan_schema/1k` | 3.8885 ms | no change | | `sqlite/prefix_scan_schema_file_null/1k` | 3.8453 ms | no change | | `rocksdb/get_many_exact_keys/1k` | 2.9264 ms | improved | | `rocksdb/exists_many_exact_keys/1k` | 1.4271 ms | no change | | `rocksdb/scan_keys_only/1k` | 1.5068 ms | no change | | `rocksdb/scan_headers_only/1k` | 1.5683 ms | no change in final rerun; initial run improved | | `rocksdb/scan_full_rows/1k` | 2.8121 ms | no change in final rerun; initial run improved | | `rocksdb/prefix_scan_schema/1k` | 2.7684 ms | no change in final rerun; initial run improved | | `rocksdb/prefix_scan_schema_file_null/1k` | 2.7350 ms | no change | Initial run immediately after the change showed the structural win before the final rerun reset Criterion's comparison baseline: ```text sqlite/get_many_exact_keys: 4.0738 ms, improved rocksdb/get_many_exact_keys: 3.1168 ms, improved rocksdb/scan_full_rows: 2.8681 ms, improved rocksdb/prefix_scan_schema: 2.7909 ms, improved ``` ### Storage Storage fixture command: ```sh cargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1 ``` Result: passed. | backend / state | bytes | delta vs Optimization 3 | status | | -------------------------------------- | ------: | ----------------------: | ------------------------------------ | | raw SQLite / inserted | 1692456 | 0 | unchanged | | Lix SQLite / inserted | 1112216 | +37080 | direct snapshot refs in primary tree | | Lix SQLite / after create_version | 1124576 | +37080 | direct snapshot refs in primary tree | | Lix SQLite / after fast-forward merge | 5324328 | +32720 | below previous noisy merge shape | | Lix SQLite / after divergent merge | 5652176 | +32888 | below previous noisy merge shape | | Lix RocksDB / inserted | 1028557 | +34657 | direct snapshot refs in primary tree | | Lix RocksDB / after create_version | 1030457 | +34691 | direct snapshot refs in primary tree | | Lix RocksDB / after fast-forward merge | 1195234 | +38091 | direct snapshot refs in primary tree | | Lix RocksDB / after divergent merge | 1576585 | +48329 | direct snapshot refs in primary tree | The inserted/create-version states remain below raw SQLite at 1k rows. The merge states were already above the storage-size north star before this cut; the additional bytes are explained by durable payload refs that remove a commit-pack read from exact/full materialization. ### Review Loop Reviewer pass: ```text HIGH: none. MEDIUM: none. LOW: materialization.rs still described commit_store pack loads. Fixed the comment to describe direct tracked JSON refs and grouped json_store loads. 
``` ### Verification ```sh cargo fmt -p lix_engine cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine tracked_state:: --features storage-benches cargo test -p lix_engine --features storage-benches --test json_pointer_crud_storage -- --ignored --nocapture --test-threads=1 cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null|scan_headers_only|scan_keys_only)/1k' cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|exists_many_exact_keys|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` All commands passed. ### Interpretation ```text Keep. Primary axis: exact/full materialization. Structural win: payload refs now live at the tracked projection boundary, so materialization avoids loading and decoding commit_store change packs just to recover record-local JSON refs. Timing win: exact gets improved on both backends; RocksDB full/prefix scans also moved materially in the initial run. The final standard rerun still shows lower medians than Optimization 3 for exact/full rows, even when Criterion reports some rows as no-change because the comparison baseline had already included this cut. Storage tradeoff: roughly 35-37 KB extra at 1k inserted rows, with inserted and create_version states still below raw SQLite. By-file header index values stay lean by omitting payload refs, so the cost is restricted to primary tracked values and delta packs. No temporary shim. Next optimization should attack the remaining exact read overhead inside tracked value/key materialization: the read path still allocates full TrackedStateKey/TrackedStateIndexValue/MaterializedTrackedStateRow objects even for fixed-shape JSON-pointer reads, and SQLite full reads remain above the 1.5x target. ``` ## Optimization 5: Consume JSON Bytes Into Materialized Strings Date: 2026-05-10 Commit: this entry is committed with the optimization ### Change Changed tracked-state JSON materialization to consume each owned `Vec` payload slot with `String::from_utf8` instead of validating `&[u8]` and then copying with `to_string`. This does not change storage layout or APIs. It is a narrow ownership cleanup inside the read path after Optimization 4 removed commit-pack lookup from payload materialization. The implementation keeps the current invariant explicit: each row plan owns its projected JSON slots. If tracked-state materialization later deduplicates refs before row planning, duplicate consumers must clone intentionally. ### Benchmarks Focused command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` Result: passed. 
| row | after median | criterion status | | ----------------------------------------- | -----------: | ---------------- | | `sqlite/get_many_exact_keys/1k` | 4.1139 ms | no change | | `sqlite/scan_full_rows/1k` | 3.8428 ms | no change | | `sqlite/prefix_scan_schema/1k` | 3.8457 ms | no change | | `sqlite/prefix_scan_schema_file_null/1k` | 3.8080 ms | no change | | `rocksdb/get_many_exact_keys/1k` | 2.9443 ms | no change | | `rocksdb/scan_full_rows/1k` | 2.7510 ms | no change | | `rocksdb/prefix_scan_schema/1k` | 2.6865 ms | no change | | `rocksdb/prefix_scan_schema_file_null/1k` | 2.7327 ms | no change | This is not a Criterion-proven timing win on the 1k fixture. It removes an avoidable allocation/copy in the payload-heavy path and should matter more for larger JSON payloads than the small smoke rows. ### Storage No storage change. The storage fixture from Optimization 4 still describes the current byte shape. ### Review Loop Reviewer pass: ```text HIGH: none. MEDIUM: none. LOW: remove test-only wrapper around materialized_json_string. Fixed. LOW: document one-shot JSON slot invariant for .take(). Fixed. Recommendation: keep, but do not market it as a Criterion-proven optimization. ``` ### Verification ```sh cargo fmt -p lix_engine cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine tracked_state:: --features storage-benches cargo test -p lix_engine materialized_json_string_consumes_owned_payload_bytes --features storage-benches cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` All commands passed. ### Interpretation ```text Keep as a small ownership cleanup. Primary axis: full materialization allocation pressure. Structural win: materialization consumes owned JSON bytes directly into String, avoiding a validate-then-copy path. Timing: no measured Criterion win on the 1k smoke fixture, so this does not advance the budget by itself. It is low-risk, read-path-only, and keeps the payload materialization shape moving toward fewer copies. No temporary shim. Next optimization still needs a larger structural cut for full scans, likely avoiding full row object construction where callers only need counts or using a more borrowed/streamed row materialization path without changing benchmark semantics. ``` ## Optimization 6: Make By-File Roots a Concrete-File Partial Index Date: 2026-05-10 Commit: this entry is committed with the optimization ### Change Changed the tracked-state by-file secondary tree into an explicit partial index for concrete `file_id` values only. `ByFileIndex::should_use` now returns true only when every file filter is a concrete `NullableKeyFilter::Value(_)`. Null-only and mixed null/concrete scans use the primary tracked tree, whose key layout covers both null and concrete file ids. `stage_projection_root` now writes the primary root for every projected commit but stages a by-file root only when needed: - no parent by-file root and no concrete-file deltas: do not stage a by-file root; - parent by-file root and no concrete-file deltas: inherit the parent by-file root with zero chunk puts; - concrete-file deltas: apply only those deltas to the by-file root. This matches the physical predicate of the secondary index with the planner predicate that is allowed to use it. 
It also avoids carrying null-file entries in a secondary tree that the planner never uses for null-file filters. Added regression coverage for: - null-file rows not staging a by-file root; - a null-only parent plus concrete-file child scanned with mixed `[Null, Value(file)]` filters, which must use the primary tree and return both inherited null rows and concrete child rows. ### Benchmarks Focused command before the final concrete-only cleanup: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|prefix_scan_schema_file_null|scan_full_rows)/1k' ``` Result: passed. | row | after median | criterion status | | -------------------------------------------- | -----------: | ---------------- | | `raw_sqlite/write_root_all_rows/1k` | 2.9524 ms | noisy baseline | | `raw_sqlite/scan_full_rows/1k` | 1.2119 ms | reference | | `raw_sqlite/prefix_scan_schema_file_null/1k` | 1.4604 ms | reference | | `sqlite/write_root_all_rows/1k` | 6.2808 ms | no change | | `sqlite/scan_full_rows/1k` | 3.8271 ms | no change | | `sqlite/prefix_scan_schema_file_null/1k` | 4.0401 ms | no change | | `rocksdb/write_root_all_rows/1k` | 5.4735 ms | no change | | `rocksdb/scan_full_rows/1k` | 2.7509 ms | no change | | `rocksdb/prefix_scan_schema_file_null/1k` | 2.7411 ms | no change | This is not a runtime win for the current JSON-pointer smoke rows. `write_root_all_rows` uses delta staging rather than projection-root staging, and the benchmark rows have `file_id = None`. ### Storage Storage command: ```sh cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture ``` Result: passed. Final repeated 1k storage rows: | row | bytes | bytes/row | status | | -------------------------------------- | ------: | --------: | ------------------------------------- | | raw SQLite / inserted | 1692456 | 1692.5 | reference | | Lix SQLite / inserted | 1112216 | 1112.2 | unchanged | | Lix SQLite / after create_version | 1124576 | 1124.6 | unchanged | | Lix SQLite / after fast-forward merge | 5324328 | 5324.3 | unchanged from Optimization 4/5 shape | | Lix SQLite / after divergent merge | 5652176 | 5652.2 | unchanged from Optimization 4/5 shape | | Lix RocksDB / inserted | 1028557 | 1028.6 | unchanged | | Lix RocksDB / after create_version | 1030457 | 1030.5 | unchanged | | Lix RocksDB / after fast-forward merge | 1195234 | 1195.2 | unchanged | | Lix RocksDB / after divergent merge | 1576587 | 1576.6 | effectively unchanged | An earlier storage sample before the concrete-only write cleanup showed lower SQLite merge-state bytes, but repeated final runs returned to the prior committed SQLite shape. Treat this optimization as storage-neutral for the current JSON-pointer accounting fixture. ### Review Loop Reviewer pass 1: ```text HIGH: none. MEDIUM: none. LOW: scan_request_from_tracked still looked more general than the all-concrete planner contract. Fixed with debug assertion and Value-only mapping. LOW: add the mixed Null + Value regression case. Fixed. LOW: once a by-file root exists, null-file rows were still indexed. Fixed by making by-file writes concrete-file-only and inheriting unchanged roots. Recommendation: keep. ``` Reviewer pass 2: ```text HIGH: none. MEDIUM: none. LOW: encode_key_ref could still encode file_id = None. Fixed with a debug assertion at the helper boundary. Recommendation: keep. 
The result is a coherent partial secondary index: concrete-only on writes, concrete-only on reads, with safe parent-root inheritance. ``` ### Verification ```sh cargo fmt -p lix_engine cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine tracked_state:: --features storage-benches cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|prefix_scan_schema_file_null|scan_full_rows)/1k' ``` All commands passed. ### Interpretation ```text Keep as a physical-layout cleanup, not as a budget-moving benchmark win. Primary axis: secondary-index shape. Structural win: by-file roots now behave like a partial secondary index whose physical contents and planner predicate agree. This prevents null-file rows from being copied into a secondary tree that cannot answer null-file scans safely, and it removes the old missing-root empty-result behavior for projected reads. Timing/storage: neutral on the current JSON-pointer fixture. This does not move the remaining <= 1.5x runtime target or the SQLite merge-state storage issue. No temporary shim. Next optimization should return to budget-moving read/write costs: either the primary tracked-tree write path for full-root materialization, or row materialization/allocation in exact and scan reads. ``` ## Optimization 7: Skip JSON Planning for Header-Only Materialization Date: 2026-05-10 Commit: this entry is committed with the optimization ### Change Added a no-JSON fast path to tracked-state materialization. When a requested projection omits both `snapshot_content` and `metadata`, `materialize_index_entries` now directly maps tree entries into `MaterializedTrackedStateRow` values with payload columns omitted. This skips work that cannot affect the result for key-only and header-only projections: - no per-row payload plan allocation; - no `json_refs` / `json_ref_localities` vectors; - no pack-locality grouping map; - no empty JSON-store load path. Header semantics are still preserved. Identity fields come from the tracked key, and `deleted`, timestamps, `change_id`, and `commit_id` come from the tracked value. Tombstone filtering still uses `row.deleted`, not `snapshot_content`. No storage layout change. ### Benchmarks Focused command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k' ``` Result: passed. | row | after median | criterion status | | ----------------------------------------- | -----------: | ------------------------------ | | `sqlite/scan_keys_only/1k` | 2.4932 ms | -6.0%, within noise threshold | | `sqlite/scan_headers_only/1k` | 2.5955 ms | no change | | `sqlite/scan_full_rows/1k` | 3.7797 ms | no change | | `sqlite/prefix_scan_schema_file_null/1k` | 3.7925 ms | improved, likely noisy control | | `rocksdb/scan_keys_only/1k` | 1.5304 ms | no change | | `rocksdb/scan_headers_only/1k` | 1.5769 ms | no change | | `rocksdb/scan_full_rows/1k` | 2.7634 ms | no change | | `rocksdb/prefix_scan_schema_file_null/1k` | 2.6894 ms | improved, likely noisy control | The structural improvement is real for projections without payload columns, but Criterion does not show a strong win on the 1k smoke fixture. 
Full-row scans are included as controls because they still use the JSON hydration path. ### Storage No storage change. ### Review Loop Reviewer pass: ```text HIGH: none. MEDIUM: none. LOW: infallible helper returned Result only to fit collect. Fixed by returning MaterializedTrackedStateRow directly and wrapping once at the call site. Recommendation: keep. This is an executor-style projection fast path: when no payload columns are requested, skip payload planning entirely. ``` ### Verification ```sh cargo fmt -p lix_engine cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine projected_scans_do_not_materialize_snapshot_when_snapshot_content_is_omitted --features storage-benches cargo test -p lix_engine tracked_state:: --features storage-benches cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k' ``` All commands passed. ### Interpretation ```text Keep as a narrow projection fast path. Primary axis: key/header scans. Structural win: no-payload projections now avoid payload planning rather than constructing empty JSON work and discovering there is nothing to load. Timing: modest/noisy on the current 1k fixture. This does not solve the remaining full-row scan or exact-get gap, but it removes unnecessary executor work for projected scans and keeps the read path moving toward column-aware materialization. No temporary shim. Next optimization should target full payload materialization or exact get_many: the remaining expensive rows still hydrate JSON and build full MaterializedTrackedStateRow objects. ``` ## Optimization 8: Store JSON Locality as Row-Plan Indexes Date: 2026-05-10 Commit: this entry is committed with the optimization ### Change Changed full tracked-state payload materialization to keep JSON ref locality as compact row-plan indexes instead of cloning commit ids per projected JSON ref. Before this change, `materialize_index_entries` stored `json_ref_localities: Vec<(String, u32)>`. Each projected `snapshot_content` or `metadata` ref cloned `value.change_locator.source_commit_id` just so `load_projection_json_values` could group refs by commit pack. Row plans already own the same `commit_id`. The locality vector now stores a small `JsonRefLocality { row_index, pack_id }`, and the grouping step borrows `row_plans[row_index].commit_id.as_str()` while loading JSON values. No storage/API behavior change. ### Benchmarks Focused command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` Result: passed. | row | after median | criterion status | | ----------------------------------------- | -----------: | ---------------- | | `sqlite/get_many_exact_keys/1k` | 3.9197 ms | no change | | `sqlite/scan_full_rows/1k` | 3.8695 ms | no change | | `sqlite/prefix_scan_schema/1k` | 3.7669 ms | no change | | `sqlite/prefix_scan_schema_file_null/1k` | 3.7631 ms | no change | | `rocksdb/get_many_exact_keys/1k` | 3.0397 ms | no change | | `rocksdb/scan_full_rows/1k` | 2.7001 ms | no change | | `rocksdb/prefix_scan_schema/1k` | 2.7920 ms | no change | | `rocksdb/prefix_scan_schema_file_null/1k` | 2.6921 ms | no change | SQLite exact gets moved lower in this sample than the previous committed log, but Criterion still reports no change. 
Treat this as an allocation cleanup, not a proven runtime win. ### Storage No storage change. ### Review Loop Reviewer pass: ```text HIGH: none. MEDIUM: none. LOW: parallel arrays plus row_plans are correct but coupled; use a small JsonRefLocality struct to make the invariant clearer. Fixed. Recommendation: keep. Locality is now an index into already-owned row-plan data rather than repeated commit-id allocation. ``` ### Verification ```sh cargo fmt -p lix_engine cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine tracked_state:: --features storage-benches cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` All commands passed. ### Interpretation ```text Keep as a small allocation cleanup in the payload materialization path. Primary axis: full-row materialization allocation pressure. Structural win: JSON locality now uses compact indexes into existing row-plan ownership, which matches the broader direction of carrying offsets/indexes beside payload metadata instead of duplicating identifying strings. Timing: Criterion-neutral on the 1k fixture. This is not enough to close the remaining <= 1.5x exact/full read gap. No temporary shim. Next optimization still needs a larger cut in JSON hydration or row construction. The obvious remaining cost is that full reads still allocate a MaterializedTrackedStateRow per row and convert every JSON payload to String. ``` ## Optimization 9: Return Unique JSON Batch Payloads Without Cloning Date: 2026-05-10 Commit: this entry is committed with the optimization ### Change Changed `json_store::load_json_bytes_many_in_scope` to avoid cloning loaded JSON payload bytes when the request contains no duplicate refs. The loader already deduplicates requested refs into `unique_values`. Before this change it always rebuilt the result with: ```text requested_indexes.map(|index| unique_values[index].clone()) ``` That cloned every loaded `Vec` even when every ref was unique and `unique_values` was already in request order. Full tracked reads then consumed the cloned bytes into `String`, leaving the original decoded payload copy unused. The loader now tracks whether any duplicate ref was seen: - no duplicates: return `unique_values` directly; - duplicates: keep the old clone-to-request-order behavior so repeated refs still produce repeated result slots. This applies to both commit-pack and out-of-band JSON scopes. Missing refs keep their `None` slots in either path. ### Benchmarks Focused command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` Result: passed. 
| row | after median | criterion status | | ----------------------------------------- | -----------: | ----------------------------- | | `sqlite/get_many_exact_keys/1k` | 3.8568 ms | -3.3%, within noise threshold | | `sqlite/scan_full_rows/1k` | 3.7141 ms | improved | | `sqlite/prefix_scan_schema/1k` | 3.6749 ms | no change | | `sqlite/prefix_scan_schema_file_null/1k` | 3.6774 ms | no change | | `rocksdb/get_many_exact_keys/1k` | 2.9055 ms | no change | | `rocksdb/scan_full_rows/1k` | 2.5562 ms | no change | | `rocksdb/prefix_scan_schema/1k` | 2.7618 ms | no change | | `rocksdb/prefix_scan_schema_file_null/1k` | 2.7406 ms | no change | The strongest measured signal is SQLite full scans. RocksDB and exact gets move in the right direction but remain Criterion-neutral in this run. ### Storage No storage change. ### Review Loop Reviewer pass: ```text HIGH: none. MEDIUM: none. LOW: json_values_in_request_order depends on the has_duplicate_refs flag. Fixed with debug assertions that the no-duplicate path has request indexes 0..len and the same length as unique_values. Recommendation: keep. This is a real structural copy cut in the payload path, and the SQLite scan_full_rows improvement is plausible. ``` ### Verification ```sh cargo fmt -p lix_engine cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine json_store::store::tests::json_batch_load_roundtrips_in_request_order --features storage-benches cargo test -p lix_engine tracked_state:: --features storage-benches cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` All commands passed. ### Interpretation ```text Keep as a payload-copy reduction. Primary axis: full-row materialization. Structural win: unique JSON batch loads now transfer ownership of decoded payload bytes directly to the caller instead of cloning them back into request order. This pairs with tracked materialization consuming those bytes with String::from_utf8. Timing: SQLite full scans improved in the focused run; other full/exact rows remain noisy but generally moved lower. The <= 1.5x target is still not met. No temporary shim. Next optimization should look below row materialization again: load_from_packs still decodes entire JSON packs for the requested refs, and tracked exact reads still construct full rows even when callers only check presence in the current bench harness. ``` ## Optimization 10: Encode Delta Packs From Borrowed Deltas Date: 2026-05-10 Commit: this entry is committed with the optimization ### Change Changed normal tracked-state delta staging to encode delta packs directly from borrowed `TrackedStateDeltaRef` values. Before this change, `TrackedStateWriter::stage_delta` cloned every borrowed delta into owned `TrackedStateDeltaEntry` objects, including schema/file/entity identity, source commit/change ids, and timestamp strings. It then immediately encoded those owned entries into the delta pack. The write path now uses: - `codec::encode_delta_pack_refs`; - `storage::stage_delta_pack_refs`; - `TrackedStateWriter::stage_delta` calling the borrowed staging path directly. The old owned-entry encode/stage helper and `delta_entries_from_refs` were removed. Decode still materializes owned `TrackedStateDeltaEntry` values because readers need owned entries after loading a persisted pack. 
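For orientation, a minimal sketch of the borrowed staging shape described above; the `TrackedStateDeltaRef` fields, header widths, and encoder body are illustrative assumptions, not the engine's actual definitions:

```rust
/// Illustrative stand-in for the borrowed delta view; the real struct carries
/// more identity fields (schema/file ids, change ids, timestamps).
struct TrackedStateDeltaRef<'a> {
    entity_id: &'a str,
    source_commit_id: &'a str,
}

/// Encode straight from borrowed refs; no owned entry layer is built first.
fn encode_delta_pack_refs(deltas: &[TrackedStateDeltaRef<'_>]) -> Vec<u8> {
    let mut pack = Vec::new();
    pack.extend_from_slice(b"LXTD"); // magic/version/count header; widths here are illustrative
    pack.extend_from_slice(&1u16.to_le_bytes());
    pack.extend_from_slice(&(deltas.len() as u32).to_le_bytes());
    for delta in deltas {
        for field in [delta.entity_id, delta.source_commit_id] {
            pack.extend_from_slice(&(field.len() as u32).to_le_bytes());
            pack.extend_from_slice(field.as_bytes());
        }
    }
    pack
}

fn main() {
    let deltas = [TrackedStateDeltaRef { entity_id: "e1", source_commit_id: "c1" }];
    // Callers keep ownership of their delta views; staging only reads them.
    let pack = encode_delta_pack_refs(&deltas);
    assert!(pack.starts_with(b"LXTD"));
}
```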
No delta-pack format change: the encoder still writes the same `LXTD` magic/version/count and uses the same tracked key/value encoders. ### Benchmarks Focused command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k' ``` Result: passed. | row | after median | criterion status | | ------------------------------------------ | -----------: | -------------------------- | | `sqlite/write_root_all_rows/1k` | 6.2844 ms | no change | | `sqlite/write_delta_10pct_updates/1k` | 2.6592 ms | no change, noisy guardrail | | `sqlite/write_tombstone_10pct_deletes/1k` | 2.3671 ms | no change, noisy guardrail | | `rocksdb/write_root_all_rows/1k` | 5.3605 ms | no change | | `rocksdb/write_delta_10pct_updates/1k` | 1.3421 ms | no change, noisy guardrail | | `rocksdb/write_tombstone_10pct_deletes/1k` | 1.2464 ms | no change | Root-write medians moved lower than several previous samples, especially RocksDB, but Criterion still reports no change. Treat this as a production write-path allocation cleanup, not a proven target-closing win. ### Storage No storage change. ### Review Loop Reviewer pass: ```text HIGH: none. MEDIUM: none. LOW: remove stale #[allow(dead_code)] from TrackedStateDeltaRef. Fixed. LOW: add direct delta-pack codec regression coverage for the borrowed encoder. Fixed with delta_pack_ref_encoder_roundtrips_entries. Recommendation: keep. This is a clean production write-path allocation cut and removes an artificial owned staging API. ``` ### Verification ```sh cargo fmt -p lix_engine cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine delta_pack_ref_encoder_roundtrips_entries --features storage-benches cargo test -p lix_engine tracked_state:: --features storage-benches cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k' ``` All commands passed. ### Interpretation ```text Keep as a borrowed-write cleanup. Primary axis: root and delta writes. Structural win: normal tracked-state commits no longer allocate a full owned delta-entry layer just to encode the same bytes into a delta pack. This follows the same shape used by reference systems that encode from stable in-memory views and materialize owned records only when reading back from storage. Timing: Criterion-neutral on the 1k fixture. This does not close the remaining write_root_all_rows budget gap, but it removes an obvious allocation layer from the production write path without changing storage semantics. No temporary shim. Next optimization needs a bigger write-side cut, likely in commit_store staging, JSON pack staging, or the transaction write-set path, because delta-pack encoding itself is no longer cloning the tracked projection rows first. ``` ## Optimization 11: Encode Change Packs From Existing Slices Date: 2026-05-10 Commit: this entry is committed with the optimization ### Change Changed `commit_store::codec::encode_change_pack` to accept `&[ChangeRef<'_>]` instead of a generic iterator that it immediately collected into a temporary `Vec`. The production caller already has authored changes in a `Vec`, so the encoder can read the count from the slice and encode refs directly in order. This removes one temporary collection from the commit-store write path. 
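A minimal sketch of the slice-based encoder shape, assuming `ChangeRef` is roughly a borrowed view of an authored change; the field set and body are illustrative only:

```rust
/// Illustrative borrowed view of an authored change.
struct ChangeRef<'a> {
    id: &'a str,
}

/// After: take the caller's slice, read the count from it, encode refs in order.
/// Before, the signature was a generic iterator that was collected into a
/// temporary Vec just to learn the same count.
fn encode_change_pack(changes: &[ChangeRef<'_>]) -> Vec<u8> {
    let mut pack = Vec::with_capacity(4 + changes.len() * 16);
    pack.extend_from_slice(&(changes.len() as u32).to_le_bytes());
    for change in changes {
        pack.extend_from_slice(&(change.id.len() as u32).to_le_bytes());
        pack.extend_from_slice(change.id.as_bytes());
    }
    pack
}

fn main() {
    // The production caller already holds authored changes in a Vec, so passing
    // `&authored` avoids building a second collection.
    let authored = vec![ChangeRef { id: "change-1" }, ChangeRef { id: "change-2" }];
    let pack = encode_change_pack(&authored);
    assert_eq!(u32::from_le_bytes([pack[0], pack[1], pack[2], pack[3]]), 2);
}
```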
No storage format change. ### Benchmarks Focused command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates)/1k' ``` Result: passed. | row | after median | criterion status | | -------------------------------------- | -----------: | ---------------- | | `sqlite/write_root_all_rows/1k` | 6.0937 ms | no change | | `sqlite/write_delta_10pct_updates/1k` | 2.5978 ms | no change | | `rocksdb/write_root_all_rows/1k` | 5.4208 ms | no change | | `rocksdb/write_delta_10pct_updates/1k` | 1.3267 ms | no change | ### Storage No storage change. ### Review Loop Reviewer pass: ```text HIGH: none. MEDIUM: none. LOW: none. Recommendation: keep the code, but do not present it as a standalone budget-moving win. It is a clean write-path allocation cleanup with no measured Criterion win. ``` ### Verification ```sh cargo fmt -p lix_engine cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine commit_store:: --features storage-benches cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates)/1k' ``` All commands passed. ### Interpretation ```text Keep as a small encoder allocation cleanup. Primary axis: commit-store write packing. Structural win: encode from the already-shaped authored-change slice instead of materializing a second vector just to know the count. Timing: Criterion-neutral. This is not a budget-moving optimization by itself, but it composes with the borrowed tracked delta-pack encoder and keeps the write path moving away from temporary owned collections. No temporary shim. Next optimization needs a larger cut in JSON pack staging or transaction write-set application; the obvious per-row encoder clones in tracked and commit-store delta packing have now been reduced. ``` ## Optimization 12: Preserve JSON Pack Input Order Without Tree Sorting Date: 2026-05-11 Commit: this entry is committed with the optimization ### Change Changed `JsonStoreWriter::stage_batch` to keep unique encoded payloads in first-seen input order instead of inserting them into a `BTreeMap` sorted by hash. The writer still returns refs in request order and still deduplicates repeated payload hashes. The new shape is: - `order: Vec` for the caller-visible result; - `unique_encoded: Vec` for first-seen unique payloads; - `HashSet<[u8; 32]>` only for duplicate suppression. For commit-pack placement, pack-local entries are selected from `unique_encoded.iter()` in input order. Direct out-of-band writes iterate the same vector and skip pack-local payloads. This intentionally changes pack entry order from hash-sorted to input order. Pack lookup is hash-addressed and scans decoded entries by hash, so entry order is not part of the semantic contract. Lix has not shipped, and storage accounting stayed unchanged. Added regression coverage for duplicate writer input: `[A, A, B]` returns `[refA, refA, refB]`, stores only the pack-local payloads, and hydrates both unique refs from the commit pack. ### Benchmarks Focused command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k' ``` Result: passed. 
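A minimal sketch of the first-seen-order dedupe described above; payload and hash types are simplified, and the real writer additionally tracks commit-pack placement:

```rust
use std::collections::HashSet;

/// Deduplicate encoded payloads by hash while keeping first-seen input order.
/// Returns (per-request hash, unique payloads in input order).
fn dedupe_in_input_order(
    payloads: &[(&[u8; 32], &[u8])],
) -> (Vec<[u8; 32]>, Vec<Vec<u8>>) {
    let mut order = Vec::with_capacity(payloads.len());
    let mut unique_encoded = Vec::new();
    let mut seen: HashSet<[u8; 32]> = HashSet::new();
    for (hash, bytes) in payloads {
        order.push(**hash);
        // The HashSet only suppresses duplicates; it no longer dictates pack
        // order the way the previous hash-keyed BTreeMap did.
        if seen.insert(**hash) {
            unique_encoded.push(bytes.to_vec());
        }
    }
    (order, unique_encoded)
}

fn main() {
    let a = [1u8; 32];
    let b = [2u8; 32];
    let input = [(&a, b"A".as_slice()), (&a, b"A".as_slice()), (&b, b"B".as_slice())];
    let (order, unique) = dedupe_in_input_order(&input);
    // [A, A, B]: both request slots for A survive, but A's payload is stored once.
    assert_eq!(order.len(), 3);
    assert_eq!(unique.len(), 2);
}
```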
| row | after median | criterion status | | ------------------------------------------ | -----------: | -------------------------- | | `sqlite/write_root_all_rows/1k` | 5.9331 ms | improved | | `sqlite/write_delta_10pct_updates/1k` | 2.6203 ms | no change, noisy guardrail | | `sqlite/write_tombstone_10pct_deletes/1k` | 2.4790 ms | no change, noisy guardrail | | `rocksdb/write_root_all_rows/1k` | 5.3019 ms | no change | | `rocksdb/write_delta_10pct_updates/1k` | 1.3004 ms | no change | | `rocksdb/write_tombstone_10pct_deletes/1k` | 1.2178 ms | no change | SQLite root writes improved by Criterion. RocksDB root-write median moved lower than recent committed samples but remains Criterion-neutral. ### Storage Storage command: ```sh cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture ``` Result: passed. 1k rows: | row | bytes | bytes/row | status | | -------------------------------------- | ------: | --------: | --------- | | raw SQLite / inserted | 1692456 | 1692.5 | reference | | Lix SQLite / inserted | 1112216 | 1112.2 | unchanged | | Lix SQLite / after create_version | 1124576 | 1124.6 | unchanged | | Lix SQLite / after fast-forward merge | 5324328 | 5324.3 | unchanged | | Lix SQLite / after divergent merge | 5652176 | 5652.2 | unchanged | | Lix RocksDB / inserted | 1028557 | 1028.6 | unchanged | | Lix RocksDB / after create_version | 1030457 | 1030.5 | unchanged | | Lix RocksDB / after fast-forward merge | 1195234 | 1195.2 | unchanged | | Lix RocksDB / after divergent merge | 1576587 | 1576.6 | unchanged | ### Review Loop Reviewer pass: ```text HIGH: none. MEDIUM: none. LOW: add duplicate writer-input coverage for [A, A, B]. Fixed. Recommendation: keep. This is a real hot-path structural improvement with a measured SQLite root-write win, no storage accounting regression, and acceptable pack-order semantics. ``` ### Verification ```sh cargo fmt -p lix_engine cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine json_store:: --features storage-benches cargo test -p lix_engine tracked_state:: --features storage-benches cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k' ``` All commands passed. ### Interpretation ```text Keep as a JSON pack write-path improvement. Primary axis: root writes. Structural win: unique JSON-pointer payloads no longer pay hash-sorted tree-map insertion and sorted iteration before being packed into a commit-local JSON pack. Dedupe remains hash-based, while physical pack order follows deterministic input order. Timing: SQLite write_root_all_rows improved. RocksDB remains neutral but did not regress materially. The root-write target is still above 1.5x raw SQLite. No temporary shim. Next optimization should keep attacking write_root_all_rows, likely below the generic StorageWriteSet/backend batch application or by reducing JSON payload encoding work before commit-pack staging. 
``` ## Optimization 13: Use fixed JSON hash lookup keys and single-pack projection loads Date: 2026-05-11 Commit: this entry is committed with the optimization ### Change Changed JSON-store read lookup tables from ordered `Vec` keys to fixed `[u8; 32]` hash keys: - `JsonRef::as_hash_array()` exposes the existing hash without conversion. - `load_json_bytes_many_in_scope` deduplicates requested refs with `HashMap<[u8; 32], usize>` while preserving first-seen backend get order in the side vectors. - `load_from_packs` matches decoded pack entries with `HashMap<[u8; 32], usize>` instead of allocating hash `Vec`s. Added a tracked-state materialization fast path for the common root-read case where all projected JSON refs are local to one `(commit_id, pack_id)`. The fast path calls `json_store.load_bytes_many` once with the original `json_refs` slice and returns values directly in request order. Mixed-pack reads keep the previous grouped fallback. The shortcut checks that JSON refs and locality indexes remain in lockstep before selecting the fast path. Added unit coverage for same-pack duplicate slots and mixed-pack rejection. ### Benchmarks Focused command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows)/1k' ``` Result: passed. | row | before median | after median | criterion status | | ----------------------------------- | ------------: | -----------: | ---------------------- | | `raw_sqlite/get_many_exact_keys/1k` | 2.0599 ms | 2.0580 ms | reference | | `raw_sqlite/scan_full_rows/1k` | 1.1594 ms | 1.1722 ms | reference | | `sqlite/get_many_exact_keys/1k` | 3.9323 ms | 3.8132 ms | no change | | `sqlite/scan_full_rows/1k` | 3.6356 ms | 3.5962 ms | no change | | `rocksdb/get_many_exact_keys/1k` | 2.9911 ms | 2.8464 ms | no change | | `rocksdb/scan_full_rows/1k` | 2.5176 ms | 2.3906 ms | within noise threshold | ### Storage Storage command: ```sh cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture ``` Result: passed. 1k rows: | row | bytes | bytes/row | status | | -------------------------------------- | ------: | --------: | -------------------------------------- | | raw SQLite / inserted | 1692456 | 1692.5 | reference | | Lix SQLite / inserted | 1112216 | 1112.2 | unchanged | | Lix SQLite / after create_version | 1124576 | 1124.6 | unchanged | | Lix SQLite / after fast-forward merge | 5303776 | 5303.8 | accounting noise, lower than previous | | Lix SQLite / after divergent merge | 5721904 | 5721.9 | accounting noise, higher than previous | | Lix RocksDB / inserted | 1028557 | 1028.6 | unchanged | | Lix RocksDB / after create_version | 1030457 | 1030.5 | unchanged | | Lix RocksDB / after fast-forward merge | 1195234 | 1195.2 | unchanged | | Lix RocksDB / after divergent merge | 1576588 | 1576.6 | unchanged | ### Review Loop Reviewer pass: ```text HIGH: none. MEDIUM: none. LOW: add an explicit json_refs/localities length check before the fast path; add focused single-pack shortcut coverage. Fixed. Recommendation: keep. This is a clean read-side allocation/comparison cut with no storage-format change. Fixed hash keys are lookup-only and do not affect request/backend/result ordering. 
``` ### Verification ```sh cargo fmt -p lix_engine cargo test -p lix_engine json_store:: --features storage-benches cargo test -p lix_engine tracked_state:: --features storage-benches cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture cargo test -p lix_engine tracked_state::materialization:: --features storage-benches cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows)/1k' ``` All commands passed. ### Interpretation ```text Keep as a modest read-side locality cleanup. Primary axis: exact-key and full-row reads. The structural win is removing avoidable heap-key/order-map work from fixed-hash JSON lookup and avoiding tracked-state grouping allocations when all projected payloads are in one commit-local pack. Timing: medians moved in the intended direction for both Lix backends on the targeted read rows. Only RocksDB scan showed a statistically visible movement, and Criterion classified it within the noise threshold, so this should be treated as a small supporting optimization rather than a budget-moving step. No storage format change. No temporary shim. ``` ## Optimization 14: Reuse trusted JSON refs during payload staging Date: 2026-05-11 Commit: this entry is committed with the optimization ### Change Threaded precomputed JSON refs through JSON-store staging for callers that already own the normalized JSON/ref invariant: - `NormalizedJsonRef` now has private fields and two constructors: `new(normalized)` for ordinary callers and `trusted_prehashed(normalized, json_ref)` for the explicit trusted path. - `JsonStoreWriter::stage_batch` uses the supplied trusted ref to encode JSON without hashing the payload again, falling back to the existing hashing path for normal callers. - Transaction commit passes `StageJson` refs for snapshot/metadata payloads. `StageJson` computes the ref from the same normalized string during transaction staging. - The physical storage benchmark root writer pairs payload strings with refs from the already-built `Change` records so the benchmark no longer pays the same duplicate hash. Added direct JSON-store coverage for staging a trusted prehashed commit-pack payload, verifying the returned ref and hydrated bytes. ### Benchmarks Focused command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k' ``` Result: passed. 
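A minimal sketch of the constructor split described in the change above; the `NormalizedJsonRef` internals and the hash helper are illustrative assumptions, not the engine's code:

```rust
/// Illustrative content ref: a 32-byte hash of the normalized JSON text.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct JsonRef([u8; 32]);

/// Pairs normalized JSON with its ref. Private fields force callers through
/// one of the two constructors below.
struct NormalizedJsonRef {
    normalized: String,
    json_ref: JsonRef,
}

fn hash_json(normalized: &str) -> JsonRef {
    // Stand-in for the real BLAKE3 content hash.
    let mut out = [0u8; 32];
    for (i, byte) in normalized.bytes().enumerate() {
        out[i % 32] ^= byte;
    }
    JsonRef(out)
}

impl NormalizedJsonRef {
    /// Ordinary callers: hash the payload here.
    fn new(normalized: String) -> Self {
        let json_ref = hash_json(&normalized);
        Self { normalized, json_ref }
    }

    /// Trusted path: the caller (e.g. transaction staging) already computed the
    /// ref from this exact normalized string, so staging must not hash it again.
    fn trusted_prehashed(normalized: String, json_ref: JsonRef) -> Self {
        debug_assert_eq!(hash_json(&normalized), json_ref);
        Self { normalized, json_ref }
    }
}

fn main() {
    let normalized = String::from("{\"a\":1}");
    let precomputed = hash_json(&normalized);
    let trusted = NormalizedJsonRef::trusted_prehashed(normalized, precomputed);
    assert_eq!(trusted.json_ref, NormalizedJsonRef::new(trusted.normalized.clone()).json_ref);
}
```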
| row | after median | criterion status | | --------------------------------------------- | -----------: | ---------------- | | `raw_sqlite/write_root_all_rows/1k` | 2.3853 ms | reference | | `raw_sqlite/write_delta_10pct_updates/1k` | 1.2667 ms | reference | | `raw_sqlite/write_tombstone_10pct_deletes/1k` | 1.2330 ms | reference | | `sqlite/write_root_all_rows/1k` | 5.4166 ms | improved | | `sqlite/write_delta_10pct_updates/1k` | 2.5490 ms | no change | | `sqlite/write_tombstone_10pct_deletes/1k` | 2.6059 ms | improved | | `rocksdb/write_root_all_rows/1k` | 4.8746 ms | improved | | `rocksdb/write_delta_10pct_updates/1k` | 1.2758 ms | no change | | `rocksdb/write_tombstone_10pct_deletes/1k` | 1.2795 ms | noisy guardrail | Rerun command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k' ``` Result: passed. | row | rerun median | criterion status | | ------------------------------------------ | -----------: | ----------------------- | | `sqlite/write_root_all_rows/1k` | 5.3105 ms | no change, lower median | | `sqlite/write_delta_10pct_updates/1k` | 2.5652 ms | no change | | `sqlite/write_tombstone_10pct_deletes/1k` | 2.4195 ms | no change | | `rocksdb/write_root_all_rows/1k` | 4.7479 ms | no change, lower median | | `rocksdb/write_delta_10pct_updates/1k` | 1.2128 ms | no change | | `rocksdb/write_tombstone_10pct_deletes/1k` | 1.2283 ms | improved | ### Storage Storage command: ```sh cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture ``` Result: passed. 1k rows: | row | bytes | bytes/row | status | | -------------------------------------- | ------: | --------: | --------- | | raw SQLite / inserted | 1692456 | 1692.5 | reference | | Lix SQLite / inserted | 1112216 | 1112.2 | unchanged | | Lix SQLite / after create_version | 1124576 | 1124.6 | unchanged | | Lix SQLite / after fast-forward merge | 5324328 | 5324.3 | unchanged | | Lix SQLite / after divergent merge | 5652176 | 5652.2 | unchanged | | Lix RocksDB / inserted | 1028557 | 1028.6 | unchanged | | Lix RocksDB / after create_version | 1030457 | 1030.5 | unchanged | | Lix RocksDB / after fast-forward merge | 1195234 | 1195.2 | unchanged | | Lix RocksDB / after divergent merge | 1576587 | 1576.6 | unchanged | ### Review Loop Reviewer pass: ```text Initial review: HIGH: none. MEDIUM: supplied-ref path was correctness-critical but only protected by debug_assert; make the trusted prehashed path harder to construct accidentally. LOW: add direct json-store coverage and avoid pretending init eliminates a hash. Follow-up review: HIGH: none. MEDIUM: none. LOW: none. The prior MEDIUM is resolved by private NormalizedJsonRef fields plus explicit new/trusted_prehashed constructors. The intended production caller passes StageJson normalized bytes and the ref computed from those same bytes. 
``` ### Verification ```sh cargo fmt -p lix_engine cargo test -p lix_engine json_store:: --features storage-benches cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine transaction::commit:: --features storage-benches cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k' cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k' ``` All commands passed. ### Interpretation ```text Keep as a root-write optimization. Primary axis: write_root_all_rows. The structural win removes a duplicate BLAKE3 hash over normalized JSON payloads at the JSON-store staging boundary when transaction staging has already computed the content ref. Timing: both SQLite and RocksDB root writes moved down, with Criterion improvements in the first focused run and lower medians on rerun. Delta and tombstone rows are treated as guardrails; their medians were neutral to better on rerun. No storage format change. No temporary shim. ``` ## Optimization 15: Move JSON content hash verification off hot reads Date: 2026-05-11 Commit: this entry is committed with the optimization ### Change Changed JSON payload decoding to split hot reads from explicit integrity verification: - `load_json_bytes_many_in_scope` uses `JsonHashCheck::TrustedHotRead` and no longer rehashes every decoded payload. - Added non-hot `verify_json_bytes_many_in_scope`, which uses `JsonHashCheck::Verify` and checks `blake3(decoded_payload) == JsonRef`. - Pack, direct, and direct-fallback decode paths share the same internal loader and thread the hash-check policy through to `decode_json_payload`. - Added `verified_batch_load_rejects_hash_mismatch`, which stores mismatched bytes under a requested JSON ref key, confirms the trusted hot path returns bytes without hashing, and confirms the verifier rejects the same row with a hash mismatch. This follows the reference-system shape: normal scans trust the storage layer and write-time content-address facts, while explicit integrity/fsck callers pay the exhaustive hash cost. SQLite has explicit integrity checks, and Sapling/Mononoke separates content-addressed storage from walker/validation jobs. ### Benchmarks Focused command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows)/1k' ``` Result: passed. 
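A minimal sketch of the hash-check policy split described above; the enum name follows this log, while the decode helper and hash function are illustrative stand-ins:

```rust
/// Policy threaded through the shared JSON payload decoder.
enum JsonHashCheck {
    /// Normal reads trust write-time content addressing and skip rehashing.
    TrustedHotRead,
    /// Explicit integrity/fsck callers pay the full hash per payload.
    Verify,
}

fn pseudo_hash(bytes: &[u8]) -> [u8; 32] {
    // Stand-in for blake3(decoded_payload).
    let mut out = [0u8; 32];
    for (i, b) in bytes.iter().enumerate() {
        out[i % 32] ^= *b;
    }
    out
}

fn decode_json_payload(
    requested_ref: &[u8; 32],
    decoded: Vec<u8>,
    hash_check: JsonHashCheck,
) -> Result<Vec<u8>, String> {
    if let JsonHashCheck::Verify = hash_check {
        // Verified reads check blake3(decoded) against the requested JsonRef.
        if &pseudo_hash(&decoded) != requested_ref {
            return Err("json payload hash mismatch".to_string());
        }
    }
    Ok(decoded)
}

fn main() {
    let payload = b"{\"a\":1}".to_vec();
    let good_ref = pseudo_hash(&payload);
    assert!(decode_json_payload(&good_ref, payload.clone(), JsonHashCheck::Verify).is_ok());
    // Hot reads return bytes without recomputing the hash at all.
    assert!(decode_json_payload(&[0u8; 32], payload, JsonHashCheck::TrustedHotRead).is_ok());
}
```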
| row | after median | criterion status | | ----------------------------------- | -----------: | ----------------------- | | `raw_sqlite/get_many_exact_keys/1k` | 3.1921 ms | noisy reference | | `raw_sqlite/scan_full_rows/1k` | 1.8065 ms | noisy reference | | `sqlite/get_many_exact_keys/1k` | 3.4570 ms | no change, lower median | | `sqlite/scan_full_rows/1k` | 3.5119 ms | no change, lower median | | `rocksdb/get_many_exact_keys/1k` | 2.3411 ms | improved | | `rocksdb/scan_full_rows/1k` | 2.1430 ms | improved | Rerun command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` Result: passed. | row | rerun median | criterion status | | ----------------------------------------- | -----------: | ------------------------------- | | `sqlite/get_many_exact_keys/1k` | 4.1679 ms | no change, noisy guardrail | | `sqlite/scan_full_rows/1k` | 3.5295 ms | no change, noisy guardrail | | `sqlite/prefix_scan_schema/1k` | 3.3561 ms | improved | | `sqlite/prefix_scan_schema_file_null/1k` | 3.7939 ms | no change | | `rocksdb/get_many_exact_keys/1k` | 2.3749 ms | no change, lower than pre-patch | | `rocksdb/scan_full_rows/1k` | 2.2115 ms | no change, lower than pre-patch | | `rocksdb/prefix_scan_schema/1k` | 2.0643 ms | improved | | `rocksdb/prefix_scan_schema_file_null/1k` | 2.1547 ms | improved | ### Storage Storage command: ```sh cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture ``` Result: passed. 1k rows: | row | bytes | bytes/row | status | | -------------------------------------- | ------: | --------: | --------- | | raw SQLite / inserted | 1692456 | 1692.5 | reference | | Lix SQLite / inserted | 1112216 | 1112.2 | unchanged | | Lix SQLite / after create_version | 1124576 | 1124.6 | unchanged | | Lix SQLite / after fast-forward merge | 5324328 | 5324.3 | unchanged | | Lix SQLite / after divergent merge | 5652176 | 5652.2 | unchanged | | Lix RocksDB / inserted | 1028557 | 1028.6 | unchanged | | Lix RocksDB / after create_version | 1030457 | 1030.5 | unchanged | | Lix RocksDB / after fast-forward merge | 1195234 | 1195.2 | unchanged | | Lix RocksDB / after divergent merge | 1576587 | 1576.6 | unchanged | ### Review Loop Reviewer pass: ```text Initial review: HIGH: none. MEDIUM: hot reads now point to an integrity-check/fsck policy, but JSON store did not have a non-hot verifier entry point. Add one or keep a dedicated verification helper. LOW: make the decode API shape less likely to imply the ref is always checked. Follow-up review: HIGH: none. MEDIUM: none. LOW: none. The prior MEDIUM is resolved by verify_json_bytes_many_in_scope and the shared JsonHashCheck policy. The mismatch regression test covers the dangerous case. 
``` ### Verification ```sh cargo fmt -p lix_engine cargo test -p lix_engine json_store:: --features storage-benches cargo test -p lix_engine tracked_state::materialization:: --features storage-benches cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows)/1k' cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` All commands passed. ### Interpretation ```text Keep as a read-path policy and CPU optimization. Primary axis: full-row and prefix scans, especially RocksDB. The structural win removes a full BLAKE3 pass over every JSON payload from normal reads while preserving a non-hot verifier for fsck/integrity workflows. Timing: RocksDB exact reads and scans improved strongly in the first focused run; RocksDB prefix scans improved again on rerun. SQLite was noisier, but prefix_scan_schema improved and full-scan medians stayed in the intended range. No storage format change. No benchmark shape change. No temporary shim. ``` ## Optimization 16: Fill JSON pack results directly Date: 2026-05-11 Commit: this entry is committed with the optimization ### Change Fused JSON pack decode with result placement: - Replaced `decode_json_pack(...) -> Vec<(JsonRef, Vec)>` plus a second pass in `load_from_packs`. - Added `load_json_pack_values(...)`, which parses the pack directory and writes matching decoded payloads directly into the caller's result slice using the existing `wanted: HashMap<[u8; 32], usize>`. - Unrequested pack entries are skipped without payload decode. - Requested entries still flow through `decode_json_payload(..., hash_check)`, so verified reads still hash-check requested refs. Added `verified_pack_load_checks_only_requested_entries` to pin the intended boundary: a bad unrequested pack entry is ignored by a verified read for a good ref, while requesting the bad ref fails with a hash mismatch. This is the same projection/predicate-pushdown shape used by database storage engines: do not decode rows or payloads outside the request path. ### Benchmarks Focused command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` Result: passed. | row | after median | criterion status | | ----------------------------------------- | -----------: | -------------------------- | | `sqlite/get_many_exact_keys/1k` | 3.5231 ms | no change, noisy guardrail | | `sqlite/scan_full_rows/1k` | 3.1738 ms | no change | | `sqlite/prefix_scan_schema/1k` | 3.0404 ms | improved | | `sqlite/prefix_scan_schema_file_null/1k` | 3.4798 ms | no change | | `rocksdb/get_many_exact_keys/1k` | 2.2726 ms | no change | | `rocksdb/scan_full_rows/1k` | 2.0346 ms | no change | | `rocksdb/prefix_scan_schema/1k` | 2.1176 ms | no change | | `rocksdb/prefix_scan_schema_file_null/1k` | 2.0395 ms | improved | ### Storage Storage command: ```sh cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture ``` Result: passed. 
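A minimal sketch of the fused decode-and-place shape described in the change above; the pack entry type and loop body are simplified stand-ins for the real `load_json_pack_values`:

```rust
use std::collections::HashMap;

/// Illustrative decoded pack entry: (payload hash, payload bytes).
type PackEntry = ([u8; 32], Vec<u8>);

/// Write requested payloads straight into the caller's result slice.
/// Unrequested entries are skipped without decoding their payloads.
fn load_json_pack_values(
    pack_entries: Vec<PackEntry>,
    wanted: &HashMap<[u8; 32], usize>,
    results: &mut [Option<Vec<u8>>],
) {
    for (hash, payload) in pack_entries {
        if let Some(&slot) = wanted.get(&hash) {
            // In the engine this is where decode_json_payload(.., hash_check)
            // runs, so verified reads still hash-check requested refs only.
            results[slot] = Some(payload);
        }
    }
}

fn main() {
    let requested = [7u8; 32];
    let unrequested = [9u8; 32];
    let wanted = HashMap::from([(requested, 0usize)]);
    let mut results: Vec<Option<Vec<u8>>> = vec![None];
    load_json_pack_values(
        vec![(unrequested, b"skip".to_vec()), (requested, b"{}".to_vec())],
        &wanted,
        &mut results,
    );
    assert_eq!(results[0].as_deref(), Some(b"{}".as_slice()));
}
```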
1k rows:

| row                                    | bytes   | bytes/row | status    |
| -------------------------------------- | ------: | --------: | --------- |
| raw SQLite / inserted                  | 1692456 |    1692.5 | reference |
| Lix SQLite / inserted                  | 1112216 |    1112.2 | unchanged |
| Lix SQLite / after create_version      | 1124576 |    1124.6 | unchanged |
| Lix SQLite / after fast-forward merge  | 5324328 |    5324.3 | unchanged |
| Lix SQLite / after divergent merge     | 5652176 |    5652.2 | unchanged |
| Lix RocksDB / inserted                 | 1028557 |    1028.6 | unchanged |
| Lix RocksDB / after create_version     | 1030457 |    1030.5 | unchanged |
| Lix RocksDB / after fast-forward merge | 1195234 |    1195.2 | unchanged |
| Lix RocksDB / after divergent merge    | 1576587 |    1576.6 | unchanged |

### Review Loop

Reviewer pass:

```text
Initial review:
HIGH: none.
MEDIUM: none.
LOW: add focused pack-local coverage for verifying requested entries while skipping unrequested entries. Fixed.

Follow-up review:
HIGH: none.
MEDIUM: none.
LOW: none.
The new pack test covers the intended boundary.
```

### Verification

```sh
cargo fmt -p lix_engine
cargo test -p lix_engine json_store:: --features storage-benches
cargo check -p lix_engine --features storage-benches --benches
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'
```

All commands passed.

### Interpretation

```text
Keep as a small pack-read cleanup. Primary axis: scan and prefix-scan reads. The structural win removes an intermediate vector of decoded pack entries and avoids decoding unrequested pack payloads. This compounds with the previous hot-read hash policy change.
Timing: SQLite prefix_scan_schema and RocksDB prefix_scan_schema_file_null improved by Criterion. Other targeted rows were neutral/noisy but did not show a structural regression.
No storage format change. No temporary shim.
```

## Optimization 17: Borrow tracked-state delta slices

Date: 2026-05-11
Commit: this entry is committed with the optimization

### Change

Changed `TrackedStateWriter::stage_delta` to accept `&[TrackedStateDeltaRef<'_>]` instead of a generic `IntoIterator`:

- Removed the internal `collect::<Vec<_>>()` before delta-pack encoding.
- Updated production and test callers to borrow their already-built delta vectors.
- Kept `stage_projection_root` unchanged because it still needs to own and reuse the collected deltas while building projection roots.

This lines the public staging helper up with the delta-pack encoder, which already accepts borrowed slices and immediately writes owned encoded bytes into the write set.

### Benchmarks

Focused command:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/write_root_all_rows/1k'
```

Result: passed.

| row                                 | after median | criterion status        |
| ----------------------------------- | -----------: | ----------------------- |
| `raw_sqlite/write_root_all_rows/1k` |    2.3512 ms | reference/no change     |
| `sqlite/write_root_all_rows/1k`     |    5.5212 ms | no change               |
| `rocksdb/write_root_all_rows/1k`    |    4.6132 ms | no change, lower median |

### Storage

Storage command:

```sh
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
```

Result: passed.
1k rows: | row | bytes | bytes/row | status | | -------------------------------------- | ------: | --------: | --------- | | raw SQLite / inserted | 1692456 | 1692.5 | reference | | Lix SQLite / inserted | 1112216 | 1112.2 | unchanged | | Lix SQLite / after create_version | 1124576 | 1124.6 | unchanged | | Lix SQLite / after fast-forward merge | 5324328 | 5324.3 | unchanged | | Lix SQLite / after divergent merge | 5652176 | 5652.2 | unchanged | | Lix RocksDB / inserted | 1028557 | 1028.6 | unchanged | | Lix RocksDB / after create_version | 1030457 | 1030.5 | unchanged | | Lix RocksDB / after fast-forward merge | 1195234 | 1195.2 | unchanged | | Lix RocksDB / after divergent merge | 1576587 | 1576.6 | unchanged | ### Review Loop Reviewer pass: ```text HIGH: none. MEDIUM: none. LOW: none. Recommendation: keep, but log it as a small allocation/API cleanup rather than a measured benchmark optimization. The slice API is clean because stage_delta only synchronously encodes into owned write-set bytes, and the production callers already hold the delta Vecs. ``` ### Verification ```sh cargo fmt -p lix_engine cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine tracked_state:: --features storage-benches cargo test -p lix_engine transaction::commit:: --features storage-benches cargo test -p lix_engine live_state::context:: --features storage-benches cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/write_root_all_rows/1k' ``` All commands passed. ### Interpretation ```text Keep as a small root-write allocation cleanup. Primary axis: write_root_all_rows. The structural win removes a redundant Vec allocation/copy on the tracked-state delta staging path after callers have already built the delta Vec. Timing: Criterion reported no statistically significant change. This is kept because it simplifies the hot staging API and removes real production work, not because it demonstrates a standalone benchmark win. No storage format change. No temporary shim. ``` ## Optimization 18: Decode ordered JSON packs without lookup maps Date: 2026-05-11 Commit: this entry is committed with the optimization ### Change Added a guarded ordered-pack read path in `json_store`: - When a read targets exactly one commit-local JSON pack and the requested unique refs exactly match that pack's directory count and order, decode pack entries directly into result slots. - If count or order does not match, clear any partially filled slots and fall back to the existing hash lookup path. - Shared pack parsing now flows through `JsonPackLayout` and `JsonPackEntry`, so the ordered path and fallback validate headers, directory length, payload bounds, codec, and truncation the same way. This avoids building a `HashMap<[u8; 32], usize>` and doing one hash lookup per pack entry for the common full-scan shape where projection refs are already in commit-pack order. Added coverage for the ordered fast path, unordered fallback, and the invariant that an order mismatch leaves the caller's result slots untouched before fallback. ### Benchmarks Focused command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` Result: passed. 
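A minimal sketch of the guarded ordered fast path and its clean fallback described above; entry types are simplified and header/bounds validation is elided:

```rust
/// Illustrative decoded pack entry: (payload hash, payload bytes).
type PackEntry = ([u8; 32], Vec<u8>);

/// Ordered fast path: only valid when the requested unique refs match the pack
/// directory exactly in count and order. On any mismatch, clear what was filled
/// and report false so the existing hash-lookup fallback can run.
fn fill_from_ordered_pack(
    requested: &[[u8; 32]],
    pack_entries: Vec<PackEntry>,
    results: &mut [Option<Vec<u8>>],
) -> bool {
    if pack_entries.len() != requested.len() {
        return false; // count mismatch: nothing has been written yet
    }
    for (slot, ((hash, payload), wanted)) in pack_entries.into_iter().zip(requested).enumerate() {
        if &hash != wanted {
            // Order mismatch: undo partial writes so the fallback sees clean slots.
            for filled in results[..slot].iter_mut() {
                *filled = None;
            }
            return false;
        }
        results[slot] = Some(payload);
    }
    true
}

fn main() {
    let refs = [[1u8; 32], [2u8; 32]];
    let mut results = vec![None, None];
    let ordered = vec![([1u8; 32], b"a".to_vec()), ([2u8; 32], b"b".to_vec())];
    assert!(fill_from_ordered_pack(&refs, ordered, &mut results));
    assert!(results.iter().all(Option::is_some));
}
```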
| row | after median | criterion status | | -------------------------------------------- | -----------: | ----------------------- | | `raw_sqlite/get_many_exact_keys/1k` | 2.0223 ms | reference/no change | | `raw_sqlite/scan_full_rows/1k` | 1.1436 ms | reference/no change | | `raw_sqlite/prefix_scan_schema/1k` | 1.2741 ms | reference/no change | | `raw_sqlite/prefix_scan_schema_file_null/1k` | 1.1876 ms | reference/no change | | `sqlite/get_many_exact_keys/1k` | 3.3477 ms | no change | | `sqlite/scan_full_rows/1k` | 3.0526 ms | no change, lower median | | `sqlite/prefix_scan_schema/1k` | 3.1708 ms | no change | | `sqlite/prefix_scan_schema_file_null/1k` | 3.1284 ms | no change | | `rocksdb/get_many_exact_keys/1k` | 2.3137 ms | noisy guardrail | | `rocksdb/scan_full_rows/1k` | 2.0583 ms | no change | | `rocksdb/prefix_scan_schema/1k` | 2.0680 ms | no change | | `rocksdb/prefix_scan_schema_file_null/1k` | 2.0187 ms | no change | Single-pass rerun command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` Result: passed. | row | rerun median | criterion status | | ----------------------------------------- | -----------: | ----------------------- | | `sqlite/get_many_exact_keys/1k` | 3.2251 ms | no change | | `sqlite/scan_full_rows/1k` | 3.1249 ms | no change | | `sqlite/prefix_scan_schema/1k` | 3.0630 ms | no change | | `sqlite/prefix_scan_schema_file_null/1k` | 3.1658 ms | no change | | `rocksdb/get_many_exact_keys/1k` | 2.3087 ms | no change | | `rocksdb/scan_full_rows/1k` | 2.0001 ms | no change, lower median | | `rocksdb/prefix_scan_schema/1k` | 1.9933 ms | no change, lower median | | `rocksdb/prefix_scan_schema_file_null/1k` | 1.9861 ms | no change, lower median | ### Storage Storage command: ```sh cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture ``` Result: passed. 1k rows: | row | bytes | bytes/row | status | | -------------------------------------- | ------: | --------: | ---------------- | | raw SQLite / inserted | 1692456 | 1692.5 | reference | | Lix SQLite / inserted | 1112216 | 1112.2 | unchanged | | Lix SQLite / after create_version | 1124576 | 1124.6 | unchanged | | Lix SQLite / after fast-forward merge | 5303776 | 5303.8 | page-level noise | | Lix SQLite / after divergent merge | 5479976 | 5480.0 | page-level noise | | Lix RocksDB / inserted | 1028557 | 1028.6 | unchanged | | Lix RocksDB / after create_version | 1030457 | 1030.5 | unchanged | | Lix RocksDB / after fast-forward merge | 1195234 | 1195.2 | unchanged | | Lix RocksDB / after divergent merge | 1576585 | 1576.6 | unchanged | ### Review Loop Reviewer pass: ```text Initial review: HIGH: none. MEDIUM: none. LOW: duplicated pack directory parsing between the ordered path and fallback. Follow-up review: HIGH: none. MEDIUM: none. LOW: none. The shared JsonPackLayout/JsonPackEntry helpers resolve the parser duplication. Final single-pass review: HIGH: none. MEDIUM: none. LOW: none. Count mismatch returns before writes, order mismatch clears filled slots before fallback, and corruption/decode/hash failures still return Err instead of being converted into fallback misses. 
``` ### Verification ```sh cargo fmt -p lix_engine cargo test -p lix_engine json_store:: --features storage-benches cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine tracked_state::materialization:: --features storage-benches cargo test -p lix_engine tracked_state::context:: --features storage-benches cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` All commands passed. ### Interpretation ```text Keep as a small full-scan JSON pack read cleanup. Primary axis: full-row and prefix scans from one commit-local JSON pack. The structural win removes the lookup map from the ordered pack scan case while preserving the old path for unordered, duplicate, partial, or multi-pack reads. Timing: Criterion reports no statistically significant win, but the final single-pass rerun keeps guardrails neutral and shows lower RocksDB scan/prefix medians. SQLite scan medians remain in the same improved band as the previous pack-read cleanup. No storage format change. No temporary shim. ``` ## Optimization 19: Encode commit-store changes directly Date: 2026-05-11 Commit: this entry is committed with the optimization ### Change Replaced the per-change FlatBuffer record inside commit-store change packs with a direct binary `LXCH2` row: - `encode_change_ref` now writes length-prefixed fields directly: `id`, canonical entity-id JSON-array text, `schema_key`, optional `file_id`, optional 32-byte `snapshot_ref`, optional 32-byte `metadata_ref`, and `created_at`. - `decode_change` uses the existing checked `ByteCursor` machinery with new optional string and optional JSON-ref readers. - Removed the private FlatBuffer table/verifier scaffolding for commit-store changes. There is no backwards shim because Lix has not shipped. This matches the surrounding commit-store pack shape better: one pack-level codec with row fields encoded in place, rather than building and copying a separate tiny FlatBuffer for every authored change. Added direct malformed-input coverage for empty optionals, invalid option tags, truncated fixed-width refs, and trailing bytes. ### Benchmarks Codec command: ```sh cargo bench -p lix_engine --features storage-benches --bench storage -- 'storage/changelog/(encode_only|decode_only)/full_row/10k' ``` Result: passed. | row | after median | | -------------------------------------------- | -----------: | | `storage/changelog/encode_only/full_row/10k` | 2.6886 ms | | `storage/changelog/decode_only/full_row/10k` | 2.7384 ms | Focused physical write command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k' ``` Result: passed. 
| row | after median | criterion status | | --------------------------------------------- | -----------: | ----------------------- | | `raw_sqlite/write_root_all_rows/1k` | 2.4888 ms | reference/no change | | `raw_sqlite/write_delta_10pct_updates/1k` | 1.2667 ms | reference/no change | | `raw_sqlite/write_tombstone_10pct_deletes/1k` | 1.1804 ms | noisy reference | | `sqlite/write_root_all_rows/1k` | 5.0831 ms | no change, lower median | | `sqlite/write_delta_10pct_updates/1k` | 2.2437 ms | improved | | `sqlite/write_tombstone_10pct_deletes/1k` | 2.0885 ms | improved | | `rocksdb/write_root_all_rows/1k` | 4.5929 ms | no change, lower median | | `rocksdb/write_delta_10pct_updates/1k` | 1.1566 ms | no change, lower median | | `rocksdb/write_tombstone_10pct_deletes/1k` | 1.1288 ms | improved | ### Storage Storage command: ```sh cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture ``` Result: passed. 1k rows: | row | bytes | bytes/row | status | | -------------------------------------- | ------: | --------: | --------- | | raw SQLite / inserted | 1692456 | 1692.5 | reference | | Lix SQLite / inserted | 1054536 | 1054.5 | improved | | Lix SQLite / after create_version | 1071016 | 1071.0 | improved | | Lix SQLite / after fast-forward merge | 5279368 | 5279.4 | improved | | Lix SQLite / after divergent merge | 5430920 | 5430.9 | improved | | Lix RocksDB / inserted | 964892 | 964.9 | improved | | Lix RocksDB / after create_version | 966733 | 966.7 | improved | | Lix RocksDB / after fast-forward merge | 1125265 | 1125.3 | improved | | Lix RocksDB / after divergent merge | 1494060 | 1494.1 | improved | ### Review Loop Reviewer pass: ```text Initial review: HIGH: none. MEDIUM: none. LOW: add malformed-input coverage for the hand-rolled format, especially empty optionals, invalid option tags, truncated 32-byte refs, and trailing bytes. Follow-up review: HIGH: none. MEDIUM: none. LOW: truncated-ref test was not actually truncating inside the fixed-width ref. Second follow-up review: HIGH: none. MEDIUM: none. LOW: none. The truncated-ref test now advances to the snapshot_ref tag, truncates after only 16 ref bytes, and asserts the specific `truncated ref` error. ``` ### Verification ```sh cargo fmt -p lix_engine cargo test -p lix_engine commit_store::codec:: --features storage-benches cargo test -p lix_engine commit_store::storage:: --features storage-benches cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine commit_store:: --features storage-benches cargo test -p lix_engine transaction::commit:: --features storage-benches cargo test -p lix_engine tracked_state::materializer:: --features storage-benches cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture cargo bench -p lix_engine --features storage-benches --bench storage -- 'storage/changelog/(encode_only|decode_only)/full_row/10k' cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k' ``` All commands passed. ### Interpretation ```text Keep as a commit-store physical row codec cleanup. Primary axis: write rows, especially delta/tombstone writes that stage compact commit-store change packs. The structural win removes per-change FlatBuffer builder allocation and nested row blobs from the pack format. 
Timing: SQLite delta and tombstone writes improved by Criterion; RocksDB tombstone writes improved and root-write medians moved down on both backends. Root writes remain over budget, so this is not the final write-side cut. Storage: inserted and merge-state byte counts improve on both SQLite and RocksDB because each change row carries less codec overhead. No storage compatibility shim. No benchmark measurement change. ``` ## Optimization 20: Stage generated bench roots as authored changes ### Hypothesis The physical storage benchmark helper for tracked roots was doing an extra commit-store index scan to classify generated rows as authored or adopted before calling `stage_commit_draft`. That pre-pass does not match the production transaction boundary: production staging already separates authored rows from adopted rows before entering the commit store. The helper-generated rows use commit-scoped fresh change ids (`tracked_change_id(commit_id, index)`, with a separate fresh append namespace), so every `write_tracked_root` row in these benchmark fixtures is authored. Staging those rows directly as authored changes keeps commit-store uniqueness validation intact while removing a redundant history scan from root/delta write measurement. ### Change - Removed `load_change_index_entries` pre-classification from `storage_bench.rs::write_tracked_root`. - Stage all helper-generated changes as authored changes and build tracked deltas by zipping staged authored locators back to the original rows. - Kept commit-store validation in `stage_commit_draft`; no storage format change and no validation weakening inside the commit store. Discarded experiment: a physical `change_id -> locator` commit-store index improved RocksDB delta writes but regressed SQLite writes and increased storage footprint, so it was reverted before this optimization. ### Benchmarks Focused command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k' ``` Result: passed. | row | after median | criterion status | | --------------------------------------------- | -----------: | ------------------------------- | | `raw_sqlite/write_root_all_rows/1k` | 2.4088 ms | reference | | `raw_sqlite/write_delta_10pct_updates/1k` | 1.2788 ms | reference | | `raw_sqlite/write_tombstone_10pct_deletes/1k` | 1.2642 ms | reference | | `sqlite/write_root_all_rows/1k` | 5.3781 ms | improved | | `sqlite/write_delta_10pct_updates/1k` | 1.9665 ms | improved | | `sqlite/write_tombstone_10pct_deletes/1k` | 1.8551 ms | improved | | `rocksdb/write_root_all_rows/1k` | 4.6757 ms | improved | | `rocksdb/write_delta_10pct_updates/1k` | 911.94 µs | noisy, below pre-index baseline | | `rocksdb/write_tombstone_10pct_deletes/1k` | 893.40 µs | noisy, below pre-index baseline | Criterion marked RocksDB delta/tombstone as regressions only because the abandoned change-index experiment had just updated the local Criterion baseline. Compared to Optimization 19, both are lower medians. ### Storage Storage command: ```sh cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture ``` Result: passed. 
1k rows: | row | bytes | bytes/row | status | | -------------------------------------- | ------: | --------: | --------- | | raw SQLite / inserted | 1692456 | 1692.5 | reference | | Lix SQLite / inserted | 1054536 | 1054.5 | unchanged | | Lix SQLite / after create_version | 1071016 | 1071.0 | unchanged | | Lix SQLite / after fast-forward merge | 5279392 | 5279.4 | unchanged | | Lix SQLite / after divergent merge | 5570208 | 5570.2 | unchanged | | Lix RocksDB / inserted | 964892 | 964.9 | unchanged | | Lix RocksDB / after create_version | 966733 | 966.7 | unchanged | | Lix RocksDB / after fast-forward merge | 1125265 | 1125.3 | unchanged | | Lix RocksDB / after divergent merge | 1494060 | 1494.1 | unchanged | ### Review Loop Reviewer pass: ```text HIGH: none. MEDIUM: none. LOW: none. Reviewer confirmed no `write_tracked_root` benchmark path legitimately needs adopted changes: row generators use fresh commit-scoped change ids, and the append-child helper uses a separate fresh namespace. Ordering and timestamps remain preserved by zipping authored locators back to the original rows. ``` ### Verification ```sh cargo fmt -p lix_engine cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine commit_store::storage:: --features storage-benches cargo test -p lix_engine tracked_state::materializer:: --features storage-benches cargo test -p lix_engine transaction::commit:: --features storage-benches cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k' ``` All commands passed. ### Interpretation ```text Keep as a benchmark-path correction and write optimization. The change removes work that production transaction staging does not do and keeps the commit-store validation boundary intact. SQLite delta/tombstone writes move under 2 ms in this run; root writes are modestly better but remain above the 1.5x target. No storage change, no backward shim. ``` ## Optimization 21: Load scan roots once ### Hypothesis `TrackedStateStoreReader::scan_rows_at_commit` was using `projection_has_pending_deltas` as a routing check before scan execution. For delta-pack-backed commits that helper walked the first-parent/delta chain, then `projection_entries_at_commit` walked it again to produce rows. For materialized root commits, the route also checked root existence and then loaded the same root again before scanning. Loading the target root once at scan entry should preserve the same routing: scan the root directly when it exists; otherwise let `projection_entries_at_commit` perform the delta/base walk exactly once. ### Change - `scan_rows_at_commit` now calls `tree.load_root(commit_id)` once. - If a root exists, scan it directly, preserving the by-file index fast path and fallback to the primary tree when no by-file root exists. - If no root exists, call `projection_entries_at_commit` directly. - Tombstone filtering, materialization, and request limit handling remain after row collection as before. ### Benchmarks Focused command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k' ``` Result: passed. 
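A minimal sketch of the root-first routing described in the change above, with the tree and projection readers reduced to illustrative stubs:

```rust
/// Illustrative routing shape: load the target root exactly once at scan entry.
fn scan_rows_at_commit(commit_id: &str) -> Vec<String> {
    match load_root(commit_id) {
        // Root exists: scan it directly (the by-file index fast path is elided).
        Some(root_rows) => root_rows,
        // No root: let the projection/delta walk run exactly once.
        None => projection_entries_at_commit(commit_id),
    }
    // Tombstone filtering, materialization, and limit handling follow here.
}

// Stand-ins for the tree and projection readers.
fn load_root(commit_id: &str) -> Option<Vec<String>> {
    if commit_id == "root-commit" { Some(vec!["row-from-root".to_string()]) } else { None }
}

fn projection_entries_at_commit(_commit_id: &str) -> Vec<String> {
    vec!["row-from-delta-walk".to_string()]
}

fn main() {
    assert_eq!(scan_rows_at_commit("root-commit"), vec!["row-from-root".to_string()]);
    assert_eq!(scan_rows_at_commit("delta-only"), vec!["row-from-delta-walk".to_string()]);
}
```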
### Benchmarks

Focused command:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'
```

Result: passed.

| row | after median | criterion status |
| --- | ---: | --- |
| `raw_sqlite/scan_keys_only/1k` | 1.1587 ms | reference |
| `raw_sqlite/scan_headers_only/1k` | 1.1213 ms | reference |
| `raw_sqlite/scan_full_rows/1k` | 1.2689 ms | reference |
| `raw_sqlite/prefix_scan_schema/1k` | 1.1597 ms | reference |
| `raw_sqlite/prefix_scan_schema_file_null/1k` | 1.1929 ms | reference |
| `sqlite/scan_keys_only/1k` | 2.1147 ms | improved |
| `sqlite/scan_headers_only/1k` | 2.7995 ms | no change |
| `sqlite/scan_full_rows/1k` | 2.8024 ms | improved |
| `sqlite/prefix_scan_schema/1k` | 2.7534 ms | improved |
| `sqlite/prefix_scan_schema_file_null/1k` | 2.7506 ms | improved |
| `rocksdb/scan_keys_only/1k` | 1.2154 ms | improved |
| `rocksdb/scan_headers_only/1k` | 1.2315 ms | improved |
| `rocksdb/scan_full_rows/1k` | 1.7649 ms | improved |
| `rocksdb/prefix_scan_schema/1k` | 1.7814 ms | improved |
| `rocksdb/prefix_scan_schema_file_null/1k` | 1.8046 ms | improved |

### Storage

Storage command:

```sh
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
```

Result: passed.

1k rows:

| row | bytes | bytes/row | status |
| --- | ---: | ---: | --- |
| raw SQLite / inserted | 1692456 | 1692.5 | reference |
| Lix SQLite / inserted | 1054536 | 1054.5 | unchanged |
| Lix SQLite / after create_version | 1071016 | 1071.0 | unchanged |
| Lix SQLite / after fast-forward merge | 5279368 | 5279.4 | unchanged |
| Lix SQLite / after divergent merge | 5463856 | 5463.9 | unchanged |
| Lix RocksDB / inserted | 964892 | 964.9 | unchanged |
| Lix RocksDB / after create_version | 966733 | 966.7 | unchanged |
| Lix RocksDB / after fast-forward merge | 1125265 | 1125.3 | unchanged |
| Lix RocksDB / after divergent merge | 1494068 | 1494.1 | unchanged |

### Review Loop

Reviewer pass:

```text
HIGH: none.
MEDIUM: none.
LOW: none.

Reviewer confirmed the root-first routing is equivalent: the old pending-delta predicate already stopped immediately when the target commit had a root, while delta-only and missing commits still go through the same projection/delta walk. By-file fallback, tombstone filtering, and limit behavior are preserved.
```

### Verification

```sh
cargo fmt -p lix_engine
cargo test -p lix_engine tracked_state::context:: --features storage-benches
cargo check -p lix_engine --features storage-benches --benches
cargo test -p lix_engine tracked_state::materializer:: --features storage-benches
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'
```

All commands passed.

### Interpretation

```text
Keep as a scan-path optimization.
This removes duplicated route-discovery reads without changing storage or scan semantics.
It improves most SQLite and RocksDB tracked scan rows, but SQLite full/prefix scans remain above the 1.5x target and need deeper tree/materialize work next.
```

## Optimization 22: Fast-path single delta-pack scans

### Hypothesis

The JSON-pointer tracked scan fixtures usually read a commit with no materialized projection root and exactly one tracked-state delta pack. The general overlay path inserts those delta entries into a `BTreeMap` and then collects the map back into sorted rows. For the single-pack/no-base case, that map is only doing three things: key filtering, sorted order, and duplicate-key last-write-wins collapse. A direct vector path can preserve those semantics with less per-row map work.

### Change

- Added `single_delta_pack_entries` for the `base_commit_id == None` and `delta_commit_ids.len() == 1` case.
- The fast path:
  - filters with the same `request.matches_key` predicate as the existing overlay path;
  - sorts by `(TrackedStateKey, original ordinal)`;
  - collapses duplicate keys by keeping the last ordinal;
  - skips final tombstones when `include_tombstones` is false.
- Added coverage for duplicate-key and tombstone behavior in a single delta pack.
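A minimal sketch of that fast-path collapse, using plain strings and a boolean tombstone flag instead of the engine's `TrackedStateKey` and delta entry types; the filter, sort, collapse, and tombstone steps mirror the bullets above.

```rust
// Plain-string stand-in for the engine's delta entry type.
#[derive(Clone, Debug)]
struct DeltaEntry {
    key: String,
    tombstone: bool,
}

fn single_delta_pack_entries(
    entries: &[DeltaEntry],
    matches_key: impl Fn(&str) -> bool,
    include_tombstones: bool,
) -> Vec<DeltaEntry> {
    // 1. Key filtering with the same predicate the overlay path uses,
    //    remembering each entry's original ordinal.
    let mut kept: Vec<(usize, DeltaEntry)> = Vec::new();
    for (ordinal, entry) in entries.iter().enumerate() {
        if matches_key(&entry.key) {
            kept.push((ordinal, entry.clone()));
        }
    }
    // 2. Sorted key order, ties broken by original ordinal.
    kept.sort_by(|a, b| a.1.key.cmp(&b.1.key).then(a.0.cmp(&b.0)));
    // 3. Duplicate keys collapse to the entry with the last ordinal.
    let mut out: Vec<DeltaEntry> = Vec::with_capacity(kept.len());
    for (_, entry) in kept {
        if out.last().map(|prev| prev.key == entry.key).unwrap_or(false) {
            out.pop();
        }
        out.push(entry);
    }
    // 4. Final tombstones are dropped unless the caller asked for them.
    if !include_tombstones {
        out.retain(|e| !e.tombstone);
    }
    out
}
```

This keeps the same externally visible semantics as the `BTreeMap` overlay while replacing per-row map insertions with one sort over the filtered vector.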
### Benchmarks

Focused command:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'
```

Result: passed.

| row | after median | criterion status |
| --- | ---: | --- |
| `raw_sqlite/scan_keys_only/1k` | 1.1288 ms | reference |
| `raw_sqlite/scan_headers_only/1k` | 1.1685 ms | reference |
| `raw_sqlite/scan_full_rows/1k` | 1.1922 ms | reference |
| `raw_sqlite/prefix_scan_schema/1k` | 1.2255 ms | reference |
| `raw_sqlite/prefix_scan_schema_file_null/1k` | 1.7144 ms | reference/noisy |
| `sqlite/scan_keys_only/1k` | 2.3765 ms | noisy regression |
| `sqlite/scan_headers_only/1k` | 2.2331 ms | improved |
| `sqlite/scan_full_rows/1k` | 2.6767 ms | within noise |
| `sqlite/prefix_scan_schema/1k` | 2.7255 ms | no change |
| `sqlite/prefix_scan_schema_file_null/1k` | 2.7038 ms | no change |
| `rocksdb/scan_keys_only/1k` | 1.2053 ms | no change |
| `rocksdb/scan_headers_only/1k` | 1.1988 ms | improved |
| `rocksdb/scan_full_rows/1k` | 1.6527 ms | improved |
| `rocksdb/prefix_scan_schema/1k` | 1.6875 ms | improved |
| `rocksdb/prefix_scan_schema_file_null/1k` | 1.6230 ms | improved |

SQLite rerun:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/sqlite/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'
```

| row | rerun median | criterion status |
| --- | ---: | --- |
| `sqlite/scan_keys_only/1k` | 2.0399 ms | improved |
| `sqlite/scan_headers_only/1k` | 2.1180 ms | no change |
| `sqlite/scan_full_rows/1k` | 2.8050 ms | no change |
| `sqlite/prefix_scan_schema/1k` | 2.7217 ms | no change |
| `sqlite/prefix_scan_schema_file_null/1k` | 2.6412 ms | no change |

### Storage

Storage command:

```sh
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
```

Result: passed.

1k rows:

| row | bytes | bytes/row | status |
| --- | ---: | ---: | --- |
| raw SQLite / inserted | 1692456 | 1692.5 | reference |
| Lix SQLite / inserted | 1054536 | 1054.5 | unchanged |
| Lix SQLite / after create_version | 1071016 | 1071.0 | unchanged |
| Lix SQLite / after fast-forward merge | 5279392 | 5279.4 | unchanged |
| Lix SQLite / after divergent merge | 5586736 | 5586.7 | unchanged |
| Lix RocksDB / inserted | 964892 | 964.9 | unchanged |
| Lix RocksDB / after create_version | 966733 | 966.7 | unchanged |
| Lix RocksDB / after fast-forward merge | 1125265 | 1125.3 | unchanged |
| Lix RocksDB / after divergent merge | 1494068 | 1494.1 | unchanged |

### Review Loop

Reviewer pass:

```text
HIGH: none.
MEDIUM: none.
LOW: none.

Reviewer confirmed the fast path matches the old BTreeMap overlay semantics: same key-only filtering, sorted key order, last duplicate wins, final tombstone removal when tombstones are excluded, and limits remain above materialization.
```

### Verification

```sh
cargo fmt -p lix_engine
cargo test -p lix_engine tracked_state::context:: --features storage-benches
cargo check -p lix_engine --features storage-benches --benches
cargo test -p lix_engine tracked_state::materializer:: --features storage-benches
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/sqlite/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema|prefix_scan_schema_file_null)/1k'
```

All commands passed.

### Interpretation

```text
Keep as a narrow delta-pack scan optimization.
The best movement is headers/full rows and RocksDB scans; SQLite full/prefix rows remain mostly around the same medians as Optimization 21, with keys-only improving on rerun.
No storage change.
```

## Optimization 23: Encode delta packs directly into the output buffer

### Hypothesis

`encode_delta_pack_refs` still allocated a temporary encoded key `Vec` and temporary encoded value `Vec` for every tracked delta, only to copy both into the delta pack as length-prefixed sections. Reference storage systems avoid per-row temporary records on hot write paths when the final output buffer can be written directly. Writing each key/value section directly into the pack and backpatching the section length should preserve the binary format while removing per-delta allocation/copy work.

### Change

- Split `encode_key_ref` and `encode_value_ref` into allocation-returning public helpers plus private `append_key_ref` / `append_value_ref` buffer writers.
- Changed `encode_delta_pack_refs` to write key/value sections directly via `push_sized_section`.
- `decode_delta_pack` is unchanged; the encoded wire shape remains length-prefixed key bytes followed by length-prefixed value bytes.
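A minimal sketch of the backpatched length-prefixed section write; `push_sized_section` here is an illustrative stand-in assuming a 4-byte little-endian length prefix as described above, not the engine's exact helper.

```rust
fn push_sized_section(out: &mut Vec<u8>, write_body: impl FnOnce(&mut Vec<u8>)) {
    // Reserve a 4-byte little-endian length placeholder.
    let len_at = out.len();
    out.extend_from_slice(&[0u8; 4]);
    // Write the section body straight into the output buffer,
    // instead of encoding into a temporary Vec and copying it in.
    write_body(out);
    // Backpatch the real section length.
    let body_len = u32::try_from(out.len() - len_at - 4).expect("section too large");
    out[len_at..len_at + 4].copy_from_slice(&body_len.to_le_bytes());
}

fn main() {
    let mut pack = Vec::new();
    push_sized_section(&mut pack, |buf| buf.extend_from_slice(b"key bytes"));
    push_sized_section(&mut pack, |buf| buf.extend_from_slice(b"value bytes"));
    // First section: 4-byte length (9) followed by the 9 payload bytes.
    assert_eq!(&pack[..4], &9u32.to_le_bytes());
    assert_eq!(pack.len(), 4 + 9 + 4 + 11);
}
```

The decoder keeps consuming the same length-prefixed sections, which is why no format change is needed.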
### Benchmarks

Focused command:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'
```

Result: passed.

| row | after median | criterion status |
| --- | ---: | --- |
| `raw_sqlite/write_root_all_rows/1k` | 2.4262 ms | reference |
| `raw_sqlite/write_delta_10pct_updates/1k` | 1.3524 ms | reference |
| `raw_sqlite/write_tombstone_10pct_deletes/1k` | 1.2769 ms | reference |
| `sqlite/write_root_all_rows/1k` | 4.9586 ms | no change, lower median |
| `sqlite/write_delta_10pct_updates/1k` | 1.9208 ms | no change |
| `sqlite/write_tombstone_10pct_deletes/1k` | 2.0990 ms | noisy regression |
| `rocksdb/write_root_all_rows/1k` | 4.2122 ms | no change, lower median |
| `rocksdb/write_delta_10pct_updates/1k` | 880.26 µs | no change |
| `rocksdb/write_tombstone_10pct_deletes/1k` | 836.97 µs | no change, lower median |

SQLite rerun:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/sqlite/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'
```

| row | rerun median | criterion status |
| --- | ---: | --- |
| `sqlite/write_root_all_rows/1k` | 5.0104 ms | no change |
| `sqlite/write_delta_10pct_updates/1k` | 1.9488 ms | no change |
| `sqlite/write_tombstone_10pct_deletes/1k` | 1.7955 ms | improved |

### Storage

Storage command:

```sh
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
```

Result: passed.

1k rows:

| row | bytes | bytes/row | status |
| --- | ---: | ---: | --- |
| raw SQLite / inserted | 1692456 | 1692.5 | reference |
| Lix SQLite / inserted | 1054536 | 1054.5 | unchanged |
| Lix SQLite / after create_version | 1071016 | 1071.0 | unchanged |
| Lix SQLite / after fast-forward merge | 5279368 | 5279.4 | unchanged |
| Lix SQLite / after divergent merge | 5430920 | 5430.9 | unchanged |
| Lix RocksDB / inserted | 964892 | 964.9 | unchanged |
| Lix RocksDB / after create_version | 966733 | 966.7 | unchanged |
| Lix RocksDB / after fast-forward merge | 1125265 | 1125.3 | unchanged |
| Lix RocksDB / after divergent merge | 1494068 | 1494.1 | unchanged |

### Review Loop

Reviewer pass:

```text
HIGH: none.
MEDIUM: none.
LOW: none.

Reviewer confirmed the binary shape is compatible: the append helpers preserve field order and primitive encoders, while `push_sized_section` backpatches the same four-byte length consumed by `decode_delta_pack`.
```

### Verification

```sh
cargo fmt -p lix_engine
cargo test -p lix_engine tracked_state::codec:: --features storage-benches
cargo check -p lix_engine --features storage-benches --benches
cargo test -p lix_engine tracked_state::context:: --features storage-benches
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/sqlite/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'
```

All commands passed.

### Interpretation

```text
Keep as a write-path allocation cleanup.
This is a structural writer improvement with neutral-to-better medians on rerun and no storage format change.
It does not close the remaining root-write gap by itself.
```

## Optimization 24: Compact same-commit delta locators

### Hypothesis

Tracked-state delta packs repeat `source_commit_id` inside every row locator, even though ordinary authored deltas point back to the delta pack's own commit. This is duplicated physical layout metadata: the storage key already identifies the delta pack commit, and the pack can carry that identity once in its header. Reference storage layouts avoid repeating page/segment identity in every record when a compact local locator can refer to the owning container. For Lix, a delta-pack-local `SAME_COMMIT` locator tag should shrink write bytes, scan decode bytes, and storage footprint while still preserving full locators for adopted cross-commit changes.

### Change

- Bumped tracked-state delta packs from version 1 to version 2 with no backward shim; Lix has not shipped.
- Delta packs now store `commit_id` once in the pack header.
- Delta values encode locator source as:
  - `SAME_COMMIT`: no repeated source commit id, decoded from the pack header.
  - `FULL`: explicit source commit id for adopted/cross-commit locators.
- Tree value encoding is unchanged.
- `storage::load_delta_pack` validates the embedded pack commit id against the storage key before returning entries, so swapped/corrupt packs cannot silently rewrite same-commit locators.
- Tests cover decoded pack identity plus same-commit and full locator roundtrip.
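A minimal sketch of the locator-source compaction; the tag byte values and the length-prefixed string layout are illustrative, not the actual v2 wire format.

```rust
// Illustrative tag values; the real codec may use different constants.
const SAME_COMMIT: u8 = 0;
const FULL: u8 = 1;

fn encode_locator_source(out: &mut Vec<u8>, pack_commit_id: &str, source_commit_id: &str) {
    if source_commit_id == pack_commit_id {
        // Common case: the source commit is the pack's own commit,
        // which the pack header already records once.
        out.push(SAME_COMMIT);
    } else {
        // Adopted cross-commit locator: keep the explicit source commit id.
        out.push(FULL);
        let bytes = source_commit_id.as_bytes();
        out.extend_from_slice(&(bytes.len() as u32).to_le_bytes());
        out.extend_from_slice(bytes);
    }
}

// Returns the decoded source commit id and the number of bytes consumed.
fn decode_locator_source(input: &[u8], pack_commit_id: &str) -> Option<(String, usize)> {
    match *input.first()? {
        SAME_COMMIT => Some((pack_commit_id.to_string(), 1)),
        FULL => {
            let len = u32::from_le_bytes(input.get(1..5)?.try_into().ok()?) as usize;
            let bytes = input.get(5..5 + len)?;
            Some((String::from_utf8(bytes.to_vec()).ok()?, 5 + len))
        }
        _ => None, // invalid tags reject
    }
}
```

This is also why the storage layer must check the header commit id against the storage key: every `SAME_COMMIT` locator is reconstructed from that header value.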
### Benchmarks

Focused command:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_full_rows|prefix_scan_schema_file_null)/1k'
```

Result: passed.

| row | after median | criterion status |
| --- | ---: | --- |
| `raw_sqlite/write_root_all_rows/1k` | 2.4003 ms | reference |
| `raw_sqlite/write_delta_10pct_updates/1k` | 1.2992 ms | reference |
| `raw_sqlite/write_tombstone_10pct_deletes/1k` | 1.2201 ms | reference |
| `raw_sqlite/scan_keys_only/1k` | 1.1429 ms | reference |
| `raw_sqlite/scan_full_rows/1k` | 1.1458 ms | reference |
| `raw_sqlite/prefix_scan_schema_file_null/1k` | 1.2437 ms | reference |
| `sqlite/write_root_all_rows/1k` | 4.8006 ms | no change, lower median |
| `sqlite/write_delta_10pct_updates/1k` | 2.0113 ms | no change |
| `sqlite/write_tombstone_10pct_deletes/1k` | 1.7745 ms | no change, lower median |
| `sqlite/scan_keys_only/1k` | 2.1931 ms | noisy regression |
| `sqlite/scan_full_rows/1k` | 2.6153 ms | no change, lower median |
| `sqlite/prefix_scan_schema_file_null/1k` | 3.0283 ms | no change |
| `rocksdb/write_root_all_rows/1k` | 4.5488 ms | no change |
| `rocksdb/write_delta_10pct_updates/1k` | 883.15 µs | no change |
| `rocksdb/write_tombstone_10pct_deletes/1k` | 830.45 µs | no change |
| `rocksdb/scan_keys_only/1k` | 1.1580 ms | no change |
| `rocksdb/scan_full_rows/1k` | 1.6353 ms | no change |
| `rocksdb/prefix_scan_schema_file_null/1k` | 1.8247 ms | no change |

SQLite scan rerun:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/sqlite/smoke/(scan_keys_only|scan_full_rows|prefix_scan_schema_file_null)/1k'
```

| row | rerun median | criterion status |
| --- | ---: | --- |
| `sqlite/scan_keys_only/1k` | 1.9420 ms | improved |
| `sqlite/scan_full_rows/1k` | 2.5912 ms | no change |
| `sqlite/prefix_scan_schema_file_null/1k` | 2.6909 ms | no change |

### Storage

Storage command:

```sh
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
```

Result: passed.

1k rows:

| row | bytes | bytes/row | status |
| --- | ---: | ---: | --- |
| raw SQLite / inserted | 1692456 | 1692.5 | reference |
| Lix SQLite / inserted | 1013336 | 1013.3 | improved |
| Lix SQLite / after create_version | 1029816 | 1029.8 | improved |
| Lix SQLite / after fast-forward merge | 5230192 | 5230.2 | improved |
| Lix SQLite / after divergent merge | 5385840 | 5385.8 | improved |
| Lix RocksDB / inserted | 925304 | 925.3 | improved |
| Lix RocksDB / after create_version | 927146 | 927.1 | improved |
| Lix RocksDB / after fast-forward merge | 1085778 | 1085.8 | improved |
| Lix RocksDB / after divergent merge | 1454922 | 1454.9 | improved |

### Review Loop

Reviewer pass:

```text
Initial review:
HIGH: none.
MEDIUM: none.
LOW: delta pack embeds commit_id but storage did not check it against the key; swapped/corrupt packs could produce wrong SAME_COMMIT locators.

Follow-up review:
HIGH: none.
MEDIUM: none.
LOW: none.

The LOW is resolved by returning the decoded pack commit id from the codec and checking it in storage::load_delta_pack before exposing entries.
```

### Verification

```sh
cargo fmt -p lix_engine
cargo test -p lix_engine tracked_state::codec:: --features storage-benches
cargo check -p lix_engine --features storage-benches --benches
cargo test -p lix_engine tracked_state::context:: --features storage-benches
cargo test -p lix_engine tracked_state::materializer:: --features storage-benches
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_full_rows|prefix_scan_schema_file_null)/1k'
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/sqlite/smoke/(scan_keys_only|scan_full_rows|prefix_scan_schema_file_null)/1k'
```

All commands passed.

### Interpretation

```text
Keep as a physical layout optimization.
Primary axis: storage footprint and delta-pack decode bytes.
Timing is mostly neutral with lower medians in the hot write rows and a cleaner SQLite keys-only rerun.
Storage improves on both SQLite and RocksDB for inserted and merge states.
No backward shim.
```

## Optimization 25: Dictionary-code delta-pack key prefixes

### Hypothesis

Tracked-state delta packs repeat the same `schema_key` and `file_id` for every JSON-pointer row. The v2 delta-pack key format stored that full prefix inside each entry key even though the pack is already a locality unit. A pack-level prefix table should remove repeated key bytes while keeping decoded keys exactly the same shape for downstream ordering and filtering. This follows the same first-principles shape as page/segment dictionaries in systems like DuckDB/Turso/Dolt-style physical layouts: pay one compact table per storage unit, then store small indexes in repeated records.

### Change

- Bumped tracked delta packs from version 2 to version 3. No backward shim.
- Added a pack-level key-prefix dictionary of `(schema_key, file_id)`.
- Encoded each delta key as `prefix_index + entity_id`.
- Kept decode output as full `TrackedStateKey` values so scan collapse, ordering, and prefix filtering continue to operate on the existing key type.
- Added coverage that verifies the prefix table is written for mixed file prefixes and corrupt out-of-bounds prefix indexes reject.
- Avoided a `HashMap` prefix-index path after a focused rerun showed write regressions; the kept version uses the small prefix vector plus per-delta prefix indexes built during the prefix pass.
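A minimal sketch of the prefix-dictionary pass; `KeyPrefix` and `PackedKey` are simplified stand-ins for the engine's key types, and the linear probe mirrors the decision above to keep a small prefix vector rather than a `HashMap`.

```rust
// Simplified stand-ins for the engine's (schema_key, file_id) prefix and key types.
#[derive(Clone, PartialEq)]
struct KeyPrefix {
    schema_key: String,
    file_id: Option<String>,
}

struct PackedKey {
    prefix_index: u32,
    entity_id: String,
}

fn build_prefix_table(keys: &[(KeyPrefix, String)]) -> (Vec<KeyPrefix>, Vec<PackedKey>) {
    let mut prefixes: Vec<KeyPrefix> = Vec::new();
    let mut packed: Vec<PackedKey> = Vec::with_capacity(keys.len());
    for (prefix, entity_id) in keys {
        // Small linear probe over the prefix vector; a pack rarely holds more
        // than a handful of distinct (schema_key, file_id) shapes, which is why
        // a HashMap index was not worth its overhead on this path.
        let prefix_index = match prefixes.iter().position(|p| p == prefix) {
            Some(i) => i as u32,
            None => {
                prefixes.push(prefix.clone());
                (prefixes.len() - 1) as u32
            }
        };
        packed.push(PackedKey {
            prefix_index,
            entity_id: entity_id.clone(),
        });
    }
    // The prefix table is written once in the pack header; each entry key then
    // stores only `prefix_index + entity_id`.
    (prefixes, packed)
}
```

Decode still rebuilds full keys from the table, so the scan surface above the codec is unchanged.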
### Benchmarks

Focused scan/write command:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite)/smoke/(write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_full_rows|prefix_scan_schema_file_null)/1k'
```

Result: passed.

| row | median | criterion status |
| --- | ---: | --- |
| `raw_sqlite/scan_keys_only/1k` | 1.2058 ms | reference |
| `raw_sqlite/scan_full_rows/1k` | 1.1330 ms | reference |
| `raw_sqlite/prefix_scan_schema_file_null/1k` | 1.1647 ms | reference |
| `raw_sqlite/write_delta_10pct_updates/1k` | 1.2337 ms | reference |
| `raw_sqlite/write_tombstone_10pct_deletes/1k` | 1.2127 ms | reference |
| `sqlite/scan_keys_only/1k` | 1.9801 ms | improved |
| `sqlite/scan_full_rows/1k` | 2.5814 ms | improved |
| `sqlite/prefix_scan_schema_file_null/1k` | 2.6188 ms | no change |

Final write-focused command after replacing the regressing `HashMap` prefix indexer:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/sqlite/smoke/(write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'
```

Result: passed.

| row | median | criterion status |
| --- | ---: | --- |
| `sqlite/write_delta_10pct_updates/1k` | 2.2536 ms | no change |
| `sqlite/write_tombstone_10pct_deletes/1k` | 2.2861 ms | no change |

### Storage

Storage command:

```sh
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
```

Result: passed.

1k rows:

| row | bytes | bytes/row | vs Optimization 24 |
| --- | ---: | ---: | ---: |
| raw SQLite / inserted | 1692456 | 1692.5 | reference |
| Lix SQLite / inserted | 996856 | 996.9 | -16480 |
| Lix SQLite / after create_version | 1013336 | 1013.3 | -16480 |
| Lix SQLite / after fast-forward merge | 5201424 | 5201.4 | -28768 |
| Lix SQLite / after divergent merge | 5361240 | 5361.2 | -24600 |
| Lix RocksDB / inserted | 912032 | 912.0 | -13272 |
| Lix RocksDB / after create_version | 913889 | 913.9 | -13257 |
| Lix RocksDB / after fast-forward merge | 1073314 | 1073.3 | -12464 |
| Lix RocksDB / after divergent merge | 1442794 | 1442.8 | -12128 |

### Review Loop

Reviewer pass:

```text
HIGH: none.
MEDIUM: none.
LOW: none.

The v3 shape writes a header-level key-prefix table, then each entry key stores only `prefix_index + entity_id`. Decode reconstructs full `TrackedStateKey`s, so downstream ordering/filter behavior still sees ordinary full keys. Corrupt prefix indexes and invalid prefix file-id tags reject. Empty packs work naturally with zero prefixes and zero entries.
```

### Verification

```sh
cargo fmt --check
cargo test -p lix_engine tracked_state::codec:: --features storage-benches
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_full_rows|prefix_scan_schema_file_null)/1k'
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite)/smoke/(write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_full_rows|prefix_scan_schema_file_null)/1k'
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/sqlite/smoke/(write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'
```

All commands passed.

### Interpretation

```text
Keep as a storage-layout optimization.
Primary axis: bytes per row. The JSON-pointer workload now pays for `json_pointer + NULL file_id` once per delta pack instead of once per delta row.
The win is modest but repeatable across SQLite and RocksDB accounting, and the write guardrail is neutral after removing the HashMap indexer.
This does not close the remaining <=1.5x gap by itself. It is a clean physical layout step that reduces repeated key bytes without changing the logical scan surface.
```

## Optimization 26: Probe delta-pack existence without loading blobs

### Hypothesis

Unmaterialized tracked commits are served from delta packs until a projection root exists. The scan planner only needs to know whether each first-parent commit has a delta pack, but it was calling `load_delta_pack`, which fetched and decoded the whole pack before the result-producing path fetched and decoded it again. This violates the same locality rule used by storage engines: use an index/key-existence probe to plan, and only read the value blob when the plan needs row data.

### Change

- Added `tracked_state::storage::delta_pack_exists`.
- Implemented it with `StorageReader::exists_many` against the delta-pack namespace/key, so it does not fetch delta-pack bytes.
- Replaced the planning-time `load_delta_pack(...).is_some()` in `delta_commit_ids_since_projection_root` with the key-only existence probe.
- Kept all result-producing paths on `load_delta_pack`, so corrupt or identity-mismatched packs still fail before scans, diffs, point reads, or existence checks return results.
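A minimal sketch of the plan-time probe versus the result-time load; the `Reader` trait and the namespace constant are assumptions for illustration, not the engine's actual `StorageReader` signature or key layout.

```rust
// Illustrative reader trait; the engine's StorageReader differs in detail.
trait Reader {
    // Key-only probe: one bool per key, without fetching value bytes.
    fn exists_many(&self, namespace: &str, keys: &[String]) -> Vec<bool>;
    // Value fetch used only by result-producing paths.
    fn get(&self, namespace: &str, key: &str) -> Option<Vec<u8>>;
}

// Hypothetical namespace name for the sketch only.
const DELTA_PACK_NS: &str = "tracked_state/delta_pack";

// Planning: does this commit have a delta pack at all?
fn delta_pack_exists(reader: &dyn Reader, commit_id: &str) -> bool {
    reader
        .exists_many(DELTA_PACK_NS, &[commit_id.to_string()])
        .first()
        .copied()
        .unwrap_or(false)
}

// Result-producing path: still loads (and would decode) the pack, so corrupt
// or identity-mismatched packs fail before rows are returned.
fn load_delta_pack_bytes(reader: &dyn Reader, commit_id: &str) -> Option<Vec<u8>> {
    reader.get(DELTA_PACK_NS, commit_id)
}
```

The split keeps corruption detection where results are produced while letting the planner stay on a metadata-only code path.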
### Benchmarks

Focused read command:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys|exists_many_exact_keys)/1k'
```

Result: passed. Representative rerun medians:

| row | median | criterion status |
| --- | ---: | --- |
| `raw_sqlite/scan_keys_only/1k` | 1.1162 ms | reference |
| `raw_sqlite/scan_headers_only/1k` | 1.1543 ms | reference |
| `raw_sqlite/scan_full_rows/1k` | 1.2260 ms | reference |
| `raw_sqlite/prefix_scan_schema_file_null/1k` | 1.5672 ms | noisy reference |
| `sqlite/get_many_exact_keys/1k` | 2.9404 ms | no change |
| `sqlite/exists_many_exact_keys/1k` | 1.9841 ms | no change |
| `sqlite/scan_keys_only/1k` | 1.8194 ms | no change, lower median |
| `sqlite/scan_headers_only/1k` | 1.8309 ms | no change |
| `sqlite/scan_full_rows/1k` | 2.3452 ms | no change, lower median |
| `sqlite/prefix_scan_schema_file_null/1k` | 2.3869 ms | no change, lower median |
| `rocksdb/get_many_exact_keys/1k` | 2.0017 ms | no change |
| `rocksdb/exists_many_exact_keys/1k` | 1.1156 ms | no change |
| `rocksdb/scan_keys_only/1k` | 835.73 µs | no change, lower median |
| `rocksdb/scan_headers_only/1k` | 878.24 µs | improved |
| `rocksdb/scan_full_rows/1k` | 1.4314 ms | no change |
| `rocksdb/prefix_scan_schema_file_null/1k` | 1.3956 ms | within noise threshold |

Broad scan/write guardrail command:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k'
```

Result: passed. Notable medians from the broad run:

| row | median | criterion status |
| --- | ---: | --- |
| `sqlite/write_root_all_rows/1k` | 5.2065 ms | no change |
| `sqlite/write_delta_10pct_updates/1k` | 1.9378 ms | improved |
| `sqlite/write_tombstone_10pct_deletes/1k` | 1.8930 ms | within noise threshold |
| `sqlite/scan_keys_only/1k` | 1.8567 ms | no change |
| `sqlite/scan_headers_only/1k` | 1.7894 ms | no change |
| `sqlite/scan_full_rows/1k` | 2.4152 ms | no change |
| `sqlite/prefix_scan_schema_file_null/1k` | 2.4417 ms | no change |
| `rocksdb/scan_keys_only/1k` | 914.58 µs | improved |
| `rocksdb/scan_full_rows/1k` | 1.4490 ms | improved |

### Storage

Storage command:

```sh
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
```

Result: passed. No format or write-path storage change.

1k rows:

| row | bytes | bytes/row |
| --- | ---: | ---: |
| raw SQLite / inserted | 1692456 | 1692.5 |
| Lix SQLite / inserted | 996856 | 996.9 |
| Lix SQLite / after create_version | 1013336 | 1013.3 |
| Lix SQLite / after fast-forward merge | 5205520 | 5205.5 |
| Lix SQLite / after divergent merge | 5361192 | 5361.2 |
| Lix RocksDB / inserted | 912032 | 912.0 |
| Lix RocksDB / after create_version | 913889 | 913.9 |
| Lix RocksDB / after fast-forward merge | 1073314 | 1073.3 |
| Lix RocksDB / after divergent merge | 1442794 | 1442.8 |

### Review Loop

Reviewer pass:

```text
Initial review:
HIGH: none.
MEDIUM: delta_pack_exists used get_values, so it avoided decode CPU but still fetched the blob. Use StorageReader::exists_many as a true key-only probe.

Follow-up review:
HIGH: none.
MEDIUM: none.

The prior MEDIUM is resolved. delta_pack_exists now uses StorageReader::exists_many against the delta-pack namespace/key. Result-producing paths still load and decode packs, so corrupt or identity-mismatched packs still fail before results are produced.
```

### Verification

```sh
cargo fmt --check
cargo test -p lix_engine tracked_state::context:: --features storage-benches
cargo test -p lix_engine tracked_state:: --features storage-benches
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys|exists_many_exact_keys)/1k'
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k'
```

All commands passed.

### Interpretation

```text
Keep as a read-path physical access optimization.
The structural win is precise: first-parent planning now asks the backend for key existence instead of fetching a delta-pack value blob it will decode later. This follows the reference-system pattern of separating metadata/index probes from value materialization.
The strongest observed impact is on unmaterialized single-delta scans, where SQLite scan medians moved from the post-Optimization-25 range of roughly 1.94-2.69 ms down to roughly 1.82-2.39 ms in focused runs, and RocksDB scan medians moved below or near the raw SQLite reference for key/header scans.
This does not change storage format, write layout, or corruption semantics for visible reads. It does not close the remaining full-row SQLite gap by itself.
```

## Optimization 27: Decode delta-pack sections without temporary copies

Date: 2026-05-11
Commit: this entry is committed with the optimization

### Change

`tracked_state::codec::decode_delta_pack` now parses each sized delta key and value section directly from the pack byte slice:

- Replaced `read_sized_bytes(...)?` followed by borrowing the temporary `Vec` with `read_sized_slice(...)?`.
- Kept the existing `decode_delta_key` and `decode_delta_value` parsers, so the section format, cursor advancement, truncation checks, and trailing-byte validation are unchanged.

This removes two heap allocations and two payload copies per delta-pack row on unmaterialized tracked-state reads.

### Benchmarks

Focused scan command:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys)/1k'
```

Result: passed. Representative medians:

| row | median | criterion status |
| --- | ---: | --- |
| `sqlite/get_many_exact_keys/1k` | 2.9171 ms | no change |
| `sqlite/scan_keys_only/1k` | 1.8631 ms | no change |
| `sqlite/scan_headers_only/1k` | 1.7398 ms | improved |
| `sqlite/scan_full_rows/1k` | 2.3693 ms | no change |
| `sqlite/prefix_scan_schema_file_null/1k` | 2.3791 ms | no change |
| `rocksdb/get_many_exact_keys/1k` | 1.9810 ms | no change |
| `rocksdb/scan_keys_only/1k` | 849.54 µs | no change |
| `rocksdb/scan_headers_only/1k` | 840.90 µs | no change |
| `rocksdb/scan_full_rows/1k` | 1.3687 ms | improved |
| `rocksdb/prefix_scan_schema_file_null/1k` | 1.3407 ms | no change |

Broad scan/write guardrail command:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k'
```

Result: passed. Raw SQLite reference rows were noisy in this run, but Lix write rows were neutral, SQLite scan medians stayed in the improved band, and RocksDB tombstone writes improved.

Follow-up scan rerun:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k'
```

Result: passed.
| row | rerun median | criterion status |
| --- | ---: | --- |
| `sqlite/scan_headers_only/1k` | 1.7979 ms | no change |
| `sqlite/scan_full_rows/1k` | 2.3657 ms | no change |
| `sqlite/prefix_scan_schema_file_null/1k` | 2.3208 ms | no change |
| `rocksdb/scan_headers_only/1k` | 817.34 µs | improved |
| `rocksdb/scan_full_rows/1k` | 1.3446 ms | improved |
| `rocksdb/prefix_scan_schema_file_null/1k` | 1.3315 ms | no change |

### Storage

Storage command:

```sh
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
```

Result: passed. No format or write-path storage change.

1k rows:

| row | bytes | bytes/row |
| --- | ---: | ---: |
| raw SQLite / inserted | 1692456 | 1692.5 |
| Lix SQLite / inserted | 996856 | 996.9 |
| Lix SQLite / after create_version | 1013336 | 1013.3 |
| Lix SQLite / after fast-forward merge | 5205520 | 5205.5 |
| Lix SQLite / after divergent merge | 5369360 | 5369.4 |
| Lix RocksDB / inserted | 912032 | 912.0 |
| Lix RocksDB / after create_version | 913889 | 913.9 |
| Lix RocksDB / after fast-forward merge | 1073314 | 1073.3 |
| Lix RocksDB / after divergent merge | 1442794 | 1442.8 |

### Review Loop

Reviewer pass:

```text
HIGH: none.
MEDIUM: none.
LOW: none.

The reviewer confirmed that read_sized_slice preserves the same overflow and truncation checks, the nested key/value decoders still reject trailing bytes, and the borrowed slices only live for the duration of parsing.
Recommendation: keep as a clean read-side allocation cut.
```

### Verification

```sh
cargo fmt --check
cargo test -p lix_engine tracked_state::codec:: --features storage-benches
cargo check -p lix_engine --features storage-benches --benches
cargo test -p lix_engine tracked_state:: --features storage-benches
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys)/1k'
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_keys_only|scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k'
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k'
```

All commands passed.

### Interpretation

```text
Keep as a read-side delta-pack decode cleanup.
The physical win is small but real: unmaterialized tracked-state reads no longer copy every encoded delta key and value section before decoding owned rows from them. This reduces heap traffic without changing the pack format or corruption behavior.
Timing is noisy but favorable enough to keep. SQLite header scans showed a significant improvement in the focused run and stayed in the lower band on rerun. RocksDB header/full scans improved significantly on rerun. Writes and storage bytes are neutral because the encoded bytes are unchanged.
No storage format change. No temporary shim.
```

## Optimization 28: Encode change-pack entries in place

Date: 2026-05-11
Commit: this entry is committed with the optimization

### Change

`commit_store::codec::encode_change_pack` now writes each authored change directly into its length-prefixed pack entry:

- Extracted `write_change_ref(&mut Vec, ChangeRef)` from `encode_change_ref`.
- `encode_change_ref` still returns standalone `LXCH2` bytes by writing into a fresh `Vec`.
- `encode_change_pack` now reserves the 4-byte little-endian section length, writes the `LXCH2` change bytes directly into the pack buffer, and backfills the length.
- Removed the old temporary `encode_change_ref(change)?` plus `write_bytes` copy inside the pack loop.

Added a unit test asserting that the bytes inside one change-pack entry are exactly the same as the standalone `encode_change_ref` bytes.

### Benchmarks

Focused write command:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'
```

Result: passed. Representative medians:

| row | median | criterion status |
| --- | ---: | --- |
| `raw_sqlite/write_root_all_rows/1k` | 2.4908 ms | reference/no change |
| `raw_sqlite/write_delta_10pct_updates/1k` | 1.2818 ms | reference/no change |
| `raw_sqlite/write_tombstone_10pct_deletes/1k` | 1.2797 ms | reference/no change |
| `sqlite/write_root_all_rows/1k` | 4.7754 ms | no change, lower median |
| `sqlite/write_delta_10pct_updates/1k` | 1.9721 ms | no change |
| `sqlite/write_tombstone_10pct_deletes/1k` | 1.7907 ms | no change |
| `rocksdb/write_root_all_rows/1k` | 4.3187 ms | no change, lower median |
| `rocksdb/write_delta_10pct_updates/1k` | 935.37 µs | no change |
| `rocksdb/write_tombstone_10pct_deletes/1k` | 789.37 µs | no change |

### Storage

Storage command:

```sh
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
```

Result: passed. The change-pack wire format is unchanged.

1k rows:

| row | bytes | bytes/row |
| --- | ---: | ---: |
| raw SQLite / inserted | 1692456 | 1692.5 |
| Lix SQLite / inserted | 996856 | 996.9 |
| Lix SQLite / after create_version | 1013336 | 1013.3 |
| Lix SQLite / after fast-forward merge | 5201424 | 5201.4 |
| Lix SQLite / after divergent merge | 5348880 | 5348.9 |
| Lix RocksDB / inserted | 912032 | 912.0 |
| Lix RocksDB / after create_version | 913889 | 913.9 |
| Lix RocksDB / after fast-forward merge | 1073314 | 1073.3 |
| Lix RocksDB / after divergent merge | 1442792 | 1442.8 |

### Review Loop

Reviewer pass:

```text
HIGH: none.
MEDIUM: none.
LOW: none.

The reviewer confirmed that the pack still writes a length-prefixed LXCH2 payload for each change, that decode still reads the same entry bytes, and that partial mutation on error is not exposed because encode_change_pack and encode_change_ref both build into fresh local Vecs.
Recommendation: keep.
```

### Verification

```sh
cargo fmt --check
cargo check -p lix_engine --features storage-benches --benches
cargo test -p lix_engine commit_store::codec:: --features storage-benches
cargo test -p lix_engine commit_store:: --features storage-benches
cargo test -p lix_engine transaction::commit:: --features storage-benches
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k'
```

All commands passed.

### Interpretation

```text
Keep as a write-side commit-pack allocation cleanup.
The physical win is removing one temporary Vec allocation and one copy for each authored change encoded into a commit-store change pack. It is the same shape as the earlier direct delta-pack and direct commit-change row work: encode into the final pack buffer instead of building nested row blobs only to copy them.
Timing is a modest median improvement for root writes on both Lix backends, but Criterion did not mark it statistically significant. This is kept because it removes real per-row hot-path work while preserving the byte format and storage footprint.
No storage format change. No temporary shim.
```

## Optimization 29: Compact matching tracked timestamps

Date: 2026-05-11
Commit: this entry is committed with the optimization

### Change

Bumped the tracked-state value codec from version 6 to version 7 and compacted the common timestamp shape in both materialized tree values and delta-pack values:

- Values now write `created_at` once.
- A one-byte tag follows: `TIMESTAMP_UPDATED_SAME` when `updated_at == created_at`, otherwise `TIMESTAMP_UPDATED_DISTINCT` plus the `updated_at` string.
- Decode reconstructs `updated_at` from `created_at` for the same-timestamp case and rejects invalid timestamp tags.
- `encoded_value_len` now uses the same timestamp-pair sizing helper as the encoder.

There is no backwards shim because the format has not shipped. Added coverage that matching timestamps roundtrip and produce a shorter encoded value than distinct timestamps. Existing distinct timestamp roundtrip tests continue to cover the non-compact branch.
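A minimal sketch of the compact timestamp pair; the tag constants and the length-prefixed string encoding are illustrative rather than the actual version 7 layout.

```rust
// Illustrative tag values; the real codec may use different constants.
const TIMESTAMP_UPDATED_SAME: u8 = 0;
const TIMESTAMP_UPDATED_DISTINCT: u8 = 1;

fn write_str(out: &mut Vec<u8>, s: &str) {
    out.extend_from_slice(&(s.len() as u32).to_le_bytes());
    out.extend_from_slice(s.as_bytes());
}

fn encode_timestamps(out: &mut Vec<u8>, created_at: &str, updated_at: &str) {
    // created_at is always written.
    write_str(out, created_at);
    if updated_at == created_at {
        // Common insert/root case: pay one tag byte instead of a second string.
        // Decode reconstructs updated_at from created_at for this tag.
        out.push(TIMESTAMP_UPDATED_SAME);
    } else {
        // Updates/adoptions keep both timestamps.
        out.push(TIMESTAMP_UPDATED_DISTINCT);
        write_str(out, updated_at);
    }
}

fn main() {
    let (mut same, mut distinct) = (Vec::new(), Vec::new());
    encode_timestamps(&mut same, "2026-05-11T00:00:00Z", "2026-05-11T00:00:00Z");
    encode_timestamps(&mut distinct, "2026-05-11T00:00:00Z", "2026-05-12T00:00:00Z");
    // Matching timestamps encode shorter than distinct ones.
    assert!(same.len() < distinct.len());
}
```

A decoder that sees an unknown tag byte should reject the value, matching the invalid-tag rejection described above.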
### Benchmarks

Focused write/scan command:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_headers_only|scan_full_rows)/1k'
```

Result: passed. Representative medians:

| row | median | criterion status |
| --- | ---: | --- |
| `sqlite/write_root_all_rows/1k` | 5.0199 ms | no change |
| `sqlite/write_delta_10pct_updates/1k` | 1.8704 ms | no change |
| `sqlite/write_tombstone_10pct_deletes/1k` | 1.7569 ms | no change |
| `sqlite/scan_headers_only/1k` | 1.8272 ms | no change |
| `sqlite/scan_full_rows/1k` | 2.3698 ms | no change |
| `rocksdb/write_root_all_rows/1k` | 4.3505 ms | no change |
| `rocksdb/write_delta_10pct_updates/1k` | 840.51 µs | improved |
| `rocksdb/write_tombstone_10pct_deletes/1k` | 933.62 µs | no change |
| `rocksdb/scan_headers_only/1k` | 860.63 µs | no change |
| `rocksdb/scan_full_rows/1k` | 1.5218 ms | noisy regression |

Rerun of scan guardrails:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k'
```

Result: passed.

| row | rerun median | criterion status |
| --- | ---: | --- |
| `sqlite/scan_headers_only/1k` | 1.7102 ms | improved |
| `sqlite/scan_full_rows/1k` | 2.2907 ms | no change |
| `sqlite/prefix_scan_schema_file_null/1k` | 2.2168 ms | no change |
| `rocksdb/scan_headers_only/1k` | 815.47 µs | no change |
| `rocksdb/scan_full_rows/1k` | 1.3515 ms | improved |
| `rocksdb/prefix_scan_schema_file_null/1k` | 1.4072 ms | no change |

### Storage

Storage command:

```sh
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
```

Result: passed.

1k rows:

| row | bytes | bytes/row | status |
| --- | ---: | ---: | --- |
| raw SQLite / inserted | 1692456 | 1692.5 | reference |
| Lix SQLite / inserted | 972136 | 972.1 | improved |
| Lix SQLite / after create_version | 984496 | 984.5 | improved |
| Lix SQLite / after fast-forward merge | 5201544 | 5201.5 | roughly unchanged |
| Lix SQLite / after divergent merge | 5365384 | 5365.4 | roughly unchanged |
| Lix RocksDB / inserted | 884519 | 884.5 | improved |
| Lix RocksDB / after create_version | 886342 | 886.3 | improved |
| Lix RocksDB / after fast-forward merge | 1043067 | 1043.1 | improved |
| Lix RocksDB / after divergent merge | 1404413 | 1404.4 | improved |

### Review Loop

Reviewer pass:

```text
HIGH: none.
MEDIUM: none.
LOW: none.

The reviewer confirmed that the version/deleted header masking remains correct, tombstone visibility still only needs the header, both tree and delta values use the same timestamp pair helpers, distinct timestamps are preserved, invalid timestamp tags are rejected, and encoded_value_len matches the new format.
Recommendation: keep.
```

### Verification

```sh
cargo fmt --check
cargo test -p lix_engine tracked_state::codec:: --features storage-benches
cargo check -p lix_engine --features storage-benches --benches
cargo test -p lix_engine tracked_state:: --features storage-benches
cargo test -p lix_engine transaction::commit:: --features storage-benches
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_headers_only|scan_full_rows)/1k'
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(scan_headers_only|scan_full_rows|prefix_scan_schema_file_null)/1k'
```

All commands passed.

### Interpretation

```text
Keep as a physical value-format compaction.
The structural win is direct: inserted/root rows usually have matching created_at and updated_at, so storing the timestamp twice was duplicated row payload. Version 7 stores the common case as one string plus a tag while still preserving distinct timestamps for updates/adoptions.
Storage improves materially on inserted/create_version states for both SQLite and RocksDB, and RocksDB merge-state bytes improve as well. Timing is mostly neutral with useful scan/write wins on rerun; the one RocksDB scan regression did not reproduce.
No backward shim because the physical format is still unshipped.
```

## Optimization 30: Compact commit and delta change ids

Date: 2026-05-11
Commit: this entry is committed with the optimization

### Change

Changed the unshipped physical pack formats:

- Commit-store change packs move from `LXCP1` to `LXCP2`.
  - `LXCP2` stores shared `(schema_key, file_id)` shapes once per pack.
  - `LXCP2` stores entity identity directly as string parts instead of a JSON array string inside each packed change.
  - `LXCP2` stores change ids as a suffix when they start with the pack `commit_id`, otherwise stores the full id.
- Tracked-state delta packs move from `LXTD3` to `LXTD4`.
  - `LXTD4` stores delta `change_id`s as a suffix when they start with the locator `source_commit_id`, otherwise stores the full id.

Standalone `LXCH2` change encoding remains available, but change packs no longer embed standalone `LXCH2` records. There is no backwards shim because the physical format has not shipped.

Added codec coverage for the compact change-pack shape and for a tracked delta-pack cross-commit locator whose `change_id` starts with the pack commit id but not with its locator source commit id.
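A minimal sketch of suffix-coding a change id against a basis commit id; the tags and length prefixes are illustrative, and the decode side deliberately takes the same basis that the encoder used.

```rust
// Illustrative tag values; the real LXCP2/LXTD4 layouts differ in detail.
const ID_SUFFIX: u8 = 0;
const ID_FULL: u8 = 1;

fn encode_change_id(out: &mut Vec<u8>, basis_commit_id: &str, change_id: &str) {
    match change_id.strip_prefix(basis_commit_id) {
        Some(suffix) => {
            // Common case: the change id starts with the basis commit id,
            // so only the suffix needs to be stored.
            out.push(ID_SUFFIX);
            out.extend_from_slice(&(suffix.len() as u32).to_le_bytes());
            out.extend_from_slice(suffix.as_bytes());
        }
        None => {
            out.push(ID_FULL);
            out.extend_from_slice(&(change_id.len() as u32).to_le_bytes());
            out.extend_from_slice(change_id.as_bytes());
        }
    }
}

fn decode_change_id(tag: u8, payload: &str, basis_commit_id: &str) -> String {
    // Decode must use the same basis the encoder used; the HIGH finding in the
    // review loop below was exactly an encode/decode basis mismatch for LXTD4.
    match tag {
        ID_SUFFIX => format!("{basis_commit_id}{payload}"),
        _ => payload.to_string(),
    }
}
```

For commit-store packs the basis is the pack `commit_id`; for delta packs it has to be the locator `source_commit_id` on both sides, as the review loop below records.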
### Benchmarks

Focused command:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_full_rows|prefix_scan_schema_file_null)/1k'
```

Result: passed. Representative medians:

| row | median | criterion status |
| --- | ---: | --- |
| `raw_sqlite/write_root_all_rows/1k` | 2.3900 ms | no change |
| `sqlite/write_root_all_rows/1k` | 4.4672 ms | no change |
| `sqlite/write_delta_10pct_updates/1k` | 1.7570 ms | no change |
| `sqlite/write_tombstone_10pct_deletes/1k` | 1.5593 ms | no change |
| `sqlite/scan_full_rows/1k` | 2.3076 ms | no change |
| `sqlite/prefix_scan_schema_file_null/1k` | 2.2825 ms | no change |
| `rocksdb/write_root_all_rows/1k` | 4.3382 ms | no change |
| `rocksdb/write_delta_10pct_updates/1k` | 842.39 µs | no change |
| `rocksdb/write_tombstone_10pct_deletes/1k` | 732.73 µs | no change |
| `rocksdb/scan_full_rows/1k` | 1.3643 ms | no change |
| `rocksdb/prefix_scan_schema_file_null/1k` | 1.3588 ms | no change |

An earlier focused sweep on the same patch showed RocksDB delta writes improved at 751.11 µs and RocksDB tombstone writes improved at 741.36 µs; the combined rerun settled as neutral except for raw SQLite tombstone noise.

### Storage

Storage command:

```sh
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
```

Result: passed.

1k rows:

| row | bytes | bytes/row | status |
| --- | ---: | ---: | --- |
| raw SQLite / inserted | 1692456 | 1692.5 | reference |
| Lix SQLite / inserted | 947416 | 947.4 | improved |
| Lix SQLite / after create_version | 959776 | 959.8 | improved |
| Lix SQLite / after fast-forward merge | 5152248 | 5152.2 | improved |
| Lix SQLite / after divergent merge | 5353168 | 5353.2 | improved |
| Lix RocksDB / inserted | 864114 | 864.1 | improved |
| Lix RocksDB / after create_version | 865938 | 865.9 | improved |
| Lix RocksDB / after fast-forward merge | 1022770 | 1022.8 | improved |
| Lix RocksDB / after divergent merge | 1384417 | 1384.4 | improved |

### Review Loop

Reviewer pass 1 found one HIGH: `LXTD4` initially stripped delta change ids against the pack commit id while decode reconstructed suffixes against the locator `source_commit_id`, which could corrupt an adopted cross-commit locator whose id happened to start with the pack commit id.

Fix: encode delta change-id suffixes against `value.change_locator.source_commit_id`, matching decode, and add a regression for the cross-commit prefix-collision case.

Reviewer pass 2:

```text
HIGH: none.
MEDIUM: none.
LOW: none.

The reviewer confirmed the prior HIGH is resolved, suffix encode/decode now use the same source-commit basis, LXCP2 preserves entry order, shape indexes are bounds-checked, and commit-store suffix IDs use the same commit-id basis on encode/decode.
Recommendation: keep.
```

### Verification

```sh
cargo fmt --check
cargo test -p lix_engine commit_store::codec:: --features storage-benches
cargo test -p lix_engine commit_store:: --features storage-benches
cargo test -p lix_engine tracked_state::codec:: --features storage-benches
cargo test -p lix_engine tracked_state:: --features storage-benches
cargo test -p lix_engine transaction::commit:: --features storage-benches
cargo check -p lix_engine --features storage-benches --benches
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_full_rows|prefix_scan_schema_file_null)/1k'
```

All commands passed.

### Interpretation

```text
Keep as a format-level compaction.
Root-write timing remains above the 1.5x target and mostly Criterion-neutral, so this is not the final root-write answer. The pack bytes are meaningfully smaller, however, and the format removes repeated schema/file/change-id/entity encoding from durable commit packs while preserving locator semantics.
The current budget misses remain root writes and SQLite full/prefix scans.
```

## Optimization 31: Narrow JSON pack directory fields

Date: 2026-05-11
Commit: this entry is committed with the optimization

### Change

Changed the unshipped JSON commit-pack format from `lix-json-pack:v1` to `lix-json-pack:v2`. The per-entry directory keeps the same explicit shape:

```text
hash, codec, uncompressed_len, payload_offset, payload_len
```

but narrows the three numeric payload fields from `u64` to `u32`. The entry header shrinks from `32 + 1 + 8 + 8 + 8 = 57` bytes to `32 + 1 + 4 + 4 + 4 = 45` bytes.

This is a clean cut with no backwards shim because Lix has not shipped. Unlike the rejected implicit-offset JSON-pack experiment, this keeps explicit offsets, so unordered/fallback pack reads retain direct payload slicing instead of reconstructing offsets from earlier directory entries.

Added codec tests for the compact 45-byte directory shape and for checked rejection of oversized u32 directory fields.

### Storage

Command:

```sh
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
```

Result: passed.

1k rows:

| state | before | after | delta |
| --- | ---: | ---: | ---: |
| raw SQLite inserted | 1,692,456 | 1,692,456 | 0 |
| Lix SQLite inserted | 947,416 | 939,176 | -8,240 |
| Lix SQLite create_version | 959,776 | 951,536 | -8,240 |
| Lix SQLite fast-forward | 5,152,248 | 5,152,296 | +48 |
| Lix SQLite divergent | 5,353,168 | 5,320,304 | -32,864 |
| Lix RocksDB inserted | 864,114 | 851,910 | -12,204 |
| Lix RocksDB create_version | 865,938 | 853,721 | -12,217 |
| Lix RocksDB fast-forward | 1,022,770 | 1,009,345 | -13,425 |
| Lix RocksDB divergent | 1,384,417 | 1,368,580 | -15,837 |

### Timing

Focused command:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_full_rows|prefix_scan_schema_file_null)/1k'
```

Result: passed. Representative medians:

| row | median | criterion status |
| --- | ---: | --- |
| `raw_sqlite/write_root_all_rows/1k` | 2.4135 ms | no change |
| `raw_sqlite/scan_full_rows/1k` | 1.2061 ms | no change |
| `raw_sqlite/prefix_scan_schema_file_null/1k` | 1.1669 ms | no change |
| `raw_sqlite/write_delta_10pct_updates/1k` | 1.2859 ms | no change |
| `raw_sqlite/write_tombstone_10pct_deletes/1k` | 1.1947 ms | no change |
| `sqlite/write_root_all_rows/1k` | 4.6431 ms | no change |
| `sqlite/scan_full_rows/1k` | 2.2783 ms | no change |
| `sqlite/prefix_scan_schema_file_null/1k` | 2.3420 ms | no change |
| `sqlite/write_delta_10pct_updates/1k` | 1.7931 ms | no change |
| `sqlite/write_tombstone_10pct_deletes/1k` | 1.6065 ms | no change |
| `rocksdb/write_root_all_rows/1k` | 4.1708 ms | no change |
| `rocksdb/scan_full_rows/1k` | 1.4051 ms | no change |
| `rocksdb/prefix_scan_schema_file_null/1k` | 1.3823 ms | no change |
| `rocksdb/write_delta_10pct_updates/1k` | 818.06 µs | no change |
| `rocksdb/write_tombstone_10pct_deletes/1k` | 765.78 µs | no change |

### Verification

```sh
cargo fmt --check
cargo test -p lix_engine json_store:: --features storage-benches
cargo test -p lix_engine tracked_state:: --features storage-benches
cargo check -p lix_engine --features storage-benches --benches
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_full_rows|prefix_scan_schema_file_null)/1k'
```

All commands passed.

Reviewer loop:

- First pass: HIGH none, MEDIUM none, LOW requested direct overflow coverage for narrowed fields.
- Added `json_pack_u32_rejects_oversized_directory_fields`.
- Second pass: HIGH none, MEDIUM none, LOW none. Recommendation: keep.

### Interpretation

```text
Keep as a compact physical-layout cleanup.
Primary axis: storage bytes. Commit-local JSON packs are bounded KV blobs, not large archive files, so u32 payload lengths and offsets are enough while the encoder still rejects oversized packs explicitly.
Timing: no Lix runtime row showed a detected regression in the focused write and scan guardrail.
Keeping explicit offsets preserves the direct random-access fallback shape that the earlier implicit-offset experiment lost.
No backwards shim.
```

## Optimization 32: Varint change-pack local fields

Date: 2026-05-11
Commit: this entry is committed with the optimization

### Change

Changed the unshipped commit-store change-pack format from `LXCP2` to `LXCP3`. `LXCP3` keeps the same logical fields and explicit pack structure, but encodes pack-local lengths, counts, and indexes as checked u32 varints instead of fixed u32 fields:

- commit id length
- shape count
- shape `schema_key` and optional `file_id` lengths
- change count
- per-change id length
- entity-identity part count and part lengths
- shape index
- created-at length

Standalone commit (`LXCM1`), standalone change (`LXCH2`), and membership-pack (`LXMP1`) encodings are unchanged. There is no backwards shim because the format has not shipped.

Added decode coverage for overlong varints, values above `u32::MAX`, and non-canonical encodings such as `80 00`.
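A minimal sketch of a checked canonical u32 varint in the spirit described above; the exact byte rules of `LXCP3` may differ, but the three rejection cases the coverage targets (overlong, above `u32::MAX`, non-canonical such as `80 00`) are the ones the decoder enforces here.

```rust
// LEB128-style encoder: 7 payload bits per byte, high bit = continuation.
fn write_var_u32(out: &mut Vec<u8>, mut value: u32) {
    loop {
        let byte = (value & 0x7f) as u8;
        value >>= 7;
        if value == 0 {
            out.push(byte);
            return;
        }
        out.push(byte | 0x80);
    }
}

// Checked decoder: returns the value and the number of bytes consumed,
// or None for truncated, overlong, above-u32, or non-canonical input.
fn read_var_u32(input: &[u8]) -> Option<(u32, usize)> {
    let mut value: u32 = 0;
    for (i, &byte) in input.iter().enumerate().take(5) {
        let payload = (byte & 0x7f) as u32;
        // The fifth byte may only carry the top 4 bits of a u32 and must end the varint.
        if i == 4 && (byte & 0x80 != 0 || payload > 0x0f) {
            return None;
        }
        value |= payload << (7 * i);
        if byte & 0x80 == 0 {
            // Non-canonical: a multi-byte encoding whose final byte is zero
            // (for example `80 00`) re-encodes shorter, so reject it.
            if i > 0 && payload == 0 {
                return None;
            }
            return Some((value, i + 1));
        }
    }
    None // truncated input
}

fn main() {
    let mut buf = Vec::new();
    write_var_u32(&mut buf, 300);
    assert_eq!(read_var_u32(&buf), Some((300, 2)));
    // Non-canonical zero-extended encoding of 0 must reject.
    assert_eq!(read_var_u32(&[0x80, 0x00]), None);
    // A fifth byte with the continuation bit set (overlong) must reject.
    assert_eq!(read_var_u32(&[0xff, 0xff, 0xff, 0xff, 0xff]), None);
}
```

Rejecting before the decoded value is trusted is what keeps malformed packs from steering later allocations or bounds checks.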
### Storage

Command:

```sh
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
```

Result: passed.

1k rows:

| state | before | after | delta |
| --- | ---: | ---: | ---: |
| raw SQLite inserted | 1,692,456 | 1,692,456 | 0 |
| Lix SQLite inserted | 939,176 | 922,696 | -16,480 |
| Lix SQLite create_version | 951,536 | 935,056 | -16,480 |
| Lix SQLite fast-forward | 5,152,296 | 5,123,600 | -28,696 |
| Lix SQLite divergent | 5,320,304 | 5,308,064 | -12,240 |
| Lix RocksDB inserted | 851,910 | 836,566 | -15,344 |
| Lix RocksDB create_version | 853,721 | 838,350 | -15,371 |
| Lix RocksDB fast-forward | 1,009,345 | 991,281 | -18,064 |
| Lix RocksDB divergent | 1,368,580 | 1,345,089 | -23,491 |

### Timing

Focused command:

```sh
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys|exists_many_exact_keys)/1k'
```

Result: passed. Representative medians:

| row | median | criterion status |
| --- | ---: | --- |
| `raw_sqlite/write_root_all_rows/1k` | 2.3938 ms | no change |
| `raw_sqlite/get_many_exact_keys/1k` | 2.0500 ms | no change |
| `raw_sqlite/exists_many_exact_keys/1k` | 2.0326 ms | no change |
| `raw_sqlite/scan_full_rows/1k` | 1.1694 ms | no change |
| `raw_sqlite/prefix_scan_schema_file_null/1k` | 1.1635 ms | no change |
| `raw_sqlite/write_delta_10pct_updates/1k` | 1.2193 ms | no change |
| `raw_sqlite/write_tombstone_10pct_deletes/1k` | 1.2057 ms | no change |
| `sqlite/write_root_all_rows/1k` | 4.5170 ms | no change |
| `sqlite/get_many_exact_keys/1k` | 2.8695 ms | no change |
| `sqlite/exists_many_exact_keys/1k` | 1.8988 ms | no change |
| `sqlite/scan_full_rows/1k` | 2.2094 ms | no change |
| `sqlite/prefix_scan_schema_file_null/1k` | 2.2438 ms | no change |
| `sqlite/write_delta_10pct_updates/1k` | 1.7065 ms | no change |
| `sqlite/write_tombstone_10pct_deletes/1k` | 1.6626 ms | no change |
| `rocksdb/write_root_all_rows/1k` | 3.9725 ms | improved |
| `rocksdb/get_many_exact_keys/1k` | 2.0688 ms | no change |
| `rocksdb/exists_many_exact_keys/1k` | 1.0354 ms | no change |
| `rocksdb/scan_full_rows/1k` | 1.4142 ms | no change |
| `rocksdb/prefix_scan_schema_file_null/1k` | 1.3397 ms | no change |
| `rocksdb/write_delta_10pct_updates/1k` | 736.65 µs | no change |
| `rocksdb/write_tombstone_10pct_deletes/1k` | 735.43 µs | no change |

### Verification

```sh
cargo fmt --check
cargo test -p lix_engine commit_store::codec:: --features storage-benches
cargo test -p lix_engine commit_store:: --features storage-benches
cargo test -p lix_engine transaction::commit:: --features storage-benches
cargo check -p lix_engine --features storage-benches --benches
cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture
cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys|exists_many_exact_keys)/1k'
```

All commands passed.

Reviewer loop:

- First pass: HIGH none, MEDIUM found a malformed 5-byte varint could exceed `u32::MAX` without being rejected; LOW requested canonical varint rejection.
- Fixed `read_var_usize` to reject fifth-byte continuation and high payload bits, and to reject non-canonical zero-extended encodings.
- Added regressions for overlong, too-large, and non-canonical varints.
- Second pass: HIGH none, MEDIUM none, LOW none.

### Interpretation

```text
Keep as a compact change-pack layout cleanup.
Primary axis: storage bytes. Change packs are commit-local bounded blobs whose per-row shape indexes and string lengths are usually tiny; fixed u32 metadata was pure overhead. Varints are limited to u32, canonical, and malformed packs reject before allocation-heavy paths can trust the decoded value.
Timing: focused physical write/read/scan rows showed no detected Lix regressions. The only statistically visible Lix movement was a RocksDB root write improvement in the final run.
No backwards shim.
```

## Optimization 33: Varint tracked-state delta-pack local fields

### Change

Changed tracked-state delta packs from version `4` to version `5` with no backwards shim. Tree node/key/value encodings remain on their existing fixed-width formats.

Within `LXTD` v5 delta packs, pack-local lengths/counts/indexes now use checked canonical `u32` varints instead of fixed-width `u32` fields:

- pack commit id length
- key prefix count, prefix schema/file id lengths, entry count
- per-entry key/value section lengths
- key prefix index and entity identity part count/lengths
- full source commit id length when needed
- source pack id and source ordinal
- delta change id length
- timestamp lengths

The section-length encoder writes in place, reserving the maximum 5-byte varint header and compacting it after the section is written, avoiding a temporary allocation per key/value section.

Decoder hardening:

- rejects overlong varints
- rejects varints above `u32::MAX`
- rejects non-canonical encodings such as `80 00`
- avoids eager large `Vec::with_capacity(count)` allocations from corrupt decoded counts

Added focused delta-pack tests for the malformed varint cases and updated the roundtrip fixture to assert the v5 varint header fields.
Rerun after replacing temporary section buffers with in-place varint section headers: | row | median | criterion status | | --------------------------------------------- | --------: | ---------------- | | `raw_sqlite/write_root_all_rows/1k` | 2.4927 ms | no change | | `raw_sqlite/get_many_exact_keys/1k` | 2.0536 ms | improved | | `raw_sqlite/exists_many_exact_keys/1k` | 2.1659 ms | no change | | `raw_sqlite/scan_full_rows/1k` | 1.2557 ms | improved | | `raw_sqlite/prefix_scan_schema_file_null/1k` | 1.2060 ms | no change | | `raw_sqlite/write_delta_10pct_updates/1k` | 1.2878 ms | improved | | `raw_sqlite/write_tombstone_10pct_deletes/1k` | 1.2843 ms | improved | | `sqlite/write_root_all_rows/1k` | 4.5495 ms | no change | | `sqlite/get_many_exact_keys/1k` | 2.7998 ms | no change | | `sqlite/exists_many_exact_keys/1k` | 1.8635 ms | no change | | `sqlite/scan_full_rows/1k` | 2.6022 ms | noise threshold | | `sqlite/prefix_scan_schema_file_null/1k` | 2.2652 ms | no change | | `sqlite/write_delta_10pct_updates/1k` | 1.7003 ms | no change | | `sqlite/write_tombstone_10pct_deletes/1k` | 1.6276 ms | no change | | `rocksdb/write_root_all_rows/1k` | 4.3209 ms | no change | | `rocksdb/get_many_exact_keys/1k` | 2.1036 ms | regressed | | `rocksdb/exists_many_exact_keys/1k` | 1.0935 ms | no change | | `rocksdb/scan_full_rows/1k` | 1.4418 ms | no change | | `rocksdb/prefix_scan_schema_file_null/1k` | 1.4424 ms | no change | | `rocksdb/write_delta_10pct_updates/1k` | 754.76 us | improved | | `rocksdb/write_tombstone_10pct_deletes/1k` | 779.52 us | no change | The only Criterion regression in the rerun is RocksDB exact reads, which should not decode delta-pack values on this benchmark path. Treat as a noisy guardrail unless it repeats after later exact-read work. ### Review Loop Reviewer pass: ```text HIGH: none. MEDIUM: none. LOW: none. The v5 delta-pack varint path rejects overlong, above-u32, and non-canonical encodings; section boundaries are preserved; tree node/key/value encodings still use fixed-width helpers; and the count allocation hardening avoids huge malformed-count allocation before truncation failure. ``` ### Verification ```sh cargo fmt --check cargo test -p lix_engine tracked_state::codec:: --features storage-benches cargo test -p lix_engine tracked_state:: --features storage-benches cargo test -p lix_engine transaction::commit:: --features storage-benches cargo check -p lix_engine --features storage-benches --benches cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes|scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys|exists_many_exact_keys)/1k' ``` All commands passed. ### Interpretation ```text Keep. This is another pure physical-layout byte win. The largest clean benefit is on root/create-version storage and RocksDB delta/merge footprints, with no intended logical behavior change and no compatibility shim. The fast-forward SQLite byte count moved up on this run while divergent SQLite and RocksDB merge rows moved down; the inserted/create-version rows show the direct delta-pack-local field compression most clearly. Target is still not met: SQLite root write remains about 4.55 / 2.49 = 1.83x raw SQLite in the latest focused run, and scans are still above the 1.5x budget. 
``` ## Optimization 34: Probe ordered single JSON packs before dedupe Date: 2026-05-11 Commit: this entry is committed with the optimization ### Change Added an early JSON-store read fast path for the common materialization shape where all requested JSON refs come from one commit pack in pack order: - `load_json_bytes_many_in_scope_with_hash_check` now probes a single `JsonReadScopeRef::CommitPacks` pack before building the dedupe `HashMap`, direct-row key list, and request-index remapping. - If the ordered pack probe hits, the loader returns decoded values directly. - If the ordered probe misses because the pack is absent or not an exact ordered match, the existing dedupe/direct-row fallback still runs. A present but non-matching pack is carried into fallback so the same pack is not fetched twice. - Added `ordered_pack_probe_falls_back_to_direct_rows` to cover direct-row fallback after a mismatched ordered pack probe. ### Benchmarks Focused read command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys)/1k' ``` First clean run after the change: | row | median | criterion status | | -------------------------------------------- | --------: | ----------------------- | | `raw_sqlite/get_many_exact_keys/1k` | 2.0699 ms | improved baseline | | `raw_sqlite/scan_full_rows/1k` | 1.2684 ms | improved baseline | | `raw_sqlite/prefix_scan_schema_file_null/1k` | 1.1716 ms | improved baseline | | `sqlite/get_many_exact_keys/1k` | 2.7975 ms | improved | | `sqlite/scan_full_rows/1k` | 2.3225 ms | improved | | `sqlite/prefix_scan_schema_file_null/1k` | 2.3271 ms | no change, lower median | | `rocksdb/get_many_exact_keys/1k` | 1.9847 ms | improved | | `rocksdb/scan_full_rows/1k` | 1.4401 ms | no change | | `rocksdb/prefix_scan_schema_file_null/1k` | 1.4838 ms | no change | Final rerun after fallback refinement: | row | median | criterion status | | -------------------------------------------- | --------: | ----------------------------- | | `raw_sqlite/get_many_exact_keys/1k` | 2.0341 ms | reference/no change | | `raw_sqlite/scan_full_rows/1k` | 1.1597 ms | reference/no change | | `raw_sqlite/prefix_scan_schema_file_null/1k` | 1.1901 ms | reference/no change | | `sqlite/get_many_exact_keys/1k` | 2.8496 ms | no change | | `sqlite/scan_full_rows/1k` | 2.3712 ms | no change | | `sqlite/prefix_scan_schema_file_null/1k` | 2.2558 ms | no change | | `rocksdb/get_many_exact_keys/1k` | 2.1639 ms | noisy regression vs prior run | | `rocksdb/scan_full_rows/1k` | 1.4752 ms | no change | | `rocksdb/prefix_scan_schema_file_null/1k` | 1.4137 ms | no change | Write guardrail command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k' ``` Result: passed, with all measured write rows improved in that guardrail run. ### Storage Storage command: ```sh cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture ``` Result: passed. 
1k rows: | row | bytes | bytes/row | status | | -------------------------------------- | ------: | --------: | --------- | | raw SQLite / inserted | 1692456 | 1692.5 | reference | | Lix SQLite / inserted | 897976 | 898.0 | unchanged | | Lix SQLite / after create_version | 910336 | 910.3 | unchanged | | Lix SQLite / after fast-forward merge | 5152584 | 5152.6 | unchanged | | Lix SQLite / after divergent merge | 5304136 | 5304.1 | unchanged | | Lix RocksDB / inserted | 811772 | 811.8 | unchanged | | Lix RocksDB / after create_version | 813519 | 813.5 | unchanged | | Lix RocksDB / after fast-forward merge | 962750 | 962.8 | unchanged | | Lix RocksDB / after divergent merge | 1306403 | 1306.4 | unchanged | ### Review Loop Reviewer pass: ```text Initial review: HIGH: none. MEDIUM: early ordered probe could read the same single pack twice on non-exact fallback. LOW: none. Follow-up review: HIGH: none. MEDIUM: none. LOW: absent-pack fallback still rereads the missing pack; present/non-exact fallback copies the full pack. Final review: HIGH: none. MEDIUM: none. LOW: none beyond the intentionally accepted full-pack copy on uncommon present/non-exact fallback. The absent-pack path now goes directly to direct-row fallback without rereading the missing pack. ``` ### Verification ```sh cargo fmt --check cargo check -p lix_engine --features storage-benches cargo test -p lix_engine json_store:: --features storage-benches cargo test -p lix_engine tracked_state::context:: --features storage-benches cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys)/1k' cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|write_delta_10pct_updates|write_tombstone_10pct_deletes)/1k' ``` All commands passed. ### Interpretation ```text Keep as a JSON pack read-path optimization. Primary axis: exact reads and full-row scans that materialize all JSON payloads from one commit pack in pack order. The structural win avoids building a dedupe HashMap and direct-row key list before the existing ordered pack loader can succeed. Timing: first clean run showed Criterion improvements for SQLite exact reads, SQLite full scans, and RocksDB exact reads. Final rerun after fallback cleanup held the new median band but did not show another Criterion improvement, as expected. RocksDB exact read was noisy in the final rerun and remains a guardrail to watch. Storage is unchanged. No format change, no backward shim, no benchmark measurement change. This does not complete the <= 1.5x target because SQLite full/prefix scans remain above budget. ``` ## Optimization 35: Pre-size tracked materialization JSON slots Date: 2026-05-11 Commit: this entry is committed with the optimization ### Change Pre-sized the tracked-state materialization JSON side buffers from known entry and projection counts: - `materialize_index_entries` now computes the maximum projected JSON slots as `entries.len() * projected_json_columns`. - `json_refs` and `json_ref_localities` reserve that capacity up front instead of growing from zero while planning rows. 
This follows the same locality principle used in the pack formats: when a scan already has the row count and projected column shape, allocate the side vectors once for the dense payload path. ### Benchmarks Focused read command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null)/1k' ``` Result: passed. | row | median | criterion status | | -------------------------------------------- | --------: | ----------------------- | | `raw_sqlite/get_many_exact_keys/1k` | 2.1087 ms | reference/no change | | `raw_sqlite/scan_full_rows/1k` | 1.1755 ms | reference/no change | | `raw_sqlite/prefix_scan_schema_file_null/1k` | 1.1727 ms | reference/no change | | `sqlite/get_many_exact_keys/1k` | 2.7590 ms | no change | | `sqlite/scan_full_rows/1k` | 2.1942 ms | no change, lower median | | `sqlite/prefix_scan_schema_file_null/1k` | 2.2549 ms | no change | | `rocksdb/get_many_exact_keys/1k` | 2.0010 ms | improved | | `rocksdb/scan_full_rows/1k` | 1.4752 ms | no change | | `rocksdb/prefix_scan_schema_file_null/1k` | 1.4116 ms | no change | ### Storage Storage command: ```sh cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture ``` Result: passed. 1k rows: | row | bytes | bytes/row | status | | -------------------------------------- | ------: | --------: | --------------- | | raw SQLite / inserted | 1692456 | 1692.5 | reference | | Lix SQLite / inserted | 897976 | 898.0 | unchanged | | Lix SQLite / after create_version | 910336 | 910.3 | unchanged | | Lix SQLite / after fast-forward merge | 5152584 | 5152.6 | unchanged | | Lix SQLite / after divergent merge | 5312328 | 5312.3 | unchanged/noisy | | Lix RocksDB / inserted | 811772 | 811.8 | unchanged | | Lix RocksDB / after create_version | 813519 | 813.5 | unchanged | | Lix RocksDB / after fast-forward merge | 962750 | 962.8 | unchanged | | Lix RocksDB / after divergent merge | 1306401 | 1306.4 | unchanged | ### Review Loop Reviewer pass: ```text HIGH: none. MEDIUM: none. LOW: reserves the upper bound for sparse/tombstone-heavy rows. This is bounded to at most two slots per row and is an acceptable hot-path tradeoff for dense payload scans. ``` ### Verification ```sh cargo fmt --check cargo check -p lix_engine --features storage-benches cargo test -p lix_engine tracked_state::materialization:: --features storage-benches cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null)/1k' ``` All commands passed. ### Interpretation ```text Keep as a small tracked materialization allocation cleanup. Primary axis: dense full-row materialization for exact reads and scans. The structural win removes repeated growth of JSON ref/locality side buffers when the planner already knows the maximum slot count. Timing: RocksDB exact reads improved by Criterion. SQLite scan medians moved lower but remained Criterion-neutral. There were no measured regressions in the focused read run. Storage is unchanged. No format change, no backward shim, no benchmark measurement change. 
This does not complete the <= 1.5x target; SQLite full and prefix scans remain above budget and root writes still need a larger cut. ``` ## Optimization 36: Decode scan keys from trusted schema/file prefix Date: 2026-05-11 Commit: this entry is committed with the optimization ### Change Added a tracked-tree scan fast path for the common single schema/file prefix shape: - `scan_ranges` already proves rows are inside one encoded `schema_key + file_id` prefix when a request has one schema key, one non-Any file filter, and no entity filter. - `scan_key_decode_hint` carries that trusted prefix shape through recursive tree scans. - Leaf scans now decode only the entity suffix with `decode_key_with_trusted_prefix`, then materialize the known schema/file fields directly. - The normal full-key decoder and filter recheck remain in place for multi schema/file scans, Any-file scans, entity-filter scans, and all other shapes. Added direct coverage for the trusted suffix decoder and a tree scan test that locks the hinted branch against tombstone visibility, file filtering, and limit handling. ### Benchmarks Focused read command: ```sh cargo bench -p lix_engine --bench json_pointer_crud --features storage-benches -- 'json_pointer_crud/(raw_sqlite/smoke/(select_all_path_value|select_one_by_pk)|raw_storage_(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null))/1k' ``` Result: passed. | row | median | criterion status | | ----------------------------------------- | --------: | ------------------- | | `raw_sqlite/select_all_path_value/1k` | 1.2300 ms | reference/no change | | `raw_sqlite/select_one_by_pk/1k` | 1.0905 ms | reference/no change | | `sqlite/get_many_exact_keys/1k` | 2.8295 ms | no change | | `sqlite/scan_full_rows/1k` | 2.3559 ms | no change | | `sqlite/prefix_scan_schema_file_null/1k` | 2.2079 ms | improved | | `rocksdb/get_many_exact_keys/1k` | 2.0448 ms | no change | | `rocksdb/scan_full_rows/1k` | 1.5172 ms | no change | | `rocksdb/prefix_scan_schema_file_null/1k` | 1.4547 ms | no change | Rerun command: ```sh cargo bench -p lix_engine --bench json_pointer_crud --features storage-benches -- 'json_pointer_crud/(raw_sqlite/smoke/select_all_path_value|raw_storage_(sqlite|rocksdb)/smoke/(scan_full_rows|prefix_scan_schema_file_null))/1k' ``` Result: passed. | row | median | criterion status | | ----------------------------------------- | --------: | ----------------------- | | `raw_sqlite/select_all_path_value/1k` | 1.1630 ms | reference/no change | | `sqlite/scan_full_rows/1k` | 2.2529 ms | no change, lower median | | `sqlite/prefix_scan_schema_file_null/1k` | 2.2044 ms | no change, lower median | | `rocksdb/scan_full_rows/1k` | 1.4055 ms | improved | | `rocksdb/prefix_scan_schema_file_null/1k` | 1.4103 ms | no change | ### Storage Storage command: ```sh cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture ``` Result: passed. 
1k rows: | row | bytes | bytes/row | status | | -------------------------------------- | ------: | --------: | --------------- | | raw SQLite / inserted | 1692456 | 1692.5 | reference | | Lix SQLite / inserted | 897976 | 898.0 | unchanged | | Lix SQLite / after create_version | 910336 | 910.3 | unchanged | | Lix SQLite / after fast-forward merge | 5090808 | 5090.8 | unchanged/noisy | | Lix SQLite / after divergent merge | 5234168 | 5234.2 | unchanged/noisy | | Lix RocksDB / inserted | 811776 | 811.8 | unchanged | | Lix RocksDB / after create_version | 813523 | 813.5 | unchanged | | Lix RocksDB / after fast-forward merge | 962754 | 962.8 | unchanged | | Lix RocksDB / after divergent merge | 1306404 | 1306.4 | unchanged | ### Review Loop Reviewer pass: ```text Initial review: HIGH: none. MEDIUM: none. LOW: trusted prefix helper should make its caller proof sharper; add targeted coverage for the hinted schema/file scan branch with tombstones and limits. Follow-up review: No HIGH/MEDIUM/LOW findings. ``` ### Verification ```sh cargo fmt --check cargo check -p lix_engine --features storage-benches cargo test -p lix_engine tracked_state:: --features storage-benches cargo test -p lix_engine scan_schema_file_prefix_honors_tombstones_and_limit --features storage-benches cargo test -p lix_engine key_codec_decodes_entity_suffix_with_trusted_prefix --features storage-benches cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture cargo bench -p lix_engine --bench json_pointer_crud --features storage-benches -- 'json_pointer_crud/(raw_sqlite/smoke/(select_all_path_value|select_one_by_pk)|raw_storage_(sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null))/1k' cargo bench -p lix_engine --bench json_pointer_crud --features storage-benches -- 'json_pointer_crud/(raw_sqlite/smoke/select_all_path_value|raw_storage_(sqlite|rocksdb)/smoke/(scan_full_rows|prefix_scan_schema_file_null))/1k' ``` All commands passed. ### Interpretation ```text Keep as a scan key-decoding optimization. Primary axis: schema/file prefix scans. The structural win avoids reparsing schema/file fields from every matched encoded key and avoids repeating the key filter check when the encoded prefix range already proved those fields. Timing: SQLite prefix scan improved by Criterion in the first focused run. Rerun medians stayed lower but were Criterion-neutral, while RocksDB full scan improved by Criterion. Exact reads remained neutral, as expected. Storage is unchanged. No format change, no backward shim. This does not complete the <= 1.5x target; SQLite scans and root writes still need larger cuts. ``` ## Optimization 37: Diff pending delta suffixes by changed keys Date: 2026-05-11 Commit: this entry is committed with the optimization ### Change Added a changed-key fast path for pending tracked-state delta chains: - `diff_tree_entries_at_commits` now detects when the two commits share the same projection base and one pending-delta chain is a prefix of the other. - For that prefix/suffix shape, the reader loads only the suffix delta packs, collapses touched keys in chain order, fetches base values for those keys with the existing keyed projection lookup, and emits diff entries for those keys. - Divergent chains, different projection bases, and projection-root-only diffs keep using the existing full diff paths. 
- Diff row materialization is batched by side: all `before` rows and all `after` rows are hydrated through grouped `materialize_tree_values` calls instead of one row at a time. - Added focused coverage for parent-to-child suffix diffs, child-to-parent reverse suffix diffs, and suffix tombstone preservation. This follows the Dolt/Sapling-style rule from the reference systems: diff work should scale with changed keys and delta depth, not with the full materialized state when ancestry proves a delta suffix relation. ### Benchmarks Primary changed-key command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(changed_keys_update_10pct|changed_keys_delta_chain_10x1pct)/1k' ``` First post-patch run: | row | median | criterion status | | ----------------------------------------------------------------- | -------: | ---------------- | | `sqlite/changed_keys_update_10pct/1k` | 2.6094 ms | improved -96.469% | | `sqlite/changed_keys_delta_chain_10x1pct/1k` | 3.3063 ms | improved -72.997% | | `rocksdb/changed_keys_update_10pct/1k` | 1.8167 ms | improved -97.460% | | `rocksdb/changed_keys_delta_chain_10x1pct/1k` | 1.4175 ms | improved -86.437% | Final rerun after review LOW fixes: | row | median | interpretation | | ----------------------------------------------------------------- | -------: | -------------- | | `sqlite/changed_keys_update_10pct/1k` | 3.8562 ms | still massively below the pre-patch ~68 ms baseline | | `sqlite/changed_keys_delta_chain_10x1pct/1k` | 4.6675 ms | still materially below the pre-patch ~10 ms baseline | | `rocksdb/changed_keys_update_10pct/1k` | 2.1797 ms | still massively below the pre-patch ~67 ms baseline | | `rocksdb/changed_keys_delta_chain_10x1pct/1k` | 1.7321 ms | still materially below the pre-patch ~8.7 ms baseline | Read/write guardrail command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null)/1k' ``` Result: passed as a guardrail. The rerun reported no Criterion regressions for exact reads or scans. Medians were noisy and raw SQLite also moved, consistent with the change being isolated to diff planning/materialization. ### Storage Storage command: ```sh cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture ``` Result: passed. 1k rows: | row | bytes | bytes/row | status | | -------------------------------------- | ------: | --------: | --------------- | | raw SQLite / inserted | 1692456 | 1692.5 | reference | | Lix SQLite / inserted | 897976 | 898.0 | unchanged | | Lix SQLite / after create_version | 910336 | 910.3 | unchanged | | Lix SQLite / after fast-forward merge | 5152584 | 5152.6 | unchanged | | Lix SQLite / after divergent merge | 5304136 | 5304.1 | unchanged/noisy | | Lix RocksDB / inserted | 811760 | 811.8 | unchanged | | Lix RocksDB / after create_version | 813507 | 813.5 | unchanged | | Lix RocksDB / after fast-forward merge | 962738 | 962.7 | unchanged | | Lix RocksDB / after divergent merge | 1306390 | 1306.4 | unchanged | ### Review Loop Reviewer pass: ```text Initial review: HIGH: none. MEDIUM: none. LOW: preserve the planned/materialized row-count invariant instead of using get(index).cloned(); add direct reverse-suffix and tombstone suffix coverage. Follow-up review: HIGH: none. MEDIUM: none. LOW: none. 
``` ### Verification ```sh cargo fmt --check cargo check -p lix_engine --features storage-benches cargo test -p lix_engine tracked_state::diff:: --features storage-benches cargo test -p lix_engine tracked_state:: --features storage-benches cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(changed_keys_update_10pct|changed_keys_delta_chain_10x1pct)/1k' cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null)/1k' ``` All commands passed. ### Interpretation ```text Keep as a changed-key physical-layout optimization. Primary axis: diff/changed-key discovery. The structural win avoids rebuilding both full pending states when commit ancestry proves that one side is the other plus a suffix of delta packs. Work now scales with touched suffix keys for the common base->child and base->delta-chain cases. Timing: the first post-patch run showed 72-97% Criterion improvements across SQLite and RocksDB changed-key rows. The final rerun was noisier and slower than the first post-patch run, but still far below the pre-patch tens-of-milliseconds baseline. Storage is unchanged. No format change, no backward shim, and no benchmark measurement change. This does not complete the <= 1.5x target; SQLite scans and root writes still need larger structural cuts. ``` ## Optimization 38: Stream delta-pack entries without per-field sections Date: 2026-05-11 Status: kept and committed. ### Hypothesis Tracked-state delta packs still carried a length-prefixed sub-section around every encoded delta key and every encoded delta value. Those wrappers were not needed for decoding because the key and value fields are already self-delimiting. Removing them should shrink delta packs and avoid per-entry encoder buffer surgery, improving any path that writes or decodes delta packs: root writes, exact reads from unmaterialized roots, scans from single delta packs, and changed-key suffix diffs. This is a clean-cut physical format change. Lix has not shipped, so the delta pack version was bumped from v5 to v6 without a backward shim. ### Change - Bumped `tracked_state` delta pack version from 5 to 6. - Changed `encode_delta_pack_refs` to stream each entry as: `delta key fields` followed by `delta value fields`. - Removed the old `push_var_sized_section` helper and the corresponding per-entry `read_var_sized_slice` boundaries. - Changed `decode_delta_pack` to advance a single cursor through each self-delimiting key/value pair, while retaining the whole-pack trailing-byte check. - Added `delta_pack_stream_decoder_rejects_trailing_entry_bytes` to lock the new stream boundary behavior. Reference-system rationale: this follows the DuckDB/Parquet-style row-group principle of removing unnecessary per-row wrapper overhead from hot physical streams. It is not the full columnar row-group design, but it moves the delta segment format one step toward compact, sequential, projection-friendly pages. 
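A minimal sketch of the streaming shape described above: self-delimiting entries consumed by one cursor, followed by a whole-pack trailing-byte check. The one-byte length-prefixed key/value layout here is invented for the sketch; the real v6 entry fields are richer:

```rust
// Illustrative sketch: stream decode with a single cursor and no per-entry
// wrapper sections, plus the whole-pack trailing-byte rejection.

struct Cursor<'a> {
    buf: &'a [u8],
    pos: usize,
}

#[derive(Debug)]
enum DecodeError {
    Truncated,
    TrailingBytes,
}

impl<'a> Cursor<'a> {
    fn take(&mut self, n: usize) -> Result<&'a [u8], DecodeError> {
        let end = self.pos.checked_add(n).ok_or(DecodeError::Truncated)?;
        let slice = self.buf.get(self.pos..end).ok_or(DecodeError::Truncated)?;
        self.pos = end;
        Ok(slice)
    }

    fn take_u8(&mut self) -> Result<u8, DecodeError> {
        Ok(self.take(1)?[0])
    }
}

struct Entry<'a> {
    key: &'a [u8],
    value: &'a [u8],
}

// Each field knows its own length, so the decoder needs no per-entry section wrapper.
fn decode_entry<'a>(cursor: &mut Cursor<'a>) -> Result<Entry<'a>, DecodeError> {
    let key_len = cursor.take_u8()? as usize;
    let key = cursor.take(key_len)?;
    let value_len = cursor.take_u8()? as usize;
    let value = cursor.take(value_len)?;
    Ok(Entry { key, value })
}

fn decode_pack<'a>(buf: &'a [u8], entry_count: usize) -> Result<Vec<Entry<'a>>, DecodeError> {
    let mut cursor = Cursor { buf, pos: 0 };
    // Cap the up-front reservation so a corrupt count cannot force a huge allocation.
    let mut entries = Vec::with_capacity(entry_count.min(buf.len()));
    for _ in 0..entry_count {
        entries.push(decode_entry(&mut cursor)?);
    }
    // Whole-pack trailing-byte check: the stream must end exactly at the last entry.
    if cursor.pos != buf.len() {
        return Err(DecodeError::TrailingBytes);
    }
    Ok(entries)
}

fn main() {
    let good = [1, b'a', 1, b'x', 1, b'b', 1, b'y'];
    assert_eq!(decode_pack(&good, 2).unwrap().len(), 2);
    // One stray byte after the last entry is rejected by the trailing check.
    let bad = [1, b'a', 1, b'x', 0xff];
    assert!(matches!(decode_pack(&bad, 1), Err(DecodeError::TrailingBytes)));
}
```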
### Benchmarks Command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys|changed_keys_update_10pct|changed_keys_delta_chain_10x1pct)/1k' ``` Result: | row | median | criterion status | | ----------------------------------------------------------------- | -------: | ---------------- | | `sqlite/write_root_all_rows/1k` | 4.7096 ms | improved -6.2208% | | `sqlite/get_many_exact_keys/1k` | 2.8942 ms | improved -12.163% | | `sqlite/scan_full_rows/1k` | 2.2839 ms | improved -5.5690% | | `sqlite/prefix_scan_schema_file_null/1k` | 2.2837 ms | improved -9.4296% | | `sqlite/changed_keys_update_10pct/1k` | 2.3014 ms | improved -39.610% | | `sqlite/changed_keys_delta_chain_10x1pct/1k` | 2.5375 ms | improved -57.367% | | `rocksdb/write_root_all_rows/1k` | 4.5390 ms | no change (-3.8047%) | | `rocksdb/get_many_exact_keys/1k` | 2.1650 ms | improved -15.785% | | `rocksdb/scan_full_rows/1k` | 1.5886 ms | improved -6.3037% | | `rocksdb/prefix_scan_schema_file_null/1k` | 1.5596 ms | improved -8.6713% | | `rocksdb/changed_keys_update_10pct/1k` | 1.7168 ms | improved -41.184% | | `rocksdb/changed_keys_delta_chain_10x1pct/1k` | 1.8068 ms | no change (-2.2237%) | Interpretation: keep. The root-write and scan rows did not clear the 10% bar, but exact reads and changed-key rows did on both backends or on the primary SQLite delta-chain rows. There were no Criterion regressions in the target guardrails. This still does not complete the <= 1.5x target. The remaining root-write and SQLite scan misses need the larger row-group/projection-page work identified by the first-principles pass. ### Storage Command: ```sh cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture ``` Result: passed. 1k rows: | row | bytes | bytes/row | | -------------------------------------- | ------: | --------: | | raw SQLite / inserted | 1692456 | 1692.5 | | Lix SQLite / inserted | 897976 | 898.0 | | Lix SQLite / after create_version | 910336 | 910.3 | | Lix SQLite / after fast-forward merge | 5115504 | 5115.5 | | Lix SQLite / after divergent merge | 5275248 | 5275.2 | | Lix RocksDB / inserted | 809722 | 809.7 | | Lix RocksDB / after create_version | 811467 | 811.5 | | Lix RocksDB / after fast-forward merge | 960498 | 960.5 | | Lix RocksDB / after divergent merge | 1303449 | 1303.4 | ### Review Loop Reviewer pass: ```text Initial review: HIGH: none. MEDIUM: none. LOW: add a focused malformed v6 entry test for the stream boundary behavior. Follow-up review: HIGH: none. MEDIUM: none. LOW: none. ``` ### Verification ```sh cargo fmt --check cargo check -p lix_engine --features storage-benches cargo test -p lix_engine tracked_state::codec:: --features storage-benches cargo test -p lix_engine tracked_state:: --features storage-benches cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|scan_full_rows|prefix_scan_schema_file_null|get_many_exact_keys|changed_keys_update_10pct|changed_keys_delta_chain_10x1pct)/1k' ``` All commands passed. ### Next Work The independent first-principles sidecar pass ranked the next plausible >=10% structural moves as: 1. 
A unified commit/tracked row group that removes duplicated authored row facts between `commit_store.change_pack` and `tracked_state.delta_pack`. 2. A projection-page scan API so key/header/payload-ref/full-row scans decode only the columns they need. 3. Columnar tracked leaves/row groups for key suffixes, scalar headers, timestamp codes, and payload refs. Those are the likely paths for the remaining root-write and SQLite scan misses. ## Optimization 39: Use tracked delta packs as authored commit row groups Date: 2026-05-11 Status: kept and committed. ### Hypothesis Tracked commits still wrote authored row facts twice: 1. `commit_store.change_pack`, for commit/change APIs. 2. `tracked_state.delta_pack`, for tracked projection reads and diffs. Both streams carry the same authored change identity, schema/file/entity key, payload refs, change id, commit locator, and change timestamp. Removing the duplicate commit-store authored pack for tracked commits should cut root-write work and storage materially, while keeping commit-store APIs as views over the same tracked delta row group. This follows the row-group principle from DuckDB/Parquet and the content-addressed shared-structure principle from Dolt/Sapling: store the commit-local row facts once, then expose logical views over that one physical segment. ### Change - Added `CommitStoreWriter::stage_tracked_commit_draft(s)`. - Tracked commit call sites now use the tracked staging path: transaction commit, initialization, test support, live-state test helper, and storage-bench tracked root writes. - The tracked staging path still validates uniqueness/adoption through commit_store, still writes the commit header, and still writes membership packs for adopted rows, but it does not write a duplicate `commit_store.change_pack` for authored tracked rows. - `commit_store::storage::load_change_pack` still prefers a direct commit-store change pack. When it is absent, it reconstructs authored changes from `tracked_state.delta_pack` entries whose locators point at the requested `(commit_id, pack_id)`. - Fallback reconstruction uses `delta.value.updated_at` as `Change.created_at`, because commit-store `Change.created_at` is the change timestamp, not the original entity creation timestamp. - Fallback reconstruction collects by ordinal in a `BTreeMap` and validates dense ordinals from zero, avoiding allocations based on untrusted `source_ordinal` values. - Added focused tests for tracked commit-pack fallback and sparse ordinal rejection. No backward shim was added. Lix has not shipped. 
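A minimal sketch of the ordinal hardening described above: collect rows keyed by their untrusted ordinal in a `BTreeMap`, then require dense ordinals from zero before building the ordered list, so a corrupt `source_ordinal` can never drive an allocation. The `Change` struct and error variants here are placeholders, not the commit-store types:

```rust
// Illustrative sketch: dense-ordinal validation without trusting the ordinal value
// for sizing. The Vec is built from rows actually seen, never from max(ordinal).

use std::collections::BTreeMap;

#[derive(Debug, PartialEq)]
struct Change {
    id: String,
}

#[derive(Debug, PartialEq)]
enum ReconstructError {
    DuplicateOrdinal(u32),
    SparseOrdinals,
}

fn collect_dense(rows: Vec<(u32, Change)>) -> Result<Vec<Change>, ReconstructError> {
    let mut by_ordinal = BTreeMap::new();
    for (ordinal, change) in rows {
        if by_ordinal.insert(ordinal, change).is_some() {
            return Err(ReconstructError::DuplicateOrdinal(ordinal));
        }
    }
    // BTreeMap iterates in key order, so dense means the keys are exactly 0, 1, 2, ...
    if by_ordinal
        .keys()
        .enumerate()
        .any(|(index, &ordinal)| ordinal as usize != index)
    {
        return Err(ReconstructError::SparseOrdinals);
    }
    Ok(by_ordinal.into_values().collect())
}

fn main() {
    let ok = vec![
        (1, Change { id: "b".into() }),
        (0, Change { id: "a".into() }),
    ];
    assert_eq!(
        collect_dense(ok).unwrap(),
        vec![Change { id: "a".into() }, Change { id: "b".into() }]
    );

    let sparse = vec![(0, Change { id: "a".into() }), (7, Change { id: "h".into() })];
    assert_eq!(collect_dense(sparse), Err(ReconstructError::SparseOrdinals));
}
```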
### Benchmarks Primary command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null|changed_keys_update_10pct|changed_keys_delta_chain_10x1pct)/1k' ``` Key medians: | row | median | criterion status | | ----------------------------------------------------------------- | -------: | ---------------- | | `sqlite/write_root_all_rows/1k` | 4.3853 ms | improved -10.019% | | `sqlite/get_many_exact_keys/1k` | 2.7779 ms | improved -12.889% | | `sqlite/scan_full_rows/1k` | 2.9966 ms | regressed +35.198%; contradicted by rerun | | `sqlite/prefix_scan_schema_file_null/1k` | 2.9472 ms | regressed +26.394%; contradicted by rerun | | `sqlite/changed_keys_update_10pct/1k` | 2.4469 ms | regressed +18.411%; contradicted by rerun | | `sqlite/changed_keys_delta_chain_10x1pct/1k` | 2.3284 ms | improved -7.6062% | | `rocksdb/write_root_all_rows/1k` | 4.5553 ms | improved -43.051% | | `rocksdb/get_many_exact_keys/1k` | 2.1086 ms | improved -17.576% | | `rocksdb/scan_full_rows/1k` | 1.5743 ms | improved -8.0804% | | `rocksdb/prefix_scan_schema_file_null/1k` | 1.6102 ms | no change +2.5900% | | `rocksdb/changed_keys_update_10pct/1k` | 1.3325 ms | no change -17.007%; p=0.15 | Rerun command for red-flag rows: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|scan_full_rows|prefix_scan_schema_file_null|changed_keys_update_10pct)/1k' ``` Rerun: | row | median | criterion status | | --------------------------------------------- | -------: | ---------------- | | `sqlite/write_root_all_rows/1k` | 4.6088 ms | no change +1.6643% | | `sqlite/scan_full_rows/1k` | 2.2111 ms | improved -35.751% | | `sqlite/prefix_scan_schema_file_null/1k` | 2.1713 ms | improved -28.351% | | `sqlite/changed_keys_update_10pct/1k` | 2.1967 ms | improved -15.575% | | `rocksdb/write_root_all_rows/1k` | 4.3510 ms | no change -3.8310% | | `rocksdb/scan_full_rows/1k` | 1.5100 ms | no change -1.8000% | | `rocksdb/prefix_scan_schema_file_null/1k` | 1.7379 ms | no change +3.0011% | | `rocksdb/changed_keys_update_10pct/1k` | 1.4297 ms | no change +2.7686% | Interpretation: keep. Criterion timing is noisy, but the primary run clears the >=10% bar on SQLite root writes, RocksDB root writes, and exact reads. The rerun clears the scan red flags. The structural storage reduction below is the stronger evidence that this is a real physical-layout win. ### Storage Command: ```sh cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture ``` Result: passed. 
1k rows: | row | before bytes/row | after bytes/row | delta | | -------------------------------------- | ---------------: | --------------: | ----: | | raw SQLite / inserted | 1692.5 | 1692.5 | 0% | | Lix SQLite / inserted | 898.0 | 724.9 | -19.3% | | Lix SQLite / after create_version | 910.3 | 745.5 | -18.1% | | Lix SQLite / after fast-forward merge | 5115.5 | 3209.3 | -37.3% | | Lix SQLite / after divergent merge | 5275.2 | 5111.4 | -3.1% | | Lix RocksDB / inserted | 809.7 | 655.7 | -19.0% | | Lix RocksDB / after create_version | 811.5 | 657.2 | -19.0% | | Lix RocksDB / after fast-forward merge | 960.5 | 776.7 | -19.1% | | Lix RocksDB / after divergent merge | 1303.4 | 1060.1 | -18.7% | ### Review Loop Reviewer pass: ```text Initial review: HIGH: fallback Change.created_at used delta.value.created_at, but commit-store Change.created_at is the change timestamp. Use delta.value.updated_at. MEDIUM: fallback Vec resized from untrusted source_ordinal before dense-order validation. Avoid allocation from corrupt ordinal values. LOW: stage_tracked_commit_draft(s) leaves a two-step internal invariant: callers must also stage the matching tracked delta pack. Follow-up review: HIGH: none. MEDIUM: none. LOW: the two-step invariant remains acceptable for this first row-group step; eventual API should make tracked commit + delta staging atomic. ``` ### Verification ```sh cargo fmt --check cargo check -p lix_engine --features storage-benches cargo test -p lix_engine commit_store:: --features storage-benches cargo test -p lix_engine tracked_state:: --features storage-benches cargo test -p lix_engine transaction --features storage-benches cargo test -p lix_engine commit_store::storage::tests::tracked_commit_change_pack --features storage-benches cargo test -p lix_engine json_pointer_crud_storage_accounting --features storage-benches -- --ignored --nocapture cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null|changed_keys_update_10pct|changed_keys_delta_chain_10x1pct)/1k' cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(sqlite|rocksdb)/smoke/(write_root_all_rows|scan_full_rows|prefix_scan_schema_file_null|changed_keys_update_10pct)/1k' ``` All commands passed. ### Next Work This is not the final row-group API yet. The next clean cut should remove the two-step invariant by making tracked commit staging and tracked delta staging one atomic commit-local row-group operation. After that, the projection-page scan API remains the likely path for the remaining SQLite scan/root-write ratio misses. ## Optimization 40: Mixed JSON pack indexes in delta packs Implemented a corrected delta-pack JSON reference row-group: when tracked state and JSON payloads are staged into the same commit-local JSON pack, `tracked_state.delta_pack` v7 can encode JSON refs as pack-local ordinals instead of repeating 32-byte hashes. Refs that are not in the pack still fall back to inline hashes, and empty index maps fall back to the old inline mode. The index map comes from the actual `json_store.pack` write order in `JsonStoreWriter::stage_batch_report`, avoiding the failed guessed-ordinal attempt. Decode resolves ordinals against `json_store.pack` refs only when the delta pack declares mixed mode; inline packs do not depend on parsing a JSON pack.
The public internal staging edge now carries explicit `(commit_id, pack_id)` identity and validates that this path is pack-0-only. ### Storage Command: ```sh cargo test -p lix_engine lix_key_value_insert_amplification_north_star --features storage-benches -- --ignored --nocapture ``` Result: passed. 1k rows: | namespace | before bytes | after bytes | delta | | -------------------------- | -----------: | ----------: | ----: | | `commit_store.commit` | 205 | 205 | 0.0% | | `json_store.pack` | 100,064 | 100,064 | 0.0% | | `tracked_state.delta_pack` | 131,968 | 101,841 | -22.8% | | `untracked_state.row` | 386 | 386 | 0.0% | | total write bytes | 232,623 | 202,496 | -12.9% | Read-call accounting improved from 95 to 85 calls for the 1k north-star run because delta and JSON-pack lookup are batched. ### Physical Benchmark Command: ```sh cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null|changed_keys_update_10pct)/1k' ``` Final run: | benchmark | mean | criterion result | | ---------------------------------------------- | --------: | ---------------- | | `raw_sqlite/write_root_all_rows/1k` | 2.6157 ms | no change | | `raw_sqlite/get_many_exact_keys/1k` | 2.3298 ms | no change | | `raw_sqlite/scan_full_rows/1k` | 1.2697 ms | no change | | `raw_sqlite/prefix_scan_schema_file_null/1k` | 1.2167 ms | no change | | `sqlite/write_root_all_rows/1k` | 4.4511 ms | no change | | `sqlite/get_many_exact_keys/1k` | 2.7910 ms | no change | | `sqlite/scan_full_rows/1k` | 2.2780 ms | no change | | `sqlite/prefix_scan_schema_file_null/1k` | 2.3066 ms | no change | | `sqlite/changed_keys_update_10pct/1k` | 2.2692 ms | no change | | `rocksdb/write_root_all_rows/1k` | 4.7723 ms | no change | | `rocksdb/get_many_exact_keys/1k` | 2.1705 ms | improved -7.8977% | | `rocksdb/scan_full_rows/1k` | 1.6106 ms | no change | | `rocksdb/prefix_scan_schema_file_null/1k` | 1.6688 ms | noise +4.5902% | | `rocksdb/changed_keys_update_10pct/1k` | 1.5158 ms | no change | Interpretation: keep. The durable result is a 12.9% root-write byte reduction with no significant physical-benchmark regression in the final run. This does not solve the remaining <=1.5x SQLite scan/write ratios, but it removes a real duplicate 32-byte-ref payload from the row-group layout. ### Review Loop Reviewer pass: ```text Initial review: HIGH: packless/empty-index batches could emit mixed mode with no JSON pack, making inline-only delta packs unreadable. MEDIUM: bare pack-index maps were too easy to misuse across commit/pack ids. Follow-up review: MEDIUM: load_delta_pack decoded JSON pack refs before knowing whether the delta pack was mixed-mode, so corrupt JSON pack data could break inline packs. Final review: HIGH: none. MEDIUM: none. ``` ### Verification ```sh cargo fmt -p lix_engine cargo test -p lix_engine tracked_state::codec:: --features storage-benches cargo test -p lix_engine json_store:: --features storage-benches cargo test -p lix_engine transaction --features storage-benches cargo test -p lix_engine lix_key_value_insert_amplification_north_star --features storage-benches -- --ignored --nocapture cargo bench -p lix_engine --features storage-benches --bench json_pointer_physical -- 'json_pointer_physical/(raw_sqlite|sqlite|rocksdb)/smoke/(write_root_all_rows|get_many_exact_keys|scan_full_rows|prefix_scan_schema_file_null|changed_keys_update_10pct)/1k' ``` All commands passed. 
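For reference, a minimal sketch of the mixed-mode reference idea in this entry: refs covered by the commit-local JSON pack are written as pack-order ordinals, everything else stays an inline 32-byte hash. The `PackIndex`/`JsonRef` names and the index-map shape are illustrative; the real v7 encoding carries more fields and validates pack identity:

```rust
// Illustrative sketch: pack-local ordinal vs inline-hash JSON refs, with the
// index map built from the JSON pack write order so ordinals match the pack.

use std::collections::HashMap;

type Hash = [u8; 32];

#[derive(Debug, Clone, Copy, PartialEq)]
enum JsonRef {
    PackOrdinal(u32), // index into the commit-local JSON pack write order
    InlineHash(Hash), // payload not in this pack: keep the full content hash
}

struct PackIndex {
    by_hash: HashMap<Hash, u32>,
    pack_refs: Vec<Hash>,
}

impl PackIndex {
    fn from_write_order(pack_refs: Vec<Hash>) -> Self {
        let by_hash = pack_refs
            .iter()
            .enumerate()
            .map(|(i, h)| (*h, i as u32))
            .collect();
        Self { by_hash, pack_refs }
    }

    fn encode(&self, hash: &Hash) -> JsonRef {
        match self.by_hash.get(hash) {
            Some(&ordinal) => JsonRef::PackOrdinal(ordinal),
            None => JsonRef::InlineHash(*hash),
        }
    }

    fn resolve(&self, r: JsonRef) -> Option<Hash> {
        match r {
            JsonRef::PackOrdinal(ordinal) => self.pack_refs.get(ordinal as usize).copied(),
            JsonRef::InlineHash(hash) => Some(hash),
        }
    }
}

fn main() {
    let in_pack: Hash = [0xaa; 32];
    let elsewhere: Hash = [0xbb; 32];
    let index = PackIndex::from_write_order(vec![in_pack]);

    // A small ordinal instead of 32 bytes when the payload is in the same pack.
    assert_eq!(index.encode(&in_pack), JsonRef::PackOrdinal(0));
    assert_eq!(index.encode(&elsewhere), JsonRef::InlineHash(elsewhere));

    assert_eq!(index.resolve(JsonRef::PackOrdinal(0)), Some(in_pack));
    assert_eq!(index.resolve(JsonRef::InlineHash(elsewhere)), Some(elsewhere));
}
```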
================================================ FILE: optimization_log9_sql2.md ================================================ # Optimization Log 9: SQL2 Logical CRUD Goal: make the logical work inside `sql2` fast for an isolated JSON-pointer CRUD benchmark surface owned by this log. The pure target is SQL2 overhead: statement classification, SQL parsing, DataFusion logical planning, provider scan planning, DML normalization, SQL runtime collection, parameter conversion, and result conversion. All optimization changes in this log must stay inside the `sql2` module. If a profile shows that SQL2 is slow because it lacks a better read/write primitive from another module, record that as an outside-SQL2 follow-up and keep the code change out of this log. ## Benchmark Fit The scorecard benchmark for this log is: ```sh cargo bench -p lix_engine --bench optimization9_sql2 --features storage-benches -- 'optimization9_sql2/smoke_crud' ``` The important isolated E2E groups are: ```text optimization9_sql2/smoke_crud/lix_sqlite optimization9_sql2/smoke_crud/lix_rocksdb ``` The CRUD operations already exercise the SQL2 path through `SessionContext::execute`: ```text insert_all_rows/1k select_all_path_value/1k select_one_by_pk/1k update_all_values/1k update_one_by_pk/1k delete_all_rows/1k delete_one_by_pk/1k ``` No raw SQLite, raw storage, branch, merge, or shared fixture rows belong to the Log 9 scorecard. If a profile points below SQL2, record the finding as an outside-SQL2 follow-up instead of expanding this benchmark. ```text optimization9_sql2/smoke_crud: isolated Log 9 scorecard optimization9_sql2 diagnostic groups: planning/execution/literal-vs-parameterized microscope ``` This keeps the SQL2 CRUD campaign independent from other benchmark suites and optimization logs. ## Why This Is A SQL2 Benchmark Each Lix CRUD benchmark iteration excludes fixture setup via Criterion `iter_batched`, then measures one user-visible SQL operation. Inside the measured operation, the call path is: ```text SessionContext::execute -> sql2::classify_statement -> sql2::create_logical_plan or sql2::create_write_logical_plan -> build_read_session or build_write_session -> DataFusion create_logical_plan -> provider logical planning / DML normalization -> sql2::execute_logical_plan -> sql2::runtime::collect_dataframe -> query_result_from_batches / affected_rows_from_query_result ``` That is exactly the logical SQL2 surface we need to optimize. The benchmark is especially useful because it covers both: ```text read SQL: SELECT path, value FROM json_pointer ORDER BY path SELECT path, value FROM json_pointer WHERE path = '' write SQL: INSERT INTO json_pointer (path, value) VALUES ... UPDATE json_pointer SET value = ... UPDATE json_pointer SET value = ... WHERE path = '' DELETE FROM json_pointer DELETE FROM json_pointer WHERE path = '' ``` ## Dedicated Diagnostic Bench `optimization9_sql2` is the dedicated SQL2 diagnostic suite for this log: ```sh cargo bench -p lix_engine --bench optimization9_sql2 --features storage-benches ``` It uses local copies of the JSON-pointer fixture and schema so the suite is isolated from `json_pointer_crud` and `plugin-json-v2` paths: ```text packages/engine/benches/optimization9_sql2/pnpm-lock.fixture.json packages/engine/benches/optimization9_sql2/json_pointer.schema.json ``` It is intentionally small and self-contained. Its job is to separate SQL2 planning cost from SQL2 execution cost and to compare literal vs parameterized point CRUD statements. 
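A minimal sketch of the Criterion `iter_batched` shape mentioned under "Why This Is A SQL2 Benchmark": fixture setup runs in the unmeasured setup closure and only the single SQL operation is timed. The `build_fixture`/`run_statement` helpers, the `Fixture` type, and the benchmark id are stand-ins, not the suite's real code:

```rust
// Illustrative sketch of a setup-excluded Criterion benchmark; requires a
// `[[bench]]` target with `harness = false` pointing at this file.

use criterion::{criterion_group, criterion_main, BatchSize, Criterion};

struct Fixture {
    rows: Vec<(String, String)>,
}

// Stand-in for seeding an engine with the 1k-row JSON-pointer fixture (not measured).
fn build_fixture() -> Fixture {
    let rows = (0..1_000)
        .map(|i| (format!("/packages/pkg-{i}"), format!("\"1.0.{i}\"")))
        .collect();
    Fixture { rows }
}

// Stand-in for one measured user-visible SQL operation, e.g. a point SELECT by pk.
fn run_statement(fixture: Fixture) -> usize {
    fixture
        .rows
        .iter()
        .filter(|(path, _)| path == "/packages/pkg-500")
        .count()
}

fn smoke_crud(c: &mut Criterion) {
    c.bench_function("smoke_crud/lix_sqlite/select_one_by_pk/1k", |b| {
        // Setup closure runs per batch and is excluded from the measurement;
        // only the routine closure is timed.
        b.iter_batched(build_fixture, run_statement, BatchSize::SmallInput)
    });
}

criterion_group!(benches, smoke_crud);
criterion_main!(benches);
```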
Benchmark groups: ```text optimization9_sql2/smoke_crud/lix_sqlite optimization9_sql2/smoke_crud/lix_rocksdb optimization9_sql2/planning_only/lix_sqlite optimization9_sql2/planning_only/lix_rocksdb optimization9_sql2/execute_preplanned/lix_sqlite optimization9_sql2/execute_preplanned/lix_rocksdb optimization9_sql2/e2e_literal/lix_sqlite optimization9_sql2/e2e_literal/lix_rocksdb optimization9_sql2/e2e_parameterized/lix_sqlite optimization9_sql2/e2e_parameterized/lix_rocksdb ``` Diagnostic rows: ```text smoke_crud: insert_all_rows/1k select_all_path_value/1k select_one_by_pk/1k update_all_values/1k update_one_by_pk/1k delete_all_rows/1k delete_one_by_pk/1k planning_only: select_all_path_value/1k select_one_by_pk/1k insert_500_values/1k update_all_values/1k delete_all_rows/1k execute_preplanned: select_all_path_value/1k select_one_by_pk/1k e2e_literal: select_one_by_pk/1k update_one_by_pk/1k delete_one_by_pk/1k e2e_parameterized: select_one_by_pk/1k update_one_by_pk/1k delete_one_by_pk/1k ``` The split means: ```text smoke_crud: isolated 1k CRUD scorecard for this optimization log planning_only: parse/classify/session construction/DataFusion logical planning/provider setup execute_preplanned: physical collection/provider scan/result conversion after read SQL is planned e2e_literal vs e2e_parameterized: statement planning plus execution through public SessionContext::execute ``` Write `execute_preplanned` rows are intentionally not present yet. SQL2 write providers currently rely on a transaction-scoped `SqlWriteContext` pointer whose planning and execution must stay inside the same write frame. The suite records write planning separately and uses E2E literal/parameterized rows for write execution until SQL2 has a safe write-plan diagnostic boundary. ## Profiler Workflow Use the profiler before changing code. Profile one operation at a time so the flamegraph is readable. Primary filters: ```sh cargo bench -p lix_engine --features storage-benches --bench optimization9_sql2 -- 'optimization9_sql2/smoke_crud/lix_sqlite' cargo bench -p lix_engine --features storage-benches --bench optimization9_sql2 -- 'optimization9_sql2/planning_only/lix_sqlite/insert_500_values/1k' cargo bench -p lix_engine --features storage-benches --bench optimization9_sql2 -- 'optimization9_sql2/planning_only/lix_sqlite/update_all_values/1k' cargo bench -p lix_engine --features storage-benches --bench optimization9_sql2 -- 'optimization9_sql2/planning_only/lix_sqlite/delete_all_rows/1k' cargo bench -p lix_engine --features storage-benches --bench optimization9_sql2 -- 'optimization9_sql2/execute_preplanned/lix_sqlite/select_one_by_pk/1k' cargo bench -p lix_engine --features storage-benches --bench optimization9_sql2 -- 'optimization9_sql2/e2e_parameterized/lix_sqlite/select_one_by_pk/1k' ``` Repeat the same filters for `lix_rocksdb` only after the SQLite profile has a clear hypothesis. If both backends show the same SQL2 stack, optimize SQL2. If they diverge below the SQL2 boundary, capture the missing primitive or backend cost as a later outside-SQL2 optimization lead. 
Record the top stacks in each entry with this classification: ```text sql2 planning: classify_statement, validate_supported_statement_ast, build_*_session, create_logical_plan, validate_supported_logical_plan, validate_json_predicates_in_logical_plan, provider table scan planning sql2 execution glue: execute_logical_plan, collect_dataframe, parameter conversion, query_result_from_batches, affected row conversion provider logical work: predicate extraction, projection mapping, DML normalization, insert/update/delete batch construction, value JSON coercion not SQL2: backend IO, tracked-state materialization, delta decoding, commit graph, RocksDB/SQLite storage write application outside-SQL2 follow-up: missing read/write primitive, storage/provider API limitation, layout issue, or backend-specific behavior that SQL2 cannot fix internally ``` ## Initial Scorecard The scorecard for this log is isolated in `optimization9_sql2/smoke_crud`. Do not use rows from any other benchmark suite as Log 9 baselines. Baseline command: ```sh cargo bench -p lix_engine --features storage-benches --bench optimization9_sql2 ``` Baseline commit: ```text 1010c12c plus uncommitted Log 9 benchmark files ``` Initial isolated 1k smoke CRUD rows after rebasing onto `origin/physical-layout-manual-goal-ii-`: | operation | Lix SQLite | Lix RocksDB | | ----------------------- | ---------------: | ---------------: | | `insert_all_rows` | 62.714-72.740 ms | 52.627-57.653 ms | | `select_all_path_value` | 18.980-20.138 ms | 9.9962-11.163 ms | | `select_one_by_pk` | 7.6860-9.2848 ms | 2.2846-2.7899 ms | | `update_all_values` | 53.337-123.20 ms | 19.038-20.238 ms | | `update_one_by_pk` | 8.5795-13.785 ms | 4.5116-4.7572 ms | | `delete_all_rows` | 30.914-33.230 ms | 21.999-25.876 ms | | `delete_one_by_pk` | 7.2750-7.9671 ms | 4.4184-5.2644 ms | Initial `optimization9_sql2` diagnostic rows after rebase: | group | operation | Lix SQLite | Lix RocksDB | | -------------------- | -------------------------- | ---------------: | ---------------: | | `planning_only` | `select_all_path_value/1k` | 3.3115-3.7821 ms | 1.6485-1.9012 ms | | `planning_only` | `select_one_by_pk/1k` | 2.9706-4.9726 ms | 1.6292-1.8691 ms | | `planning_only` | `insert_500_values/1k` | 11.099-11.953 ms | 11.316-12.420 ms | | `planning_only` | `update_all_values/1k` | 3.5833-3.9703 ms | 2.1247-2.3981 ms | | `planning_only` | `delete_all_rows/1k` | 3.6369-4.0269 ms | 2.0014-2.2900 ms | | `execute_preplanned` | `select_all_path_value/1k` | 8.7746-9.3653 ms | 8.8134-9.7773 ms | | `execute_preplanned` | `select_one_by_pk/1k` | 1.3400-1.4785 ms | 1.4099-1.8420 ms | | `e2e_literal` | `select_one_by_pk/1k` | 3.8340-4.1884 ms | 2.4221-3.5113 ms | | `e2e_literal` | `update_one_by_pk/1k` | 7.0420-8.2160 ms | 4.4839-5.3388 ms | | `e2e_literal` | `delete_one_by_pk/1k` | 7.4717-7.9987 ms | 4.2601-5.5313 ms | | `e2e_parameterized` | `select_one_by_pk/1k` | 3.7137-4.0738 ms | 2.1038-2.4607 ms | | `e2e_parameterized` | `update_one_by_pk/1k` | 7.5761-9.0774 ms | 4.1165-4.7877 ms | | `e2e_parameterized` | `delete_one_by_pk/1k` | 7.4651-8.2425 ms | 4.4257-5.1296 ms | Hetzner CX33 baseline rerun on 2026-05-11: ```text Machine: Hetzner CX33 Host: ubuntu-32gb-hil-1 CPU: 8 vCPU, AMD EPYC-Milan Processor, KVM Kernel: Linux 6.8.0-90-generic x86_64 Commit: 9ff4f9cb Command: cargo bench -p lix_engine --features storage-benches --bench optimization9_sql2 ``` Hetzner CX33 isolated 1k smoke CRUD rows: | operation | Lix SQLite | Lix RocksDB | | ----------------------- | ---------------: | 
---------------: | | `insert_all_rows` | 70.105-71.910 ms | 67.767-68.316 ms | | `select_all_path_value` | 17.530-17.943 ms | 13.421-13.936 ms | | `select_one_by_pk` | 6.6463-6.9219 ms | 2.9247-3.0022 ms | | `update_all_values` | 34.429-35.507 ms | 25.341-25.724 ms | | `update_one_by_pk` | 10.367-10.581 ms | 6.3116-6.4393 ms | | `delete_all_rows` | 35.935-36.724 ms | 26.690-27.071 ms | | `delete_one_by_pk` | 10.616-10.778 ms | 6.4811-6.6185 ms | Hetzner CX33 `optimization9_sql2` diagnostic rows: | group | operation | Lix SQLite | Lix RocksDB | | -------------------- | -------------------------- | ---------------: | ---------------: | | `planning_only` | `select_all_path_value/1k` | 5.7264-5.8371 ms | 2.1837-2.3126 ms | | `planning_only` | `select_one_by_pk/1k` | 5.3823-5.5152 ms | 2.2103-2.2705 ms | | `planning_only` | `insert_500_values/1k` | 14.105-14.283 ms | 12.987-13.275 ms | | `planning_only` | `update_all_values/1k` | 6.3326-6.4489 ms | 2.7961-2.8708 ms | | `planning_only` | `delete_all_rows/1k` | 6.2279-7.0361 ms | 2.6768-2.7504 ms | | `execute_preplanned` | `select_all_path_value/1k` | 11.515-11.711 ms | 11.964-12.364 ms | | `execute_preplanned` | `select_one_by_pk/1k` | 1.5469-1.5784 ms | 1.6215-1.6790 ms | | `e2e_literal` | `select_one_by_pk/1k` | 6.3640-6.4680 ms | 2.9476-2.9911 ms | | `e2e_literal` | `update_one_by_pk/1k` | 9.9933-10.128 ms | 6.1048-6.2638 ms | | `e2e_literal` | `delete_one_by_pk/1k` | 10.509-11.015 ms | 6.4548-6.6268 ms | | `e2e_parameterized` | `select_one_by_pk/1k` | 6.5033-6.6564 ms | 3.1192-3.2197 ms | | `e2e_parameterized` | `update_one_by_pk/1k` | 10.169-11.111 ms | 6.4063-6.6222 ms | | `e2e_parameterized` | `delete_one_by_pk/1k` | 10.407-10.631 ms | 6.4029-6.5440 ms | Interpretation: ```text The benchmark suite is good enough to start optimizing SQL2 CRUD now. The highest-value SQL2 profiles are insert_all_rows, delete_all_rows, and update_all_values, with PK read/update/delete as planning/provider overhead probes. Full scan is the lowest priority within this isolated scorecard because it is already much closer than insert and bulk writes. ``` SQL2-only boundary: ```text Allowed edit surface: packages/engine/src/sql2/** Not allowed in this log: storage layout changes tracked-state reader/writer changes live-state changes transaction staging changes outside SQL2 benchmark success achieved by changing backend behavior Required handling for outside-SQL2 findings: Record the profile evidence, name the missing primitive or non-SQL2 bottleneck, and leave it for a future non-SQL2 optimization log. ``` ## Optimization Order 1. `insert_all_rows` 2. `delete_all_rows` 3. `update_all_values` 4. `update_one_by_pk` and `delete_one_by_pk` 5. `select_one_by_pk` 6. `select_all_path_value` Rationale: ```text Insert is still hundreds of milliseconds for 1k rows and executes the richest SQL2 write path: large VALUES planning, JSON literal coercion, insert normalization, identity/default handling, and staging. Bulk delete and update are the best probes for avoidable provider logical work over many current rows. Single-row PK operations isolate per-statement SQL2 overhead. They are small in absolute time now, but they reveal whether SQL2 is doing too much planning or provider setup for point operations. ``` ## Candidate Optimization Themes Do not implement these blindly. Each needs a profile entry first. 
```text
Session/catalog setup:
  avoid rebuilding expensive read/write DataFusion session state per statement when visible schemas and functions are unchanged inside a benchmark session (a sketch follows the entry template below)
Logical-plan validation:
  collapse repeated recursive walks over the same DataFusion logical plan
  combine support validation, JSON predicate validation, notices, and statement kind classification where possible
DML normalization:
  reduce per-row cloning and JSON string/value round trips for INSERT VALUES
  build typed row batches directly from DataFusion expressions when safe
Provider scan planning:
  push path equality filters into exact-key load requests early
  avoid broad scan request construction for single-PK SELECT/UPDATE/DELETE
Result conversion:
  avoid unnecessary cloning of column metadata and JSON values
  keep affected-row write results minimal
Runtime collection:
  make SQL2 collect only the needed batches/columns for affected-row DML
  avoid full row materialization when the operation only needs a count
```

## Keep Criteria

For every kept optimization:

```text
primary:
  improves the targeted Lix SQLite 1k smoke CRUD row by >= 10%
  does not regress any other Lix SQLite 1k CRUD row by > 5%
cross-backend:
  improves or stays neutral on the matching Lix RocksDB row
  any backend split is explained by profile evidence
guardrails:
  benchmark suite stays isolated to optimization9_sql2 fixture/schema files
  any non-SQL2 bottleneck is recorded as outside-SQL2 follow-up
  sql2 and code-structure tests pass
```

Verification commands:

```sh
cargo bench -p lix_engine --features storage-benches --bench optimization9_sql2
cargo test -p lix_engine sql2
cargo test -p lix_engine --test code_structure sql2
```

## Entry Template

Use one entry per kept SQL2 optimization.

```text
## Optimization N: <short title>

Commit: <sha> or uncommitted on <sha>
Target operation: insert_all_rows | select_all_path_value | select_one_by_pk | update_all_values | update_one_by_pk | delete_all_rows | delete_one_by_pk

Profile before:
  command:
  top SQL2 stacks:
  non-SQL2 stacks:
  conclusion:

Change:
  What changed?
  Why does this reduce logical SQL2 work?
  What semantic invariant is preserved?

Results:
  Include impacted optimization9_sql2 diagnostic rows.
  Include optimization9_sql2/smoke_crud Lix SQLite and Lix RocksDB rows for every CRUD operation.

Guardrails:
  Confirm the benchmark still uses only local optimization9_sql2 fixture/schema files.

Outside-SQL2 follow-up:
  If the profile points to a missing primitive or non-SQL2 bottleneck, record it here.
  Do not include that implementation in this log.

Decision: Keep, revert, or follow-up.
```
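To make the session/catalog setup theme concrete, here is a minimal sketch of reusing a read-planning session until the visible catalog changes. `Sql2Planner`, `build_read_session`, and the generation counter are hypothetical illustrations for this log, not engine APIs.

```rust
// Hedged sketch of the "session/catalog setup" theme: keep one planning
// session alive and rebuild it only when the catalog generation (schemas
// and functions visible to SQL2) actually changes.
use datafusion::prelude::SessionContext;

struct CachedReadSession {
    generation: u64,
    ctx: SessionContext,
}

struct Sql2Planner {
    cached: Option<CachedReadSession>,
}

impl Sql2Planner {
    /// Return the cached read-planning session, rebuilding it only when the
    /// caller-observed catalog generation differs from the cached one.
    fn read_session(&mut self, current_generation: u64) -> &SessionContext {
        let stale = self
            .cached
            .as_ref()
            .map_or(true, |cached| cached.generation != current_generation);
        if stale {
            self.cached = Some(CachedReadSession {
                generation: current_generation,
                ctx: build_read_session(),
            });
        }
        &self.cached.as_ref().expect("session cached above").ctx
    }
}

// Hypothetical stand-in: build a SessionContext with Lix providers registered.
fn build_read_session() -> SessionContext {
    SessionContext::new()
}
```

The design choice the theme calls out is exactly the cache key: session construction is skipped whenever nothing visible to SQL2 planning has changed, which is the common case inside a benchmark session.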
## Optimization 1: Reuse Parsed DataFusion Statement For Write Planning

Commit: uncommitted on 80f4f68a

Target operation: logical planning for optimization9_sql2/planning_only/lix_sqlite/insert_500_values/1k

Profile before:

command:
perf record --output=/tmp/sql2-insert-plan.perf.data -F 499 -g --call-graph dwarf target/release/deps/optimization9_sql2-bd3fa4efccf19070 --bench 'optimization9_sql2/planning_only/lix_sqlite/insert_500_values/1k' --profile-time 8
perf report --stdio --quiet --no-inline --input=/tmp/sql2-insert-plan.perf.data --no-call-graph --sort=symbol --percent-limit=1

top SQL2 stacks:
sqlparser::tokenizer::Tokenizer::tokenize_quoted_string: 17.10% self
sqlparser parser/tokenizer helpers collectively appeared below that hotspot

non-SQL2 stacks:
_int_malloc: 7.35%, __memmove_avx_unaligned_erms: 6.55%, malloc: 2.31%

conclusion:
SQL2 write planning parsed the same INSERT text twice: once for Lix AST validation and history-target extraction, and again through DataFusion planning. Large literal INSERT statements spend significant time tokenizing quoted JSON strings, so the duplicate parse is a first-order logical-planning bottleneck.

Change:

create_write_logical_plan now parses once into DataFusion's Statement with the SQL session parser, validates supported Lix SQL against that AST, extracts read-only history DML targets from the same AST, and passes the same Statement to SessionState::statement_to_plan.
The cheap parse/validate/read-only phase now runs before write provider registration. The write session is built only after parse and policy checks succeed.
DML target extraction normalizes unquoted identifiers to lowercase while preserving quoted identifiers, matching DataFusion's identifier normalization rule.
Added coverage for read-only history DML through lowercase, uppercase, schema-qualified uppercase, and EXPLAIN-wrapped DELETE targets.
Read planning remains on the previous path, so this optimization is scoped to SQL2 write planning.

Best-practice references:

DataFusion exposes and uses the parse-once lower-level flow, sql_to_statement followed by statement_to_plan (artifact/datafusion/datafusion/core/src/execution/session_state.rs).
DataFusion normalizes unquoted identifiers before planning (artifact/datafusion/datafusion/sql/src/planner.rs and artifact/datafusion/datafusion/sql/src/utils.rs).
SpiceAI intercepts parsed statements before planning for DataFusion integration work (artifact/spiceai/crates/runtime/src/datafusion/planner/mod.rs).
Turso's standalone DB flow parses SQL into an AST before translation/codegen (artifact/turso/docs/manual.md).

Semantic invariant preserved:

Statement support checks, history-view read-only enforcement, and DataFusion logical planning all inspect the same parsed statement. Unsupported DataFusion extension statements are still rejected before planning.
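The following is a minimal sketch of this parse-once write-planning flow, assuming recent DataFusion `SessionState` APIs; the two helper functions are hypothetical stand-ins for the Lix-side checks described above, not the actual engine functions.

```rust
// Hedged sketch: one parser pass feeds both the Lix AST checks and DataFusion
// planning, so large quoted JSON literals are tokenized only once.
use datafusion::error::Result;
use datafusion::logical_expr::LogicalPlan;
use datafusion::prelude::SessionContext;
use datafusion::sql::parser::Statement;

async fn plan_write_once(ctx: &SessionContext, sql: &str) -> Result<LogicalPlan> {
    let state = ctx.state();

    // Parse exactly once into DataFusion's Statement AST.
    let statement: Statement = state.sql_to_statement(sql, "generic")?;

    // Run the cheap AST-level checks against the same parse result,
    // before any write session/provider construction.
    validate_supported_statement_ast(&statement)?;
    let _history_targets = extract_history_dml_targets(&statement)?;

    // Hand the same Statement to the planner; no second tokenizer pass.
    state.statement_to_plan(statement).await
}

// Hypothetical stand-in for the Lix statement support checks.
fn validate_supported_statement_ast(_stmt: &Statement) -> Result<()> {
    Ok(())
}

// Hypothetical stand-in for history DML target extraction. Per the entry
// above, unquoted identifiers are lowercased and quoted identifiers are
// preserved, matching DataFusion's normalization rule.
fn extract_history_dml_targets(_stmt: &Statement) -> Result<Vec<String>> {
    Ok(Vec::new())
}
```

Because the 500-row INSERT's quoted JSON literals now pass through the tokenizer once instead of twice, the `tokenize_quoted_string` stack should shrink, which is what the before/after profiles in this entry measure.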
Results:

Focused planning rows after review fixes:
optimization9_sql2/planning_only/lix_sqlite/insert_500_values/1k: [7.5696 ms 7.7067 ms 7.9807 ms] vs logged baseline [14.105 ms 14.283 ms], about 44-46% faster.
optimization9_sql2/planning_only/lix_sqlite/update_all_values/1k: [6.5332 ms 6.6237 ms 6.7164 ms], neutral vs logged baseline [6.3326 ms 6.4489 ms].
optimization9_sql2/planning_only/lix_sqlite/delete_all_rows/1k: [6.3737 ms 6.4816 ms 6.6179 ms], neutral vs logged baseline [6.2279 ms 7.0361 ms].

Smoke CRUD guardrail after review fixes:

Lix SQLite:
insert_all_rows: [59.787 ms 60.000 ms 60.251 ms], faster than baseline [70.105 ms 71.910 ms]
select_all_path_value: [16.936 ms 17.095 ms 17.266 ms], neutral/faster than baseline [17.530 ms 17.943 ms]
select_one_by_pk: [6.4369 ms 6.5101 ms 6.5946 ms], neutral/faster than baseline [6.6463 ms 6.9219 ms]
update_all_values: [33.796 ms 34.192 ms 34.606 ms], neutral/faster than baseline [34.429 ms 35.507 ms]
update_one_by_pk: [10.334 ms 10.408 ms 10.480 ms], neutral vs baseline [10.367 ms 10.581 ms]
delete_all_rows: [34.715 ms 34.957 ms 35.215 ms], neutral/faster than baseline [35.935 ms 36.724 ms]
delete_one_by_pk: [10.624 ms 10.686 ms 10.751 ms], neutral vs baseline [10.616 ms 10.778 ms]

Lix RocksDB:
insert_all_rows: [59.644 ms 60.006 ms 60.461 ms], faster than baseline [67.767 ms 68.316 ms]
select_all_path_value: [13.053 ms 13.142 ms 13.238 ms], neutral/faster than baseline [13.421 ms 13.936 ms]
select_one_by_pk: [2.9783 ms 2.9920 ms 3.0078 ms], neutral vs baseline [2.9247 ms 3.0022 ms]
update_all_values: [25.567 ms 25.748 ms 25.948 ms], neutral vs baseline [25.341 ms 25.724 ms]
update_one_by_pk: [6.3481 ms 6.4059 ms 6.4673 ms], neutral vs baseline [6.3116 ms 6.4393 ms]
delete_all_rows: [27.078 ms 27.294 ms 27.545 ms], neutral vs baseline [26.690 ms 27.071 ms]
delete_one_by_pk: [6.4115 ms 6.4388 ms 6.4659 ms], neutral/faster than baseline [6.4811 ms 6.6185 ms]

Post-change profile:

command:
perf record --output=/tmp/sql2-insert-plan-after.perf.data -F 499 -g --call-graph dwarf target/release/deps/optimization9_sql2-bd3fa4efccf19070 --bench 'optimization9_sql2/planning_only/lix_sqlite/insert_500_values/1k' --profile-time 8
perf report --stdio --quiet --no-inline --input=/tmp/sql2-insert-plan-after.perf.data --no-call-graph --sort=symbol --percent-limit=1

result:
sqlparser::tokenizer::Tokenizer::tokenize_quoted_string dropped from 17.10% to 8.53% self.
This profiler percentage is diagnostic evidence that the targeted duplicate-parse hot stack was reduced; it is not the keep threshold. The keep threshold is benchmark speedup: insert_500_values planning improved by about 44-46%, and the corresponding SQLite smoke insert row improved by about 14-17%, both above the required >=10% speed improvement.
Remaining top entries are allocator/memory movement or broadly distributed DataFusion/schema work.

Review:

First review reported no HIGH findings and two MEDIUM findings: normalize unquoted DML target identifiers consistently with DataFusion; parse/validate before write session/provider construction. Both MEDIUM findings were implemented.
Second review reported no HIGH or MEDIUM findings.

Guardrails:

Benchmark remains isolated to optimization9_sql2 fixture/schema files.
SQL2 and code-structure tests pass:
cargo test -p lix_engine execute_sql_rejects_writes_to_history_views_before_planning --features storage-benches
cargo test -p lix_engine sql2 --features storage-benches

Outside-SQL2 follow-up:

SessionContext::execute still performs a separate pre-SQL2 classification parse in packages/engine/src/session/execute.rs before dispatching to create_write_logical_plan. This is outside the SQL2-only implementation scope and should be addressed separately if end-to-end parse elimination is desired.

Decision: Keep.

Completion audit:

Additional post-change logical-planning profiles were used as diagnostics after verifying the benchmark speedup.
They check whether the optimization exposed another dominant planning stack, but the keep/revert decision remains based on >=10% benchmark speed improvement: insert_500_values/1k: sqlparser::tokenizer::Tokenizer::tokenize_quoted_string: 8.53% _int_malloc: 8.01% select_all_path_value/1k: _int_malloc: 6.24% malloc: 2.78% DataFusion simplification symbols below 1% select_one_by_pk/1k: _int_malloc: 6.43% malloc: 2.36% DataFusion simplification symbols below 1% delete_all_rows/1k: _int_malloc: 7.38% malloc: 2.30% DataFusion simplification symbols below 1% The previous insert-planning SQL tokenizer hot stack was reduced, and the benchmark speedup exceeds the required >=10% improvement. The remaining visible costs are allocator/general DataFusion work spread across the logical-planning profiles. ================================================ FILE: package.json ================================================ { "private": true, "name": "monorepo", "type": "module", "scripts": { "build": "pnpm exec nx run-many --nx-bail --target=build --parallel", "bench:engine:baseline": "node packages/engine/scripts/log-bench-baseline.mjs", "postinstall": "command -v cargo >/dev/null 2>&1 && cargo fetch || true", "test": "pnpm test:js && pnpm test:rs", "test:js": "pnpm exec nx run-many --target=test --parallel", "test:rs": "cargo test --workspace", "lint": "pnpm lint:js && pnpm lint:rs", "lint:js": "pnpm exec nx run-many --target=lint --parallel", "lint:rs": "cargo fmt --all --check && cargo clippy --workspace --all-targets", "format": "pnpm exec nx run-many --target=format --parallel", "clean": "pnpm recursive run clean && rm -rf ./.env ./node_modules", "----- CI ---- used to test the codebase on every commit": "", "ci": "pnpm lint && pnpm test && pnpm build" }, "packageManager": "pnpm@10.23.0", "engines": { "node": ">=22", "pnpm": ">=10 <11" }, "devDependencies": { "@changesets/cli": "^2.29.7", "@vitest/coverage-v8": "^3.1.1", "nx": "^21.0.0", "nx-cloud": "^19.1.0", "vitest": "^3.1.1" } } ================================================ FILE: packages/cli/Cargo.toml ================================================ [package] name = "lix_cli" version = "0.1.0" edition = "2021" [[bin]] name = "lix" path = "src/main.rs" [dependencies] async-trait = "0.1" clap = { version = "4.5.31", features = ["derive"] } lix_rs_sdk = { path = "../rs-sdk" } serde = { version = "1", features = ["derive"] } serde_json = "1.0" pollster = "0.4" comfy-table = "7.1" base64 = "0.22" sha2 = "0.10" tokio = { version = "1", features = ["rt"] } ================================================ FILE: packages/cli/src/app/context.rs ================================================ use std::path::PathBuf; #[derive(Debug, Clone)] pub struct AppContext { pub lix_path: Option, pub no_hints: bool, } ================================================ FILE: packages/cli/src/app/mod.rs ================================================ mod context; mod run; mod welcome; pub use context::AppContext; pub use run::run; ================================================ FILE: packages/cli/src/app/run.rs ================================================ use super::context::AppContext; use super::welcome; use crate::cli::root::{Cli, Command}; use crate::commands; use crate::error::CliError; use crate::hints; use clap::{CommandFactory, Parser}; use std::io::Write; pub fn run() -> Result<(), CliError> { let cli = Cli::parse(); let no_hints = cli.no_hints; let lix_path = cli.path; let command = match cli.command { Some(command) => command, None => { 
welcome::print_banner(lix_path.as_deref()); Cli::command().print_help().ok(); println!(); return Ok(()); } }; let context = AppContext { lix_path, no_hints }; let result = match command { Command::Exp(exp_command) => commands::exp::run(&context, exp_command), Command::Init(init_command) => commands::init::run(init_command), Command::Redo(redo_command) => commands::redo::run(&context, redo_command), Command::Sql(sql_command) => commands::sql::run(&context, sql_command), Command::Undo(undo_command) => commands::undo::run(&context, undo_command), Command::Version(version_command) => commands::version::run(&context, version_command), }; match result { Ok(output) => { if !no_hints { hints::render_hints(&output.hints); } Ok(()) } Err(err) => { let mut stderr = std::io::stderr().lock(); render_error_output(&err, no_hints, &mut stderr); Err(err) } } } /// Render a `CliError` to the given writer: the error message on one line, /// followed by a `hint:` line when hints are enabled and a hint is attached. /// Factored out of [`run`] so the rendering path is unit-testable. pub(crate) fn render_error_output(err: &CliError, no_hints: bool, out: &mut W) { writeln!(out, "{err}").ok(); if !no_hints { for hint in hints::hint_from_error(err) { writeln!(out, "hint: {hint}").ok(); } } } #[cfg(test)] mod tests { use super::*; use lix_rs_sdk::LixError; fn rendered(err: &CliError, no_hints: bool) -> String { let mut buf: Vec = Vec::new(); render_error_output(err, no_hints, &mut buf); String::from_utf8(buf).expect("render output is valid utf-8") } #[test] fn renders_hint_line_when_error_carries_hint() { let err = CliError::from_lix( "sql execution failed", LixError::new( "LIX_ERROR_UNSUPPORTED_WRITE_EXPRESSION", "json(...) is not supported", ) .with_hint("use lix_json('...') instead"), ); let out = rendered(&err, false); assert_eq!( out, "sql execution failed: json(...) 
is not supported\n\ hint: use lix_json('...') instead\n" ); } #[test] fn suppresses_hint_when_no_hints_is_set() { let err = CliError::from_lix( "sql execution failed", LixError::new("CODE", "boom").with_hint("try the fix"), ); let out = rendered(&err, true); assert_eq!(out, "sql execution failed: boom\n"); } #[test] fn omits_hint_line_when_error_has_no_hint() { let err = CliError::from_lix("ctx", LixError::new("CODE", "boom")); let out = rendered(&err, false); assert_eq!(out, "ctx: boom\n"); } #[test] fn omits_hint_line_for_non_lix_error_variants() { let err = CliError::msg("plain message"); let out = rendered(&err, false); assert_eq!(out, "plain message\n"); } } ================================================ FILE: packages/cli/src/app/welcome.rs ================================================ use std::io::IsTerminal; use std::path::{Path, PathBuf}; const CYAN: &str = "\x1b[38;2;8;181;214m"; const RESET: &str = "\x1b[0m"; const LOGO: [&str; 6] = [ "██╗ ██╗██╗ ██╗", "██║ ██║╚██╗██╔╝", "██║ ██║ ╚███╔╝ ", "██║ ██║ ██╔██╗ ", "███████╗██║██╔╝ ██╗", "╚══════╝╚═╝╚═╝ ╚═╝", ]; const TAGLINE: &str = "change control system for everything"; pub fn print_banner(explicit_lix_path: Option<&Path>) { let color = use_color(); let (cyan, reset) = if color { (CYAN, RESET) } else { ("", "") }; let version = env!("CARGO_PKG_VERSION"); let info = [ String::new(), format!("lix v{version}"), TAGLINE.to_string(), current_dir_display(), describe_lix_state(explicit_lix_path), String::new(), ]; println!(); for (logo_line, text) in LOGO.iter().zip(info.iter()) { if text.is_empty() { println!(" {cyan}{logo_line}{reset}"); } else { println!(" {cyan}{logo_line}{reset} {text}"); } } println!(); } fn use_color() -> bool { std::io::stdout().is_terminal() && std::env::var_os("NO_COLOR").is_none() } fn current_dir_display() -> String { let cwd = match std::env::current_dir() { Ok(path) => path, Err(_) => return String::new(), }; if let Some(home) = std::env::var_os("HOME") { let home = PathBuf::from(home); if let Ok(relative) = cwd.strip_prefix(&home) { let rel = relative.display().to_string(); return if rel.is_empty() { "~".to_string() } else { format!("~/{rel}") }; } } cwd.display().to_string() } fn describe_lix_state(explicit: Option<&Path>) -> String { if let Some(path) = explicit { return format!("using {}", path.display()); } let cwd = match std::env::current_dir() { Ok(path) => path, Err(_) => return String::new(), }; let mut lix_files: Vec = Vec::new(); if let Ok(entries) = std::fs::read_dir(&cwd) { for entry in entries.flatten() { let path = entry.path(); if path.is_file() && path.extension().and_then(|ext| ext.to_str()) == Some("lix") { lix_files.push(path); } } } match lix_files.len() { 0 => "no .lix file detected · run `lix init `".to_string(), 1 => { let name = lix_files[0] .file_name() .map(|n| n.to_string_lossy().into_owned()) .unwrap_or_default(); format!("detected {name}") } n => format!("{n} .lix files · pass --path "), } } ================================================ FILE: packages/cli/src/cli/exp.rs ================================================ use clap::{value_parser, Args, Subcommand, ValueHint}; use std::path::PathBuf; #[derive(Debug, Args)] pub struct ExpCommand { #[command(subcommand)] pub command: ExpSubcommand, } #[derive(Debug, Subcommand)] pub enum ExpSubcommand { /// Replay git history into a Lix artifact. GitReplay(ExpGitReplayArgs), } #[derive(Debug, Args)] pub struct ExpGitReplayArgs { /// Path to the git repository to replay. 
#[arg(long, value_hint = ValueHint::DirPath)] pub repo_path: PathBuf, /// Output .lix path. #[arg(long, value_hint = ValueHint::FilePath)] pub output_lix_path: PathBuf, /// Branch/ref to replay from (use '*' to replay commits reachable from all refs). #[arg(long, default_value = "main")] pub branch: String, /// Start replay from this commit (inclusive). #[arg(long)] pub from_commit: Option, /// Maximum number of commits to replay (after applying --from-commit, if set). #[arg(long, value_parser = value_parser!(u32).range(1..))] pub num_commits: Option, /// Verify file paths and payload hashes after each replayed commit. #[arg(long, default_value_t = false)] pub verify_state: bool, /// Overwrite output files if they already exist. #[arg(long, default_value_t = false)] pub force: bool, /// Write per-commit replay profiling data as JSON. #[arg(long, value_hint = ValueHint::FilePath)] pub profile_json: Option, /// Write backend SQL tracing data as JSON. #[arg(long, value_hint = ValueHint::FilePath)] pub trace_sql_json: Option, /// Trace only the replayed commit matching this full SHA or unique SHA prefix. #[arg(long)] pub trace_commit: Option, } ================================================ FILE: packages/cli/src/cli/init.rs ================================================ use clap::{Args, ValueHint}; use std::path::PathBuf; #[derive(Debug, Args)] pub struct InitCommand { /// Path to the .lix file to initialize. #[arg(value_hint = ValueHint::FilePath)] pub path: PathBuf, } ================================================ FILE: packages/cli/src/cli/mod.rs ================================================ pub mod exp; pub mod init; pub mod redo; pub mod root; pub mod sql; pub mod undo; pub mod version; ================================================ FILE: packages/cli/src/cli/redo.rs ================================================ use clap::Args; #[derive(Debug, Args)] pub struct RedoCommand { /// Override the target version by `lix_version.id` / active `version_id`, /// not the `lix_active_version.id` row key. #[arg(long)] pub version: Option, } ================================================ FILE: packages/cli/src/cli/root.rs ================================================ use super::exp::ExpCommand; use super::init::InitCommand; use super::redo::RedoCommand; use super::sql::SqlCommand; use super::undo::UndoCommand; use super::version::VersionCommand; use clap::{Parser, Subcommand, ValueHint}; use std::path::PathBuf; #[derive(Debug, Parser)] #[command(name = "lix")] #[command(about = "Lix command line interface")] pub struct Cli { /// Path to the .lix file (required when multiple .lix files exist). #[arg(long, global = true, value_hint = ValueHint::FilePath)] pub path: Option, /// Disable contextual hints that guide you on what to do next. Keep hints /// enabled until you understand how lix works. AI agents and LLMs should /// not use this flag. #[arg(long, global = true)] pub no_hints: bool, #[command(subcommand)] pub command: Option, } #[derive(Debug, Subcommand)] pub enum Command { /// Experimental commands for benchmarking and diagnostics. Exp(ExpCommand), /// Initialize a lix at the provided path. Init(InitCommand), /// Reapply the most recently undone committed change unit. Redo(RedoCommand), /// Execute raw SQL against a lix. Sql(SqlCommand), /// Undo the most recent committed change unit. Undo(UndoCommand), /// Version operations such as merging branches. 
Version(VersionCommand), } #[cfg(test)] mod tests { use super::{Cli, Command}; use crate::cli::sql::SqlSubcommand; use crate::cli::version::VersionSubcommand; use clap::Parser; use std::path::PathBuf; #[test] fn parses_init_command_path_argument() { let cli = Cli::try_parse_from(["lix", "init", "tmp/new.lix"]).expect("parse succeeds"); match cli.command { Some(Command::Init(init)) => assert_eq!(init.path, PathBuf::from("tmp/new.lix")), _ => panic!("expected init command"), } } #[test] fn parses_sql_execute_params_json_flag() { let cli = Cli::try_parse_from([ "lix", "sql", "execute", "--params", "[\"first\", \"second\"]", "SELECT ?1, ?2", ]) .expect("parse succeeds"); match cli.command { Some(Command::Sql(sql)) => match sql.command { SqlSubcommand::Execute(args) => { assert_eq!(args.params, Some("[\"first\", \"second\"]".to_string())); assert_eq!(args.sql, "SELECT ?1, ?2"); } }, _ => panic!("expected sql command"), } } #[test] fn parses_undo_command_version_flag() { let cli = Cli::try_parse_from(["lix", "undo", "--version", "branch-1"]).expect("parse succeeds"); match cli.command { Some(Command::Undo(command)) => { assert_eq!(command.version.as_deref(), Some("branch-1")) } _ => panic!("expected undo command"), } } #[test] fn parses_redo_command_without_version() { let cli = Cli::try_parse_from(["lix", "redo"]).expect("parse succeeds"); match cli.command { Some(Command::Redo(command)) => assert_eq!(command.version, None), _ => panic!("expected redo command"), } } #[test] fn parses_version_merge_command() { let cli = Cli::try_parse_from([ "lix", "version", "merge", "--source-name", "draft-a", "--target-id", "main", ]) .expect("parse succeeds"); match cli.command { Some(Command::Version(command)) => match command.command { VersionSubcommand::Merge(args) => { assert_eq!(args.source_name.as_deref(), Some("draft-a")); assert_eq!(args.target_id.as_deref(), Some("main")); } _ => panic!("expected version merge command"), }, _ => panic!("expected version command"), } } #[test] fn parses_version_create_command() { let cli = Cli::try_parse_from([ "lix", "version", "create", "--id", "branch-a", "--name", "Branch A", "--from-name", "main", "--hidden", ]) .expect("parse succeeds"); match cli.command { Some(Command::Version(command)) => match command.command { VersionSubcommand::Create(args) => { assert_eq!(args.id.as_deref(), Some("branch-a")); assert_eq!(args.name.as_deref(), Some("Branch A")); assert_eq!(args.from_name.as_deref(), Some("main")); assert!(args.hidden); } _ => panic!("expected version create command"), }, _ => panic!("expected version command"), } } #[test] fn parses_version_switch_command() { let cli = Cli::try_parse_from(["lix", "version", "switch", "--name", "branch-a"]) .expect("parse succeeds"); match cli.command { Some(Command::Version(command)) => match command.command { VersionSubcommand::Switch(args) => { assert_eq!(args.name.as_deref(), Some("branch-a")); } _ => panic!("expected version switch command"), }, _ => panic!("expected version command"), } } #[test] fn rejects_version_switch_without_reference_flag() { let error = Cli::try_parse_from(["lix", "version", "switch"]).expect_err("parse should fail"); let message = error.to_string(); assert!(message.contains("--id")); assert!(message.contains("--name")); } } ================================================ FILE: packages/cli/src/cli/sql.rs ================================================ use clap::{Args, Subcommand, ValueEnum}; #[derive(Debug, Args)] pub struct SqlCommand { #[command(subcommand)] pub command: SqlSubcommand, } 
#[derive(Debug, Subcommand)] pub enum SqlSubcommand { /// Execute SQL text. Use '-' to read SQL from stdin. #[command(after_long_help = "\ Examples: lix sql execute \"INSERT INTO lix_file (path, data) VALUES ('/hello.md', lix_text_encode('# Hello'))\" lix sql execute \"SELECT path, lix_text_decode(data) FROM lix_file\" lix sql execute \"SELECT path, lixcol_depth FROM lix_file_history\"")] Execute(SqlExecuteArgs), } #[derive(Clone, Copy, Debug, Eq, PartialEq, ValueEnum)] pub enum SqlOutputFormat { Table, Json, } #[derive(Debug, Args)] pub struct SqlExecuteArgs { /// Output format for query results. #[arg(long, value_enum, default_value_t = SqlOutputFormat::Table)] pub format: SqlOutputFormat, /// Bind positional SQL parameters from a JSON array. /// /// Use inline JSON (`--params '[1,true,null,\"text\"]'`) or `-` to read JSON from stdin. /// Supported values: null, booleans, numbers, strings, and blobs via {"$blob":""}. #[arg(long = "params")] pub params: Option, /// SQL query text to execute. Use '-' to read from stdin. pub sql: String, } ================================================ FILE: packages/cli/src/cli/undo.rs ================================================ use clap::Args; #[derive(Debug, Args)] pub struct UndoCommand { /// Override the target version by `lix_version.id` / active `version_id`, /// not the `lix_active_version.id` row key. #[arg(long)] pub version: Option, } ================================================ FILE: packages/cli/src/cli/version.rs ================================================ use clap::{Args, Subcommand}; #[derive(Debug, Args)] pub struct VersionCommand { #[command(subcommand)] pub command: VersionSubcommand, } #[derive(Debug, Subcommand)] pub enum VersionSubcommand { /// Create a new version from the current active version head. Create(CreateVersionCommand), /// Merge one version into another. Merge(MergeVersionCommand), /// Switch the active version. Switch(SwitchVersionCommand), } #[derive(Debug, Args)] pub struct CreateVersionCommand { /// Explicit version id. If omitted, Lix generates one. #[arg(long)] pub id: Option, /// Human-readable version name. Defaults to the id. #[arg(long)] pub name: Option, /// Source version id to branch from. Defaults to the active version. #[arg(long, conflicts_with = "from_name")] pub from_id: Option, /// Source version name to branch from. Defaults to the active version. #[arg(long, conflicts_with = "from_id")] pub from_name: Option, /// Hide the version from default listings. #[arg(long, default_value_t = false)] pub hidden: bool, } #[derive(Debug, Args)] pub struct MergeVersionCommand { /// Source version id to merge from. #[arg( long, conflicts_with = "source_name", required_unless_present = "source_name" )] pub source_id: Option, /// Source version name to merge from. #[arg( long, conflicts_with = "source_id", required_unless_present = "source_id" )] pub source_name: Option, /// Target version id to merge into. #[arg( long, conflicts_with = "target_name", required_unless_present = "target_name" )] pub target_id: Option, /// Target version name to merge into. #[arg( long, conflicts_with = "target_id", required_unless_present = "target_id" )] pub target_name: Option, } #[derive(Debug, Args)] pub struct SwitchVersionCommand { /// Version id to make active. #[arg(long, conflicts_with = "name", required_unless_present = "name")] pub id: Option, /// Version name to make active. 
#[arg(long, conflicts_with = "id", required_unless_present = "id")] pub name: Option, } ================================================ FILE: packages/cli/src/commands/exp/git_replay.rs ================================================ use crate::cli::exp::ExpGitReplayArgs; use crate::db; use crate::error::CliError; use lix_rs_sdk::{Lix, Value}; use serde::Serialize; use sha2::{Digest, Sha256}; use std::collections::{BTreeMap, BTreeSet, HashMap, HashSet}; use std::fs; use std::io::Write; use std::path::{Path, PathBuf}; use std::process::{Command, Stdio}; use std::time::{Duration, Instant}; const NULL_OID: &str = "0000000000000000000000000000000000000000"; const PROGRESS_EVERY: usize = 10; const DEFAULT_INSERT_BATCH_ROWS: usize = 100; #[derive(Debug, Clone)] struct Change { status: char, old_mode: String, new_mode: String, new_oid: String, old_path: Option, new_path: Option, } impl Change { fn new_is_blob(&self) -> bool { mode_is_blob(&self.new_mode) } } #[derive(Debug)] struct PatchSet { changes: Vec, blob_by_oid: HashMap>, } #[derive(Default)] struct ReplayState { path_to_file_id: HashMap, known_file_ids: HashSet, } #[derive(Debug, Clone)] struct WriteRow { id: String, path: String, data: Vec, } #[derive(Debug)] struct PreparedBatch { deletes: Vec, inserts: Vec, updates: Vec, } #[derive(Debug)] struct SqlStatement { sql: String, params: Vec, } #[derive(Debug, Clone)] struct ExpectedFile { path: String, sha256: String, } #[derive(Debug, Default, Serialize)] struct ReplayProfilePhaseTotals { read_patch_ms: f64, prepare_ms: f64, build_sql_ms: f64, execute_ms: f64, verify_ms: f64, total_ms: f64, } #[derive(Debug, Serialize)] struct ReplayCommitProfile { commit_sha: String, changed_paths: usize, inserts: usize, updates: usize, deletes: usize, statement_count: usize, sql_chars: usize, blob_bytes: usize, noop: bool, read_patch_ms: f64, prepare_ms: f64, build_sql_ms: f64, execute_ms: f64, verify_ms: Option, total_ms: f64, } #[derive(Debug, Serialize)] struct ReplayProfileReport { repo_path: String, output_lix_path: String, branch: String, from_commit: Option, num_commits_requested: Option, verify_state: bool, commits_replayed: usize, commits_applied: usize, commits_noop: usize, changed_paths_total: usize, phase_totals: ReplayProfilePhaseTotals, commits: Vec, } #[derive(Debug, Clone)] struct SqlTraceCommitTarget { commit_sha: String, } #[derive(Debug, Serialize)] struct ReplaySqlTraceReport { repo_path: String, output_lix_path: String, branch: String, from_commit: Option, num_commits_requested: Option, traced_commit: Option, commits: Vec, } #[derive(Debug, Serialize)] struct ReplaySqlTraceCommit { commit_sha: String, changed_paths: usize, inserts: usize, updates: usize, deletes: usize, statement_count: usize, outer_execute_ms: f64, operations: Vec, } #[derive(Debug, Serialize)] struct ReplaySqlTraceOperation { sequence: u64, kind: &'static str, sql: Option, sql_chars: usize, params_count: usize, blob_params: usize, blob_param_bytes: usize, row_count: Option, column_count: Option, duration_ms: f64, error: Option, } pub fn run(args: ExpGitReplayArgs) -> Result<(), CliError> { let repo_path = absolutize_from_cwd(&args.repo_path)?; validate_repo_dir(&repo_path)?; validate_git_repo(&repo_path)?; let output_lix_path = absolutize_from_cwd(&args.output_lix_path)?; db::prepare_lix_output_path(&output_lix_path, args.force)?; let profile_json_path = args .profile_json .as_ref() .map(|path| absolutize_from_cwd(path)) .transpose()?; if let Some(path) = &profile_json_path { prepare_regular_output_path(path, 
args.force)?; } let trace_sql_json_path = args .trace_sql_json .as_ref() .map(|path| absolutize_from_cwd(path)) .transpose()?; if let Some(path) = &trace_sql_json_path { prepare_regular_output_path(path, args.force)?; } if trace_sql_json_path.is_some() { return Err(CliError::msg( "--trace-sql-json is not available with the current rs-sdk backend API", )); } if args.trace_commit.is_some() && trace_sql_json_path.is_none() { return Err(CliError::InvalidArgs( "--trace-commit requires --trace-sql-json", )); } let replay_ref = normalize_replay_ref(&args.branch)?; let from_commit = args .from_commit .as_deref() .map(|raw| resolve_commit_oid(&repo_path, raw)) .transpose()?; let commits = list_linear_commits( &repo_path, &replay_ref, from_commit.as_deref(), args.num_commits, )?; if commits.is_empty() { return Err(CliError::msg(format!( "no commits found in {} for ref '{}'", repo_path.display(), args.branch ))); } let trace_commit_target = resolve_trace_commit_target(&commits, args.trace_commit.as_deref())?; let lix = init_and_open_lix_at_path(&output_lix_path)?; let mut state = ReplayState::default(); let mut expected_state_by_id = HashMap::::new(); let mut applied = 0usize; let mut noop = 0usize; let mut changed_paths = 0usize; let mut verified = 0usize; let mut phase_totals = ReplayProfilePhaseTotals::default(); let mut commit_profiles = Vec::::with_capacity(commits.len()); let mut sql_trace_commits = Vec::::new(); println!( "[git-replay] replaying {} commits from {}", commits.len(), repo_path.display() ); for (index, commit_sha) in commits.iter().enumerate() { let commit_started = Instant::now(); let read_patch_started = Instant::now(); let patch_set = read_commit_patch_set(&repo_path, commit_sha)?; let read_patch_ms = duration_to_ms(read_patch_started.elapsed()); phase_totals.read_patch_ms += read_patch_ms; changed_paths += patch_set.changes.len(); let prepare_started = Instant::now(); let prepared = prepare_commit_changes(&mut state, &patch_set.changes, &patch_set.blob_by_oid)?; let prepare_ms = duration_to_ms(prepare_started.elapsed()); phase_totals.prepare_ms += prepare_ms; let build_sql_started = Instant::now(); let statements = build_replay_commit_statements(&prepared, DEFAULT_INSERT_BATCH_ROWS); let build_sql_ms = duration_to_ms(build_sql_started.elapsed()); phase_totals.build_sql_ms += build_sql_ms; let statement_count = statements.len(); let sql_chars = total_statement_sql_chars(&statements); let blob_bytes = prepared_blob_bytes(&prepared); let inserts = prepared.inserts.len(); let updates = prepared.updates.len(); let deletes = prepared.deletes.len(); let mut execute_ms = 0.0f64; let mut verify_ms = None; if statements.is_empty() { noop += 1; } else { let should_trace_commit = should_trace_commit(commit_sha, trace_commit_target.as_ref()); let execute_started = Instant::now(); execute_statements_as_transaction(&lix, &statements, commit_sha)?; execute_ms = duration_to_ms(execute_started.elapsed()); phase_totals.execute_ms += execute_ms; if should_trace_commit { sql_trace_commits.push(ReplaySqlTraceCommit { commit_sha: commit_sha.clone(), changed_paths: patch_set.changes.len(), inserts, updates, deletes, statement_count, outer_execute_ms: execute_ms, operations: Vec::new(), }); } applied += 1; } if args.verify_state { let verify_started = Instant::now(); apply_prepared_to_expected_state(&mut expected_state_by_id, &prepared); verify_commit_state_hashes(&lix, &expected_state_by_id, commit_sha)?; let verify_elapsed_ms = duration_to_ms(verify_started.elapsed()); phase_totals.verify_ms += 
verify_elapsed_ms; verify_ms = Some(verify_elapsed_ms); verified += 1; } let total_ms = duration_to_ms(commit_started.elapsed()); phase_totals.total_ms += total_ms; commit_profiles.push(ReplayCommitProfile { commit_sha: commit_sha.clone(), changed_paths: patch_set.changes.len(), inserts, updates, deletes, statement_count, sql_chars, blob_bytes, noop: statements.is_empty(), read_patch_ms, prepare_ms, build_sql_ms, execute_ms, verify_ms, total_ms, }); if index == 0 || (index + 1) % PROGRESS_EVERY == 0 || index + 1 == commits.len() { println!( "[git-replay] {}/{} commits (applied={}, noop={}, changedPaths={})", index + 1, commits.len(), applied, noop, changed_paths ); } } println!("[git-replay] done"); println!("[git-replay] ref: {}", args.branch); println!("[git-replay] output: {}", output_lix_path.display()); println!("[git-replay] commits replayed: {}", commits.len()); println!("[git-replay] commits applied: {}", applied); println!("[git-replay] commits noop: {}", noop); println!("[git-replay] changed paths total: {}", changed_paths); if args.verify_state { println!( "[git-replay] verified commits: {verified}/{}", commits.len() ); } if let Some(profile_path) = &profile_json_path { write_profile_report( profile_path, ReplayProfileReport { repo_path: repo_path.display().to_string(), output_lix_path: output_lix_path.display().to_string(), branch: args.branch.clone(), from_commit: args.from_commit.clone(), num_commits_requested: args.num_commits, verify_state: args.verify_state, commits_replayed: commits.len(), commits_applied: applied, commits_noop: noop, changed_paths_total: changed_paths, phase_totals, commits: commit_profiles, }, )?; println!("[git-replay] profile json: {}", profile_path.display()); } if let Some(trace_path) = &trace_sql_json_path { write_sql_trace_report( trace_path, ReplaySqlTraceReport { repo_path: repo_path.display().to_string(), output_lix_path: output_lix_path.display().to_string(), branch: args.branch.clone(), from_commit: args.from_commit.clone(), num_commits_requested: args.num_commits, traced_commit: trace_commit_target.map(|target| target.commit_sha), commits: sql_trace_commits, }, )?; println!("[git-replay] sql trace json: {}", trace_path.display()); } Ok(()) } fn init_and_open_lix_at_path(path: &Path) -> Result { db::init_lix_at(path)?; let lix = db::open_lix_at(path)?; crate::db::block_on(lix.execute( "INSERT INTO lix_key_value (key, value) VALUES ('lix_deterministic_mode', '{\"enabled\":true}')", &[], )) .map_err(|err| CliError::msg(format!("failed to enable deterministic mode: {err}")))?; Ok(lix) } fn execute_statements_as_transaction( lix: &Lix, statements: &[SqlStatement], commit_sha: &str, ) -> Result<(), CliError> { let script = build_transaction_script(statements); let params = statements .iter() .flat_map(|statement| statement.params.iter().cloned()) .collect::>(); crate::db::block_on(lix.execute(&script, ¶ms)).map_err(|error| { let sql_preview = script.chars().take(160).collect::(); CliError::msg(format!( "failed at commit {commit_sha} while executing replay SQL '{sql_preview}': {error}" )) })?; Ok(()) } fn build_transaction_script(statements: &[SqlStatement]) -> String { let mut script = String::from("BEGIN;"); let mut next_param_index = 1usize; for statement in statements { script.push(' '); script.push_str(&number_sql_parameters( &statement.sql, &mut next_param_index, )); script.push(';'); } script.push_str(" COMMIT;"); script } fn number_sql_parameters(sql: &str, next_param_index: &mut usize) -> String { let mut numbered = 
String::with_capacity(sql.len() + 16); for ch in sql.chars() { if ch == '?' { numbered.push('?'); numbered.push_str(&next_param_index.to_string()); *next_param_index += 1; } else { numbered.push(ch); } } numbered } fn prepared_blob_bytes(prepared: &PreparedBatch) -> usize { prepared .inserts .iter() .chain(prepared.updates.iter()) .map(|row| row.data.len()) .sum() } fn total_statement_sql_chars(statements: &[SqlStatement]) -> usize { statements.iter().map(|statement| statement.sql.len()).sum() } fn duration_to_ms(duration: Duration) -> f64 { duration.as_secs_f64() * 1000.0 } fn write_profile_report(path: &Path, report: ReplayProfileReport) -> Result<(), CliError> { let mut bytes = serde_json::to_vec_pretty(&report).map_err(|error| { CliError::msg(format!( "failed to serialize replay profile report: {error}" )) })?; bytes.push(b'\n'); fs::write(path, bytes).map_err(|source| CliError::io("failed to write profile json", source)) } fn write_sql_trace_report(path: &Path, report: ReplaySqlTraceReport) -> Result<(), CliError> { let mut bytes = serde_json::to_vec_pretty(&report).map_err(|error| { CliError::msg(format!( "failed to serialize replay sql trace report: {error}" )) })?; bytes.push(b'\n'); fs::write(path, bytes).map_err(|source| CliError::io("failed to write sql trace json", source)) } fn list_linear_commits( repo_path: &Path, replay_ref: &str, from_commit: Option<&str>, limit: Option, ) -> Result, CliError> { let mut args = vec![ "rev-list".to_string(), "--reverse".to_string(), "--first-parent".to_string(), ]; if replay_ref == "--all" { args.push("--all".to_string()); } else { args.push(replay_ref.to_string()); } let output = run_git_text(repo_path, &args, None)?; let commits = output .lines() .map(str::trim) .filter(|line| !line.is_empty()) .map(ToOwned::to_owned) .collect::>(); select_replay_commits(commits, from_commit, limit) } fn resolve_trace_commit_target( commits: &[String], raw: Option<&str>, ) -> Result, CliError> { let Some(raw) = raw else { return Ok(None); }; let needle = raw.trim(); if needle.is_empty() { return Err(CliError::InvalidArgs("trace_commit must not be empty")); } let matches = commits .iter() .filter(|commit| commit == &needle || commit.starts_with(needle)) .cloned() .collect::>(); match matches.len() { 0 => Err(CliError::msg(format!( "--trace-commit {} did not match any replayed commit", raw ))), 1 => Ok(Some(SqlTraceCommitTarget { commit_sha: matches.into_iter().next().expect("exactly one trace match"), })), _ => Err(CliError::msg(format!( "--trace-commit {} matched multiple replayed commits; provide a longer prefix", raw ))), } } fn should_trace_commit(commit_sha: &str, target: Option<&SqlTraceCommitTarget>) -> bool { match target { Some(target) => target.commit_sha == commit_sha, None => true, } } fn select_replay_commits( mut commits: Vec, from_commit: Option<&str>, limit: Option, ) -> Result, CliError> { if let Some(from_commit) = from_commit { let from_index = commits .iter() .position(|commit| commit == from_commit) .ok_or_else(|| { CliError::msg(format!( "--from-commit {} is not reachable from selected ref", from_commit )) })?; commits = commits.split_off(from_index); } if let Some(limit) = limit { commits.truncate(limit as usize); } Ok(commits) } fn resolve_commit_oid(repo_path: &Path, raw: &str) -> Result { let trimmed = raw.trim(); if trimmed.is_empty() { return Err(CliError::InvalidArgs("from_commit must not be empty")); } let args = vec![ "rev-parse".to_string(), "--verify".to_string(), format!("{trimmed}^{{commit}}"), ]; let output = 
run_git_text(repo_path, &args, None).map_err(|error| { CliError::msg(format!( "failed to resolve --from-commit {}: {}", raw, error )) })?; let oid = output.trim(); if oid.is_empty() { return Err(CliError::msg(format!( "failed to resolve --from-commit {}: empty rev-parse output", raw ))); } Ok(oid.to_string()) } fn read_commit_patch_set(repo_path: &Path, commit_sha: &str) -> Result { let raw_args = vec![ "diff-tree".to_string(), "--root".to_string(), "--raw".to_string(), "-r".to_string(), "-z".to_string(), "-m".to_string(), "--first-parent".to_string(), "--find-renames".to_string(), "--no-commit-id".to_string(), commit_sha.to_string(), ]; let raw = run_git_bytes(repo_path, &raw_args, None)?; let changes = parse_raw_diff_tree(&raw)?; let wanted_blob_ids = collect_wanted_blob_ids(&changes); let blob_by_oid = read_blobs(repo_path, &wanted_blob_ids)?; Ok(PatchSet { changes, blob_by_oid, }) } fn parse_raw_diff_tree(raw: &[u8]) -> Result, CliError> { if raw.is_empty() { return Ok(Vec::new()); } let mut tokens = raw.split(|byte| *byte == 0).collect::>(); if tokens.last().is_some_and(|token| token.is_empty()) { tokens.pop(); } let mut changes = Vec::new(); let mut index = 0usize; while index < tokens.len() { let header_token = tokens[index]; index += 1; if header_token.is_empty() || !header_token.starts_with(b":") { continue; } let header_text = String::from_utf8_lossy(header_token); let fields = header_text[1..].split_whitespace().collect::>(); if fields.len() < 5 { continue; } let old_mode = fields[0].to_string(); let new_mode = fields[1].to_string(); let new_oid = fields[3].to_string(); let status = fields[4].chars().next().unwrap_or('M'); let first_path = token_to_string(tokens.get(index).ok_or_else(|| { CliError::msg("malformed git diff-tree output: missing path token") })?); index += 1; if status == 'R' || status == 'C' { let second_path = token_to_string(tokens.get(index).ok_or_else(|| { CliError::msg("malformed git diff-tree output: missing rename destination") })?); index += 1; changes.push(Change { status, old_mode, new_mode, new_oid, old_path: Some(first_path), new_path: Some(second_path), }); continue; } let old_path = if status == 'A' { None } else { Some(first_path.clone()) }; let new_path = if status == 'D' { None } else { Some(first_path) }; changes.push(Change { status, old_mode, new_mode, new_oid, old_path, new_path, }); } Ok(changes) } fn collect_wanted_blob_ids(changes: &[Change]) -> Vec { let mut wanted_blob_ids = BTreeSet::::new(); for change in changes { if change.new_path.is_none() || !change.new_is_blob() { continue; } if !change.new_oid.is_empty() && change.new_oid != NULL_OID { wanted_blob_ids.insert(change.new_oid.clone()); } } wanted_blob_ids.into_iter().collect() } fn read_blobs(repo_path: &Path, blob_ids: &[String]) -> Result>, CliError> { if blob_ids.is_empty() { return Ok(HashMap::new()); } let mut request_body = String::new(); for blob_id in blob_ids { request_body.push_str(blob_id); request_body.push('\n'); } let args = vec!["cat-file".to_string(), "--batch".to_string()]; let stdout = run_git_bytes(repo_path, &args, Some(request_body.as_bytes()))?; let mut blobs = HashMap::>::new(); let mut offset = 0usize; while offset < stdout.len() { let line_end = stdout[offset..] 
.iter() .position(|byte| *byte == b'\n') .map(|relative| offset + relative) .ok_or_else(|| { CliError::msg("malformed git cat-file output: missing header newline") })?; let header = String::from_utf8_lossy(&stdout[offset..line_end]) .trim() .to_string(); offset = line_end + 1; if header.is_empty() { continue; } let fields = header.split_whitespace().collect::>(); if fields.len() < 2 { return Err(CliError::msg(format!( "malformed git cat-file header: {header}" ))); } let oid = fields[0]; let object_type = fields[1]; if object_type == "missing" { return Err(CliError::msg(format!( "missing blob object in git repository: {oid}" ))); } if fields.len() < 3 { return Err(CliError::msg(format!( "malformed git cat-file header (missing size): {header}" ))); } let size = fields[2].parse::().map_err(|_| { CliError::msg(format!( "invalid blob size '{}' in git cat-file output for {oid}", fields[2] )) })?; let data_start = offset; let data_end = data_start.saturating_add(size); if data_end > stdout.len() { return Err(CliError::msg(format!( "git cat-file output truncated while reading blob {oid}" ))); } blobs.insert(oid.to_string(), stdout[data_start..data_end].to_vec()); offset = data_end; if offset < stdout.len() && stdout[offset] == b'\n' { offset += 1; } } for blob_id in blob_ids { if !blobs.contains_key(blob_id) { return Err(CliError::msg(format!( "blob {blob_id} was requested but not returned by git cat-file" ))); } } Ok(blobs) } fn prepare_commit_changes( state: &mut ReplayState, changes: &[Change], blob_by_oid: &HashMap>, ) -> Result { let mut delete_ids = BTreeSet::::new(); let mut inserts_by_id = BTreeMap::::new(); let mut updates_by_id = BTreeMap::::new(); for change in changes { let status = normalize_status(change.status); if should_delete_old_entry(change, status) { if let Some(deleted_id) = resolve_delete_path(state, change) { delete_ids.insert(deleted_id.clone()); inserts_by_id.remove(&deleted_id); updates_by_id.remove(&deleted_id); } } if status == 'D' || !change.new_is_blob() { continue; } let new_path = match &change.new_path { Some(path) => path, None => continue, }; let target = resolve_write_target(state, change, status)?; let bytes = blob_by_oid.get(&change.new_oid).ok_or_else(|| { CliError::msg(format!( "missing blob {} while applying {} {}", change.new_oid, status, new_path )) })?; let row = WriteRow { id: target.id.clone(), path: to_lix_path(new_path), data: bytes.clone(), }; if delete_ids.contains(&row.id) { delete_ids.remove(&row.id); } if target.is_insert { inserts_by_id.insert(row.id.clone(), row); updates_by_id.remove(&target.id); state.known_file_ids.insert(target.id); continue; } if inserts_by_id.contains_key(&row.id) { inserts_by_id.insert(row.id.clone(), row); continue; } updates_by_id.insert(row.id.clone(), row); } Ok(PreparedBatch { deletes: delete_ids.into_iter().collect(), inserts: inserts_by_id.into_values().collect(), updates: updates_by_id.into_values().collect(), }) } fn should_delete_old_entry(change: &Change, status: char) -> bool { if change.old_path.is_none() || !mode_is_blob(&change.old_mode) { return false; } match status { 'D' | 'R' => true, 'A' | 'C' => false, _ => !change.new_is_blob(), } } struct WriteTarget { id: String, is_insert: bool, } fn resolve_delete_path(state: &mut ReplayState, change: &Change) -> Option { let old_path = change.old_path.as_ref()?; let id = state.path_to_file_id.remove(old_path)?; state.known_file_ids.remove(&id); Some(id) } fn resolve_write_target( state: &mut ReplayState, change: &Change, status: char, ) -> Result { let 
new_path = change .new_path .as_ref() .ok_or(CliError::InvalidArgs("write target requires new path"))?; if status == 'R' { if let Some(old_path) = change.old_path.as_ref() { if let Some(existing_id) = state.path_to_file_id.get(old_path).cloned() { state.path_to_file_id.remove(old_path); state .path_to_file_id .insert(new_path.clone(), existing_id.clone()); return Ok(WriteTarget { id: existing_id, is_insert: false, }); } } } if let Some(existing_id) = state.path_to_file_id.get(new_path).cloned() { return Ok(WriteTarget { id: existing_id, is_insert: false, }); } let generated = stable_file_id(new_path); let is_insert = !state.known_file_ids.contains(&generated); state .path_to_file_id .insert(new_path.clone(), generated.clone()); Ok(WriteTarget { id: generated, is_insert, }) } fn build_replay_commit_statements( batch: &PreparedBatch, max_insert_rows: usize, ) -> Vec { if batch.deletes.is_empty() && batch.inserts.is_empty() && batch.updates.is_empty() { return Vec::new(); } let mut statements = Vec::::new(); for delete_chunk in batch.deletes.chunks(500) { if delete_chunk.is_empty() { continue; } let placeholders = vec!["?"; delete_chunk.len()].join(", "); let sql = format!("DELETE FROM lix_file WHERE id IN ({placeholders})"); let params = delete_chunk .iter() .cloned() .map(Value::Text) .collect::>(); statements.push(SqlStatement { sql, params }); } let insert_batch_size = max_insert_rows.max(1); for insert_chunk in batch.inserts.chunks(insert_batch_size) { if insert_chunk.is_empty() { continue; } let mut params = Vec::::with_capacity(insert_chunk.len() * 3); let values_sql = insert_chunk .iter() .map(|row| { params.push(Value::Text(row.id.clone())); params.push(Value::Text(row.path.clone())); params.push(Value::Blob(row.data.clone())); "(?, ?, ?)" }) .collect::>() .join(", "); let sql = format!("INSERT INTO lix_file (id, path, data) VALUES {values_sql}"); statements.push(SqlStatement { sql, params }); } for row in &batch.updates { if stable_file_id(&row.path) == row.id { statements.push(SqlStatement { sql: "UPDATE lix_file SET data = ? WHERE id = ?".to_string(), params: vec![Value::Blob(row.data.clone()), Value::Text(row.id.clone())], }); } else { statements.push(SqlStatement { sql: "UPDATE lix_file SET path = ?, data = ? 
WHERE id = ?".to_string(), params: vec![ Value::Text(row.path.clone()), Value::Blob(row.data.clone()), Value::Text(row.id.clone()), ], }); } } statements } fn apply_prepared_to_expected_state( expected_state_by_id: &mut HashMap, prepared: &PreparedBatch, ) { for id in &prepared.deletes { expected_state_by_id.remove(id); } for row in prepared.inserts.iter().chain(prepared.updates.iter()) { expected_state_by_id.insert( row.id.clone(), ExpectedFile { path: row.path.clone(), sha256: sha256_hex(&row.data), }, ); } } fn verify_commit_state_hashes( lix: &Lix, expected_state_by_id: &HashMap, commit_sha: &str, ) -> Result<(), CliError> { let result = crate::db::block_on(lix.execute("SELECT id, path, data FROM lix_file", &[] as &[Value])) .map_err(|err| { CliError::msg(format!( "failed to query replay state for verification: {err}" )) })?; let rows = result.rows(); if rows.len() != expected_state_by_id.len() { return Err(CliError::msg(format!( "state mismatch at {commit_sha}: row count differs (lix={}, expected={})", rows.len(), expected_state_by_id.len() ))); } let mut seen = HashSet::::new(); for (index, row) in rows.iter().enumerate() { if row.values().len() < 3 { return Err(CliError::msg(format!( "state mismatch at {commit_sha}: row {index} has fewer than 3 columns" ))); } let id = value_to_string( row.get_index(0) .ok_or_else(|| CliError::msg(format!("missing verify.id[{index}]")))?, &format!("verify.id[{index}]"), )?; let path = value_to_string( row.get_index(1) .ok_or_else(|| CliError::msg(format!("missing verify.path[{index}]")))?, &format!("verify.path[{index}]"), )?; let data = value_to_blob( row.get_index(2) .ok_or_else(|| CliError::msg(format!("missing verify.data[{index}]")))?, &format!("verify.data[{index}]"), )?; let hash = sha256_hex(data); let expected = expected_state_by_id.get(&id).ok_or_else(|| { CliError::msg(format!( "state mismatch at {commit_sha}: unexpected file id in lix state: {id}" )) })?; if expected.path != path { return Err(CliError::msg(format!( "state mismatch at {commit_sha}: path differs for id {id} (lix={path}, expected={})", expected.path ))); } if expected.sha256 != hash { return Err(CliError::msg(format!( "state mismatch at {commit_sha}: hash differs for id {id}" ))); } seen.insert(id); } if seen.len() != expected_state_by_id.len() { return Err(CliError::msg(format!( "state mismatch at {commit_sha}: missing rows (lix={}, expected={})", seen.len(), expected_state_by_id.len() ))); } Ok(()) } fn value_to_string(value: &Value, context: &str) -> Result { match value { Value::Text(text) => Ok(text.clone()), Value::Integer(number) => Ok(number.to_string()), Value::Real(number) => Ok(number.to_string()), Value::Boolean(flag) => Ok(flag.to_string()), _ => Err(CliError::msg(format!( "unexpected scalar type for {context}" ))), } } fn value_to_blob<'a>(value: &'a Value, context: &str) -> Result<&'a [u8], CliError> { match value { Value::Blob(bytes) => Ok(bytes), _ => Err(CliError::msg(format!("unexpected blob type for {context}"))), } } fn sha256_hex(bytes: &[u8]) -> String { let digest = Sha256::digest(bytes); let mut out = String::with_capacity(digest.len() * 2); for byte in digest { out.push(hex_digit_lower(byte >> 4)); out.push(hex_digit_lower(byte & 0x0f)); } out } fn hex_digit_lower(value: u8) -> char { match value { 0..=9 => (b'0' + value) as char, 10..=15 => (b'a' + (value - 10)) as char, _ => '0', } } fn hex_digit_upper(value: u8) -> char { match value { 0..=9 => (b'0' + value) as char, 10..=15 => (b'A' + (value - 10)) as char, _ => '0', } } fn 
normalize_status(value: char) -> char { value.to_ascii_uppercase() } fn stable_file_id(path: &str) -> String { to_lix_path(path) } fn to_lix_path(path: &str) -> String { let normalized = path.replace('\\', "/"); let without_leading_slash = normalized.strip_prefix('/').unwrap_or(&normalized); let encoded = without_leading_slash .split('/') .map(encode_path_segment) .collect::>() .join("/"); format!("/{encoded}") } fn encode_path_segment(segment: &str) -> String { let mut encoded = String::new(); for byte in segment.as_bytes() { let is_alpha_num = byte.is_ascii_alphanumeric(); let is_safe = matches!(*byte, b'.' | b'_' | b'~' | b'-'); if is_alpha_num || is_safe { encoded.push(*byte as char); } else { encoded.push('%'); encoded.push(hex_digit_upper(byte >> 4)); encoded.push(hex_digit_upper(byte & 0x0f)); } } encoded } fn mode_is_blob(mode: &str) -> bool { mode.starts_with("100") || mode == "120000" } fn token_to_string(token: &[u8]) -> String { String::from_utf8_lossy(token).to_string() } fn run_git_text( repo_path: &Path, args: &[String], stdin: Option<&[u8]>, ) -> Result { let output = run_git_bytes(repo_path, args, stdin)?; Ok(String::from_utf8_lossy(&output).to_string()) } fn run_git_bytes( repo_path: &Path, args: &[String], stdin: Option<&[u8]>, ) -> Result, CliError> { let mut command = Command::new("git"); command.arg("-C").arg(repo_path); for arg in args { command.arg(arg); } command.stdout(Stdio::piped()); command.stderr(Stdio::piped()); if stdin.is_some() { command.stdin(Stdio::piped()); } else { command.stdin(Stdio::null()); } let mut child = command .spawn() .map_err(|source| CliError::io("failed to spawn git command", source))?; if let Some(input) = stdin { let mut child_stdin = child .stdin .take() .ok_or_else(|| CliError::msg("failed to open stdin for git command"))?; child_stdin .write_all(input) .map_err(|source| CliError::io("failed to write stdin for git command", source))?; } let output = child .wait_with_output() .map_err(|source| CliError::io("failed to wait for git command", source))?; if output.status.success() { return Ok(output.stdout); } let args_preview = args.join(" "); let stderr = String::from_utf8_lossy(&output.stderr).trim().to_string(); let status = output .status .code() .map(|code| format!("exit code {code}")) .unwrap_or_else(|| "terminated by signal".to_string()); Err(CliError::msg(format!( "git -C {} {} failed with {}: {}", repo_path.display(), args_preview, status, stderr ))) } fn prepare_regular_output_path(path: &Path, force: bool) -> Result<(), CliError> { if let Some(parent) = path.parent() { fs::create_dir_all(parent) .map_err(|source| CliError::io("failed to create output directory", source))?; } if path.exists() { if path.is_dir() { return Err(CliError::msg(format!( "output path points to a directory, expected a file: {}", path.display() ))); } if force { fs::remove_file(path) .map_err(|source| CliError::io("failed to remove existing output file", source))?; return Ok(()); } return Err(CliError::msg(format!( "output path already exists: {}", path.display() ))); } Ok(()) } fn validate_repo_dir(path: &Path) -> Result<(), CliError> { if path.is_dir() { return Ok(()); } Err(CliError::msg(format!( "repo path does not exist or is not a directory: {}", path.display() ))) } fn validate_git_repo(path: &Path) -> Result<(), CliError> { let args = vec!["rev-parse".to_string(), "--is-inside-work-tree".to_string()]; let output = run_git_text(path, &args, None)?; if output.trim() == "true" { return Ok(()); } Err(CliError::msg(format!( "repo path is not a git work 
tree: {}", path.display() ))) } fn normalize_replay_ref(raw: &str) -> Result { let trimmed = raw.trim(); if trimmed.is_empty() { return Err(CliError::InvalidArgs("branch must not be empty")); } if trimmed == "*" { return Ok("--all".to_string()); } Ok(trimmed.to_string()) } fn absolutize_from_cwd(path: &Path) -> Result { if path.is_absolute() { return Ok(path.to_path_buf()); } let cwd = std::env::current_dir() .map_err(|source| CliError::io("failed to read current directory", source))?; Ok(cwd.join(path)) } #[cfg(test)] mod tests { use super::*; use std::time::{SystemTime, UNIX_EPOCH}; #[test] fn collect_wanted_blob_ids_skips_gitlink_oids() { let changes = vec![ Change { status: 'A', old_mode: "000000".to_string(), new_mode: "100644".to_string(), new_oid: "1111111111111111111111111111111111111111".to_string(), old_path: None, new_path: Some("regular.txt".to_string()), }, Change { status: 'A', old_mode: "000000".to_string(), new_mode: "160000".to_string(), new_oid: "4c9431adbd4a24aed1d9afdecbfe4eaac3a6bba9".to_string(), old_path: None, new_path: Some("submodule".to_string()), }, ]; let wanted = collect_wanted_blob_ids(&changes); assert_eq!( wanted, vec!["1111111111111111111111111111111111111111".to_string()] ); } #[test] fn select_replay_commits_starts_from_specific_commit_inclusive() { let commits = vec![ "a".to_string(), "b".to_string(), "c".to_string(), "d".to_string(), ]; let selected = select_replay_commits(commits, Some("c"), None) .expect("select_replay_commits should succeed"); assert_eq!(selected, vec!["c".to_string(), "d".to_string()]); } #[test] fn select_replay_commits_applies_limit_after_from_commit() { let commits = vec![ "a".to_string(), "b".to_string(), "c".to_string(), "d".to_string(), ]; let selected = select_replay_commits(commits, Some("b"), Some(2)) .expect("select_replay_commits should succeed"); assert_eq!(selected, vec!["b".to_string(), "c".to_string()]); } #[test] fn select_replay_commits_errors_when_from_commit_missing() { let commits = vec!["a".to_string(), "b".to_string()]; let result = select_replay_commits(commits, Some("missing"), None); assert!(result.is_err(), "expected error for missing from-commit"); let message = format!( "{}", result.expect_err("expected missing from-commit error") ); assert!( message.contains("not reachable from selected ref"), "unexpected error message: {message}" ); } #[test] fn prepare_commit_changes_typechange_blob_to_gitlink_deletes_file() { let mut state = ReplayState::default(); state.path_to_file_id.insert( "artifact/spa-prerender-repro".to_string(), "/artifact/spa-prerender-repro".to_string(), ); state .known_file_ids .insert("/artifact/spa-prerender-repro".to_string()); let changes = vec![Change { status: 'T', old_mode: "100644".to_string(), new_mode: "160000".to_string(), new_oid: "4c9431adbd4a24aed1d9afdecbfe4eaac3a6bba9".to_string(), old_path: Some("artifact/spa-prerender-repro".to_string()), new_path: Some("artifact/spa-prerender-repro".to_string()), }]; let prepared = prepare_commit_changes(&mut state, &changes, &HashMap::new()) .expect("gitlink typechange should not error"); assert_eq!( prepared.deletes, vec!["/artifact/spa-prerender-repro".to_string()] ); assert!(prepared.inserts.is_empty()); assert!(prepared.updates.is_empty()); assert!(!state .path_to_file_id .contains_key("artifact/spa-prerender-repro")); } #[test] fn prepare_output_path_rejects_existing_file() { let temp_dir = unique_temp_dir(); fs::create_dir_all(&temp_dir).expect("temp dir should be created"); let output_path = temp_dir.join("existing.lix"); 
fs::write(&output_path, b"existing").expect("seed file should be written"); let result = db::prepare_lix_output_path(&output_path, false); assert!(result.is_err(), "expected error when output file exists"); let message = format!("{}", result.expect_err("expected output path error")); assert!( message.contains("output path already exists"), "unexpected error message: {message}" ); fs::remove_file(&output_path).expect("seed file should be removable"); fs::remove_dir_all(&temp_dir).expect("temp dir should be removable"); } #[test] fn prepare_output_path_allows_nonexistent_file_and_creates_parent() { let temp_dir = unique_temp_dir(); let nested_parent = temp_dir.join("nested").join("output"); let output_path = nested_parent.join("new.lix"); let result = db::prepare_lix_output_path(&output_path, false); assert!(result.is_ok(), "expected success for absent output file"); assert!( nested_parent.is_dir(), "expected parent directories to be created" ); fs::remove_dir_all(&temp_dir).expect("temp dir should be removable"); } #[test] fn prepare_output_lix_path_force_removes_existing_file_and_sidecars() { let temp_dir = unique_temp_dir(); fs::create_dir_all(&temp_dir).expect("temp dir should be created"); let output_path = temp_dir.join("existing.lix"); fs::write(&output_path, b"existing").expect("seed file should be written"); fs::write( PathBuf::from(format!("{}-wal", output_path.display())), b"wal-bytes", ) .expect("wal file should be written"); fs::write( PathBuf::from(format!("{}-shm", output_path.display())), b"shm-bytes", ) .expect("shm file should be written"); let result = db::prepare_lix_output_path(&output_path, true); assert!(result.is_ok(), "expected success when force is enabled"); assert!( !output_path.exists(), "expected existing output file to be removed" ); assert!( !PathBuf::from(format!("{}-wal", output_path.display())).exists(), "expected wal sidecar to be removed" ); assert!( !PathBuf::from(format!("{}-shm", output_path.display())).exists(), "expected shm sidecar to be removed" ); fs::remove_dir_all(&temp_dir).expect("temp dir should be removable"); } #[test] fn build_replay_commit_statements_omits_path_for_stable_updates() { let batch = PreparedBatch { deletes: Vec::new(), inserts: Vec::new(), updates: vec![WriteRow { id: "/src/main.ts".to_string(), path: "/src/main.ts".to_string(), data: b"hello".to_vec(), }], }; let statements = build_replay_commit_statements(&batch, DEFAULT_INSERT_BATCH_ROWS); assert_eq!(statements.len(), 1); assert_eq!( statements[0].sql, "UPDATE lix_file SET data = ? WHERE id = ?" ); assert_eq!( statements[0].params, vec![ Value::Blob(b"hello".to_vec()), Value::Text("/src/main.ts".to_string()) ] ); } #[test] fn build_replay_commit_statements_preserves_path_for_renames() { let batch = PreparedBatch { deletes: Vec::new(), inserts: Vec::new(), updates: vec![WriteRow { id: "/src/old.ts".to_string(), path: "/src/new.ts".to_string(), data: b"hello".to_vec(), }], }; let statements = build_replay_commit_statements(&batch, DEFAULT_INSERT_BATCH_ROWS); assert_eq!(statements.len(), 1); assert_eq!( statements[0].sql, "UPDATE lix_file SET path = ?, data = ? WHERE id = ?" 
);
        assert_eq!(
            statements[0].params,
            vec![
                Value::Text("/src/new.ts".to_string()),
                Value::Blob(b"hello".to_vec()),
                Value::Text("/src/old.ts".to_string())
            ]
        );
    }

    fn unique_temp_dir() -> PathBuf {
        let nanos = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("system time should be after unix epoch")
            .as_nanos();
        std::env::temp_dir().join(format!(
            "lix-cli-git-replay-test-{}-{nanos}",
            std::process::id()
        ))
    }
}

================================================
FILE: packages/cli/src/commands/exp/mod.rs
================================================
mod git_replay;

use crate::app::AppContext;
use crate::cli::exp::{ExpCommand, ExpSubcommand};
use crate::error::CliError;
use crate::hints::CommandOutput;

pub fn run(_context: &AppContext, command: ExpCommand) -> Result<CommandOutput, CliError> {
    match command.command {
        ExpSubcommand::GitReplay(args) => {
            git_replay::run(args)?;
            Ok(CommandOutput::empty())
        }
    }
}

================================================
FILE: packages/cli/src/commands/init.rs
================================================
use crate::cli::init::InitCommand;
use crate::db;
use crate::error::CliError;
use crate::hints::{self, CommandOutput};

pub fn run(command: InitCommand) -> Result<CommandOutput, CliError> {
    let initialized = db::init_lix_at(&command.path)?;
    if initialized {
        println!("initialized {}", command.path.display());
    } else {
        println!("already initialized {}", command.path.display());
    }
    Ok(CommandOutput::with_hints(hints::hint_after_init()))
}

================================================
FILE: packages/cli/src/commands/mod.rs
================================================
pub mod exp;
pub mod init;
pub mod redo;
pub mod sql;
pub mod undo;
pub mod version;

================================================
FILE: packages/cli/src/commands/redo.rs
================================================
use crate::app::AppContext;
use crate::cli::redo::RedoCommand;
use crate::error::CliError;
use crate::hints::CommandOutput;

pub fn run(_context: &AppContext, _command: RedoCommand) -> Result<CommandOutput, CliError> {
    Err(CliError::msg(
        "redo is not available in the current rs-sdk surface",
    ))
}

================================================
FILE: packages/cli/src/commands/sql/execute.rs
================================================
use crate::app::AppContext;
use crate::cli::sql::{SqlExecuteArgs, SqlOutputFormat};
use crate::db;
use crate::error::CliError;
use crate::hints::{self, CommandOutput};
use crate::output;
use base64::Engine as _;
use lix_rs_sdk::Value;
use serde_json::Value as JsonValue;
use std::io::Read;

pub fn run(context: &AppContext, args: SqlExecuteArgs) -> Result<CommandOutput, CliError> {
    let (sql, params) = resolve_sql_and_params(&args)?;
    let lix_path = db::resolve_db_path(context)?;
    let lix = db::open_lix_at(&lix_path)?;
    let result = crate::db::block_on(lix.execute(&sql, &params))
        .map_err(|err| CliError::from_lix("sql execution failed", err))?;
    match args.format {
        SqlOutputFormat::Json => output::print_execute_result_json(&result),
        SqlOutputFormat::Table => output::print_execute_result_table(&result),
    }
    let output_hints = if context.no_hints || !hints::are_hints_enabled(&lix) {
        Vec::new()
    } else {
        hints::hint_blob_in_result(&result)
    };
    Ok(CommandOutput::with_hints(output_hints))
}

fn resolve_sql_and_params(args: &SqlExecuteArgs) -> Result<(String, Vec<Value>), CliError> {
    let sql_from_stdin = args.sql == "-";
    let params_from_stdin = args.params.as_deref() == Some("-");
    if sql_from_stdin && params_from_stdin {
        return Err(CliError::InvalidArgs(
            "sql and params cannot both be read from stdin",
        ));
    }
    let stdin_payload = if sql_from_stdin {
        Some(read_stdin("failed to read SQL from stdin")?)
    } else if params_from_stdin {
        Some(read_stdin("failed to read params JSON from stdin")?)
    } else {
        None
    };
    let sql = if sql_from_stdin {
        let input = stdin_payload
            .as_deref()
            .ok_or(CliError::InvalidArgs("stdin SQL input is empty"))?;
        if input.trim().is_empty() {
            return Err(CliError::InvalidArgs("stdin SQL input is empty"));
        }
        input.to_string()
    } else {
        args.sql.clone()
    };
    let params = resolve_params(args.params.as_deref(), stdin_payload.as_deref())?;
    Ok((sql, params))
}

fn read_stdin(context: &'static str) -> Result<String, CliError> {
    let mut input = String::new();
    std::io::stdin()
        .read_to_string(&mut input)
        .map_err(|source| CliError::io(context, source))?;
    Ok(input)
}

fn resolve_params(
    params_input: Option<&str>,
    stdin_payload: Option<&str>,
) -> Result<Vec<Value>, CliError> {
    let Some(raw_params) = params_input else {
        return Ok(Vec::new());
    };
    let json_text = if raw_params == "-" {
        let input =
            stdin_payload.ok_or(CliError::InvalidArgs("stdin params JSON input is empty"))?;
        if input.trim().is_empty() {
            return Err(CliError::InvalidArgs("stdin params JSON input is empty"));
        }
        input
    } else {
        raw_params
    };
    parse_params_json(json_text)
}

fn parse_params_json(raw: &str) -> Result<Vec<Value>, CliError> {
    let parsed: JsonValue = serde_json::from_str(raw).map_err(|error| {
        CliError::msg(format!(
            "invalid --params JSON: expected a JSON array, parse error: {error}"
        ))
    })?;
    let values = parsed.as_array().ok_or_else(|| {
        CliError::msg("invalid --params JSON: expected a JSON array of positional parameters")
    })?;
    values
        .iter()
        .enumerate()
        .map(|(index, value)| parse_param_value(value, index))
        .collect::<Result<Vec<_>, _>>()
}

fn parse_param_value(value: &JsonValue, index: usize) -> Result<Value, CliError> {
    match value {
        JsonValue::Null => Ok(Value::Null),
        JsonValue::Bool(v) => Ok(Value::Boolean(*v)),
        JsonValue::Number(v) => {
            if let Some(as_i64) = v.as_i64() {
                return Ok(Value::Integer(as_i64));
            }
            if let Some(as_f64) = v.as_f64() {
                return Ok(Value::Real(as_f64));
            }
            Err(CliError::msg(format!(
                "invalid --params value at index {index}: unsupported number representation"
            )))
        }
        JsonValue::String(v) => Ok(Value::Text(v.clone())),
        JsonValue::Object(map) => parse_object_param(map, index),
        JsonValue::Array(_) => Err(CliError::msg(format!(
            "invalid --params value at index {index}: nested arrays are not supported"
        ))),
    }
}

fn parse_object_param(
    map: &serde_json::Map<String, JsonValue>,
    index: usize,
) -> Result<Value, CliError> {
    if map.len() == 1 && map.contains_key("$blob") {
        let encoded = map
            .get("$blob")
            .and_then(JsonValue::as_str)
            .ok_or_else(|| {
                CliError::msg(format!(
                    "invalid --params value at index {index}: $blob must be a base64 string"
                ))
            })?;
        let bytes = base64::engine::general_purpose::STANDARD
            .decode(encoded)
            .map_err(|error| {
                CliError::msg(format!(
                    "invalid --params value at index {index}: $blob is not valid base64: {error}"
                ))
            })?;
        return Ok(Value::Blob(bytes));
    }
    Err(CliError::msg(format!(
        "invalid --params value at index {index}: objects must use only {{\"$blob\":\"\"}}"
    )))
}

#[cfg(test)]
mod tests {
    use super::*;
    use std::path::PathBuf;
    use std::time::{SystemTime, UNIX_EPOCH};

    #[test]
    fn resolve_params_defaults_to_empty_when_unset() {
        let resolved = resolve_params(None, None).expect("params should resolve");
        assert!(resolved.is_empty());
    }

    #[test]
    fn resolve_params_maps_json_array_values_to_typed_sql_values() {
        let resolved = resolve_params(
            Some("[null, true, 7, 2.5, \"hello\", {\"$blob\":\"aGk=\"}]"),
            None,
        )
        .expect("typed params should resolve");
        assert_eq!(
            resolved,
            vec![
                Value::Null,
                Value::Boolean(true),
                Value::Integer(7),
Value::Real(2.5), Value::Text("hello".to_string()), Value::Blob(vec![0x68, 0x69]), ] ); } #[test] fn resolve_params_rejects_non_array_json() { let error = resolve_params(Some("{\"a\":1}"), None).expect_err("non-array should fail"); assert_eq!( error.to_string(), "invalid --params JSON: expected a JSON array of positional parameters" ); } #[test] fn resolve_params_rejects_invalid_object_shape() { let error = resolve_params(Some("[{\"k\":\"v\"}]"), None).expect_err("invalid object should fail"); assert_eq!( error.to_string(), "invalid --params value at index 0: objects must use only {\"$blob\":\"\"}" ); } #[test] fn resolve_sql_and_params_rejects_double_stdin_usage() { let args = SqlExecuteArgs { format: SqlOutputFormat::Table, params: Some("-".to_string()), sql: "-".to_string(), }; let error = resolve_sql_and_params(&args).expect_err("double stdin read should be rejected"); assert_eq!( error.to_string(), "invalid arguments: sql and params cannot both be read from stdin" ); } #[test] fn execute_accepts_numbered_placeholders_with_json_params() { let handle = std::thread::Builder::new() .name("sql-execute-param-binding".to_string()) .stack_size(32 * 1024 * 1024) .spawn(|| { let path = test_lix_path("param-binding"); db::init_lix_at(&path).expect("init test lix file"); let context = AppContext { lix_path: Some(path.clone()), no_hints: true, }; let args = SqlExecuteArgs { format: SqlOutputFormat::Json, params: Some("[\"left\", \"right\"]".to_string()), sql: "SELECT ?1 AS first_value, ?2 AS second_value".to_string(), }; let result = run(&context, args); let _ = std::fs::remove_file(&path); assert!( result.is_ok(), "expected sql execute to succeed: {result:?}" ); }) .expect("spawn test thread"); handle.join().expect("test thread joins"); } fn test_lix_path(label: &str) -> PathBuf { let nonce = SystemTime::now() .duration_since(UNIX_EPOCH) .expect("system clock after unix epoch") .as_nanos(); std::env::temp_dir().join(format!("lix-cli-{label}-{nonce}.lix")) } } ================================================ FILE: packages/cli/src/commands/sql/mod.rs ================================================ mod execute; use crate::app::AppContext; use crate::cli::sql::{SqlCommand, SqlSubcommand}; use crate::error::CliError; use crate::hints::CommandOutput; pub fn run(context: &AppContext, command: SqlCommand) -> Result { match command.command { SqlSubcommand::Execute(args) => execute::run(context, args), } } ================================================ FILE: packages/cli/src/commands/undo.rs ================================================ use crate::app::AppContext; use crate::cli::undo::UndoCommand; use crate::error::CliError; use crate::hints::CommandOutput; pub fn run(_context: &AppContext, _command: UndoCommand) -> Result { Err(CliError::msg( "undo is not available in the current rs-sdk surface", )) } ================================================ FILE: packages/cli/src/commands/version/create.rs ================================================ use crate::app::AppContext; use crate::cli::version::CreateVersionCommand; use crate::commands::version::{ resolve_active_version_ref, resolve_version_ref, ResolvedVersionRef, VersionLookup, }; use crate::db::{open_lix_at, resolve_db_path}; use crate::error::CliError; use crate::hints::CommandOutput; use lix_rs_sdk::{CreateVersionOptions, CreateVersionResult, SwitchVersionOptions}; pub fn run(context: &AppContext, command: CreateVersionCommand) -> Result { let path = resolve_db_path(context)?; let lix = open_lix_at(&path)?; let source = match 
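// Descriptive comment (added for readability, not in the original source): resolve the
// optional parent version here. At most one of --from-id/--from-name may be given; when
// a parent is supplied, run() temporarily switches to it, creates the new version, and
// then switches back so the originally active version stays active.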
(command.from_id.as_deref(), command.from_name.as_deref()) { (Some(id), None) => Some(resolve_version_ref(&lix, VersionLookup::Id(id))?), (None, Some(name)) => Some(resolve_version_ref(&lix, VersionLookup::Name(name))?), (None, None) => None, _ => { return Err(CliError::msg( "version create accepts at most one of --from-id or --from-name", )); } }; let original_active = resolve_active_version_ref(&lix)?; if let Some(source) = &source { crate::db::block_on(lix.switch_version(SwitchVersionOptions { version_id: source.id.clone(), })) .map_err(|error| CliError::msg(error.to_string()))?; } let name = command .name .clone() .or_else(|| command.id.clone()) .ok_or_else(|| CliError::msg("version create requires --name when --id is omitted"))?; let result = crate::db::block_on(lix.create_version(CreateVersionOptions { id: command.id, name, from_commit_id: None, })) .map_err(|error| CliError::msg(error.to_string()))?; if source.is_some() { crate::db::block_on(lix.switch_version(SwitchVersionOptions { version_id: original_active.id.clone(), })) .map_err(|error| CliError::msg(error.to_string()))?; } let parent = source.as_ref().unwrap_or(&original_active); let (created_line, active_line) = create_confirmation_lines(&result, parent, &original_active); println!("{created_line}"); println!("{active_line}"); Ok(CommandOutput::empty()) } fn create_confirmation_lines( result: &CreateVersionResult, parent: &ResolvedVersionRef, active: &ResolvedVersionRef, ) -> (String, String) { ( format!( "Created version {} from {} ({}).", result.version_id, parent.name, parent.id ), format!( "Active version is still {} ({}). Use `lix version switch --id {}` to work on it.", active.name, active.id, result.version_id ), ) } #[cfg(test)] mod tests { use super::create_confirmation_lines; use crate::commands::version::ResolvedVersionRef; use lix_rs_sdk::CreateVersionResult; #[test] fn create_confirmation_uses_active_version_not_parent_version() { let result = CreateVersionResult { version_id: "new-version".to_string(), }; let parent = ResolvedVersionRef { id: "feature-b".to_string(), name: "Feature B".to_string(), }; let active = ResolvedVersionRef { id: "feature-a".to_string(), name: "Feature A".to_string(), }; let (_, active_line) = create_confirmation_lines(&result, &parent, &active); assert!(active_line.contains("Feature A (feature-a)")); assert!(!active_line.contains("Feature B (feature-b)")); } } ================================================ FILE: packages/cli/src/commands/version/merge.rs ================================================ use crate::app::AppContext; use crate::cli::version::MergeVersionCommand; use crate::commands::version::{resolve_version_ref, VersionLookup}; use crate::db::{open_lix_at, resolve_db_path}; use crate::error::CliError; use crate::hints::CommandOutput; use lix_rs_sdk::{MergeVersionOptions, MergeVersionOutcome, SwitchVersionOptions}; pub fn run(context: &AppContext, command: MergeVersionCommand) -> Result { let path = resolve_db_path(context)?; let lix = open_lix_at(&path)?; let source = resolve_version_ref( &lix, match (command.source_id.as_deref(), command.source_name.as_deref()) { (Some(id), None) => VersionLookup::Id(id), (None, Some(name)) => VersionLookup::Name(name), _ => { return Err(CliError::msg( "version merge requires exactly one of --source-id or --source-name", )); } }, )?; let target = resolve_version_ref( &lix, match (command.target_id.as_deref(), command.target_name.as_deref()) { (Some(id), None) => VersionLookup::Id(id), (None, Some(name)) => VersionLookup::Name(name), _ 
=> { return Err(CliError::msg( "version merge requires exactly one of --target-id or --target-name", )); } }, )?; crate::db::block_on(lix.switch_version(SwitchVersionOptions { version_id: target.id.clone(), })) .map_err(|error| CliError::msg(error.to_string()))?; let result = crate::db::block_on(lix.merge_version(MergeVersionOptions { source_version_id: source.id.clone(), })) .map_err(|error| CliError::msg(error.to_string()))?; match result.outcome { MergeVersionOutcome::AlreadyUpToDate => { println!( "{} ({}) already contains {} ({})", target.name, target.id, source.name, source.id ); } MergeVersionOutcome::FastForward => { println!( "Fast-forwarded {} ({}) to {} ({}) at {}", target.name, target.id, source.name, source.id, result.target_head_after_commit_id ); } MergeVersionOutcome::MergeCommitted => { let commit_id = result.created_merge_commit_id.ok_or_else(|| { CliError::msg("merge_version returned MergeCommitted without a merge commit id") })?; println!( "Merged {} ({}) into {} ({}) with commit {}", source.name, source.id, target.name, target.id, commit_id ); } } Ok(CommandOutput::empty()) } ================================================ FILE: packages/cli/src/commands/version/mod.rs ================================================ mod create; mod merge; mod switch; use crate::app::AppContext; use crate::cli::version::{VersionCommand, VersionSubcommand}; use crate::error::CliError; use crate::hints::CommandOutput; use lix_rs_sdk::{ExecuteResult, Lix, Row as LixRow, Value}; #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(super) enum VersionLookup<'a> { Id(&'a str), Name(&'a str), } #[derive(Debug, Clone, PartialEq, Eq)] pub(super) struct ResolvedVersionRef { pub id: String, pub name: String, } pub fn run(context: &AppContext, command: VersionCommand) -> Result { match command.command { VersionSubcommand::Create(command) => create::run(context, command), VersionSubcommand::Merge(command) => merge::run(context, command), VersionSubcommand::Switch(command) => switch::run(context, command), } } pub(super) fn resolve_version_ref( lix: &Lix, lookup: VersionLookup<'_>, ) -> Result { match lookup { VersionLookup::Id(id) => resolve_version_by_id(lix, id), VersionLookup::Name(name) => resolve_version_by_name(lix, name), } } pub(super) fn resolve_active_version_ref(lix: &Lix) -> Result { let active_id = crate::db::block_on(lix.active_version_id()) .map_err(|error| CliError::msg(error.to_string()))?; resolve_version_by_id(lix, &active_id) } fn resolve_version_by_id(lix: &Lix, id: &str) -> Result { let result = crate::db::block_on(lix.execute( "SELECT id, name FROM lix_version WHERE id = $1 LIMIT 1", &[Value::Text(id.to_string())], )) .map_err(|error| CliError::msg(error.to_string()))?; let rows = statement_rows(&result)?; let Some(row) = rows.first() else { return Err(CliError::msg(format!("no version exists with id '{id}'"))); }; Ok(ResolvedVersionRef { id: text_at(row, 0, "lix_version.id")?, name: text_at(row, 1, "lix_version.name")?, }) } fn resolve_version_by_name(lix: &Lix, name: &str) -> Result { let result = crate::db::block_on(lix.execute( "SELECT id, name FROM lix_version WHERE name = $1 ORDER BY id", &[Value::Text(name.to_string())], )) .map_err(|error| CliError::msg(error.to_string()))?; let rows = statement_rows(&result)?; match rows { [] => Err(CliError::msg(format!( "no version exists with name '{name}'" ))), [row] => Ok(ResolvedVersionRef { id: text_at(row, 0, "lix_version.id")?, name: text_at(row, 1, "lix_version.name")?, }), rows => { let matching_ids = rows .iter() .map(|row| 
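// Descriptive comment (added for readability, not in the original source): more than one
// version carries the requested name, so collect every matching id for the error message
// below and let the caller retry with an unambiguous --id.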
text_at(row, 0, "lix_version.id")) .collect::, _>>()? .join(", "); Err(CliError::msg(format!( "version name '{name}' is ambiguous; matching ids: {matching_ids}" ))) } } } fn statement_rows(result: &ExecuteResult) -> Result<&[LixRow], CliError> { Ok(result.rows()) } fn text_at(row: &LixRow, index: usize, field: &str) -> Result { match row.get_index(index) { Some(Value::Text(value)) if !value.is_empty() => Ok(value.clone()), Some(Value::Text(_)) => Err(CliError::msg(format!("{field} is empty"))), Some(Value::Integer(value)) => Ok(value.to_string()), Some(other) => Err(CliError::msg(format!( "expected text-like value for {field}, got {other:?}" ))), None => Err(CliError::msg(format!("missing {field}"))), } } #[cfg(test)] mod tests { use super::{create, merge, resolve_version_ref, switch, VersionLookup}; use crate::app::AppContext; use crate::cli::version::{CreateVersionCommand, MergeVersionCommand, SwitchVersionCommand}; use crate::db::{init_lix_at, open_lix_at}; use lix_rs_sdk::{CreateVersionOptions, ExecuteResult, Value}; use std::path::{Path, PathBuf}; use std::time::{SystemTime, UNIX_EPOCH}; fn temp_lix_path(label: &str) -> PathBuf { let nanos = SystemTime::now() .duration_since(UNIX_EPOCH) .expect("system time should be after unix epoch") .as_nanos(); std::env::temp_dir().join(format!( "lix-cli-version-{label}-{}-{nanos}.lix", std::process::id() )) } fn cleanup_lix_path(path: &Path) { let _ = std::fs::remove_file(path); let _ = std::fs::remove_file(format!("{}-wal", path.display())); let _ = std::fs::remove_file(format!("{}-shm", path.display())); let _ = std::fs::remove_file(format!("{}-journal", path.display())); } fn text_at(result: &ExecuteResult, row: usize, col: usize) -> String { match result.rows().get(row).and_then(|row| row.get_index(col)) { Some(Value::Text(value)) => { serde_json::from_str::(value).unwrap_or_else(|_| value.clone()) } Some(Value::Json(serde_json::Value::String(value))) => value.clone(), Some(Value::Json(value)) => value.to_string(), Some(Value::Integer(value)) => value.to_string(), other => panic!("expected text-like value, got {other:?}"), } } #[test] fn fast_forward_merge_keeps_database_openable_across_fresh_opens() { std::thread::Builder::new() .name("fast_forward_merge_keeps_database_openable_across_fresh_opens".to_string()) .stack_size(32 * 1024 * 1024) .spawn(|| { fast_forward_merge_keeps_database_openable_across_fresh_opens_inner(); }) .expect("test thread should spawn") .join() .expect("test thread should not panic"); } fn fast_forward_merge_keeps_database_openable_across_fresh_opens_inner() { let path = temp_lix_path("fast-forward-openable"); cleanup_lix_path(&path); init_lix_at(&path).expect("lix init should succeed"); let context = AppContext { lix_path: Some(path.clone()), no_hints: true, }; let lix = open_lix_at(&path).expect("initial open should succeed"); crate::db::block_on(lix.execute( "INSERT INTO lix_key_value (key, value) VALUES ('greeting', 'hello')", &[], )) .expect("main insert should succeed"); create::run( &context, CreateVersionCommand { id: Some("feature".to_string()), name: Some("feature".to_string()), from_id: None, from_name: None, hidden: false, }, ) .expect("version create should succeed"); switch::run( &context, SwitchVersionCommand { id: None, name: Some("feature".to_string()), }, ) .expect("version switch should succeed"); let lix = open_lix_at(&path).expect("open on feature should succeed"); crate::db::block_on(lix.execute( "INSERT INTO lix_key_value (key, value) VALUES ('feature_key', 'feature_val')", &[], )) 
.expect("feature insert should succeed"); let lix = open_lix_at(&path).expect("open for id lookup should succeed"); let main_id_result = crate::db::block_on(lix.execute( "SELECT id FROM lix_version WHERE name = 'main' LIMIT 1", &[], )) .expect("main id lookup should succeed"); let main_id = text_at(&main_id_result, 0, 0); merge::run( &context, MergeVersionCommand { source_id: None, source_name: Some("feature".to_string()), target_id: Some(main_id.clone()), target_name: None, }, ) .expect("fast-forward merge should succeed"); let reopened = open_lix_at(&path).expect("database should reopen after fast-forward merge"); let select_result = crate::db::block_on(reopened.execute("SELECT 1", &[])) .expect("reopened query should succeed"); assert_eq!(text_at(&select_result, 0, 0), "1"); switch::run( &context, SwitchVersionCommand { id: Some(main_id), name: None, }, ) .expect("switch back to main should succeed"); let reopened = open_lix_at(&path).expect("main reopen should succeed"); let feature_result = crate::db::block_on(reopened.execute( "SELECT value FROM lix_key_value WHERE key = 'feature_key' LIMIT 1", &[], )) .expect("feature key query should succeed"); assert_eq!(text_at(&feature_result, 0, 0), "feature_val"); cleanup_lix_path(&path); } #[test] fn resolve_version_ref_by_name_rejects_ambiguous_matches() { std::thread::Builder::new() .name("resolve_version_ref_by_name_rejects_ambiguous_matches".to_string()) .stack_size(32 * 1024 * 1024) .spawn(resolve_version_ref_by_name_rejects_ambiguous_matches_inner) .expect("test thread should spawn") .join() .expect("test thread should not panic"); } fn resolve_version_ref_by_name_rejects_ambiguous_matches_inner() { let path = temp_lix_path("ambiguous-version-name"); cleanup_lix_path(&path); init_lix_at(&path).expect("lix init should succeed"); let lix = open_lix_at(&path).expect("open should succeed"); crate::db::block_on(lix.create_version(CreateVersionOptions { id: Some("feature-a".to_string()), name: "feature".to_string(), from_commit_id: None, })) .expect("first version create should succeed"); crate::db::block_on(lix.create_version(CreateVersionOptions { id: Some("feature-b".to_string()), name: "feature".to_string(), from_commit_id: None, })) .expect("second version create should succeed"); let error = resolve_version_ref(&lix, VersionLookup::Name("feature")) .expect_err("ambiguous version name should fail"); assert_eq!( error.to_string(), "version name 'feature' is ambiguous; matching ids: feature-a, feature-b" ); cleanup_lix_path(&path); } #[test] fn resolve_version_ref_by_name_rejects_missing_match() { std::thread::Builder::new() .name("resolve_version_ref_by_name_rejects_missing_match".to_string()) .stack_size(32 * 1024 * 1024) .spawn(resolve_version_ref_by_name_rejects_missing_match_inner) .expect("test thread should spawn") .join() .expect("test thread should not panic"); } fn resolve_version_ref_by_name_rejects_missing_match_inner() { let path = temp_lix_path("missing-version-name"); cleanup_lix_path(&path); init_lix_at(&path).expect("lix init should succeed"); let lix = open_lix_at(&path).expect("open should succeed"); let error = resolve_version_ref(&lix, VersionLookup::Name("missing")) .expect_err("missing version name should fail"); assert_eq!(error.to_string(), "no version exists with name 'missing'"); cleanup_lix_path(&path); } } ================================================ FILE: packages/cli/src/commands/version/switch.rs ================================================ use crate::app::AppContext; use 
crate::cli::version::SwitchVersionCommand;
use crate::commands::version::{resolve_version_ref, VersionLookup};
use crate::db::{open_lix_at, resolve_db_path};
use crate::error::CliError;
use crate::hints::CommandOutput;
use lix_rs_sdk::SwitchVersionOptions;

pub fn run(context: &AppContext, command: SwitchVersionCommand) -> Result<CommandOutput, CliError> {
    let path = resolve_db_path(context)?;
    let lix = open_lix_at(&path)?;
    let resolved = resolve_version_ref(
        &lix,
        match (command.id.as_deref(), command.name.as_deref()) {
            (Some(id), None) => VersionLookup::Id(id),
            (None, Some(name)) => VersionLookup::Name(name),
            _ => {
                return Err(CliError::msg(
                    "version switch requires exactly one of --id or --name",
                ));
            }
        },
    )?;
    crate::db::block_on(lix.switch_version(SwitchVersionOptions {
        version_id: resolved.id.clone(),
    }))
    .map_err(|error| CliError::msg(error.to_string()))?;
    println!(
        "Switched active version to {} ({})",
        resolved.name, resolved.id
    );
    Ok(CommandOutput::empty())
}

================================================
FILE: packages/cli/src/db/mod.rs
================================================
use crate::app::AppContext;
use crate::error::CliError;
use async_trait::async_trait;
use base64::Engine as _;
use lix_rs_sdk::{
    open_lix, KvPair, KvScanRange, Lix, LixBackend, LixBackendTransaction, LixError,
    OpenLixOptions, TransactionBeginMode,
};
use serde::{Deserialize, Serialize};
use std::collections::BTreeMap;
use std::fs;
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex};

pub fn resolve_db_path(context: &AppContext) -> Result<PathBuf, CliError> {
    if let Some(path) = &context.lix_path {
        validate_lix_file_path(path)?;
        if !path.exists() {
            return Err(CliError::msg(format!(
                "lix file does not exist: {}",
                path.display()
            )));
        }
        return Ok(path.clone());
    }
    let cwd =
        std::env::current_dir().map_err(|source| CliError::io("failed to read cwd", source))?;
    let mut candidates = find_lix_files(&cwd)?;
    if candidates.is_empty() {
        return Err(CliError::msg(
            "no .lix files found in current directory; pass --path ",
        ));
    }
    if candidates.len() > 1 {
        candidates.sort();
        let paths = candidates
            .iter()
            .map(|path| path.display().to_string())
            .collect::<Vec<_>>()
            .join(", ");
        return Err(CliError::msg(format!(
            "multiple .lix files found ({paths}); pass --path "
        )));
    }
    Ok(candidates.remove(0))
}

pub fn open_lix_at(path: &Path) -> Result<Lix, CliError> {
    let backend = FileBackend::from_path(path)?;
    block_on(open_lix(OpenLixOptions {
        backend: Some(Box::new(backend)),
    }))
    .map_err(|err| CliError::msg(format!("failed to open lix at {}: {}", path.display(), err)))
}

pub fn init_lix_at(path: &Path) -> Result<bool, CliError> {
    validate_lix_file_path(path)?;
    if let Some(parent) = path.parent() {
        if !parent.as_os_str().is_empty() {
            fs::create_dir_all(parent).map_err(|source| {
                CliError::io("failed to create parent directory for lix file", source)
            })?;
        }
    }
    let initialized = !path.exists();
    let _ = open_lix_at(path)?;
    Ok(initialized)
}

pub fn destroy_lix_at(path: &Path) -> Result<(), CliError> {
    match fs::remove_file(path) {
        Ok(()) => Ok(()),
        Err(error) if error.kind() == std::io::ErrorKind::NotFound => Ok(()),
        Err(error) => Err(CliError::io("failed to destroy lix file", error)),
    }
    .and_then(|_| remove_sidecar(path, "wal"))
    .and_then(|_| remove_sidecar(path, "shm"))
    .and_then(|_| remove_sidecar(path, "journal"))
}

/// Prepares a `.lix` output target for initialization.
///
/// The CLI delegates storage-backed cleanup to the backend boundary so command
/// code does not need to know how a backend represents its physical artifacts.
pub fn prepare_lix_output_path(path: &Path, force: bool) -> Result<(), CliError> {
    validate_lix_file_path(path)?;
    if let Some(parent) = path.parent() {
        if !parent.as_os_str().is_empty() {
            fs::create_dir_all(parent)
                .map_err(|source| CliError::io("failed to create output directory", source))?;
        }
    }
    if path.exists() && path.is_dir() {
        return Err(CliError::msg(format!(
            "output path points to a directory, expected a file: {}",
            path.display()
        )));
    }
    if force {
        destroy_lix_at(path)?;
        return Ok(());
    }
    if path.exists() {
        return Err(CliError::msg(format!(
            "output path already exists: {}",
            path.display()
        )));
    }
    Ok(())
}

fn find_lix_files(cwd: &Path) -> Result<Vec<PathBuf>, CliError> {
    let mut files = Vec::new();
    let entries =
        fs::read_dir(cwd).map_err(|source| CliError::io("failed to read cwd entries", source))?;
    for entry in entries {
        let entry =
            entry.map_err(|source| CliError::io("failed to read directory entry", source))?;
        let path = entry.path();
        if !path.is_file() {
            continue;
        }
        if path.extension().and_then(|ext| ext.to_str()) == Some("lix") {
            files.push(path);
        }
    }
    files.sort();
    Ok(files)
}

fn validate_lix_file_path(path: &Path) -> Result<(), CliError> {
    if path.extension().and_then(|ext| ext.to_str()) == Some("lix") {
        return Ok(());
    }
    Err(CliError::msg(format!(
        "expected a .lix file path: {}",
        path.display()
    )))
}

pub fn block_on<F: std::future::Future>(future: F) -> F::Output {
    tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .expect("tokio runtime should initialize")
        .block_on(future)
}

fn remove_sidecar(path: &Path, suffix: &str) -> Result<(), CliError> {
    let sidecar = PathBuf::from(format!("{}-{suffix}", path.display()));
    match fs::remove_file(sidecar) {
        Ok(()) => Ok(()),
        Err(error) if error.kind() == std::io::ErrorKind::NotFound => Ok(()),
        Err(error) => Err(CliError::io("failed to destroy lix sidecar file", error)),
    }
}

type KvMap = BTreeMap<(String, Vec<u8>), Vec<u8>>;

#[derive(Clone)]
struct FileBackend {
    path: Arc<PathBuf>,
    kv: Arc<Mutex<KvMap>>,
}

impl FileBackend {
    fn from_path(path: &Path) -> Result<Self, CliError> {
        let kv = read_kv_file(path)?;
        Ok(Self {
            path: Arc::new(path.to_path_buf()),
            kv: Arc::new(Mutex::new(kv)),
        })
    }
}

#[async_trait]
impl LixBackend for FileBackend {
    async fn begin_transaction(
        &self,
        mode: TransactionBeginMode,
    ) -> Result<Box<dyn LixBackendTransaction>, LixError> {
        let snapshot = self
            .kv
            .lock()
            .map_err(|_| lock_error("cli file backend kv"))?
            .clone();
        Ok(Box::new(FileBackendTransaction {
            mode,
            path: Arc::clone(&self.path),
            parent: Arc::clone(&self.kv),
            kv: snapshot,
        }))
    }

    async fn kv_get(&self, namespace: &str, key: &[u8]) -> Result<Option<Vec<u8>>, LixError> {
        Ok(self
            .kv
            .lock()
            .map_err(|_| lock_error("cli file backend kv"))?
            .get(&(namespace.to_string(), key.to_vec()))
            .cloned())
    }

    async fn kv_scan(
        &self,
        namespace: &str,
        range: KvScanRange,
        limit: Option<usize>,
    ) -> Result<Vec<KvPair>, LixError> {
        let guard = self
            .kv
            .lock()
            .map_err(|_| lock_error("cli file backend kv"))?;
        Ok(scan_map(&guard, namespace, &range, limit))
    }
}

struct FileBackendTransaction {
    mode: TransactionBeginMode,
    path: Arc<PathBuf>,
    parent: Arc<Mutex<KvMap>>,
    kv: KvMap,
}

#[async_trait]
impl LixBackendTransaction for FileBackendTransaction {
    fn mode(&self) -> TransactionBeginMode {
        self.mode
    }

    async fn kv_get(&mut self, namespace: &str, key: &[u8]) -> Result<Option<Vec<u8>>, LixError> {
        Ok(self.kv.get(&(namespace.to_string(), key.to_vec())).cloned())
    }

    async fn kv_scan(
        &mut self,
        namespace: &str,
        range: KvScanRange,
        limit: Option<usize>,
    ) -> Result<Vec<KvPair>, LixError> {
        Ok(scan_map(&self.kv, namespace, &range, limit))
    }

    async fn kv_put(&mut self, namespace: &str, key: &[u8], value: &[u8]) -> Result<(), LixError> {
        self.kv
            .insert((namespace.to_string(), key.to_vec()), value.to_vec());
        Ok(())
    }

    async fn kv_delete(&mut self, namespace: &str, key: &[u8]) -> Result<(), LixError> {
        self.kv.remove(&(namespace.to_string(), key.to_vec()));
        Ok(())
    }

    async fn commit(self: Box<Self>) -> Result<(), LixError> {
        write_kv_file(&self.path, &self.kv)?;
        *self
            .parent
            .lock()
            .map_err(|_| lock_error("cli file backend kv"))? = self.kv;
        Ok(())
    }

    async fn rollback(self: Box<Self>) -> Result<(), LixError> {
        Ok(())
    }
}

#[derive(Debug, Serialize, Deserialize)]
struct FileSnapshot {
    entries: Vec<FileEntry>,
}

#[derive(Debug, Serialize, Deserialize)]
struct FileEntry {
    namespace: String,
    key: String,
    value: String,
}

fn read_kv_file(path: &Path) -> Result<KvMap, CliError> {
    if !path.exists() {
        return Ok(KvMap::new());
    }
    let bytes =
        fs::read(path).map_err(|source| CliError::io("failed to read lix file", source))?;
    if bytes.is_empty() {
        return Ok(KvMap::new());
    }
    let snapshot: FileSnapshot = serde_json::from_slice(&bytes)
        .map_err(|error| CliError::msg(format!("failed to decode lix file: {error}")))?;
    let mut kv = KvMap::new();
    for entry in snapshot.entries {
        kv.insert(
            (entry.namespace, decode_bytes(&entry.key)?),
            decode_bytes(&entry.value)?,
        );
    }
    Ok(kv)
}

fn write_kv_file(path: &Path, kv: &KvMap) -> Result<(), LixError> {
    let snapshot = FileSnapshot {
        entries: kv
            .iter()
            .map(|((namespace, key), value)| FileEntry {
                namespace: namespace.clone(),
                key: encode_bytes(key),
                value: encode_bytes(value),
            })
            .collect(),
    };
    let bytes = serde_json::to_vec(&snapshot).map_err(|error| {
        LixError::new(
            "LIX_ERROR_UNKNOWN",
            format!("failed to encode lix file snapshot: {error}"),
        )
    })?;
    fs::write(path, bytes).map_err(|error| {
        LixError::new(
            "LIX_ERROR_UNKNOWN",
            format!("failed to write lix file '{}': {error}", path.display()),
        )
    })
}

fn scan_map(kv: &KvMap, namespace: &str, range: &KvScanRange, limit: Option<usize>) -> Vec<KvPair> {
    let mut pairs = kv
        .iter()
        .filter(|((candidate_namespace, key), _)| {
            candidate_namespace == namespace && key_matches_range(key, range)
        })
        .map(|((_, key), value)| KvPair::new(key.clone(), value.clone()))
        .collect::<Vec<_>>();
    pairs.sort_by(|left, right| left.key.cmp(&right.key));
    if let Some(limit) = limit {
        pairs.truncate(limit);
    }
    pairs
}

fn key_matches_range(key: &[u8], range: &KvScanRange) -> bool {
    match range {
        KvScanRange::Prefix(prefix) => key.starts_with(prefix),
        KvScanRange::Range { start, end } => start.as_slice() <= key && key < end.as_slice(),
    }
}

fn encode_bytes(bytes: &[u8]) -> String {
    base64::engine::general_purpose::STANDARD.encode(bytes)
}

fn decode_bytes(value: &str) -> Result<Vec<u8>, CliError> {
    base64::engine::general_purpose::STANDARD
        .decode(value)
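// Descriptive comment (added for readability, not in the original source): the `.lix`
// file written by this CLI backend is a JSON snapshot of the whole key-value map. Each
// entry stores the namespace as plain text and the key and value as base64 (see
// encode_bytes/decode_bytes), and commit() rewrites the entire snapshot before publishing
// the transaction's map to the shared parent. Illustrative shape only, with made-up bytes:
// {"entries":[{"namespace":"state","key":"aGVsbG8=","value":"d29ybGQ="}]}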
.map_err(|error| CliError::msg(format!("failed to decode lix file bytes: {error}"))) } fn lock_error(name: &str) -> LixError { LixError::new("LIX_ERROR_UNKNOWN", format!("{name} mutex was poisoned")) } #[cfg(test)] mod tests { use super::{init_lix_at, prepare_lix_output_path, resolve_db_path}; use crate::app::AppContext; use std::fs; use std::path::PathBuf; use std::time::{SystemTime, UNIX_EPOCH}; #[test] fn resolve_db_path_rejects_explicit_non_lix_path() { let temp_dir = unique_temp_dir(); fs::create_dir_all(&temp_dir).expect("temp dir should be created"); let path = temp_dir.join("project.sqlite"); fs::write(&path, b"not-lix").expect("seed file should be written"); let context = AppContext { lix_path: Some(path.clone()), no_hints: false, }; let error = resolve_db_path(&context).expect_err("non-.lix path should be rejected"); assert_eq!( error.to_string(), format!("expected a .lix file path: {}", path.display()) ); fs::remove_file(&path).expect("seed file should be removable"); fs::remove_dir_all(&temp_dir).expect("temp dir should be removable"); } #[test] fn init_lix_at_rejects_non_lix_path() { let temp_dir = unique_temp_dir(); let path = temp_dir.join("project.sqlite"); let error = init_lix_at(&path).expect_err("non-.lix init path should be rejected"); assert_eq!( error.to_string(), format!("expected a .lix file path: {}", path.display()) ); assert!( !temp_dir.exists(), "validator should reject before creating parent directories" ); } #[test] fn prepare_output_path_rejects_non_lix_path() { let temp_dir = unique_temp_dir(); let path = temp_dir.join("output.db"); let error = prepare_lix_output_path(&path, false) .expect_err("non-.lix output path should be rejected"); assert_eq!( error.to_string(), format!("expected a .lix file path: {}", path.display()) ); assert!( !temp_dir.exists(), "validator should reject before creating parent directories" ); } fn unique_temp_dir() -> PathBuf { let nanos = SystemTime::now() .duration_since(UNIX_EPOCH) .expect("system time should be after unix epoch") .as_nanos(); std::env::temp_dir().join(format!("lix-cli-db-test-{}-{nanos}", std::process::id())) } } ================================================ FILE: packages/cli/src/error.rs ================================================ use lix_rs_sdk::LixError; use std::fmt::{Display, Formatter}; #[derive(Debug)] pub enum CliError { InvalidArgs(&'static str), Message(String), Io { context: &'static str, source: std::io::Error, }, Lix { context: &'static str, source: LixError, }, } impl CliError { pub fn io(context: &'static str, source: std::io::Error) -> Self { Self::Io { context, source } } pub fn msg(message: impl Into) -> Self { Self::Message(message.into()) } pub fn from_lix(context: &'static str, source: LixError) -> Self { Self::Lix { context, source } } pub fn hint(&self) -> Option<&str> { match self { Self::Lix { source, .. 
} => source.hint.as_deref(), _ => None, } } } impl Display for CliError { fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result { match self { Self::InvalidArgs(message) => write!(f, "invalid arguments: {message}"), Self::Message(message) => write!(f, "{message}"), Self::Io { context, source } => write!(f, "{context}: {source}"), Self::Lix { context, source } => { write!(f, "{context}: {}", source.description) } } } } impl std::error::Error for CliError {} #[cfg(test)] mod tests { use super::*; #[test] fn hint_returns_none_for_non_lix_variants() { assert_eq!(CliError::InvalidArgs("bad").hint(), None); assert_eq!(CliError::msg("oops").hint(), None); let io_err = CliError::io( "reading", std::io::Error::new(std::io::ErrorKind::Other, "boom"), ); assert_eq!(io_err.hint(), None); } #[test] fn hint_returns_lix_hint_when_attached() { let lix_err = LixError::new("LIX_ERROR_FOO", "desc").with_hint("try lix_json(...)"); let cli_err = CliError::from_lix("sql execution failed", lix_err); assert_eq!(cli_err.hint(), Some("try lix_json(...)")); } #[test] fn hint_returns_none_when_lix_error_has_no_hint() { let lix_err = LixError::new("LIX_ERROR_FOO", "desc"); let cli_err = CliError::from_lix("sql execution failed", lix_err); assert_eq!(cli_err.hint(), None); } #[test] fn display_format_omits_hint_line() { // hints are rendered separately via `render_hints`, not via Display let lix_err = LixError::new("LIX_ERROR_FOO", "boom").with_hint("fix it"); let cli_err = CliError::from_lix("sql execution failed", lix_err); assert_eq!(cli_err.to_string(), "sql execution failed: boom"); } } ================================================ FILE: packages/cli/src/hints.rs ================================================ use crate::error::CliError; use lix_rs_sdk::{ExecuteResult, Lix, Value}; #[derive(Debug)] pub struct CommandOutput { pub hints: Vec, } impl CommandOutput { pub fn empty() -> Self { Self { hints: Vec::new() } } pub fn with_hints(hints: Vec) -> Self { Self { hints } } } // ── Hint generators (all hint text and conditions live here) ───────── pub fn hint_after_init() -> Vec { vec![ "Try inserting data with: lix sql execute \"INSERT INTO lix_key_value (key, value) VALUES ('hello', '\"world\"')\"".into(), "Store files with: lix sql execute \"INSERT INTO lix_file (path, data) VALUES ('/readme.txt', lix_text_encode('hello'))\"".into(), ] } pub fn hint_blob_in_result(result: &ExecuteResult) -> Vec { let has_blob = result .rows() .iter() .any(|row| row.values().iter().any(|v| matches!(v, Value::Blob(_)))); if has_blob { vec!["Tip: use lix_text_decode(data) to view text content".into()] } else { Vec::new() } } /// Extract an engine-produced hint from a `CliError`, if any. /// /// Returns an empty `Vec` for error variants that do not carry a `LixError` /// (e.g. `InvalidArgs`, `Message`, `Io`) or when the underlying `LixError` /// has no hint attached. pub fn hint_from_error(err: &CliError) -> Vec { err.hint().map(|h| vec![h.to_string()]).unwrap_or_default() } // ── Infrastructure ─────────────────────────────────────────────────── /// Query lix_key_value for 'lix_cli_hints'. Returns true unless value is explicitly "false". 
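///
/// Summary of the behaviour below (doc lines added for readability, not in the original
/// source): if the query fails or the key is absent, hints stay enabled; only an existing
/// row whose value reads back as the string "false" disables them.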
pub fn are_hints_enabled(lix: &Lix) -> bool { let result = crate::db::block_on(lix.execute( "SELECT value FROM lix_key_value WHERE key = 'lix_cli_hints'", &[], )); match result { Ok(result) => { if let Some(row) = result.rows().first() { if let Ok(value) = row.get::("value") { return value != "false"; } } true // key absent = hints ON } Err(_) => true, // on error, default to hints ON } } /// Print hints to stderr as "hint: {message}". pub fn render_hints(hints: &[String]) { for hint in hints { eprintln!("hint: {hint}"); } } #[cfg(test)] mod tests { use super::*; use lix_rs_sdk::LixError; #[test] fn hint_from_error_returns_empty_for_non_lix_variants() { assert!(hint_from_error(&CliError::msg("oops")).is_empty()); assert!(hint_from_error(&CliError::InvalidArgs("bad")).is_empty()); } #[test] fn hint_from_error_returns_empty_when_lix_error_has_no_hint() { let cli_err = CliError::from_lix("ctx", LixError::new("CODE", "desc")); assert!(hint_from_error(&cli_err).is_empty()); } #[test] fn hint_from_error_returns_lix_hint() { let cli_err = CliError::from_lix( "sql execution failed", LixError::new("CODE", "desc").with_hint("use lix_json(...)"), ); assert_eq!( hint_from_error(&cli_err), vec!["use lix_json(...)".to_string()] ); } } ================================================ FILE: packages/cli/src/lib.rs ================================================ pub mod app; pub mod cli; pub mod commands; pub mod db; pub mod error; pub mod hints; pub mod output; pub fn run() -> Result<(), error::CliError> { app::run() } ================================================ FILE: packages/cli/src/main.rs ================================================ fn main() { if lix_cli::run().is_err() { std::process::exit(1); } } ================================================ FILE: packages/cli/src/output/mod.rs ================================================ use base64::Engine as _; use comfy_table::{presets::UTF8_BORDERS_ONLY, Cell, ContentArrangement, Row, Table}; use lix_rs_sdk::{ExecuteResult, Value}; use serde_json::Value as JsonValue; pub fn print_execute_result_table(result: &ExecuteResult) { if result.columns().is_empty() && result.rows().is_empty() { println!("OK"); if result.rows_affected() > 0 { println!("({} rows affected)", result.rows_affected()); } return; } let mut table = Table::new(); table .load_preset(UTF8_BORDERS_ONLY) .set_content_arrangement(ContentArrangement::Dynamic); if !result.columns().is_empty() { let header = Row::from(result.columns().iter().map(Cell::new).collect::>()); table.set_header(header); } for row in result.rows() { let rendered = Row::from( row.values() .iter() .map(|value| Cell::new(value_to_text(value))) .collect::>(), ); table.add_row(rendered); } println!("{table}"); println!("({} rows)", result.rows().len()); } pub fn print_execute_result_json(result: &ExecuteResult) { let payload = execute_result_to_json(result); println!( "{}", serde_json::to_string(&payload).unwrap_or_else(|_| "{}".to_string()) ); } fn execute_result_to_json(result: &ExecuteResult) -> JsonValue { serde_json::json!({ "columns": result.columns(), "rows": result.rows().iter().map(|row| row_to_json(result.columns(), row)).collect::>(), "rowsAffected": result.rows_affected(), "notices": result.notices(), }) } fn row_to_json(columns: &[String], row: &lix_rs_sdk::Row) -> JsonValue { let mut object = serde_json::Map::new(); for (index, column) in columns.iter().enumerate() { let value = row .get_index(index) .map(value_to_json) .unwrap_or(JsonValue::Null); object.insert(column.clone(), value); } 
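// Descriptive comment (added for readability, not in the original source): every column
// name becomes a key on the row object; a cell missing from the row is rendered as JSON
// null so each row stays aligned with the `columns` list.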
JsonValue::Object(object) } fn value_to_text(value: &Value) -> String { match value { Value::Null => "null".to_string(), Value::Boolean(v) => v.to_string(), Value::Integer(v) => v.to_string(), Value::Real(v) => v.to_string(), Value::Text(v) => v.clone(), Value::Json(v) => v.to_string(), Value::Blob(bytes) => bytes_to_hex(bytes), } } fn value_to_json(value: &Value) -> JsonValue { match value { Value::Null => JsonValue::Null, Value::Boolean(v) => JsonValue::Bool(*v), Value::Integer(v) => serde_json::json!(v), Value::Real(v) => serde_json::Number::from_f64(*v) .map(JsonValue::Number) .unwrap_or(JsonValue::Null), Value::Text(v) => JsonValue::String(v.clone()), Value::Json(v) => v.clone(), Value::Blob(bytes) => serde_json::json!({ "$blob": base64::engine::general_purpose::STANDARD.encode(bytes), }), } } fn bytes_to_hex(bytes: &[u8]) -> String { let mut out = String::with_capacity(bytes.len() * 2 + 2); out.push_str("0x"); for byte in bytes { out.push(hex_digit(byte >> 4)); out.push(hex_digit(byte & 0x0f)); } out } fn hex_digit(value: u8) -> char { match value { 0..=9 => (b'0' + value) as char, 10..=15 => (b'a' + (value - 10)) as char, _ => '0', } } #[cfg(test)] mod tests { use super::*; #[test] fn value_to_json_uses_blob_tagged_shape() { let value = Value::Blob(vec![0x01, 0x02, 0x03]); let json = value_to_json(&value); assert_eq!( json, serde_json::json!({ "$blob": "AQID" }) ); } #[test] fn value_to_json_uses_native_scalars() { assert_eq!(value_to_json(&Value::Null), JsonValue::Null); assert_eq!(value_to_json(&Value::Boolean(true)), JsonValue::Bool(true)); assert_eq!(value_to_json(&Value::Integer(7)), serde_json::json!(7)); assert_eq!(value_to_json(&Value::Real(2.5)), serde_json::json!(2.5)); assert_eq!( value_to_json(&Value::Text("hello".to_string())), JsonValue::String("hello".to_string()) ); assert_eq!( value_to_json(&Value::Json(serde_json::json!({"ok": true}))), serde_json::json!({"ok": true}) ); } #[test] fn execute_result_to_json_preserves_envelope_and_order() { let result = ExecuteResult::from_rows( vec!["n".to_string(), "payload".to_string()], vec![ vec![Value::Integer(1), Value::Text("a".to_string())], vec![Value::Integer(2), Value::Blob(vec![0x01, 0x02])], ], ); assert_eq!( execute_result_to_json(&result), serde_json::json!({ "columns": ["n", "payload"], "rows": [ {"n": 1, "payload": "a"}, {"n": 2, "payload": {"$blob": "AQI="}}, ], "rowsAffected": 0, "notices": [], }) ); } } ================================================ FILE: packages/engine/.gitignore ================================================ benches/results/ # local rust build output when invoked from this package target/ # criterion benchmark output criterion/ # local sqlite artifacts from benchmark runs *.sqlite *.sqlite-journal *.sqlite-wal *.sqlite-shm ================================================ FILE: packages/engine/AGENTS.md ================================================ ## Lix Engine - testing with sqlite simulation is enough for development. 
before committing, test all the simulations

================================================
FILE: packages/engine/Cargo.toml
================================================
[package]
name = "lix_engine"
version = "0.1.0"
edition = "2021"

[features]
storage-benches = []

[[bench]]
name = "storage"
path = "benches/storage/main.rs"
harness = false
required-features = ["storage-benches"]

[[bench]]
name = "transaction"
path = "benches/transaction/main.rs"
harness = false
required-features = ["storage-benches"]

[[bench]]
name = "physical_layout"
path = "benches/physical_layout/main.rs"
harness = false
required-features = ["storage-benches"]

[[bench]]
name = "json_pointer_crud"
path = "benches/json_pointer_crud/main.rs"
harness = false
required-features = ["storage-benches"]

[[bench]]
name = "optimization9_sql2"
path = "benches/optimization9_sql2/main.rs"
harness = false
required-features = ["storage-benches"]

[[bench]]
name = "json_pointer_physical"
path = "benches/json_pointer_physical/main.rs"
harness = false
required-features = ["storage-benches"]

[dependencies]
async-trait = "0.1"
cel = { version = "0.12.0", features = ["json"] }
chrono = { version = "0.4", default-features = false, features = ["clock", "std", "wasmbind"] }
datafusion = { version = "53.0.0", default-features = false, features = [
    "sql",
    "nested_expressions",
    "datetime_expressions",
    "regex_expressions",
    "string_expressions",
    "unicode_expressions",
] }
flatbuffers = "=25.12.19"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
jsonschema = { version = "0.17", default-features = false, features = ["draft202012"] }
globset = "0.4"
uuid = { version = "1", features = ["v7", "std", "js"] }
unicode-normalization = "0.1"
precis-profiles = "0.1.13"
futures-util = { version = "0.3", default-features = false, features = ["std"] }
tokio = { version = "1", features = ["rt"] }
blake3 = "1"
fastcdc = "3"
xxhash-rust = { version = "0.8", features = ["xxh3"] }
base64 = "0.22"

[dev-dependencies]
criterion = { package = "codspeed-criterion-compat", version = "*" }
iref = "4.0.0"
paste = "1"
rocksdb = { version = "0.22", default-features = false }
rusqlite = { version = "0.32", features = ["bundled"] }
tempfile = "3"
tokio = { version = "1", features = ["rt", "macros", "sync"] }

[target.'cfg(not(target_arch = "wasm32"))'.dependencies]
zstd = "0.13"

[target.'cfg(target_arch = "wasm32")'.dependencies]
ruzstd = { version = "0.8", default-features = false, features = ["std"] }

================================================
FILE: packages/engine/benches/fixtures/pnpm-lock.fixture.json
================================================
{"lockfileVersion":"9.0","settings":{"autoInstallPeers":true,"excludeLinksFromLockfile":false},"importers":{".":{"devDependencies":{"@changesets/cli":{"specifier":"^2.29.7","version":"2.29.7(@types/node@24.10.2)"},"@vitest/coverage-v8":{"specifier":"^3.1.1","version":"3.2.4(@vitest/browser@3.2.4)(vitest@3.2.4)"},"nx":{"specifier":"^21.0.0","version":"21.4.1"},"nx-cloud":{"specifier":"^19.1.0","version":"19.1.0"},"vitest":{"specifier":"^3.1.1","version":"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}}},"packages/js-kysely":{"dependencies":{"json-schema-to-ts":{"specifier":"^3.1.1","version":"3.1.1"},"kysely":{"specifier":"^0.28.7","version":"0.28.7"}},"devDependencies":{"@lix-js/sdk":{"specifier":"workspace:*","version":"link:../js-sdk"},"typescript":{"specifier":"^5.5.4","version":"5.9.3"},"vitest":{"specifier":"^4.0.18","version":"4.0.18(@opentelemetry/api@1.9.0)(@types/node@24.10.2)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}}},"packages/js-sdk":{"devDependencies":{"better-sqlite3":{"specifier":"^12.9.0","version":"12.9.0"},"typescript":{"specifier":"^5.5.4","version":"5.9.3"},"vitest":{"specifier":"^4.0.18","version":"4.0.18(@opentelemetry/api@1.9.0)(@types/node@24.10.2)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}}},"packages/react-utils":{"devDependencies":{"@lix-js/kysely":{"specifier":"workspace:*","version":"link:../js-kysely"},"@lix-js/sdk":{"specifier":"workspace:*","version":"link:../js-sdk"},"@testing-library/react":{"specifier":"^16.3.0","version":"16.3.0(@testing-library/dom@10.4.1)(@types/react-dom@19.2.3(@types/react@19.2.7))(@types/react@19.2.7)(react-dom@19.2.0(react@19.2.0))(react@19.2.0)"},"@types/react":{"specifier":"^19.1.8","version":"19.2.7"},"@vitest/coverage-v8":{"specifier":"^3.2.4","version":"3.2.4(@vitest/browser@3.2.4)(vitest@3.2.4)"},"https-proxy-agent":{"specifier":"7.0.2","version":"7.0.2"},"jsdom":{"specifier":"^26.1.0","version":"26.1.0"},"oxlint":{"specifier":"^1.14.0","version":"1.26.0"},"prettier":{"specifier":"^3.3.3","version":"3.6.2"},"react":{"specifier":"19.2.0","version":"19.2.0"},"react-dom":{"specifier":"19.2.0","version":"19.2.0(react@19.2.0)"},"typescript":{"specifier":"^5.5.4","version":"5.8.3"},"vitest":{"specifier":"^3.2.4","version":"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@26.1.0)(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}}},"packages/website":{"dependencies":{"@cloudflare/vite-plugin":{"specifier":"^1.36.0","version":"1.36.0(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(workerd@1.20260504.1)(wrangler@4.88.0)"},"@lix-js/plugin-json":{"specifier":"1.0.1","version":"1.0.1(tslib@2.8.1)"},"@lix-js/sdk":{"specifier":"workspace:*","version":"link:../js-sdk"},"@opral/markdown-wc":{"specifier":"0.9.0","version":"0.9.0"},"@tailwindcss/vite":{"specifier":"^4.2.4","version":"4.2.4(vite@8.0.10(@types/node@22.
15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))"},"@tanstack/react-router":{"specifier":"^1.169.2","version":"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)"},"@tanstack/react-start":{"specifier":"^1.167.64","version":"1.167.64(react-dom@19.2.0(react@19.2.0))(react@19.2.0)(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))"},"@tanstack/router-plugin":{"specifier":"^1.167.34","version":"1.167.34(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))"},"lucide-react":{"specifier":"^0.544.0","version":"0.544.0(react@19.2.0)"},"posthog-js":{"specifier":"^1.321.2","version":"1.321.2"},"react":{"specifier":"^19.2.0","version":"19.2.0"},"react-dom":{"specifier":"^19.2.0","version":"19.2.0(react@19.2.0)"},"shiki":{"specifier":"^3.2.2","version":"3.15.0"},"tailwindcss":{"specifier":"^4.2.4","version":"4.2.4"}},"devDependencies":{"@testing-library/dom":{"specifier":"^10.4.0","version":"10.4.1"},"@testing-library/react":{"specifier":"^16.2.0","version":"16.3.0(@testing-library/dom@10.4.1)(@types/react-dom@19.2.3(@types/react@19.2.7))(@types/react@19.2.7)(react-dom@19.2.0(react@19.2.0))(react@19.2.0)"},"@types/node":{"specifier":"^22.10.2","version":"22.15.33"},"@types/react":{"specifier":"^19.2.0","version":"19.2.7"},"@types/react-dom":{"specifier":"^19.2.0","version":"19.2.3(@types/react@19.2.7)"},"@vitejs/plugin-react":{"specifier":"^6.0.1","version":"6.0.1(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))"},"@vitest/browser":{"specifier":"^4.1.5","version":"4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@4.1.5)"},"@vitest/coverage-v8":{"specifier":"^4.1.5","version":"4.1.5(@vitest/browser@4.1.5)(vitest@4.1.5)"},"jsdom":{"specifier":"^27.0.0","version":"27.3.0(postcss@8.5.14)"},"prettier":{"specifier":"^3.6.0","version":"3.6.2"},"typescript":{"specifier":"^5.7.2","version":"5.8.3"},"vite":{"specifier":"^8.0.10","version":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"},"vite-plugin-static-copy":{"specifier":"^4.1.0","version":"4.1.0(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))"},"vitest":{"specifier":"^4.1.5","version":"4.1.5(@opentelemetry/api@1.9.0)(@types/node@22.15.33)(@vitest/coverage-v8@4.1.5)(happy-dom@18.0.1)(jsdom@27.3.0(postcss@8.5.14))(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))"},"web-vitals":{"specifier":"^5.1.0","version":"5.1.0"},"wrangler":{"specifier":"^4.88.0","version":"4.88.0"}}}},"packages":{"@acemir/cssom@0.9.28":{"resolution":{"integrity":"sha512-LuS6IVEivI75vKN8S04qRD+YySP0RmU/cV8UNukhQZvprxF+76Z43TNo/a08eCodaGhT1Us8etqS1ZRY9/Or0A=="}},"@ampproject/remapping@2.3.0":{"resolution":{"integrity":"sha512-30iZtAPgz+LTIYoeivqYo853f02jBYSd5uGnGpkFV0M3xOt9aN73erkgYAmZU43x4VfqcnLxW9Kpg3R5LC4YYw=="},"engines":{"node":">=6.0.0"}},"@antfu/install-pkg@1.1.0":{"resolut
ion":{"integrity":"sha512-MGQsmw10ZyI+EJo45CdSER4zEb+p31LpDAFp2Z3gkSd1yqVZGi0Ebx++YTEMonJy4oChEMLsxZ64j8FH6sSqtQ=="}},"@antfu/utils@9.3.0":{"resolution":{"integrity":"sha512-9hFT4RauhcUzqOE4f1+frMKLZrgNog5b06I7VmZQV1BkvwvqrbC8EBZf3L1eEL2AKb6rNKjER0sEvJiSP1FXEA=="}},"@asamuzakjp/css-color@3.1.4":{"resolution":{"integrity":"sha512-SeuBV4rnjpFNjI8HSgKUwteuFdkHwkboq31HWzznuqgySQir+jSTczoWVVL4jvOjKjuH80fMDG0Fvg1Sb+OJsA=="}},"@asamuzakjp/css-color@4.1.0":{"resolution":{"integrity":"sha512-9xiBAtLn4aNsa4mDnpovJvBn72tNEIACyvlqaNJ+ADemR+yeMJWnBudOi2qGDviJa7SwcDOU/TRh5dnET7qk0w=="}},"@asamuzakjp/dom-selector@6.7.6":{"resolution":{"integrity":"sha512-hBaJER6A9MpdG3WgdlOolHmbOYvSk46y7IQN/1+iqiCuUu6iWdQrs9DGKF8ocqsEqWujWf/V7b7vaDgiUmIvUg=="}},"@asamuzakjp/nwsapi@2.3.9":{"resolution":{"integrity":"sha512-n8GuYSrI9bF7FFZ/SjhwevlHc8xaVlb/7HmHelnc/PZXBD2ZR49NnN9sMMuDdEGPeeRQ5d0hqlSlEpgCX3Wl0Q=="}},"@babel/code-frame@7.27.1":{"resolution":{"integrity":"sha512-cjQ7ZlQ0Mv3b47hABuTevyTuYN4i+loJKGeV9flcCgIK37cCXRh+L1bd3iBHlynerhQ7BhCkn2BPbQUL+rGqFg=="},"engines":{"node":">=6.9.0"}},"@babel/compat-data@7.28.0":{"resolution":{"integrity":"sha512-60X7qkglvrap8mn1lh2ebxXdZYtUcpd7gsmy9kLaBJ4i/WdY8PqTSdxyA8qraikqKQK5C1KRBKXqznrVapyNaw=="},"engines":{"node":">=6.9.0"}},"@babel/core@7.28.5":{"resolution":{"integrity":"sha512-e7jT4DxYvIDLk1ZHmU/m/mB19rex9sv0c2ftBtjSBv+kVM/902eh0fINUzD7UwLLNR+jU585GxUJ8/EBfAM5fw=="},"engines":{"node":">=6.9.0"}},"@babel/generator@7.28.5":{"resolution":{"integrity":"sha512-3EwLFhZ38J4VyIP6WNtt2kUdW9dokXA9Cr4IVIFHuCpZ3H8/YFOl5JjZHisrn1fATPBmKKqXzDFvh9fUwHz6CQ=="},"engines":{"node":">=6.9.0"}},"@babel/helper-compilation-targets@7.27.2":{"resolution":{"integrity":"sha512-2+1thGUUWWjLTYTHZWK1n8Yga0ijBz1XAhUXcKy81rd5g6yh7hGqMp45v7cadSbEHc9G3OTv45SyneRN3ps4DQ=="},"engines":{"node":">=6.9.0"}},"@babel/helper-globals@7.28.0":{"resolution":{"integrity":"sha512-+W6cISkXFa1jXsDEdYA8HeevQT/FULhxzR99pxphltZcVaugps53THCeiWA8SguxxpSp3gKPiuYfSWopkLQ4hw=="},"engines":{"node":">=6.9.0"}},"@babel/helper-module-imports@7.27.1":{"resolution":{"integrity":"sha512-0gSFWUPNXNopqtIPQvlD5WgXYI5GY2kP2cCvoT8kczjbfcfuIljTbcWrulD1CIPIX2gt1wghbDy08yE1p+/r3w=="},"engines":{"node":">=6.9.0"}},"@babel/helper-module-transforms@7.28.3":{"resolution":{"integrity":"sha512-gytXUbs8k2sXS9PnQptz5o0QnpLL51SwASIORY6XaBKF88nsOT0Zw9szLqlSGQDP/4TljBAD5y98p2U1fqkdsw=="},"engines":{"node":">=6.9.0"},"peerDependencies":{"@babel/core":"^7.0.0"}},"@babel/helper-plugin-utils@7.27.1":{"resolution":{"integrity":"sha512-1gn1Up5YXka3YYAHGKpbideQ5Yjf1tDa9qYcgysz+cNCXukyLl6DjPXhD3VRwSb8c0J9tA4b2+rHEZtc6R0tlw=="},"engines":{"node":">=6.9.0"}},"@babel/helper-string-parser@7.27.1":{"resolution":{"integrity":"sha512-qMlSxKbpRlAridDExk92nSobyDdpPijUq2DW6oDnUqd0iOGxmQjyqhMIihI9+zv4LPyZdRje2cavWPbCbWm3eA=="},"engines":{"node":">=6.9.0"}},"@babel/helper-validator-identifier@7.28.5":{"resolution":{"integrity":"sha512-qSs4ifwzKJSV39ucNjsvc6WVHs6b7S03sOh2OcHF9UHfVPqWWALUsNUVzhSBiItjRZoLHx7nIarVjqKVusUZ1Q=="},"engines":{"node":">=6.9.0"}},"@babel/helper-validator-option@7.27.1":{"resolution":{"integrity":"sha512-YvjJow9FxbhFFKDSuFnVCe2WxXk1zWc22fFePVNEaWJEu8IrZVlda6N0uHwzZrUM1il7NC9Mlp4MaJYbYd9JSg=="},"engines":{"node":">=6.9.0"}},"@babel/helpers@7.28.4":{"resolution":{"integrity":"sha512-HFN59MmQXGHVyYadKLVumYsA9dBFun/ldYxipEjzA4196jpLZd8UjEEBLkbEkvfYreDqJhZxYAWFPtrfhNpj4w=="},"engines":{"node":">=6.9.0"}},"@babel/parser@7.28.5":{"resolution":{"integrity":"sha512-KKBU1VGYR7ORr3At5HAtUQ+TV3SzRCXmA/8OdDZiLDBIZxVyzXuztPjfLd3BV1PRAQGCMWWSHYhL0F8d5uHBDQ=="},"eng
ines":{"node":">=6.0.0"},"hasBin":true},"@babel/parser@7.29.3":{"resolution":{"integrity":"sha512-b3ctpQwp+PROvU/cttc4OYl4MzfJUWy6FZg+PMXfzmt/+39iHVF0sDfqay8TQM3JA2EUOyKcFZt75jWriQijsA=="},"engines":{"node":">=6.0.0"},"hasBin":true},"@babel/plugin-syntax-jsx@7.27.1":{"resolution":{"integrity":"sha512-y8YTNIeKoyhGd9O0Jiyzyyqk8gdjnumGTQPsz0xOZOQ2RmkVJeZ1vmmfIvFEKqucBG6axJGBZDE/7iI5suUI/w=="},"engines":{"node":">=6.9.0"},"peerDependencies":{"@babel/core":"^7.0.0-0"}},"@babel/plugin-syntax-typescript@7.27.1":{"resolution":{"integrity":"sha512-xfYCBMxveHrRMnAWl1ZlPXOZjzkN82THFvLhQhFXFt81Z5HnN+EtUkZhv/zcKpmT3fzmWZB0ywiBrbC3vogbwQ=="},"engines":{"node":">=6.9.0"},"peerDependencies":{"@babel/core":"^7.0.0-0"}},"@babel/runtime@7.28.4":{"resolution":{"integrity":"sha512-Q/N6JNWvIvPnLDvjlE1OUBLPQHH6l3CltCEsHIujp45zQUSSh8K+gHnaEX45yAT1nyngnINhvWtzN+Nb9D8RAQ=="},"engines":{"node":">=6.9.0"}},"@babel/template@7.27.2":{"resolution":{"integrity":"sha512-LPDZ85aEJyYSd18/DkjNh4/y1ntkE5KwUHWTiqgRxruuZL2F1yuHligVHLvcHY2vMHXttKFpJn6LwfI7cw7ODw=="},"engines":{"node":">=6.9.0"}},"@babel/traverse@7.28.5":{"resolution":{"integrity":"sha512-TCCj4t55U90khlYkVV/0TfkJkAkUg3jZFA3Neb7unZT8CPok7iiRfaX0F+WnqWqt7OxhOn0uBKXCw4lbL8W0aQ=="},"engines":{"node":">=6.9.0"}},"@babel/types@7.28.5":{"resolution":{"integrity":"sha512-qQ5m48eI/MFLQ5PxQj4PFaprjyCTLI37ElWMmNs0K8Lk3dVeOdNpB3ks8jc7yM5CDmVC73eMVk/trk3fgmrUpA=="},"engines":{"node":">=6.9.0"}},"@babel/types@7.29.0":{"resolution":{"integrity":"sha512-LwdZHpScM4Qz8Xw2iKSzS+cfglZzJGvofQICy7W7v4caru4EaAmyUuO6BGrbyQ2mYV11W0U8j5mBhd14dd3B0A=="},"engines":{"node":">=6.9.0"}},"@bcoe/v8-coverage@1.0.2":{"resolution":{"integrity":"sha512-6zABk/ECA/QYSCQ1NGiVwwbQerUCZ+TQbp64Q3AgmfNvurHH0j8TtXa1qbShXA6qqkpAj4V5W8pP6mLe1mcMqA=="},"engines":{"node":">=18"}},"@blazediff/core@1.9.1":{"resolution":{"integrity":"sha512-ehg3jIkYKulZh+8om/O25vkvSsXXwC+skXmyA87FFx6A/45eqOkZsBltMw/TVteb0mloiGT8oGRTcjRAz66zaA=="}},"@braintree/sanitize-url@7.1.1":{"resolution":{"integrity":"sha512-i1L7noDNxtFyL5DmZafWy1wRVhGehQmzZaz1HiN5e7iylJMSZR7ekOV7NsIqa5qBldlLrsKv4HbgFUVlQrz8Mw=="}},"@bufbuild/protobuf@2.12.0":{"resolution":{"integrity":"sha512-B/XlCaFIP8LOwzo+bz5uFzATYokcwCKQcghqnlfwSmM5eX/qTkvDBnDPs+gXtX/RyjxJ4DRikECcPJbyALA8FA=="}},"@bundled-es-modules/cookie@2.0.1":{"resolution":{"integrity":"sha512-8o+5fRPLNbjbdGRRmJj3h6Hh1AQJf2dk3qQ/5ZFb+PXkRNiSoMGGUKlsgLfrxneb72axVJyIYji64E2+nNfYyw=="}},"@bundled-es-modules/statuses@1.0.1":{"resolution":{"integrity":"sha512-yn7BklA5acgcBr+7w064fGV+SGIFySjCKpqjcWgBAIfrAkY+4GQTJJHQMeT3V/sgz23VTEVV8TtOmkvJAhFVfg=="}},"@bundled-es-modules/tough-cookie@0.1.6":{"resolution":{"integrity":"sha512-dvMHbL464C0zI+Yqxbz6kZ5TOEp7GLW+pry/RWndAR8MJQAXZ2rPmIs8tziTZjeIyhSNZgZbCePtfSbdWqStJw=="}},"@changesets/apply-release-plan@7.0.13":{"resolution":{"integrity":"sha512-BIW7bofD2yAWoE8H4V40FikC+1nNFEKBisMECccS16W1rt6qqhNTBDmIw5HaqmMgtLNz9e7oiALiEUuKrQ4oHg=="}},"@changesets/assemble-release-plan@6.0.9":{"resolution":{"integrity":"sha512-tPgeeqCHIwNo8sypKlS3gOPmsS3wP0zHt67JDuL20P4QcXiw/O4Hl7oXiuLnP9yg+rXLQ2sScdV1Kkzde61iSQ=="}},"@changesets/changelog-git@0.2.1":{"resolution":{"integrity":"sha512-x/xEleCFLH28c3bQeQIyeZf8lFXyDFVn1SgcBiR2Tw/r4IAWlk1fzxCEZ6NxQAjF2Nwtczoen3OA2qR+UawQ8Q=="}},"@changesets/cli@2.29.7":{"resolution":{"integrity":"sha512-R7RqWoaksyyKXbKXBTbT4REdy22yH81mcFK6sWtqSanxUCbUi9Uf+6aqxZtDQouIqPdem2W56CdxXgsxdq7FLQ=="},"hasBin":true},"@changesets/config@3.1.1":{"resolution":{"integrity":"sha512-bd+3Ap2TKXxljCggI0mKPfzCQKeV/TU4yO2h2C6vAihIo8tzseAn2e7klSuiyYYXvgu53zMN1OeYMIQkaQ
oWnA=="}},"@changesets/errors@0.2.0":{"resolution":{"integrity":"sha512-6BLOQUscTpZeGljvyQXlWOItQyU71kCdGz7Pi8H8zdw6BI0g3m43iL4xKUVPWtG+qrrL9DTjpdn8eYuCQSRpow=="}},"@changesets/get-dependents-graph@2.1.3":{"resolution":{"integrity":"sha512-gphr+v0mv2I3Oxt19VdWRRUxq3sseyUpX9DaHpTUmLj92Y10AGy+XOtV+kbM6L/fDcpx7/ISDFK6T8A/P3lOdQ=="}},"@changesets/get-release-plan@4.0.13":{"resolution":{"integrity":"sha512-DWG1pus72FcNeXkM12tx+xtExyH/c9I1z+2aXlObH3i9YA7+WZEVaiHzHl03thpvAgWTRaH64MpfHxozfF7Dvg=="}},"@changesets/get-version-range-type@0.4.0":{"resolution":{"integrity":"sha512-hwawtob9DryoGTpixy1D3ZXbGgJu1Rhr+ySH2PvTLHvkZuQ7sRT4oQwMh0hbqZH1weAooedEjRsbrWcGLCeyVQ=="}},"@changesets/git@3.0.4":{"resolution":{"integrity":"sha512-BXANzRFkX+XcC1q/d27NKvlJ1yf7PSAgi8JG6dt8EfbHFHi4neau7mufcSca5zRhwOL8j9s6EqsxmT+s+/E6Sw=="}},"@changesets/logger@0.1.1":{"resolution":{"integrity":"sha512-OQtR36ZlnuTxKqoW4Sv6x5YIhOmClRd5pWsjZsddYxpWs517R0HkyiefQPIytCVh4ZcC5x9XaG8KTdd5iRQUfg=="}},"@changesets/parse@0.4.1":{"resolution":{"integrity":"sha512-iwksMs5Bf/wUItfcg+OXrEpravm5rEd9Bf4oyIPL4kVTmJQ7PNDSd6MDYkpSJR1pn7tz/k8Zf2DhTCqX08Ou+Q=="}},"@changesets/pre@2.0.2":{"resolution":{"integrity":"sha512-HaL/gEyFVvkf9KFg6484wR9s0qjAXlZ8qWPDkTyKF6+zqjBe/I2mygg3MbpZ++hdi0ToqNUF8cjj7fBy0dg8Ug=="}},"@changesets/read@0.6.5":{"resolution":{"integrity":"sha512-UPzNGhsSjHD3Veb0xO/MwvasGe8eMyNrR/sT9gR8Q3DhOQZirgKhhXv/8hVsI0QpPjR004Z9iFxoJU6in3uGMg=="}},"@changesets/should-skip-package@0.1.2":{"resolution":{"integrity":"sha512-qAK/WrqWLNCP22UDdBTMPH5f41elVDlsNyat180A33dWxuUDyNpg6fPi/FyTZwRriVjg0L8gnjJn2F9XAoF0qw=="}},"@changesets/types@4.1.0":{"resolution":{"integrity":"sha512-LDQvVDv5Kb50ny2s25Fhm3d9QSZimsoUGBsUioj6MC3qbMUCuC8GPIvk/M6IvXx3lYhAs0lwWUQLb+VIEUCECw=="}},"@changesets/types@6.1.0":{"resolution":{"integrity":"sha512-rKQcJ+o1nKNgeoYRHKOS07tAMNd3YSN0uHaJOZYjBAgxfV7TUE7JE+z4BzZdQwb5hKaYbayKN5KrYV7ODb2rAA=="}},"@changesets/write@0.4.0":{"resolution":{"integrity":"sha512-CdTLvIOPiCNuH71pyDu3rA+Q0n65cmAbXnwWH84rKGiFumFzkmHNT8KHTMEchcxN+Kl8I54xGUhJ7l3E7X396Q=="}},"@chevrotain/cst-dts-gen@11.0.3":{"resolution":{"integrity":"sha512-BvIKpRLeS/8UbfxXxgC33xOumsacaeCKAjAeLyOn7Pcp95HiRbrpl14S+9vaZLolnbssPIUuiUd8IvgkRyt6NQ=="}},"@chevrotain/gast@11.0.3":{"resolution":{"integrity":"sha512-+qNfcoNk70PyS/uxmj3li5NiECO+2YKZZQMbmjTqRI3Qchu8Hig/Q9vgkHpI3alNjr7M+a2St5pw5w5F6NL5/Q=="}},"@chevrotain/regexp-to-ast@11.0.3":{"resolution":{"integrity":"sha512-1fMHaBZxLFvWI067AVbGJav1eRY7N8DDvYCTwGBiE/ytKBgP8azTdgyrKyWZ9Mfh09eHWb5PgTSO8wi7U824RA=="}},"@chevrotain/types@11.0.3":{"resolution":{"integrity":"sha512-gsiM3G8b58kZC2HaWR50gu6Y1440cHiJ+i3JUvcp/35JchYejb2+5MVeJK0iKThYpAa/P2PYFV4hoi44HD+aHQ=="}},"@chevrotain/utils@11.0.3":{"resolution":{"integrity":"sha512-YslZMgtJUyuMbZ+aKvfF3x1f5liK4mWNxghFRv7jqRR9C3R3fAOGTTKvxXDa2Y1s9zSbcpuO0cAxDYsc9SrXoQ=="}},"@cloudflare/kv-asset-handler@0.5.0":{"resolution":{"integrity":"sha512-jxQYkj8dSIzc0cD6cMMNdOc1UVjqSqu8BZdor5s8cGjW2I8BjODt/kWPVdY+u9zj3ms75Q5qaZgnxUad83+eAg=="},"engines":{"node":">=22.0.0"}},"@cloudflare/unenv-preset@2.16.1":{"resolution":{"integrity":"sha512-ECxObrMfyTl5bhQf/lZCXwo5G6xX9IAUo+nDMKK4SZ8m4Jvvxp52vilxyySSWh2YTZz8+HQ07qGH/2rEom1vDw=="},"peerDependencies":{"unenv":"2.0.0-rc.24","workerd":">1.20260305.0 <2.0.0-0"},"peerDependenciesMeta":{"workerd":{"optional":true}}},"@cloudflare/vite-plugin@1.36.0":{"resolution":{"integrity":"sha512-Rkfa3wAbJ1lqCquWX453x4YlngO+OjNmCQvjb4D5JyMW7KprX6fEJE1NQ06giJDonEz0306EASELF93pRADibA=="},"peerDependencies":{"vite":"^6.1.0 || ^7.0.0 || 
^8.0.0","wrangler":"^4.88.0"}},"@cloudflare/workerd-darwin-64@1.20260504.1":{"resolution":{"integrity":"sha512-IOMjYoftNRXabFt+QzY2Bo2mR2TNl8xsGvE0HnQ+K0S2c61VOUGUkr9gpJjnwrJ65yA9Qed4xfg0RRqXHO+nfA=="},"engines":{"node":">=16"},"cpu":["x64"],"os":["darwin"]},"@cloudflare/workerd-darwin-arm64@1.20260504.1":{"resolution":{"integrity":"sha512-7iMXxIU0N5KklZpQm2kuwTm0XtrpHXNqhejJyGquky8gSTnm31zBdutjMekH8VRr6ckbvZIl6lvqXzXdfOEojg=="},"engines":{"node":">=16"},"cpu":["arm64"],"os":["darwin"]},"@cloudflare/workerd-linux-64@1.20260504.1":{"resolution":{"integrity":"sha512-YLB0EH5FQV++oWlalFgPF3p2Bp3dn/D6RWNMw0ukEC8gKnNX6o61A+dlFUl8hRD35ja1zKRxGFUojs4U2+MoJA=="},"engines":{"node":">=16"},"cpu":["x64"],"os":["linux"]},"@cloudflare/workerd-linux-arm64@1.20260504.1":{"resolution":{"integrity":"sha512-FAh/82jDXDArfn9xDih6f/IJfF2SHXBb4nFeQAyHyvXrn18zM6Q3yl2Vj0U7LybbNbmu7TNGghwaM2NoSQS+0A=="},"engines":{"node":">=16"},"cpu":["arm64"],"os":["linux"]},"@cloudflare/workerd-windows-64@1.20260504.1":{"resolution":{"integrity":"sha512-QUg/B3dfrK/KHHHhiJzdkLkTg5mG7lA3t8iplbBoUa3XKCLOHOOXhbU4WSYlLqg8YnsQ6XLZ1HVA99fmZhJh7A=="},"engines":{"node":">=16"},"cpu":["x64"],"os":["win32"]},"@cspotcode/source-map-support@0.8.1":{"resolution":{"integrity":"sha512-IchNf6dN4tHoMFIn/7OE8LWZ19Y6q/67Bmf6vnGREv8RSbBVb9LPJxEcnwrcwX6ixSvaiGoomAUvu4YSxXrVgw=="},"engines":{"node":">=12"}},"@csstools/color-helpers@5.1.0":{"resolution":{"integrity":"sha512-S11EXWJyy0Mz5SYvRmY8nJYTFFd1LCNV+7cXyAgQtOOuzb4EsgfqDufL+9esx72/eLhsRdGZwaldu/h+E4t4BA=="},"engines":{"node":">=18"}},"@csstools/css-calc@2.1.4":{"resolution":{"integrity":"sha512-3N8oaj+0juUw/1H3YwmDDJXCgTB1gKU6Hc/bB502u9zR0q2vd786XJH9QfrKIEgFlZmhZiq6epXl4rHqhzsIgQ=="},"engines":{"node":">=18"},"peerDependencies":{"@csstools/css-parser-algorithms":"^3.0.5","@csstools/css-tokenizer":"^3.0.4"}},"@csstools/css-color-parser@3.1.0":{"resolution":{"integrity":"sha512-nbtKwh3a6xNVIp/VRuXV64yTKnb1IjTAEEh3irzS+HkKjAOYLTGNb9pmVNntZ8iVBHcWDA2Dof0QtPgFI1BaTA=="},"engines":{"node":">=18"},"peerDependencies":{"@csstools/css-parser-algorithms":"^3.0.5","@csstools/css-tokenizer":"^3.0.4"}},"@csstools/css-parser-algorithms@3.0.5":{"resolution":{"integrity":"sha512-DaDeUkXZKjdGhgYaHNJTV9pV7Y9B3b644jCLs9Upc3VeNGg6LWARAT6O+Q+/COo+2gg/bM5rhpMAtf70WqfBdQ=="},"engines":{"node":">=18"},"peerDependencies":{"@csstools/css-tokenizer":"^3.0.4"}},"@csstools/css-syntax-patches-for-csstree@1.0.14":{"resolution":{"integrity":"sha512-zSlIxa20WvMojjpCSy8WrNpcZ61RqfTfX3XTaOeVlGJrt/8HF3YbzgFZa01yTbT4GWQLwfTcC3EB8i3XnB647Q=="},"engines":{"node":">=18"},"peerDependencies":{"postcss":"^8.4"}},"@csstools/css-tokenizer@3.0.4":{"resolution":{"integrity":"sha512-Vd/9EVDiu6PPJt9yAh6roZP6El1xHrdvIVGjyBsHR0RYwNHgL7FJPyIIW4fANJNG6FtyZfvlRPpFI4ZM/lubvw=="},"engines":{"node":">=18"}},"@emnapi/core@1.10.0":{"resolution":{"integrity":"sha512-yq6OkJ4p82CAfPl0u9mQebQHKPJkY7WrIuk205cTYnYe+k2Z8YBh11FrbRG/H6ihirqcacOgl2BIO8oyMQLeXw=="}},"@emnapi/core@1.4.5":{"resolution":{"integrity":"sha512-XsLw1dEOpkSX/WucdqUhPWP7hDxSvZiY+fsUC14h+FtQ2Ifni4znbBt8punRX+Uj2JG/uDb8nEHVKvrVlvdZ5Q=="}},"@emnapi/runtime@1.10.0":{"resolution":{"integrity":"sha512-ewvYlk86xUoGI0zQRNq/mC+16R1QeDlKQy21Ki3oSYXNgLb45GV1P6A0M+/s6nyCuNDqe5VpaY84BzXGwVbwFA=="}},"@emnapi/runtime@1.4.5":{"resolution":{"integrity":"sha512-++LApOtY0pEEz1zrd9vy1/zXVaVJJ/EbAF3u0fXIzPJEDtnITsBGbbK0EkM72amhl/R5b+5xx0Y/QhcVOpuulg=="}},"@emnapi/wasi-threads@1.0.4":{"resolution":{"integrity":"sha512-PJR+bOmMOPH8AtcTGAyYNiuJ3/Fcoj2XN/gBEWzDIKh254XO+mM9XoXHk5GNEhodxeMznbg7BlRojVbKN+gC6g=="}},"@emn
api/wasi-threads@1.2.1":{"resolution":{"integrity":"sha512-uTII7OYF+/Mes/MrcIOYp5yOtSMLBWSIoLPpcgwipoiKbli6k322tcoFsxoIIxPDqW01SQGAgko4EzZi2BNv2w=="}},"@esbuild/aix-ppc64@0.25.12":{"resolution":{"integrity":"sha512-Hhmwd6CInZ3dwpuGTF8fJG6yoWmsToE+vYgD4nytZVxcu1ulHpUQRAB1UJ8+N1Am3Mz4+xOByoQoSZf4D+CpkA=="},"engines":{"node":">=18"},"cpu":["ppc64"],"os":["aix"]},"@esbuild/aix-ppc64@0.27.3":{"resolution":{"integrity":"sha512-9fJMTNFTWZMh5qwrBItuziu834eOCUcEqymSH7pY+zoMVEZg3gcPuBNxH1EvfVYe9h0x/Ptw8KBzv7qxb7l8dg=="},"engines":{"node":">=18"},"cpu":["ppc64"],"os":["aix"]},"@esbuild/android-arm64@0.25.12":{"resolution":{"integrity":"sha512-6AAmLG7zwD1Z159jCKPvAxZd4y/VTO0VkprYy+3N2FtJ8+BQWFXU+OxARIwA46c5tdD9SsKGZ/1ocqBS/gAKHg=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["android"]},"@esbuild/android-arm64@0.27.3":{"resolution":{"integrity":"sha512-YdghPYUmj/FX2SYKJ0OZxf+iaKgMsKHVPF1MAq/P8WirnSpCStzKJFjOjzsW0QQ7oIAiccHdcqjbHmJxRb/dmg=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["android"]},"@esbuild/android-arm@0.25.12":{"resolution":{"integrity":"sha512-VJ+sKvNA/GE7Ccacc9Cha7bpS8nyzVv0jdVgwNDaR4gDMC/2TTRc33Ip8qrNYUcpkOHUT5OZ0bUcNNVZQ9RLlg=="},"engines":{"node":">=18"},"cpu":["arm"],"os":["android"]},"@esbuild/android-arm@0.27.3":{"resolution":{"integrity":"sha512-i5D1hPY7GIQmXlXhs2w8AWHhenb00+GxjxRncS2ZM7YNVGNfaMxgzSGuO8o8SJzRc/oZwU2bcScvVERk03QhzA=="},"engines":{"node":">=18"},"cpu":["arm"],"os":["android"]},"@esbuild/android-x64@0.25.12":{"resolution":{"integrity":"sha512-5jbb+2hhDHx5phYR2By8GTWEzn6I9UqR11Kwf22iKbNpYrsmRB18aX/9ivc5cabcUiAT/wM+YIZ6SG9QO6a8kg=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["android"]},"@esbuild/android-x64@0.27.3":{"resolution":{"integrity":"sha512-IN/0BNTkHtk8lkOM8JWAYFg4ORxBkZQf9zXiEOfERX/CzxW3Vg1ewAhU7QSWQpVIzTW+b8Xy+lGzdYXV6UZObQ=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["android"]},"@esbuild/darwin-arm64@0.25.12":{"resolution":{"integrity":"sha512-N3zl+lxHCifgIlcMUP5016ESkeQjLj/959RxxNYIthIg+CQHInujFuXeWbWMgnTo4cp5XVHqFPmpyu9J65C1Yg=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["darwin"]},"@esbuild/darwin-arm64@0.27.3":{"resolution":{"integrity":"sha512-Re491k7ByTVRy0t3EKWajdLIr0gz2kKKfzafkth4Q8A5n1xTHrkqZgLLjFEHVD+AXdUGgQMq+Godfq45mGpCKg=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["darwin"]},"@esbuild/darwin-x64@0.25.12":{"resolution":{"integrity":"sha512-HQ9ka4Kx21qHXwtlTUVbKJOAnmG1ipXhdWTmNXiPzPfWKpXqASVcWdnf2bnL73wgjNrFXAa3yYvBSd9pzfEIpA=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["darwin"]},"@esbuild/darwin-x64@0.27.3":{"resolution":{"integrity":"sha512-vHk/hA7/1AckjGzRqi6wbo+jaShzRowYip6rt6q7VYEDX4LEy1pZfDpdxCBnGtl+A5zq8iXDcyuxwtv3hNtHFg=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["darwin"]},"@esbuild/freebsd-arm64@0.25.12":{"resolution":{"integrity":"sha512-gA0Bx759+7Jve03K1S0vkOu5Lg/85dou3EseOGUes8flVOGxbhDDh/iZaoek11Y8mtyKPGF3vP8XhnkDEAmzeg=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["freebsd"]},"@esbuild/freebsd-arm64@0.27.3":{"resolution":{"integrity":"sha512-ipTYM2fjt3kQAYOvo6vcxJx3nBYAzPjgTCk7QEgZG8AUO3ydUhvelmhrbOheMnGOlaSFUoHXB6un+A7q4ygY9w=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["freebsd"]},"@esbuild/freebsd-x64@0.25.12":{"resolution":{"integrity":"sha512-TGbO26Yw2xsHzxtbVFGEXBFH0FRAP7gtcPE7P5yP7wGy7cXK2oO7RyOhL5NLiqTlBh47XhmIUXuGciXEqYFfBQ=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["freebsd"]},"@esbuild/freebsd-x64@0.27.3":{"resolution":{"integrity":"sha512-dDk0X87T7mI6U3K9VjWtHOXqwAMJBNN2r7bejDsc+j03SEjtD9HrOl8gVFByeM0aJksoUuUVU9TBaZa2rgj0oA=="},"engines":{"node":">=1
8"},"cpu":["x64"],"os":["freebsd"]},"@esbuild/linux-arm64@0.25.12":{"resolution":{"integrity":"sha512-8bwX7a8FghIgrupcxb4aUmYDLp8pX06rGh5HqDT7bB+8Rdells6mHvrFHHW2JAOPZUbnjUpKTLg6ECyzvas2AQ=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["linux"]},"@esbuild/linux-arm64@0.27.3":{"resolution":{"integrity":"sha512-sZOuFz/xWnZ4KH3YfFrKCf1WyPZHakVzTiqji3WDc0BCl2kBwiJLCXpzLzUBLgmp4veFZdvN5ChW4Eq/8Fc2Fg=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["linux"]},"@esbuild/linux-arm@0.25.12":{"resolution":{"integrity":"sha512-lPDGyC1JPDou8kGcywY0YILzWlhhnRjdof3UlcoqYmS9El818LLfJJc3PXXgZHrHCAKs/Z2SeZtDJr5MrkxtOw=="},"engines":{"node":">=18"},"cpu":["arm"],"os":["linux"]},"@esbuild/linux-arm@0.27.3":{"resolution":{"integrity":"sha512-s6nPv2QkSupJwLYyfS+gwdirm0ukyTFNl3KTgZEAiJDd+iHZcbTPPcWCcRYH+WlNbwChgH2QkE9NSlNrMT8Gfw=="},"engines":{"node":">=18"},"cpu":["arm"],"os":["linux"]},"@esbuild/linux-ia32@0.25.12":{"resolution":{"integrity":"sha512-0y9KrdVnbMM2/vG8KfU0byhUN+EFCny9+8g202gYqSSVMonbsCfLjUO+rCci7pM0WBEtz+oK/PIwHkzxkyharA=="},"engines":{"node":">=18"},"cpu":["ia32"],"os":["linux"]},"@esbuild/linux-ia32@0.27.3":{"resolution":{"integrity":"sha512-yGlQYjdxtLdh0a3jHjuwOrxQjOZYD/C9PfdbgJJF3TIZWnm/tMd/RcNiLngiu4iwcBAOezdnSLAwQDPqTmtTYg=="},"engines":{"node":">=18"},"cpu":["ia32"],"os":["linux"]},"@esbuild/linux-loong64@0.25.12":{"resolution":{"integrity":"sha512-h///Lr5a9rib/v1GGqXVGzjL4TMvVTv+s1DPoxQdz7l/AYv6LDSxdIwzxkrPW438oUXiDtwM10o9PmwS/6Z0Ng=="},"engines":{"node":">=18"},"cpu":["loong64"],"os":["linux"]},"@esbuild/linux-loong64@0.27.3":{"resolution":{"integrity":"sha512-WO60Sn8ly3gtzhyjATDgieJNet/KqsDlX5nRC5Y3oTFcS1l0KWba+SEa9Ja1GfDqSF1z6hif/SkpQJbL63cgOA=="},"engines":{"node":">=18"},"cpu":["loong64"],"os":["linux"]},"@esbuild/linux-mips64el@0.25.12":{"resolution":{"integrity":"sha512-iyRrM1Pzy9GFMDLsXn1iHUm18nhKnNMWscjmp4+hpafcZjrr2WbT//d20xaGljXDBYHqRcl8HnxbX6uaA/eGVw=="},"engines":{"node":">=18"},"cpu":["mips64el"],"os":["linux"]},"@esbuild/linux-mips64el@0.27.3":{"resolution":{"integrity":"sha512-APsymYA6sGcZ4pD6k+UxbDjOFSvPWyZhjaiPyl/f79xKxwTnrn5QUnXR5prvetuaSMsb4jgeHewIDCIWljrSxw=="},"engines":{"node":">=18"},"cpu":["mips64el"],"os":["linux"]},"@esbuild/linux-ppc64@0.25.12":{"resolution":{"integrity":"sha512-9meM/lRXxMi5PSUqEXRCtVjEZBGwB7P/D4yT8UG/mwIdze2aV4Vo6U5gD3+RsoHXKkHCfSxZKzmDssVlRj1QQA=="},"engines":{"node":">=18"},"cpu":["ppc64"],"os":["linux"]},"@esbuild/linux-ppc64@0.27.3":{"resolution":{"integrity":"sha512-eizBnTeBefojtDb9nSh4vvVQ3V9Qf9Df01PfawPcRzJH4gFSgrObw+LveUyDoKU3kxi5+9RJTCWlj4FjYXVPEA=="},"engines":{"node":">=18"},"cpu":["ppc64"],"os":["linux"]},"@esbuild/linux-riscv64@0.25.12":{"resolution":{"integrity":"sha512-Zr7KR4hgKUpWAwb1f3o5ygT04MzqVrGEGXGLnj15YQDJErYu/BGg+wmFlIDOdJp0PmB0lLvxFIOXZgFRrdjR0w=="},"engines":{"node":">=18"},"cpu":["riscv64"],"os":["linux"]},"@esbuild/linux-riscv64@0.27.3":{"resolution":{"integrity":"sha512-3Emwh0r5wmfm3ssTWRQSyVhbOHvqegUDRd0WhmXKX2mkHJe1SFCMJhagUleMq+Uci34wLSipf8Lagt4LlpRFWQ=="},"engines":{"node":">=18"},"cpu":["riscv64"],"os":["linux"]},"@esbuild/linux-s390x@0.25.12":{"resolution":{"integrity":"sha512-MsKncOcgTNvdtiISc/jZs/Zf8d0cl/t3gYWX8J9ubBnVOwlk65UIEEvgBORTiljloIWnBzLs4qhzPkJcitIzIg=="},"engines":{"node":">=18"},"cpu":["s390x"],"os":["linux"]},"@esbuild/linux-s390x@0.27.3":{"resolution":{"integrity":"sha512-pBHUx9LzXWBc7MFIEEL0yD/ZVtNgLytvx60gES28GcWMqil8ElCYR4kvbV2BDqsHOvVDRrOxGySBM9Fcv744hw=="},"engines":{"node":">=18"},"cpu":["s390x"],"os":["linux"]},"@esbuild/linux-x64@0.25.12":{"resolution":{"integrity":"sha512-uqZMTLr
/zR/ed4jIGnwSLkaHmPjOjJvnm6TVVitAa08SLS9Z0VM8wIRx7gWbJB5/J54YuIMInDquWyYvQLZkgw=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["linux"]},"@esbuild/linux-x64@0.27.3":{"resolution":{"integrity":"sha512-Czi8yzXUWIQYAtL/2y6vogER8pvcsOsk5cpwL4Gk5nJqH5UZiVByIY8Eorm5R13gq+DQKYg0+JyQoytLQas4dA=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["linux"]},"@esbuild/netbsd-arm64@0.25.12":{"resolution":{"integrity":"sha512-xXwcTq4GhRM7J9A8Gv5boanHhRa/Q9KLVmcyXHCTaM4wKfIpWkdXiMog/KsnxzJ0A1+nD+zoecuzqPmCRyBGjg=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["netbsd"]},"@esbuild/netbsd-arm64@0.27.3":{"resolution":{"integrity":"sha512-sDpk0RgmTCR/5HguIZa9n9u+HVKf40fbEUt+iTzSnCaGvY9kFP0YKBWZtJaraonFnqef5SlJ8/TiPAxzyS+UoA=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["netbsd"]},"@esbuild/netbsd-x64@0.25.12":{"resolution":{"integrity":"sha512-Ld5pTlzPy3YwGec4OuHh1aCVCRvOXdH8DgRjfDy/oumVovmuSzWfnSJg+VtakB9Cm0gxNO9BzWkj6mtO1FMXkQ=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["netbsd"]},"@esbuild/netbsd-x64@0.27.3":{"resolution":{"integrity":"sha512-P14lFKJl/DdaE00LItAukUdZO5iqNH7+PjoBm+fLQjtxfcfFE20Xf5CrLsmZdq5LFFZzb5JMZ9grUwvtVYzjiA=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["netbsd"]},"@esbuild/openbsd-arm64@0.25.12":{"resolution":{"integrity":"sha512-fF96T6KsBo/pkQI950FARU9apGNTSlZGsv1jZBAlcLL1MLjLNIWPBkj5NlSz8aAzYKg+eNqknrUJ24QBybeR5A=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["openbsd"]},"@esbuild/openbsd-arm64@0.27.3":{"resolution":{"integrity":"sha512-AIcMP77AvirGbRl/UZFTq5hjXK+2wC7qFRGoHSDrZ5v5b8DK/GYpXW3CPRL53NkvDqb9D+alBiC/dV0Fb7eJcw=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["openbsd"]},"@esbuild/openbsd-x64@0.25.12":{"resolution":{"integrity":"sha512-MZyXUkZHjQxUvzK7rN8DJ3SRmrVrke8ZyRusHlP+kuwqTcfWLyqMOE3sScPPyeIXN/mDJIfGXvcMqCgYKekoQw=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["openbsd"]},"@esbuild/openbsd-x64@0.27.3":{"resolution":{"integrity":"sha512-DnW2sRrBzA+YnE70LKqnM3P+z8vehfJWHXECbwBmH/CU51z6FiqTQTHFenPlHmo3a8UgpLyH3PT+87OViOh1AQ=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["openbsd"]},"@esbuild/openharmony-arm64@0.25.12":{"resolution":{"integrity":"sha512-rm0YWsqUSRrjncSXGA7Zv78Nbnw4XL6/dzr20cyrQf7ZmRcsovpcRBdhD43Nuk3y7XIoW2OxMVvwuRvk9XdASg=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["openharmony"]},"@esbuild/openharmony-arm64@0.27.3":{"resolution":{"integrity":"sha512-NinAEgr/etERPTsZJ7aEZQvvg/A6IsZG/LgZy+81wON2huV7SrK3e63dU0XhyZP4RKGyTm7aOgmQk0bGp0fy2g=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["openharmony"]},"@esbuild/sunos-x64@0.25.12":{"resolution":{"integrity":"sha512-3wGSCDyuTHQUzt0nV7bocDy72r2lI33QL3gkDNGkod22EsYl04sMf0qLb8luNKTOmgF/eDEDP5BFNwoBKH441w=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["sunos"]},"@esbuild/sunos-x64@0.27.3":{"resolution":{"integrity":"sha512-PanZ+nEz+eWoBJ8/f8HKxTTD172SKwdXebZ0ndd953gt1HRBbhMsaNqjTyYLGLPdoWHy4zLU7bDVJztF5f3BHA=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["sunos"]},"@esbuild/win32-arm64@0.25.12":{"resolution":{"integrity":"sha512-rMmLrur64A7+DKlnSuwqUdRKyd3UE7oPJZmnljqEptesKM8wx9J8gx5u0+9Pq0fQQW8vqeKebwNXdfOyP+8Bsg=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["win32"]},"@esbuild/win32-arm64@0.27.3":{"resolution":{"integrity":"sha512-B2t59lWWYrbRDw/tjiWOuzSsFh1Y/E95ofKz7rIVYSQkUYBjfSgf6oeYPNWHToFRr2zx52JKApIcAS/D5TUBnA=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["win32"]},"@esbuild/win32-ia32@0.25.12":{"resolution":{"integrity":"sha512-HkqnmmBoCbCwxUKKNPBixiWDGCpQGVsrQfJoVGYLPT41XWF8lHuE5N6WhVia2n4o5QK5M4tYr21827fNhi4byQ=="},"engines":{"node":">=1
8"},"cpu":["ia32"],"os":["win32"]},"@esbuild/win32-ia32@0.27.3":{"resolution":{"integrity":"sha512-QLKSFeXNS8+tHW7tZpMtjlNb7HKau0QDpwm49u0vUp9y1WOF+PEzkU84y9GqYaAVW8aH8f3GcBck26jh54cX4Q=="},"engines":{"node":">=18"},"cpu":["ia32"],"os":["win32"]},"@esbuild/win32-x64@0.25.12":{"resolution":{"integrity":"sha512-alJC0uCZpTFrSL0CCDjcgleBXPnCrEAhTBILpeAp7M/OFgoqtAetfBzX0xM00MUsVVPpVjlPuMbREqnZCXaTnA=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["win32"]},"@esbuild/win32-x64@0.27.3":{"resolution":{"integrity":"sha512-4uJGhsxuptu3OcpVAzli+/gWusVGwZZHTlS63hh++ehExkVT8SgiEf7/uC/PclrPPkLhZqGgCTjd0VWLo6xMqA=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["win32"]},"@iconify/types@2.0.0":{"resolution":{"integrity":"sha512-+wluvCrRhXrhyOmRDJ3q8mux9JkKy5SJ/v8ol2tu4FVjyYvtEzkc/3pK15ET6RKg4b4w4BmTk1+gsCUhf21Ykg=="}},"@iconify/utils@3.0.2":{"resolution":{"integrity":"sha512-EfJS0rLfVuRuJRn4psJHtK2A9TqVnkxPpHY6lYHiB9+8eSuudsxbwMiavocG45ujOo6FJ+CIRlRnlOGinzkaGQ=="}},"@img/colour@1.1.0":{"resolution":{"integrity":"sha512-Td76q7j57o/tLVdgS746cYARfSyxk8iEfRxewL9h4OMzYhbW4TAcppl0mT4eyqXddh6L/jwoM75mo7ixa/pCeQ=="},"engines":{"node":">=18"}},"@img/sharp-darwin-arm64@0.34.5":{"resolution":{"integrity":"sha512-imtQ3WMJXbMY4fxb/Ndp6HBTNVtWCUI0WdobyheGf5+ad6xX8VIDO8u2xE4qc/fr08CKG/7dDseFtn6M6g/r3w=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["arm64"],"os":["darwin"]},"@img/sharp-darwin-x64@0.34.5":{"resolution":{"integrity":"sha512-YNEFAF/4KQ/PeW0N+r+aVVsoIY0/qxxikF2SWdp+NRkmMB7y9LBZAVqQ4yhGCm/H3H270OSykqmQMKLBhBJDEw=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["x64"],"os":["darwin"]},"@img/sharp-libvips-darwin-arm64@1.2.4":{"resolution":{"integrity":"sha512-zqjjo7RatFfFoP0MkQ51jfuFZBnVE2pRiaydKJ1G/rHZvnsrHAOcQALIi9sA5co5xenQdTugCvtb1cuf78Vf4g=="},"cpu":["arm64"],"os":["darwin"]},"@img/sharp-libvips-darwin-x64@1.2.4":{"resolution":{"integrity":"sha512-1IOd5xfVhlGwX+zXv2N93k0yMONvUlANylbJw1eTah8K/Jtpi15KC+WSiaX/nBmbm2HxRM1gZ0nSdjSsrZbGKg=="},"cpu":["x64"],"os":["darwin"]},"@img/sharp-libvips-linux-arm64@1.2.4":{"resolution":{"integrity":"sha512-excjX8DfsIcJ10x1Kzr4RcWe1edC9PquDRRPx3YVCvQv+U5p7Yin2s32ftzikXojb1PIFc/9Mt28/y+iRklkrw=="},"cpu":["arm64"],"os":["linux"]},"@img/sharp-libvips-linux-arm@1.2.4":{"resolution":{"integrity":"sha512-bFI7xcKFELdiNCVov8e44Ia4u2byA+l3XtsAj+Q8tfCwO6BQ8iDojYdvoPMqsKDkuoOo+X6HZA0s0q11ANMQ8A=="},"cpu":["arm"],"os":["linux"]},"@img/sharp-libvips-linux-ppc64@1.2.4":{"resolution":{"integrity":"sha512-FMuvGijLDYG6lW+b/UvyilUWu5Ayu+3r2d1S8notiGCIyYU/76eig1UfMmkZ7vwgOrzKzlQbFSuQfgm7GYUPpA=="},"cpu":["ppc64"],"os":["linux"]},"@img/sharp-libvips-linux-riscv64@1.2.4":{"resolution":{"integrity":"sha512-oVDbcR4zUC0ce82teubSm+x6ETixtKZBh/qbREIOcI3cULzDyb18Sr/Wcyx7NRQeQzOiHTNbZFF1UwPS2scyGA=="},"cpu":["riscv64"],"os":["linux"]},"@img/sharp-libvips-linux-s390x@1.2.4":{"resolution":{"integrity":"sha512-qmp9VrzgPgMoGZyPvrQHqk02uyjA0/QrTO26Tqk6l4ZV0MPWIW6LTkqOIov+J1yEu7MbFQaDpwdwJKhbJvuRxQ=="},"cpu":["s390x"],"os":["linux"]},"@img/sharp-libvips-linux-x64@1.2.4":{"resolution":{"integrity":"sha512-tJxiiLsmHc9Ax1bz3oaOYBURTXGIRDODBqhveVHonrHJ9/+k89qbLl0bcJns+e4t4rvaNBxaEZsFtSfAdquPrw=="},"cpu":["x64"],"os":["linux"]},"@img/sharp-libvips-linuxmusl-arm64@1.2.4":{"resolution":{"integrity":"sha512-FVQHuwx1IIuNow9QAbYUzJ+En8KcVm9Lk5+uGUQJHaZmMECZmOlix9HnH7n1TRkXMS0pGxIJokIVB9SuqZGGXw=="},"cpu":["arm64"],"os":["linux"]},"@img/sharp-libvips-linuxmusl-x64@1.2.4":{"resolution":{"integrity":"sha512-+LpyBk7L44ZIXwz/VYfglaX/okxezESc6UxDSoyo2Ks6Jxc4Y7sGjpgU9s4PMgqgjj1gZCylTieNamq
A1MF7Dg=="},"cpu":["x64"],"os":["linux"]},"@img/sharp-linux-arm64@0.34.5":{"resolution":{"integrity":"sha512-bKQzaJRY/bkPOXyKx5EVup7qkaojECG6NLYswgktOZjaXecSAeCWiZwwiFf3/Y+O1HrauiE3FVsGxFg8c24rZg=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["arm64"],"os":["linux"]},"@img/sharp-linux-arm@0.34.5":{"resolution":{"integrity":"sha512-9dLqsvwtg1uuXBGZKsxem9595+ujv0sJ6Vi8wcTANSFpwV/GONat5eCkzQo/1O6zRIkh0m/8+5BjrRr7jDUSZw=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["arm"],"os":["linux"]},"@img/sharp-linux-ppc64@0.34.5":{"resolution":{"integrity":"sha512-7zznwNaqW6YtsfrGGDA6BRkISKAAE1Jo0QdpNYXNMHu2+0dTrPflTLNkpc8l7MUP5M16ZJcUvysVWWrMefZquA=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["ppc64"],"os":["linux"]},"@img/sharp-linux-riscv64@0.34.5":{"resolution":{"integrity":"sha512-51gJuLPTKa7piYPaVs8GmByo7/U7/7TZOq+cnXJIHZKavIRHAP77e3N2HEl3dgiqdD/w0yUfiJnII77PuDDFdw=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["riscv64"],"os":["linux"]},"@img/sharp-linux-s390x@0.34.5":{"resolution":{"integrity":"sha512-nQtCk0PdKfho3eC5MrbQoigJ2gd1CgddUMkabUj+rBevs8tZ2cULOx46E7oyX+04WGfABgIwmMC0VqieTiR4jg=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["s390x"],"os":["linux"]},"@img/sharp-linux-x64@0.34.5":{"resolution":{"integrity":"sha512-MEzd8HPKxVxVenwAa+JRPwEC7QFjoPWuS5NZnBt6B3pu7EG2Ge0id1oLHZpPJdn3OQK+BQDiw9zStiHBTJQQQQ=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["x64"],"os":["linux"]},"@img/sharp-linuxmusl-arm64@0.34.5":{"resolution":{"integrity":"sha512-fprJR6GtRsMt6Kyfq44IsChVZeGN97gTD331weR1ex1c1rypDEABN6Tm2xa1wE6lYb5DdEnk03NZPqA7Id21yg=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["arm64"],"os":["linux"]},"@img/sharp-linuxmusl-x64@0.34.5":{"resolution":{"integrity":"sha512-Jg8wNT1MUzIvhBFxViqrEhWDGzqymo3sV7z7ZsaWbZNDLXRJZoRGrjulp60YYtV4wfY8VIKcWidjojlLcWrd8Q=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["x64"],"os":["linux"]},"@img/sharp-wasm32@0.34.5":{"resolution":{"integrity":"sha512-OdWTEiVkY2PHwqkbBI8frFxQQFekHaSSkUIJkwzclWZe64O1X4UlUjqqqLaPbUpMOQk6FBu/HtlGXNblIs0huw=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["wasm32"]},"@img/sharp-win32-arm64@0.34.5":{"resolution":{"integrity":"sha512-WQ3AgWCWYSb2yt+IG8mnC6Jdk9Whs7O0gxphblsLvdhSpSTtmu69ZG1Gkb6NuvxsNACwiPV6cNSZNzt0KPsw7g=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["arm64"],"os":["win32"]},"@img/sharp-win32-ia32@0.34.5":{"resolution":{"integrity":"sha512-FV9m/7NmeCmSHDD5j4+4pNI8Cp3aW+JvLoXcTUo0IqyjSfAZJ8dIUmijx1qaJsIiU+Hosw6xM5KijAWRJCSgNg=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["ia32"],"os":["win32"]},"@img/sharp-win32-x64@0.34.5":{"resolution":{"integrity":"sha512-+29YMsqY2/9eFEiW93eqWnuLcWcufowXewwSNIT6UwZdUUCrM3oFjMWH/Z6/TMmb4hlFenmfAVbpWeup2jryCw=="},"engines":{"node":"^18.17.0 || ^20.3.0 || 
>=21.0.0"},"cpu":["x64"],"os":["win32"]},"@inquirer/ansi@1.0.2":{"resolution":{"integrity":"sha512-S8qNSZiYzFd0wAcyG5AXCvUHC5Sr7xpZ9wZ2py9XR88jUz8wooStVx5M6dRzczbBWjic9NP7+rY0Xi7qqK/aMQ=="},"engines":{"node":">=18"}},"@inquirer/confirm@5.1.21":{"resolution":{"integrity":"sha512-KR8edRkIsUayMXV+o3Gv+q4jlhENF9nMYUZs9PA2HzrXeHI8M5uDag70U7RJn9yyiMZSbtF5/UexBtAVtZGSbQ=="},"engines":{"node":">=18"},"peerDependencies":{"@types/node":">=18"},"peerDependenciesMeta":{"@types/node":{"optional":true}}},"@inquirer/core@10.3.2":{"resolution":{"integrity":"sha512-43RTuEbfP8MbKzedNqBrlhhNKVwoK//vUFNW3Q3vZ88BLcrs4kYpGg+B2mm5p2K/HfygoCxuKwJJiv8PbGmE0A=="},"engines":{"node":">=18"},"peerDependencies":{"@types/node":">=18"},"peerDependenciesMeta":{"@types/node":{"optional":true}}},"@inquirer/external-editor@1.0.1":{"resolution":{"integrity":"sha512-Oau4yL24d2B5IL4ma4UpbQigkVhzPDXLoqy1ggK4gnHg/stmkffJE4oOXHXF3uz0UEpywG68KcyXsyYpA1Re/Q=="},"engines":{"node":">=18"},"peerDependencies":{"@types/node":">=18"},"peerDependenciesMeta":{"@types/node":{"optional":true}}},"@inquirer/figures@1.0.15":{"resolution":{"integrity":"sha512-t2IEY+unGHOzAaVM5Xx6DEWKeXlDDcNPeDyUpsRc6CUhBfU3VQOEl+Vssh7VNp1dR8MdUJBWhuObjXCsVpjN5g=="},"engines":{"node":">=18"}},"@inquirer/type@3.0.10":{"resolution":{"integrity":"sha512-BvziSRxfz5Ov8ch0z/n3oijRSEcEsHnhggm4xFZe93DHcUCTlutlq9Ox4SVENAfcRD22UQq7T/atg9Wr3k09eA=="},"engines":{"node":">=18"},"peerDependencies":{"@types/node":">=18"},"peerDependenciesMeta":{"@types/node":{"optional":true}}},"@isaacs/cliui@8.0.2":{"resolution":{"integrity":"sha512-O8jcjabXaleOG9DQ0+ARXWZBTfnP4WNAqzuiJK7ll44AmxGKv/J2M4TPjxjY3znBCfvBXFzucm1twdyFybFqEA=="},"engines":{"node":">=12"}},"@istanbuljs/schema@0.1.3":{"resolution":{"integrity":"sha512-ZXRY4jNvVgSVQ8DL3LTcakaAtXwTVUxE81hslsyD2AtoXW/wVob10HkOJ1X/pAlcI7D+2YoZKg5do8G/w6RYgA=="},"engines":{"node":">=8"}},"@jest/diff-sequences@30.0.1":{"resolution":{"integrity":"sha512-n5H8QLDJ47QqbCNn5SuFjCRDrOLEZ0h8vAHCK5RL9Ls7Xa8AQLa/YxAc9UjFqoEDM48muwtBGjtMY5cr0PLDCw=="},"engines":{"node":"^18.14.0 || ^20.0.0 || ^22.0.0 || >=24.0.0"}},"@jest/get-type@30.1.0":{"resolution":{"integrity":"sha512-eMbZE2hUnx1WV0pmURZY9XoXPkUYjpc55mb0CrhtdWLtzMQPFvu/rZkTLZFTsdaVQa+Tr4eWAteqcUzoawq/uA=="},"engines":{"node":"^18.14.0 || ^20.0.0 || ^22.0.0 || >=24.0.0"}},"@jest/schemas@30.0.5":{"resolution":{"integrity":"sha512-DmdYgtezMkh3cpU8/1uyXakv3tJRcmcXxBOcO0tbaozPwpmh4YMsnWrQm9ZmZMfa5ocbxzbFk6O4bDPEc/iAnA=="},"engines":{"node":"^18.14.0 || ^20.0.0 || ^22.0.0 || 
>=24.0.0"}},"@jridgewell/gen-mapping@0.3.13":{"resolution":{"integrity":"sha512-2kkt/7niJ6MgEPxF0bYdQ6etZaA+fQvDcLKckhy1yIQOzaoKjBBjSj63/aLVjYE3qhRt5dvM+uUyfCg6UKCBbA=="}},"@jridgewell/remapping@2.3.5":{"resolution":{"integrity":"sha512-LI9u/+laYG4Ds1TDKSJW2YPrIlcVYOwi2fUC6xB43lueCjgxV4lffOCZCtYFiH6TNOX+tQKXx97T4IKHbhyHEQ=="}},"@jridgewell/resolve-uri@3.1.2":{"resolution":{"integrity":"sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw=="},"engines":{"node":">=6.0.0"}},"@jridgewell/source-map@0.3.11":{"resolution":{"integrity":"sha512-ZMp1V8ZFcPG5dIWnQLr3NSI1MiCU7UETdS/A0G8V/XWHvJv3ZsFqutJn1Y5RPmAPX6F3BiE397OqveU/9NCuIA=="}},"@jridgewell/sourcemap-codec@1.5.5":{"resolution":{"integrity":"sha512-cYQ9310grqxueWbl+WuIUIaiUaDcj7WOq5fVhEljNVgRfOUhY9fy2zTvfoqWsnebh8Sl70VScFbICvJnLKB0Og=="}},"@jridgewell/trace-mapping@0.3.30":{"resolution":{"integrity":"sha512-GQ7Nw5G2lTu/BtHTKfXhKHok2WGetd4XYcVKGx00SjAk8GMwgJM3zr6zORiPGuOE+/vkc90KtTosSSvaCjKb2Q=="}},"@jridgewell/trace-mapping@0.3.31":{"resolution":{"integrity":"sha512-zzNR+SdQSDJzc8joaeP8QQoCQr8NuYx2dIIytl1QeBEZHJ9uW6hebsrYgbz8hJwUQao3TWCMtmfV8Nu1twOLAw=="}},"@jridgewell/trace-mapping@0.3.9":{"resolution":{"integrity":"sha512-3Belt6tdc8bPgAtbcmdtNJlirVoTmEb5e2gC94PnkwEW9jI6CAHUeoG85tjWP5WquqfavoMtMwiG4P926ZKKuQ=="}},"@jsonjoy.com/buffers@17.63.0":{"resolution":{"integrity":"sha512-IZB5WQRVNPEbuqouOQxZHl59AL6/ff+gmM20+xAx4SRX6DjZnQAxs03pQ2J6g5ssN+pzmShrBuGeksjlcZ3HCw=="},"engines":{"node":">=10.0"},"peerDependencies":{"tslib":"2"}},"@jsonjoy.com/codegen@17.63.0":{"resolution":{"integrity":"sha512-vQ18JiRQ8YfZQwzwCQs88rR5eGuy6AFfu+anz9RTvHQs9L4AE8dGA/mLzu6teh6CiSQTo2TNOQbqRh4Vy+7LEQ=="},"engines":{"node":">=10.0"},"peerDependencies":{"tslib":"2"}},"@jsonjoy.com/json-pointer@17.63.0":{"resolution":{"integrity":"sha512-wAW7rQsGW2zWtE+77cXU8lXsoXYCKa9eHptK3a2CCoNTm5YpPA3dev6LuEyaTDYKdF4DTjtwREv2PpjJidHE5w=="},"engines":{"node":">=10.0"},"peerDependencies":{"tslib":"2"}},"@jsonjoy.com/util@17.63.0":{"resolution":{"integrity":"sha512-AhpTIOFvuixKwem4d+ey4In78KJLCrDIUyp0IQ8xgpbs0IjNPTTfT3nXXbYMgJGxjegmqa9otl9nqbCvxOaiXw=="},"engines":{"node":">=10.0"},"peerDependencies":{"tslib":"2"}},"@lix-js/plugin-json@1.0.1":{"resolution":{"integrity":"sha512-pCqzG08D8jLtVy8RnITPZIy92XNlRAJWLrlRrzh3ttwS/PWM/iXiOPPuzvb23MoFhYxerzJ8uDGXhEXfVagY2w=="}},"@lix-js/sdk@0.5.1":{"resolution":{"integrity":"sha512-FiDGp6BznOLdzNOCUC5OvTJ6KfdKGk8wd5edD1dhU46quS4vi4EkHjS/N+12PSpCfl/p3wBWSQD6vzvZcIHTFg=="},"engines":{"node":">=22"}},"@lix-js/server-protocol-schema@0.1.1":{"resolution":{"integrity":"sha512-jBeALB6prAbtr5q4vTuxnRZZv1M2rKe8iNqRQhFJ4Tv7150unEa0vKyz0hs8Gl3fUGsWaNJBh3J8++fpbrpRBQ=="}},"@manypkg/find-root@1.1.0":{"resolution":{"integrity":"sha512-mki5uBvhHzO8kYYix/WRy2WX8S3B5wdVSc9D6KcU5lQNglP2yt58/VfLuAK49glRXChosY8ap2oJ1qgma3GUVA=="}},"@manypkg/get-packages@1.1.3":{"resolution":{"integrity":"sha512-fo+QhuU3qE/2TQMQmbVMqaQ6EWbMhi4ABWP+O4AM1NqPBuy0OrApV5LO6BrrgnhtAHS2NH6RrVk9OL181tTi8A=="}},"@marcbachmann/cel-js@2.5.2":{"resolution":{"integrity":"sha512-QnvFBFQ+2T8gX4H4pmcgIfs3gXwfhRjv7hYoRRDLwKeXxgPEZ+zvExe1pGtPs8xPWHu4ng0CmllNpVHWi4kB9A=="},"engines":{"node":">=20.19.0"}},"@mermaid-js/parser@0.6.3":{"resolution":{"integrity":"sha512-lnjOhe7zyHjc+If7yT4zoedx2vo4sHaTmtkl1+or8BRTnCtDmcTpAjpzDSfCZrshM5bCoz0GyidzadJAH1xobA=="}},"@mswjs/interceptors@0.39.8":{"resolution":{"integrity":"sha512-2+BzZbjRO7Ct61k8fMNHEtoKjeWI9pIlHFTqBwZ5icHpqszIgEZbjb1MW5Z0+bITTCTl3gk4PDBxs9tA/csXvA=="},"engines":{"node":">=18"}},"@napi-rs/wasm-runtime@0.
2.4":{"resolution":{"integrity":"sha512-9zESzOO5aDByvhIAsOy9TbpZ0Ur2AJbUI7UT73kcUTS2mxAMHOBaa1st/jAymNoCtvrit99kkzT1FZuXVcgfIQ=="}},"@napi-rs/wasm-runtime@1.1.4":{"resolution":{"integrity":"sha512-3NQNNgA1YSlJb/kMH1ildASP9HW7/7kYnRI2szWJaofaS1hWmbGI4H+d3+22aGzXXN9IJ+n+GiFVcGipJP18ow=="},"peerDependencies":{"@emnapi/core":"^1.7.1","@emnapi/runtime":"^1.7.1"}},"@nodelib/fs.scandir@2.1.5":{"resolution":{"integrity":"sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g=="},"engines":{"node":">= 8"}},"@nodelib/fs.stat@2.0.5":{"resolution":{"integrity":"sha512-RkhPPp2zrqDAQA/2jNhnztcPAlv64XdhIp7a7454A5ovI7Bukxgt7MX7udwAu3zg1DcpPU0rz3VV1SeaqvY4+A=="},"engines":{"node":">= 8"}},"@nodelib/fs.walk@1.2.8":{"resolution":{"integrity":"sha512-oGB+UxlgWcgQkgwo8GcEGwemoTFt3FIO9ababBmaGwXIoBKZ+GTy0pP185beGg7Llih/NSHSV2XAs1lnznocSg=="},"engines":{"node":">= 8"}},"@nrwl/nx-cloud@19.1.0":{"resolution":{"integrity":"sha512-krngXVPfX0Zf6+zJDtcI59/Pt3JfcMPMZ9C/+/x6rvz4WGgyv1s0MI4crEUM0Lx5ZpS4QI0WNDCFVQSfGEBXUg=="}},"@nx/nx-darwin-arm64@21.4.1":{"resolution":{"integrity":"sha512-9BbkQnxGEDNX2ESbW4Zdrq1i09y6HOOgTuGbMJuy4e8F8rU/motMUqOpwmFgLHkLgPNZiOC2VXht3or/kQcpOg=="},"cpu":["arm64"],"os":["darwin"]},"@nx/nx-darwin-x64@21.4.1":{"resolution":{"integrity":"sha512-dnkmap1kc6aLV8CW1ihjsieZyaDDjlIB5QA2reTCLNSdTV446K6Fh0naLdaoG4ZkF27zJA/qBOuAaLzRHFJp3g=="},"cpu":["x64"],"os":["darwin"]},"@nx/nx-freebsd-x64@21.4.1":{"resolution":{"integrity":"sha512-RpxDBGOPeDqJjpbV7F3lO/w1aIKfLyG/BM0OpJfTgFVpUIl50kMj5M1m4W9A8kvYkfOD9pDbUaWszom7d57yjg=="},"cpu":["x64"],"os":["freebsd"]},"@nx/nx-linux-arm-gnueabihf@21.4.1":{"resolution":{"integrity":"sha512-2OyBoag2738XWmWK3ZLBuhaYb7XmzT3f8HzomggLDJoDhwDekjgRoNbTxogAAj6dlXSeuPjO81BSlIfXQcth3w=="},"cpu":["arm"],"os":["linux"]},"@nx/nx-linux-arm64-gnu@21.4.1":{"resolution":{"integrity":"sha512-2pg7/zjBDioUWJ3OY8Ixqy64eokKT5sh4iq1bk22bxOCf676aGrAu6khIxy4LBnPIdO0ZOK7KCJ7xOFP4phZqA=="},"cpu":["arm64"],"os":["linux"]},"@nx/nx-linux-arm64-musl@21.4.1":{"resolution":{"integrity":"sha512-whNxh12au/inQtkZju1ZfXSqDS0hCh/anzVCXfLYWFstdwv61XiRmFCSHeN0gRDthlncXFdgKoT1bGG5aMYLtA=="},"cpu":["arm64"],"os":["linux"]},"@nx/nx-linux-x64-gnu@21.4.1":{"resolution":{"integrity":"sha512-UHw57rzLio0AUDXV3l+xcxT3LjuXil7SHj+H8aYmXTpXktctQU2eYGOs5ATqJ1avVQRSejJugHF0i8oLErC28A=="},"cpu":["x64"],"os":["linux"]},"@nx/nx-linux-x64-musl@21.4.1":{"resolution":{"integrity":"sha512-qqE2Gy/DwOLIyePjM7GLHp/nDLZJnxHmqTeCiTQCp/BdbmqjRkSUz5oL+Uua0SNXaTu5hjAfvjXAhSTgBwVO6g=="},"cpu":["x64"],"os":["linux"]},"@nx/nx-win32-arm64-msvc@21.4.1":{"resolution":{"integrity":"sha512-NtEzMiRrSm2DdL4ntoDdjeze8DBrfZvLtx3Dq6+XmOhwnigR6umfWfZ6jbluZpuSQcxzQNVifqirdaQKYaYwDQ=="},"cpu":["arm64"],"os":["win32"]},"@nx/nx-win32-x64-msvc@21.4.1":{"resolution":{"integrity":"sha512-gpG+Y4G/mxGrfkUls6IZEuuBxRaKLMSEoVFLMb9JyyaLEDusn+HJ1m90XsOedjNLBHGMFigsd/KCCsXfFn4njg=="},"cpu":["x64"],"os":["win32"]},"@oozcitak/dom@2.0.2":{"resolution":{"integrity":"sha512-GjpKhkSYC3Mj4+lfwEyI1dqnsKTgwGy48ytZEhm4A/xnH/8z9M3ZVXKr/YGQi3uCLs1AEBS+x5T2JPiueEDW8w=="},"engines":{"node":">=20.0"}},"@oozcitak/infra@2.0.2":{"resolution":{"integrity":"sha512-2g+E7hoE2dgCz/APPOEK5s3rMhJvNxSMBrP+U+j1OWsIbtSpWxxlUjq1lU8RIsFJNYv7NMlnVsCuHcUzJW+8vA=="},"engines":{"node":">=20.0"}},"@oozcitak/url@3.0.0":{"resolution":{"integrity":"sha512-ZKfET8Ak1wsLAiLWNfFkZc/BraDccuTJKR6svTYc7sVjbR+Iu0vtXdiDMY4o6jaFl5TW2TlS7jbLl4VovtAJWQ=="},"engines":{"node":">=20.0"}},"@oozcitak/util@10.0.0":{"resolution":{"integrity":"sha512-hAX0pT/73190NLqBPPWSdBVGtbY6VOhW
YK3qqHqtXQ1gK7kS2yz4+ivsN07hpJ6I3aeMtKP6J6npsEKOAzuTLA=="},"engines":{"node":">=20.0"}},"@open-draft/deferred-promise@2.2.0":{"resolution":{"integrity":"sha512-CecwLWx3rhxVQF6V4bAgPS5t+So2sTbPgAzafKkVizyi7tlwpcFpdFqq+wqF2OwNBmqFuu6tOyouTuxgpMfzmA=="}},"@open-draft/logger@0.3.0":{"resolution":{"integrity":"sha512-X2g45fzhxH238HKO4xbSr7+wBS8Fvw6ixhTDuvLd5mqh6bJJCFAPwU9mPDxbcrRtfxv4u5IHCEH77BmxvXmmxQ=="}},"@open-draft/until@2.1.0":{"resolution":{"integrity":"sha512-U69T3ItWHvLwGg5eJ0n3I62nWuE6ilHlmz7zM0npLBRvPRd7e6NYmg54vvRtP5mZG7kZqZCFVdsTWo7BPtBujg=="}},"@opentelemetry/api-logs@0.208.0":{"resolution":{"integrity":"sha512-CjruKY9V6NMssL/T1kAFgzosF1v9o6oeN+aX5JB/C/xPNtmgIJqcXHG7fA82Ou1zCpWGl4lROQUKwUNE1pMCyg=="},"engines":{"node":">=8.0.0"}},"@opentelemetry/api@1.9.0":{"resolution":{"integrity":"sha512-3giAOQvZiH5F9bMlMiv8+GSPMeqg0dbaeo58/0SlA9sxSqZhnUtxzX9/2FzyhS9sWQf5S0GJE0AKBrFqjpeYcg=="},"engines":{"node":">=8.0.0"}},"@opentelemetry/core@2.2.0":{"resolution":{"integrity":"sha512-FuabnnUm8LflnieVxs6eP7Z383hgQU4W1e3KJS6aOG3RxWxcHyBxH8fDMHNgu/gFx/M2jvTOW/4/PHhLz6bjWw=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":">=1.0.0 <1.10.0"}},"@opentelemetry/core@2.4.0":{"resolution":{"integrity":"sha512-KtcyFHssTn5ZgDu6SXmUznS80OFs/wN7y6MyFRRcKU6TOw8hNcGxKvt8hsdaLJfhzUszNSjURetq5Qpkad14Gw=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":">=1.0.0 <1.10.0"}},"@opentelemetry/exporter-logs-otlp-http@0.208.0":{"resolution":{"integrity":"sha512-jOv40Bs9jy9bZVLo/i8FwUiuCvbjWDI+ZW13wimJm4LjnlwJxGgB+N/VWOZUTpM+ah/awXeQqKdNlpLf2EjvYg=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":"^1.3.0"}},"@opentelemetry/otlp-exporter-base@0.208.0":{"resolution":{"integrity":"sha512-gMd39gIfVb2OgxldxUtOwGJYSH8P1kVFFlJLuut32L6KgUC4gl1dMhn+YC2mGn0bDOiQYSk/uHOdSjuKp58vvA=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":"^1.3.0"}},"@opentelemetry/otlp-transformer@0.208.0":{"resolution":{"integrity":"sha512-DCFPY8C6lAQHUNkzcNT9R+qYExvsk6C5Bto2pbNxgicpcSWbe2WHShLxkOxIdNcBiYPdVHv/e7vH7K6TI+C+fQ=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":"^1.3.0"}},"@opentelemetry/resources@2.2.0":{"resolution":{"integrity":"sha512-1pNQf/JazQTMA0BiO5NINUzH0cbLbbl7mntLa4aJNmCCXSj0q03T5ZXXL0zw4G55TjdL9Tz32cznGClf+8zr5A=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":">=1.3.0 <1.10.0"}},"@opentelemetry/resources@2.4.0":{"resolution":{"integrity":"sha512-RWvGLj2lMDZd7M/5tjkI/2VHMpXebLgPKvBUd9LRasEWR2xAynDwEYZuLvY9P2NGG73HF07jbbgWX2C9oavcQg=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":">=1.3.0 <1.10.0"}},"@opentelemetry/sdk-logs@0.208.0":{"resolution":{"integrity":"sha512-QlAyL1jRpOeaqx7/leG1vJMp84g0xKP6gJmfELBpnI4O/9xPX+Hu5m1POk9Kl+veNkyth5t19hRlN6tNY1sjbA=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":">=1.4.0 <1.10.0"}},"@opentelemetry/sdk-metrics@2.2.0":{"resolution":{"integrity":"sha512-G5KYP6+VJMZzpGipQw7Giif48h6SGQ2PFKEYCybeXJsOCB4fp8azqMAAzE5lnnHK3ZVwYQrgmFbsUJO/zOnwGw=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":">=1.9.0 <1.10.0"}},"@opentelemetry/sdk-trace-base@2.2.0":{"resolution":{"integrity":"sha512-xWQgL0Bmctsalg6PaXExmzdedSp3gyKV8mQBwK/j9VGdCDu2fmXIb2gAehBKbkXCpJ4HPkgv3QfoJWRT4dHWbw=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":">=1.3.0 
<1.10.0"}},"@opentelemetry/semantic-conventions@1.38.0":{"resolution":{"integrity":"sha512-kocjix+/sSggfJhwXqClZ3i9Y/MI0fp7b+g7kCRm6psy2dsf8uApTRclwG18h8Avm7C9+fnt+O36PspJ/OzoWg=="},"engines":{"node":">=14"}},"@opral/markdown-wc@0.9.0":{"resolution":{"integrity":"sha512-m5I3WklqED3mTcUOR3J9CRFIttMYsCmSCZnZYXNdL0Oj0EtSVWXPetPhKsHTEK+MrWPaqfsiKIFq6+l7dKgtNg=="},"peerDependencies":{"@tiptap/core":"^3.0.0"},"peerDependenciesMeta":{"@tiptap/core":{"optional":true}}},"@opral/zettel-ast@0.1.0":{"resolution":{"integrity":"sha512-pZDiecYrpSxw7miv4ZSufCRB9sqFMXRa0Rf+LQcoEEh0VOBI6beOmvB+iXmWJ7vxMQINuS7yfsvm5ZyrTm/W5A=="},"engines":{"node":">=20"}},"@oxc-project/types@0.127.0":{"resolution":{"integrity":"sha512-aIYXQBo4lCbO4z0R3FHeucQHpF46l2LbMdxRvqvuRuW2OxdnSkcng5B8+K12spgLDj93rtN3+J2Vac/TIO+ciQ=="}},"@oxlint/darwin-arm64@1.26.0":{"resolution":{"integrity":"sha512-kTmm1opqyn7iZopWHO3Ml4D/44pA5eknZBepgxCnTaPrW8XgCEUI85Q5AvOOvoNve8NziTYb8ax+CyuGJIgn/Q=="},"cpu":["arm64"],"os":["darwin"]},"@oxlint/darwin-x64@1.26.0":{"resolution":{"integrity":"sha512-/hMfZ9j7ZzVPRmMm02PHNc6MIMk0QYv5VowZJRIp40YLqLPvFfGNGZBj8e1fDVgZMFEGWDQK3yrt1uBKxXAK4Q=="},"cpu":["x64"],"os":["darwin"]},"@oxlint/linux-arm64-gnu@1.26.0":{"resolution":{"integrity":"sha512-iv4wdrwdCa8bhJxOpKlvfxqTs0LgW5tKBUMvH9B13zREHm1xT9JRZ8cQbbKiyC6LNdggwu5S6TSvODgAu7/DlA=="},"cpu":["arm64"],"os":["linux"]},"@oxlint/linux-arm64-musl@1.26.0":{"resolution":{"integrity":"sha512-a3gTbnN1JzedxqYeGTkg38BAs/r3Krd2DPNs/MF7nnHthT3RzkPUk47isMePLuNc4e/Weljn7m2m/Onx22tiNg=="},"cpu":["arm64"],"os":["linux"]},"@oxlint/linux-x64-gnu@1.26.0":{"resolution":{"integrity":"sha512-cCAyqyuKpFImjlgiBuuwSF+aDBW2h19/aCmHMTMSp6KXwhoQK7/Xx7/EhZKP5wiQJzVUYq5fXr0D8WmpLGsjRg=="},"cpu":["x64"],"os":["linux"]},"@oxlint/linux-x64-musl@1.26.0":{"resolution":{"integrity":"sha512-8VOJ4vQo0G1tNdaghxrWKjKZGg73tv+FoMDrtNYuUesqBHZN68FkYCsgPwEsacLhCmtoZrkF3ePDWDuWEpDyAg=="},"cpu":["x64"],"os":["linux"]},"@oxlint/win32-arm64@1.26.0":{"resolution":{"integrity":"sha512-N8KUtzP6gfEHKvaIBZCS9g8wRfqV5v55a/B8iJjIEhtMehcEM+UX+aYRsQ4dy5oBCrK3FEp4Yy/jHgb0moLm3Q=="},"cpu":["arm64"],"os":["win32"]},"@oxlint/win32-x64@1.26.0":{"resolution":{"integrity":"sha512-7tCyG0laduNQ45vzB9blVEGq/6DOvh7AFmiUAana8mTp0zIKQQmwJ21RqhazH0Rk7O6lL7JYzKcu+zaJHGpRLA=="},"cpu":["x64"],"os":["win32"]},"@pkgjs/parseargs@0.11.0":{"resolution":{"integrity":"sha512-+1VkjdD0QBLPodGrJUeqarH8VAIvQODIbwh9XpP5Syisf7YoQgsJKPNFoqqLQlu+VQ/tVSshMR6loPMn8U+dPg=="},"engines":{"node":">=14"}},"@polka/url@1.0.0-next.29":{"resolution":{"integrity":"sha512-wwQAWhWSuHaag8c4q/KN/vCoeOJYshAIvMQwD4GpSb3OiZklFfvAgmj0VCBBImRpuF/aFgIRzllXlVX93Jevww=="}},"@poppinss/colors@4.1.5":{"resolution":{"integrity":"sha512-FvdDqtcRCtz6hThExcFOgW0cWX+xwSMWcRuQe5ZEb2m7cVQOAVZOIMt+/v9RxGiD9/OY16qJBXK4CVKWAPalBw=="}},"@poppinss/dumper@0.6.5":{"resolution":{"integrity":"sha512-NBdYIb90J7LfOI32dOewKI1r7wnkiH6m920puQ3qHUeZkxNkQiFnXVWoE6YtFSv6QOiPPf7ys6i+HWWecDz7sw=="}},"@poppinss/exception@1.2.2":{"resolution":{"integrity":"sha512-m7bpKCD4QMlFCjA/nKTs23fuvoVFoA83brRKmObCUNmi/9tVu8Ve3w4YQAnJu4q3Tjf5fr685HYIC/IA2zHRSg=="}},"@posthog/core@1.9.1":{"resolution":{"integrity":"sha512-kRb1ch2dhQjsAapZmu6V66551IF2LnCbc1rnrQqnR7ArooVyJN9KOPXre16AJ3ObJz2eTfuP7x25BMyS2Y5Exw=="}},"@posthog/types@1.321.2":{"resolution":{"integrity":"sha512-nsMeHlVNlTB68JyV3/0+5FDreiTpUCStDH8ZUH/Hfsbw1howyf9a7DyURTwwhXdnyO0DksEFUIX+4IKCJs/H9g=="}},"@promptbook/utils@0.69.5":{"resolution":{"integrity":"sha512-xm5Ti/Hp3o4xHrsK9Yy3MS6KbDxYbq485hDsFvxqaNA7equHLPdo8H8faTitTeb14QCDfLW4iwCxdVYu5sn6YQ=="}},"@pro
tobufjs/aspromise@1.1.2":{"resolution":{"integrity":"sha512-j+gKExEuLmKwvz3OgROXtrJ2UG2x8Ch2YZUxahh+s1F2HZ+wAceUNLkvy6zKCPVRkU++ZWQrdxsUeQXmcg4uoQ=="}},"@protobufjs/base64@1.1.2":{"resolution":{"integrity":"sha512-AZkcAA5vnN/v4PDqKyMR5lx7hZttPDgClv83E//FMNhR2TMcLUhfRUBHCmSl0oi9zMgDDqRUJkSxO3wm85+XLg=="}},"@protobufjs/codegen@2.0.4":{"resolution":{"integrity":"sha512-YyFaikqM5sH0ziFZCN3xDC7zeGaB/d0IUb9CATugHWbd1FRFwWwt4ld4OYMPWu5a3Xe01mGAULCdqhMlPl29Jg=="}},"@protobufjs/eventemitter@1.1.0":{"resolution":{"integrity":"sha512-j9ednRT81vYJ9OfVuXG6ERSTdEL1xVsNgqpkxMsbIabzSo3goCjDIveeGv5d03om39ML71RdmrGNjG5SReBP/Q=="}},"@protobufjs/fetch@1.1.0":{"resolution":{"integrity":"sha512-lljVXpqXebpsijW71PZaCYeIcE5on1w5DlQy5WH6GLbFryLUrBD4932W/E2BSpfRJWseIL4v/KPgBFxDOIdKpQ=="}},"@protobufjs/float@1.0.2":{"resolution":{"integrity":"sha512-Ddb+kVXlXst9d+R9PfTIxh1EdNkgoRe5tOX6t01f1lYWOvJnSPDBlG241QLzcyPdoNTsblLUdujGSE4RzrTZGQ=="}},"@protobufjs/inquire@1.1.0":{"resolution":{"integrity":"sha512-kdSefcPdruJiFMVSbn801t4vFK7KB/5gd2fYvrxhuJYg8ILrmn9SKSX2tZdV6V+ksulWqS7aXjBcRXl3wHoD9Q=="}},"@protobufjs/path@1.1.2":{"resolution":{"integrity":"sha512-6JOcJ5Tm08dOHAbdR3GrvP+yUUfkjG5ePsHYczMFLq3ZmMkAD98cDgcT2iA1lJ9NVwFd4tH/iSSoe44YWkltEA=="}},"@protobufjs/pool@1.1.0":{"resolution":{"integrity":"sha512-0kELaGSIDBKvcgS4zkjz1PeddatrjYcmMWOlAuAPwAeccUrPHdUqo/J6LiymHHEiJT5NrF1UVwxY14f+fy4WQw=="}},"@protobufjs/utf8@1.1.0":{"resolution":{"integrity":"sha512-Vvn3zZrhQZkkBE8LSuW3em98c0FwgO4nxzv6OdSxPKJIEKY2bGbHn+mhGIPerzI4twdxaP8/0+06HBpwf345Lw=="}},"@puppeteer/browsers@2.13.1":{"resolution":{"integrity":"sha512-zmS4RTK9fbrc++WlAJhxYbfz3IjDeOmkK/CwwbLmk7ydfS9e2CiEeRJHEPvjDVElO/bwXbidwGA37Bsm6LzCnQ=="},"engines":{"node":">=18"},"hasBin":true},"@rolldown/binding-android-arm64@1.0.0-rc.17":{"resolution":{"integrity":"sha512-s70pVGhw4zqGeFnXWvAzJDlvxhlRollagdCCKRgOsgUOH3N1l0LIxf83AtGzmb5SiVM4Hjl5HyarMRfdfj3DaQ=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["arm64"],"os":["android"]},"@rolldown/binding-darwin-arm64@1.0.0-rc.17":{"resolution":{"integrity":"sha512-4ksWc9n0mhlZpZ9PMZgTGjeOPRu8MB1Z3Tz0Mo02eWfWCHMW1zN82Qz/pL/rC+yQa+8ZnutMF0JjJe7PjwasYw=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["arm64"],"os":["darwin"]},"@rolldown/binding-darwin-x64@1.0.0-rc.17":{"resolution":{"integrity":"sha512-SUSDOI6WwUVNcWxd02QEBjLdY1VPHvlEkw6T/8nYG322iYWCTxRb1vzk4E+mWWYehTp7ERibq54LSJGjmouOsw=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["x64"],"os":["darwin"]},"@rolldown/binding-freebsd-x64@1.0.0-rc.17":{"resolution":{"integrity":"sha512-hwnz3nw9dbJ05EDO/PvcjaaewqqDy7Y1rn1UO81l8iIK1GjenME75dl16ajbvSSMfv66WXSRCYKIqfgq2KCfxw=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["x64"],"os":["freebsd"]},"@rolldown/binding-linux-arm-gnueabihf@1.0.0-rc.17":{"resolution":{"integrity":"sha512-IS+W7epTcwANmFSQFrS1SivEXHtl1JtuQA9wlxrZTcNi6mx+FDOYrakGevvvTwgj2JvWiK8B29/qD9BELZPyXQ=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["arm"],"os":["linux"]},"@rolldown/binding-linux-arm64-gnu@1.0.0-rc.17":{"resolution":{"integrity":"sha512-e6usGaHKW5BMNZOymS1UcEYGowQMWcgZ71Z17Sl/h2+ZziNJ1a9n3Zvcz6LdRyIW5572wBCTH/Z+bKuZouGk9Q=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["arm64"],"os":["linux"]},"@rolldown/binding-linux-arm64-musl@1.0.0-rc.17":{"resolution":{"integrity":"sha512-b/CgbwAJpmrRLp02RPfhbudf5tZnN9nsPWK82znefso832etkem8H7FSZwxrOI9djcdTP7U6YfNhbRnh7djErg=="},"engines":{"node":"^20.19.0 || 
>=22.12.0"},"cpu":["arm64"],"os":["linux"]},"@rolldown/binding-linux-ppc64-gnu@1.0.0-rc.17":{"resolution":{"integrity":"sha512-4EII1iNGRUN5WwGbF/kOh/EIkoDN9HsupgLQoXfY+D1oyJm7/F4t5PYU5n8SWZgG0FEwakyM8pGgwcBYruGTlA=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["ppc64"],"os":["linux"]},"@rolldown/binding-linux-s390x-gnu@1.0.0-rc.17":{"resolution":{"integrity":"sha512-AH8oq3XqQo4IibpVXvPeLDI5pzkpYn0WiZAfT05kFzoJ6tQNzwRdDYQ45M8I/gslbodRZwW8uxLhbSBbkv96rA=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["s390x"],"os":["linux"]},"@rolldown/binding-linux-x64-gnu@1.0.0-rc.17":{"resolution":{"integrity":"sha512-cLnjV3xfo7KslbU41Z7z8BH/E1y5mzUYzAqih1d1MDaIGZRCMqTijqLv76/P7fyHuvUcfGsIpqCdddbxLLK9rA=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["x64"],"os":["linux"]},"@rolldown/binding-linux-x64-musl@1.0.0-rc.17":{"resolution":{"integrity":"sha512-0phclDw1spsL7dUB37sIARuis2tAgomCJXAHZlpt8PXZ4Ba0dRP1e+66lsRqrfhISeN9bEGNjQs+T/Fbd7oYGw=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["x64"],"os":["linux"]},"@rolldown/binding-openharmony-arm64@1.0.0-rc.17":{"resolution":{"integrity":"sha512-0ag/hEgXOwgw4t8QyQvUCxvEg+V0KBcA6YuOx9g0r02MprutRF5dyljgm3EmR02O292UX7UeS6HzWHAl6KgyhA=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["arm64"],"os":["openharmony"]},"@rolldown/binding-wasm32-wasi@1.0.0-rc.17":{"resolution":{"integrity":"sha512-LEXei6vo0E5wTGwpkJ4KoT3OZJRnglwldt5ziLzOlc6qqb55z4tWNq2A+PFqCJuvWWdP53CVhG1Z9NtToDPJrA=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["wasm32"]},"@rolldown/binding-win32-arm64-msvc@1.0.0-rc.17":{"resolution":{"integrity":"sha512-gUmyzBl3SPMa6hrqFUth9sVfcLBlYsbMzBx5PlexMroZStgzGqlZ26pYG89rBb45Mnia+oil6YAIFeEWGWhoZA=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["arm64"],"os":["win32"]},"@rolldown/binding-win32-x64-msvc@1.0.0-rc.17":{"resolution":{"integrity":"sha512-3hkiolcUAvPB9FLb3UZdfjVVNWherN1f/skkGWJP/fgSQhYUZpSIRr0/I8ZK9TkF3F7kxvJAk0+IcKvPHk9qQg=="},"engines":{"node":"^20.19.0 || 
>=22.12.0"},"cpu":["x64"],"os":["win32"]},"@rolldown/pluginutils@1.0.0-beta.40":{"resolution":{"integrity":"sha512-s3GeJKSQOwBlzdUrj4ISjJj5SfSh+aqn0wjOar4Bx95iV1ETI7F6S/5hLcfAxZ9kXDcyrAkxPlqmd1ZITttf+w=="}},"@rolldown/pluginutils@1.0.0-rc.17":{"resolution":{"integrity":"sha512-n8iosDOt6Ig1UhJ2AYqoIhHWh/isz0xpicHTzpKBeotdVsTEcxsSA/i3EVM7gQAj0rU27OLAxCjzlj15IWY7bg=="}},"@rolldown/pluginutils@1.0.0-rc.7":{"resolution":{"integrity":"sha512-qujRfC8sFVInYSPPMLQByRh7zhwkGFS4+tyMQ83srV1qrxL4g8E2tyxVVyxd0+8QeBM1mIk9KbWxkegRr76XzA=="}},"@rollup/rollup-android-arm-eabi@4.53.2":{"resolution":{"integrity":"sha512-yDPzwsgiFO26RJA4nZo8I+xqzh7sJTZIWQOxn+/XOdPE31lAvLIYCKqjV+lNH/vxE2L2iH3plKxDCRK6i+CwhA=="},"cpu":["arm"],"os":["android"]},"@rollup/rollup-android-arm64@4.53.2":{"resolution":{"integrity":"sha512-k8FontTxIE7b0/OGKeSN5B6j25EuppBcWM33Z19JoVT7UTXFSo3D9CdU39wGTeb29NO3XxpMNauh09B+Ibw+9g=="},"cpu":["arm64"],"os":["android"]},"@rollup/rollup-darwin-arm64@4.53.2":{"resolution":{"integrity":"sha512-A6s4gJpomNBtJ2yioj8bflM2oogDwzUiMl2yNJ2v9E7++sHrSrsQ29fOfn5DM/iCzpWcebNYEdXpaK4tr2RhfQ=="},"cpu":["arm64"],"os":["darwin"]},"@rollup/rollup-darwin-x64@4.53.2":{"resolution":{"integrity":"sha512-e6XqVmXlHrBlG56obu9gDRPW3O3hLxpwHpLsBJvuI8qqnsrtSZ9ERoWUXtPOkY8c78WghyPHZdmPhHLWNdAGEw=="},"cpu":["x64"],"os":["darwin"]},"@rollup/rollup-freebsd-arm64@4.53.2":{"resolution":{"integrity":"sha512-v0E9lJW8VsrwPux5Qe5CwmH/CF/2mQs6xU1MF3nmUxmZUCHazCjLgYvToOk+YuuUqLQBio1qkkREhxhc656ViA=="},"cpu":["arm64"],"os":["freebsd"]},"@rollup/rollup-freebsd-x64@4.53.2":{"resolution":{"integrity":"sha512-ClAmAPx3ZCHtp6ysl4XEhWU69GUB1D+s7G9YjHGhIGCSrsg00nEGRRZHmINYxkdoJehde8VIsDC5t9C0gb6yqA=="},"cpu":["x64"],"os":["freebsd"]},"@rollup/rollup-linux-arm-gnueabihf@4.53.2":{"resolution":{"integrity":"sha512-EPlb95nUsz6Dd9Qy13fI5kUPXNSljaG9FiJ4YUGU1O/Q77i5DYFW5KR8g1OzTcdZUqQQ1KdDqsTohdFVwCwjqg=="},"cpu":["arm"],"os":["linux"]},"@rollup/rollup-linux-arm-musleabihf@4.53.2":{"resolution":{"integrity":"sha512-BOmnVW+khAUX+YZvNfa0tGTEMVVEerOxN0pDk2E6N6DsEIa2Ctj48FOMfNDdrwinocKaC7YXUZ1pHlKpnkja/Q=="},"cpu":["arm"],"os":["linux"]},"@rollup/rollup-linux-arm64-gnu@4.53.2":{"resolution":{"integrity":"sha512-Xt2byDZ+6OVNuREgBXr4+CZDJtrVso5woFtpKdGPhpTPHcNG7D8YXeQzpNbFRxzTVqJf7kvPMCub/pcGUWgBjA=="},"cpu":["arm64"],"os":["linux"]},"@rollup/rollup-linux-arm64-musl@4.53.2":{"resolution":{"integrity":"sha512-+LdZSldy/I9N8+klim/Y1HsKbJ3BbInHav5qE9Iy77dtHC/pibw1SR/fXlWyAk0ThnpRKoODwnAuSjqxFRDHUQ=="},"cpu":["arm64"],"os":["linux"]},"@rollup/rollup-linux-loong64-gnu@4.53.2":{"resolution":{"integrity":"sha512-8ms8sjmyc1jWJS6WdNSA23rEfdjWB30LH8Wqj0Cqvv7qSHnvw6kgMMXRdop6hkmGPlyYBdRPkjJnj3KCUHV/uQ=="},"cpu":["loong64"],"os":["linux"]},"@rollup/rollup-linux-ppc64-gnu@4.53.2":{"resolution":{"integrity":"sha512-3HRQLUQbpBDMmzoxPJYd3W6vrVHOo2cVW8RUo87Xz0JPJcBLBr5kZ1pGcQAhdZgX9VV7NbGNipah1omKKe23/g=="},"cpu":["ppc64"],"os":["linux"]},"@rollup/rollup-linux-riscv64-gnu@4.53.2":{"resolution":{"integrity":"sha512-fMjKi+ojnmIvhk34gZP94vjogXNNUKMEYs+EDaB/5TG/wUkoeua7p7VCHnE6T2Tx+iaghAqQX8teQzcvrYpaQA=="},"cpu":["riscv64"],"os":["linux"]},"@rollup/rollup-linux-riscv64-musl@4.53.2":{"resolution":{"integrity":"sha512-XuGFGU+VwUUV5kLvoAdi0Wz5Xbh2SrjIxCtZj6Wq8MDp4bflb/+ThZsVxokM7n0pcbkEr2h5/pzqzDYI7cCgLQ=="},"cpu":["riscv64"],"os":["linux"]},"@rollup/rollup-linux-s390x-gnu@4.53.2":{"resolution":{"integrity":"sha512-w6yjZF0P+NGzWR3AXWX9zc0DNEGdtvykB03uhonSHMRa+oWA6novflo2WaJr6JZakG2ucsyb+rvhrKac6NIy+w=="},"cpu":["s390x"],"os":["linux"]},"@rollup/rollup-linux-x64-gnu@4.53.2":{"resol
ution":{"integrity":"sha512-yo8d6tdfdeBArzC7T/PnHd7OypfI9cbuZzPnzLJIyKYFhAQ8SvlkKtKBMbXDxe1h03Rcr7u++nFS7tqXz87Gtw=="},"cpu":["x64"],"os":["linux"]},"@rollup/rollup-linux-x64-musl@4.53.2":{"resolution":{"integrity":"sha512-ah59c1YkCxKExPP8O9PwOvs+XRLKwh/mV+3YdKqQ5AMQ0r4M4ZDuOrpWkUaqO7fzAHdINzV9tEVu8vNw48z0lA=="},"cpu":["x64"],"os":["linux"]},"@rollup/rollup-openharmony-arm64@4.53.2":{"resolution":{"integrity":"sha512-4VEd19Wmhr+Zy7hbUsFZ6YXEiP48hE//KPLCSVNY5RMGX2/7HZ+QkN55a3atM1C/BZCGIgqN+xrVgtdak2S9+A=="},"cpu":["arm64"],"os":["openharmony"]},"@rollup/rollup-win32-arm64-msvc@4.53.2":{"resolution":{"integrity":"sha512-IlbHFYc/pQCgew/d5fslcy1KEaYVCJ44G8pajugd8VoOEI8ODhtb/j8XMhLpwHCMB3yk2J07ctup10gpw2nyMA=="},"cpu":["arm64"],"os":["win32"]},"@rollup/rollup-win32-ia32-msvc@4.53.2":{"resolution":{"integrity":"sha512-lNlPEGgdUfSzdCWU176ku/dQRnA7W+Gp8d+cWv73jYrb8uT7HTVVxq62DUYxjbaByuf1Yk0RIIAbDzp+CnOTFg=="},"cpu":["ia32"],"os":["win32"]},"@rollup/rollup-win32-x64-gnu@4.53.2":{"resolution":{"integrity":"sha512-S6YojNVrHybQis2lYov1sd+uj7K0Q05NxHcGktuMMdIQ2VixGwAfbJ23NnlvvVV1bdpR2m5MsNBViHJKcA4ADw=="},"cpu":["x64"],"os":["win32"]},"@rollup/rollup-win32-x64-msvc@4.53.2":{"resolution":{"integrity":"sha512-k+/Rkcyx//P6fetPoLMb8pBeqJBNGx81uuf7iljX9++yNBVRDQgD04L+SVXmXmh5ZP4/WOp4mWF0kmi06PW2tA=="},"cpu":["x64"],"os":["win32"]},"@shikijs/core@3.15.0":{"resolution":{"integrity":"sha512-8TOG6yG557q+fMsSVa8nkEDOZNTSxjbbR8l6lF2gyr6Np+jrPlslqDxQkN6rMXCECQ3isNPZAGszAfYoJOPGlg=="}},"@shikijs/engine-javascript@3.15.0":{"resolution":{"integrity":"sha512-ZedbOFpopibdLmvTz2sJPJgns8Xvyabe2QbmqMTz07kt1pTzfEvKZc5IqPVO/XFiEbbNyaOpjPBkkr1vlwS+qg=="}},"@shikijs/engine-oniguruma@3.15.0":{"resolution":{"integrity":"sha512-HnqFsV11skAHvOArMZdLBZZApRSYS4LSztk2K3016Y9VCyZISnlYUYsL2hzlS7tPqKHvNqmI5JSUJZprXloMvA=="}},"@shikijs/langs@3.15.0":{"resolution":{"integrity":"sha512-WpRvEFvkVvO65uKYW4Rzxs+IG0gToyM8SARQMtGGsH4GDMNZrr60qdggXrFOsdfOVssG/QQGEl3FnJ3EZ+8w8A=="}},"@shikijs/themes@3.15.0":{"resolution":{"integrity":"sha512-8ow2zWb1IDvCKjYb0KiLNrK4offFdkfNVPXb1OZykpLCzRU6j+efkY+Y7VQjNlNFXonSw+4AOdGYtmqykDbRiQ=="}},"@shikijs/types@3.15.0":{"resolution":{"integrity":"sha512-BnP+y/EQnhihgHy4oIAN+6FFtmfTekwOLsQbRw9hOKwqgNy8Bdsjq8B05oAt/ZgvIWWFrshV71ytOrlPfYjIJw=="}},"@shikijs/vscode-textmate@10.0.2":{"resolution":{"integrity":"sha512-83yeghZ2xxin3Nj8z1NMd/NCuca+gsYXswywDy5bHvwlWL8tpTQmzGeUuHd9FC3E/SBEMvzJRwWEOz5gGes9Qg=="}},"@sinclair/typebox@0.34.40":{"resolution":{"integrity":"sha512-gwBNIP8ZAYev/ORDWW0QvxdwPXwxBtLsdsJgSc7eDIRt8ubP+rxUBzPsrwnu16fgEF8Bx4lh/+mvQvJzcTM6Kw=="}},"@sindresorhus/is@7.1.1":{"resolution":{"integrity":"sha512-rO92VvpgMc3kfiTjGT52LEtJ8Yc5kCWhZjLQ3LwlA4pSgPpQO7bVpYXParOD8Jwf+cVQECJo3yP/4I8aZtUQTQ=="},"engines":{"node":">=18"}},"@speed-highlight/core@1.2.12":{"resolution":{"integrity":"sha512-uilwrK0Ygyri5dToHYdZSjcvpS2ZwX0w5aSt3GCEN9hrjxWCoeV4Z2DTXuxjwbntaLQIEEAlCeNQss5SoHvAEA=="}},"@sqlite.org/sqlite-wasm@3.50.4-build1":{"resolution":{"integrity":"sha512-Qig2Wso7gPkU1PtXwFzndh+CTRzrIFxVGqv6eCetjU7YqxlHItj+GvQYwYTppCRgAPawtRN/4AJcEgB9xDHGug=="},"hasBin":true},"@standard-schema/spec@1.0.0":{"resolution":{"integrity":"sha512-m2bOd0f2RT9k8QJx1JN85cZYyH1RqFBdlwtkSlf4tBDYLCiiZnv1fIIwacK6cqwXavOydf0NPToMQgpKq+dVlA=="}},"@standard-schema/spec@1.1.0":{"resolution":{"integrity":"sha512-l2aFy5jALhniG5HgqrD6jXLi/rUWrKvqN/qJx6yoJsgKhblVd+iqqU4RCXavm/jPityDo5TCvKMnpjKnOriy0w=="}},"@tailwindcss/node@4.2.4":{"resolution":{"integrity":"sha512-Ai7+yQPxz3ddrDQzFfBKdHEVBg0w3Zl83jnjuwxnZOsnH9pGn93QHQtpU0p/8rYWxvbFZHneni6p1BSLK4D
kGA=="}},"@tailwindcss/oxide-android-arm64@4.2.4":{"resolution":{"integrity":"sha512-e7MOr1SAn9U8KlZzPi1ZXGZHeC5anY36qjNwmZv9pOJ8E4Q6jmD1vyEHkQFmNOIN7twGPEMXRHmitN4zCMN03g=="},"engines":{"node":">= 20"},"cpu":["arm64"],"os":["android"]},"@tailwindcss/oxide-darwin-arm64@4.2.4":{"resolution":{"integrity":"sha512-tSC/Kbqpz/5/o/C2sG7QvOxAKqyd10bq+ypZNf+9Fi2TvbVbv1zNpcEptcsU7DPROaSbVgUXmrzKhurFvo5eDg=="},"engines":{"node":">= 20"},"cpu":["arm64"],"os":["darwin"]},"@tailwindcss/oxide-darwin-x64@4.2.4":{"resolution":{"integrity":"sha512-yPyUXn3yO/ufR6+Kzv0t4fCg2qNr90jxXc5QqBpjlPNd0NqyDXcmQb/6weunH/MEDXW5dhyEi+agTDiqa3WsGg=="},"engines":{"node":">= 20"},"cpu":["x64"],"os":["darwin"]},"@tailwindcss/oxide-freebsd-x64@4.2.4":{"resolution":{"integrity":"sha512-BoMIB4vMQtZsXdGLVc2z+P9DbETkiopogfWZKbWwM8b/1Vinbs4YcUwo+kM/KeLkX3Ygrf4/PsRndKaYhS8Eiw=="},"engines":{"node":">= 20"},"cpu":["x64"],"os":["freebsd"]},"@tailwindcss/oxide-linux-arm-gnueabihf@4.2.4":{"resolution":{"integrity":"sha512-7pIHBLTHYRAlS7V22JNuTh33yLH4VElwKtB3bwchK/UaKUPpQ0lPQiOWcbm4V3WP2I6fNIJ23vABIvoy2izdwA=="},"engines":{"node":">= 20"},"cpu":["arm"],"os":["linux"]},"@tailwindcss/oxide-linux-arm64-gnu@4.2.4":{"resolution":{"integrity":"sha512-+E4wxJ0ZGOzSH325reXTWB48l42i93kQqMvDyz5gqfRzRZ7faNhnmvlV4EPGJU3QJM/3Ab5jhJ5pCRUsKn6OQw=="},"engines":{"node":">= 20"},"cpu":["arm64"],"os":["linux"]},"@tailwindcss/oxide-linux-arm64-musl@4.2.4":{"resolution":{"integrity":"sha512-bBADEGAbo4ASnppIziaQJelekCxdMaxisrk+fB7Thit72IBnALp9K6ffA2G4ruj90G9XRS2VQ6q2bCKbfFV82g=="},"engines":{"node":">= 20"},"cpu":["arm64"],"os":["linux"]},"@tailwindcss/oxide-linux-x64-gnu@4.2.4":{"resolution":{"integrity":"sha512-7Mx25E4WTfnht0TVRTyC00j3i0M+EeFe7wguMDTlX4mRxafznw0CA8WJkFjWYH5BlgELd1kSjuU2JiPnNZbJDA=="},"engines":{"node":">= 20"},"cpu":["x64"],"os":["linux"]},"@tailwindcss/oxide-linux-x64-musl@4.2.4":{"resolution":{"integrity":"sha512-2wwJRF7nyhOR0hhHoChc04xngV3iS+akccHTGtz965FwF0up4b2lOdo6kI1EbDaEXKgvcrFBYcYQQ/rrnWFVfA=="},"engines":{"node":">= 20"},"cpu":["x64"],"os":["linux"]},"@tailwindcss/oxide-wasm32-wasi@4.2.4":{"resolution":{"integrity":"sha512-FQsqApeor8Fo6gUEklzmaa9994orJZZDBAlQpK2Mq+DslRKFJeD6AjHpBQ0kZFQohVr8o85PPh8eOy86VlSCmw=="},"engines":{"node":">=14.0.0"},"cpu":["wasm32"],"bundledDependencies":["@napi-rs/wasm-runtime","@emnapi/core","@emnapi/runtime","@tybys/wasm-util","@emnapi/wasi-threads","tslib"]},"@tailwindcss/oxide-win32-arm64-msvc@4.2.4":{"resolution":{"integrity":"sha512-L9BXqxC4ToVgwMFqj3pmZRqyHEztulpUJzCxUtLjobMCzTPsGt1Fa9enKbOpY2iIyVtaHNeNvAK8ERP/64sqGQ=="},"engines":{"node":">= 20"},"cpu":["arm64"],"os":["win32"]},"@tailwindcss/oxide-win32-x64-msvc@4.2.4":{"resolution":{"integrity":"sha512-ESlKG0EpVJQwRjXDDa9rLvhEAh0mhP1sF7sap9dNZT0yyl9SAG6T7gdP09EH0vIv0UNTlo6jPWyujD6559fZvw=="},"engines":{"node":">= 20"},"cpu":["x64"],"os":["win32"]},"@tailwindcss/oxide@4.2.4":{"resolution":{"integrity":"sha512-9El/iI069DKDSXwTvB9J4BwdO5JhRrOweGaK25taBAvBXyXqJAX+Jqdvs8r8gKpsI/1m0LeJLyQYTf/WLrBT1Q=="},"engines":{"node":">= 20"}},"@tailwindcss/vite@4.2.4":{"resolution":{"integrity":"sha512-pCvohwOCspk3ZFn6eJzrrX3g4n2JY73H6MmYC87XfGPyTty4YsCjYTMArRZm/zOI8dIt3+EcrLHAFPe5A4bgtw=="},"peerDependencies":{"vite":"^5.2.0 || ^6 || ^7 || 
^8"}},"@tanstack/history@1.161.6":{"resolution":{"integrity":"sha512-NaOGLRrddszbQj9upGat6HG/4TKvXLvu+osAIgfxPYA+eIvYKv8GKDJOrY2D3/U9MRnKfMWD7bU4jeD4xmqyIg=="},"engines":{"node":">=20.19"}},"@tanstack/react-router@1.169.2":{"resolution":{"integrity":"sha512-OJM7Kguc7ERnweaNRWsyWgIKcl3z23rD1B4jaxjzd9RGdnzpt2HfrWa9rggbT0Hfzhfo4D2ZmsfoTme035tniQ=="},"engines":{"node":">=20.19"},"peerDependencies":{"react":">=18.0.0 || >=19.0.0","react-dom":">=18.0.0 || >=19.0.0"}},"@tanstack/react-start-client@1.166.48":{"resolution":{"integrity":"sha512-6fqwCwe6v+Nvtdf6vg6gxs/0gCXyZEHF18EslNeG/kca2wnXYFuXRhqGJjJaEgMk3WF4IE9mUgFuBSAOY3P7nQ=="},"engines":{"node":">=22.12.0"},"peerDependencies":{"react":">=18.0.0 || >=19.0.0","react-dom":">=18.0.0 || >=19.0.0"}},"@tanstack/react-start-rsc@0.0.43":{"resolution":{"integrity":"sha512-2RCa8Caw/HKrHi9pxmUvsiUrBtjddeBiP93e7OYQOCL3rHxoMD9CSscwT9/ziCaqnIOuBFbKWgvRTahR4jSfsw=="},"engines":{"node":">=22.12.0"},"peerDependencies":{"@rspack/core":">=2.0.0-0","@vitejs/plugin-rsc":">=0.5.20","react":">=18.0.0 || >=19.0.0","react-dom":">=18.0.0 || >=19.0.0","react-server-dom-rspack":">=0.0.2"},"peerDependenciesMeta":{"@rspack/core":{"optional":true},"@vitejs/plugin-rsc":{"optional":true},"react-server-dom-rspack":{"optional":true}}},"@tanstack/react-start-server@1.166.52":{"resolution":{"integrity":"sha512-46Gx+byIndYywUtyna5h3qatHipJkPFqo/miexfuYPgeVAI6ypQzsw7wxF194H6VAP43m2q+fdLPBXStufoOGw=="},"engines":{"node":">=22.12.0"},"peerDependencies":{"react":">=18.0.0 || >=19.0.0","react-dom":">=18.0.0 || >=19.0.0"}},"@tanstack/react-start@1.167.64":{"resolution":{"integrity":"sha512-gxtesUkHIZmKR/OEFAx6ifedIs7UM1cG5B/TJhcs6c/BrJpjeQIrkF9/GmWRpslaWCpo3tXA2IOxNSH49KFhoA=="},"engines":{"node":">=22.12.0"},"peerDependencies":{"@rsbuild/core":"^2.0.0","@vitejs/plugin-rsc":"*","react":">=18.0.0 || >=19.0.0","react-dom":">=18.0.0 || >=19.0.0","vite":">=7.0.0"},"peerDependenciesMeta":{"@rsbuild/core":{"optional":true},"@vitejs/plugin-rsc":{"optional":true},"vite":{"optional":true}}},"@tanstack/react-store@0.9.3":{"resolution":{"integrity":"sha512-y2iHd/N9OkoQbFJLUX1T9vbc2O9tjH0pQRgTcx1/Nz4IlwLvkgpuglXUx+mXt0g5ZDFrEeDnONPqkbfxXJKwRg=="},"peerDependencies":{"react":"^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0","react-dom":"^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"}},"@tanstack/router-core@1.169.2":{"resolution":{"integrity":"sha512-5sm0DJF1A7Mz+9gy4Gz/lLovNailK3yot4vYvz9MkBUPw26uLnhQiR8hSCYxucjE0wD6Mdlc5l+Z0/XTlZ7xHw=="},"engines":{"node":">=20.19"}},"@tanstack/router-generator@1.166.41":{"resolution":{"integrity":"sha512-XpnkVvk9AlCtw5vggJsnSx3MdKGk8Asopwy9wUFAqFAHqlrRJzV9PoZ5kGkNEJMOYYcMTriJLN4D+kyXRUJpDQ=="},"engines":{"node":">=20.19"}},"@tanstack/router-plugin@1.167.34":{"resolution":{"integrity":"sha512-hU0Cuw79Yo6FGPBB0mW9Ik8bnTzmnUKtbgbvmIzeFdK3wKBPS4+xN7kcxVaBqXfP6xR3PFkIf2SSoYsiuLjVtg=="},"engines":{"node":">=20.19"},"peerDependencies":{"@rsbuild/core":">=1.0.2 || ^2.0.0","@tanstack/react-router":"^1.169.2","vite":">=5.0.0 || >=6.0.0 || >=7.0.0 || >=8.0.0","vite-plugin-solid":"^2.11.10 || 
^3.0.0-0","webpack":">=5.92.0"},"peerDependenciesMeta":{"@rsbuild/core":{"optional":true},"@tanstack/react-router":{"optional":true},"vite":{"optional":true},"vite-plugin-solid":{"optional":true},"webpack":{"optional":true}}},"@tanstack/router-utils@1.161.8":{"resolution":{"integrity":"sha512-xyiLWEKjfBAVhauDSSjXxyf7s8elU6SM+V050sbkofvGmIIvkwPFtDsX7Gvwh14kBd6iCwAT+RiPvXTxAptY0Q=="},"engines":{"node":">=20.19"}},"@tanstack/start-client-core@1.168.2":{"resolution":{"integrity":"sha512-/bckv9k/yxY4VmSY2V2MeX7NBsS5uqGvdSPs5WIvW3Uv35DXPrdiumKXTNJeZRNRMtxrM+YfxQPjXLx3C7ykvg=="},"engines":{"node":">=22.12.0"}},"@tanstack/start-fn-stubs@1.161.6":{"resolution":{"integrity":"sha512-Y6QSlGiLga8cHfvxGGaonXIlt2bIUTVdH6AMjmpMp7+ANNCp+N96GQbjjhLye3JkaxDfP68x5iZA8NK4imgRig=="},"engines":{"node":">=22.12.0"}},"@tanstack/start-plugin-core@1.169.19":{"resolution":{"integrity":"sha512-z3/Tkytb6eRQKDnFU31QLimwrcVyDi9uHMtUQKmJkxQg+Bz85di+MxMrbnvd8XXP9OHcFlWK8HpG/HpVncZq4Q=="},"engines":{"node":">=22.12.0"},"peerDependencies":{"@rsbuild/core":"^2.0.0","vite":">=7.0.0"},"peerDependenciesMeta":{"@rsbuild/core":{"optional":true},"vite":{"optional":true}}},"@tanstack/start-server-core@1.167.30":{"resolution":{"integrity":"sha512-GC0PXzYYSEwfAOC2NxGXFUyYvfbSjVoqnIrzJsyInKd8xQxGEQaVdrebbyx9TV5cj7A5e7EJcWAsf3G3wRDQBw=="},"engines":{"node":">=22.12.0"}},"@tanstack/start-storage-context@1.166.35":{"resolution":{"integrity":"sha512-ZKDkKiorJrKwfEHjatEwRHG7EP3raJPhh6CSl4CFmHW0naIvwaW5gQcxcT8IlHtoGDLYDAjBEcSr3MZyXgqmOA=="},"engines":{"node":">=22.12.0"}},"@tanstack/store@0.9.3":{"resolution":{"integrity":"sha512-8reSzl/qGWGGVKhBoxXPMWzATSbZLZFWhwBAFO9NAyp0TxzfBP0mIrGb8CP8KrQTmvzXlR/vFPPUrHTLBGyFyw=="}},"@tanstack/virtual-file-routes@1.161.7":{"resolution":{"integrity":"sha512-olW33+Cn+bsCsZKPwEGhlkqS6w3M2slFv11JIobdnCFKMLG97oAI2kWKdx5/zsywTL8flpnoIgaZZPlQTFYhdQ=="},"engines":{"node":">=20.19"},"hasBin":true},"@testing-library/dom@10.4.1":{"resolution":{"integrity":"sha512-o4PXJQidqJl82ckFaXUeoAW+XysPLauYI43Abki5hABd853iMhitooc6znOnczgbTYmEP6U6/y1ZyKAIsvMKGg=="},"engines":{"node":">=18"}},"@testing-library/react@16.3.0":{"resolution":{"integrity":"sha512-kFSyxiEDwv1WLl2fgsq6pPBbw5aWKrsY2/noi1Id0TK0UParSF62oFQFGHXIyaG4pp2tEub/Zlel+fjjZILDsw=="},"engines":{"node":">=18"},"peerDependencies":{"@testing-library/dom":"^10.0.0","@types/react":"^18.0.0 || ^19.0.0","@types/react-dom":"^18.0.0 || ^19.0.0","react":"^18.0.0 || ^19.0.0","react-dom":"^18.0.0 || 
^19.0.0"},"peerDependenciesMeta":{"@types/react":{"optional":true},"@types/react-dom":{"optional":true}}},"@testing-library/user-event@14.6.1":{"resolution":{"integrity":"sha512-vq7fv0rnt+QTXgPxr5Hjc210p6YKq2kmdziLgnsZGgLJ9e6VAShx1pACLuRjd/AS/sr7phAR58OIIpf0LlmQNw=="},"engines":{"node":">=12","npm":">=6"},"peerDependencies":{"@testing-library/dom":">=7.21.4"}},"@tootallnate/quickjs-emscripten@0.23.0":{"resolution":{"integrity":"sha512-C5Mc6rdnsaJDjO3UpGW/CQTHtCKaYlScZTly4JIu97Jxo/odCiH0ITnDXSJPTOrEKk/ycSZ0AOgTmkDtkOsvIA=="}},"@tybys/wasm-util@0.10.2":{"resolution":{"integrity":"sha512-RoBvJ2X0wuKlWFIjrwffGw1IqZHKQqzIchKaadZZfnNpsAYp2mM0h36JtPCjNDAHGgYez/15uMBpfGwchhiMgg=="}},"@tybys/wasm-util@0.9.0":{"resolution":{"integrity":"sha512-6+7nlbMVX/PVDCwaIQ8nTOPveOcFLSt8GcXdx8hD0bt39uWxYT88uXzqTd4fTvqta7oeUJqudepapKNt2DYJFw=="}},"@types/aria-query@5.0.4":{"resolution":{"integrity":"sha512-rfT93uj5s0PRL7EzccGMs3brplhcrghnDoV26NqKhCAS1hVo+WdNsPvE/yb6ilfr5hi2MEk6d5EWJTKdxg8jVw=="}},"@types/chai@5.2.2":{"resolution":{"integrity":"sha512-8kB30R7Hwqf40JPiKhVzodJs2Qc1ZJ5zuT3uzw5Hq/dhNCl3G3l83jfpdI1e20BP348+fV7VIL/+FxaXkqBmWg=="}},"@types/chai@5.2.3":{"resolution":{"integrity":"sha512-Mw558oeA9fFbv65/y4mHtXDs9bPnFMZAL/jxdPFUpOHHIXX91mcgEHbS5Lahr+pwZFR8A7GQleRWeI6cGFC2UA=="}},"@types/cookie@0.6.0":{"resolution":{"integrity":"sha512-4Kh9a6B2bQciAhf7FSuMRRkUWecJgJu9nPnx3yzpsfXX/c50REIqpHY4C82bXP90qrLtXtkDxTZosYO3UpOwlA=="}},"@types/d3-array@3.2.1":{"resolution":{"integrity":"sha512-Y2Jn2idRrLzUfAKV2LyRImR+y4oa2AntrgID95SHJxuMUrkNXmanDSed71sRNZysveJVt1hLLemQZIady0FpEg=="}},"@types/d3-axis@3.0.6":{"resolution":{"integrity":"sha512-pYeijfZuBd87T0hGn0FO1vQ/cgLk6E1ALJjfkC0oJ8cbwkZl3TpgS8bVBLZN+2jjGgg38epgxb2zmoGtSfvgMw=="}},"@types/d3-brush@3.0.6":{"resolution":{"integrity":"sha512-nH60IZNNxEcrh6L1ZSMNA28rj27ut/2ZmI3r96Zd+1jrZD++zD3LsMIjWlvg4AYrHn/Pqz4CF3veCxGjtbqt7A=="}},"@types/d3-chord@3.0.6":{"resolution":{"integrity":"sha512-LFYWWd8nwfwEmTZG9PfQxd17HbNPksHBiJHaKuY1XeqscXacsS2tyoo6OdRsjf+NQYeB6XrNL3a25E3gH69lcg=="}},"@types/d3-color@3.1.3":{"resolution":{"integrity":"sha512-iO90scth9WAbmgv7ogoq57O9YpKmFBbmoEoCHDB2xMBY0+/KVrqAaCDyCE16dUspeOvIxFFRI+0sEtqDqy2b4A=="}},"@types/d3-contour@3.0.6":{"resolution":{"integrity":"sha512-BjzLgXGnCWjUSYGfH1cpdo41/hgdWETu4YxpezoztawmqsvCeep+8QGfiY6YbDvfgHz/DkjeIkkZVJavB4a3rg=="}},"@types/d3-delaunay@6.0.4":{"resolution":{"integrity":"sha512-ZMaSKu4THYCU6sV64Lhg6qjf1orxBthaC161plr5KuPHo3CNm8DTHiLw/5Eq2b6TsNP0W0iJrUOFscY6Q450Hw=="}},"@types/d3-dispatch@3.0.6":{"resolution":{"integrity":"sha512-4fvZhzMeeuBJYZXRXrRIQnvUYfyXwYmLsdiN7XXmVNQKKw1cM8a5WdID0g1hVFZDqT9ZqZEY5pD44p24VS7iZQ=="}},"@types/d3-drag@3.0.7":{"resolution":{"integrity":"sha512-HE3jVKlzU9AaMazNufooRJ5ZpWmLIoc90A37WU2JMmeq28w1FQqCZswHZ3xR+SuxYftzHq6WU6KJHvqxKzTxxQ=="}},"@types/d3-dsv@3.0.7":{"resolution":{"integrity":"sha512-n6QBF9/+XASqcKK6waudgL0pf/S5XHPPI8APyMLLUHd8NqouBGLsU8MgtO7NINGtPBtk9Kko/W4ea0oAspwh9g=="}},"@types/d3-ease@3.0.2":{"resolution":{"integrity":"sha512-NcV1JjO5oDzoK26oMzbILE6HW7uVXOHLQvHshBUW4UMdZGfiY6v5BeQwh9a9tCzv+CeefZQHJt5SRgK154RtiA=="}},"@types/d3-fetch@3.0.7":{"resolution":{"integrity":"sha512-fTAfNmxSb9SOWNB9IoG5c8Hg6R+AzUHDRlsXsDZsNp6sxAEOP0tkP3gKkNSO/qmHPoBFTxNrjDprVHDQDvo5aA=="}},"@types/d3-force@3.0.10":{"resolution":{"integrity":"sha512-ZYeSaCF3p73RdOKcjj+swRlZfnYpK1EbaDiYICEEp5Q6sUiqFaFQ9qgoshp5CzIyyb/yD09kD9o2zEltCexlgw=="}},"@types/d3-format@3.0.4":{"resolution":{"integrity":"sha512-fALi2aI6shfg7vM5KiR1wNJnZ7r6UuggVqtDA+xiEdPZQwy/trcQaHnwShLuLdta2rTymCNpxYTiMZX/e09F4g=="}},"@ty
pes/d3-geo@3.1.0":{"resolution":{"integrity":"sha512-856sckF0oP/diXtS4jNsiQw/UuK5fQG8l/a9VVLeSouf1/PPbBE1i1W852zVwKwYCBkFJJB7nCFTbk6UMEXBOQ=="}},"@types/d3-hierarchy@3.1.7":{"resolution":{"integrity":"sha512-tJFtNoYBtRtkNysX1Xq4sxtjK8YgoWUNpIiUee0/jHGRwqvzYxkq0hGVbbOGSz+JgFxxRu4K8nb3YpG3CMARtg=="}},"@types/d3-interpolate@3.0.4":{"resolution":{"integrity":"sha512-mgLPETlrpVV1YRJIglr4Ez47g7Yxjl1lj7YKsiMCb27VJH9W8NVM6Bb9d8kkpG/uAQS5AmbA48q2IAolKKo1MA=="}},"@types/d3-path@3.1.0":{"resolution":{"integrity":"sha512-P2dlU/q51fkOc/Gfl3Ul9kicV7l+ra934qBFXCFhrZMOL6du1TM0pm1ThYvENukyOn5h9v+yMJ9Fn5JK4QozrQ=="}},"@types/d3-polygon@3.0.2":{"resolution":{"integrity":"sha512-ZuWOtMaHCkN9xoeEMr1ubW2nGWsp4nIql+OPQRstu4ypeZ+zk3YKqQT0CXVe/PYqrKpZAi+J9mTs05TKwjXSRA=="}},"@types/d3-quadtree@3.0.6":{"resolution":{"integrity":"sha512-oUzyO1/Zm6rsxKRHA1vH0NEDG58HrT5icx/azi9MF1TWdtttWl0UIUsjEQBBh+SIkrpd21ZjEv7ptxWys1ncsg=="}},"@types/d3-random@3.0.3":{"resolution":{"integrity":"sha512-Imagg1vJ3y76Y2ea0871wpabqp613+8/r0mCLEBfdtqC7xMSfj9idOnmBYyMoULfHePJyxMAw3nWhJxzc+LFwQ=="}},"@types/d3-scale-chromatic@3.1.0":{"resolution":{"integrity":"sha512-iWMJgwkK7yTRmWqRB5plb1kadXyQ5Sj8V/zYlFGMUBbIPKQScw+Dku9cAAMgJG+z5GYDoMjWGLVOvjghDEFnKQ=="}},"@types/d3-scale@4.0.8":{"resolution":{"integrity":"sha512-gkK1VVTr5iNiYJ7vWDI+yUFFlszhNMtVeneJ6lUTKPjprsvLLI9/tgEGiXJOnlINJA8FyA88gfnQsHbybVZrYQ=="}},"@types/d3-selection@3.0.11":{"resolution":{"integrity":"sha512-bhAXu23DJWsrI45xafYpkQ4NtcKMwWnAC/vKrd2l+nxMFuvOT3XMYTIj2opv8vq8AO5Yh7Qac/nSeP/3zjTK0w=="}},"@types/d3-shape@3.1.7":{"resolution":{"integrity":"sha512-VLvUQ33C+3J+8p+Daf+nYSOsjB4GXp19/S/aGo60m9h1v6XaxjiT82lKVWJCfzhtuZ3yD7i/TPeC/fuKLLOSmg=="}},"@types/d3-time-format@4.0.3":{"resolution":{"integrity":"sha512-5xg9rC+wWL8kdDj153qZcsJ0FWiFt0J5RB6LYUNZjwSnesfblqrI/bJ1wBdJ8OQfncgbJG5+2F+qfqnqyzYxyg=="}},"@types/d3-time@3.0.4":{"resolution":{"integrity":"sha512-yuzZug1nkAAaBlBBikKZTgzCeA+k1uy4ZFwWANOfKw5z5LRhV0gNA7gNkKm7HoK+HRN0wX3EkxGk0fpbWhmB7g=="}},"@types/d3-timer@3.0.2":{"resolution":{"integrity":"sha512-Ps3T8E8dZDam6fUyNiMkekK3XUsaUEik+idO9/YjPtfj2qruF8tFBXS7XhtE4iIXBLxhmLjP3SXpLhVf21I9Lw=="}},"@types/d3-transition@3.0.9":{"resolution":{"integrity":"sha512-uZS5shfxzO3rGlu0cC3bjmMFKsXv+SmZZcgp0KD22ts4uGXp5EVYGzu/0YdwZeKmddhcAccYtREJKkPfXkZuCg=="}},"@types/d3-zoom@3.0.8":{"resolution":{"integrity":"sha512-iqMC4/YlFCSlO8+2Ii1GGGliCAY4XdeG748w5vQUbevlbDu0zSjH/+jojorQVBK/se0j6DUFNPBGSqD3YWYnDw=="}},"@types/d3@7.4.3":{"resolution":{"integrity":"sha512-lZXZ9ckh5R8uiFVt8ogUNf+pIrK4EsWrx2Np75WvF/eTpJ0FMHNhjXk8CKEx/+gpHbNQyJWehbFaTvqmHWB3ww=="}},"@types/debug@4.1.12":{"resolution":{"integrity":"sha512-vIChWdVG3LG1SMxEvI/AK+FWJthlrqlTu7fbrlywTkkaONwk/UAGaULXRlf8vkzFBLVm0zkMdCquhL5aOjhXPQ=="}},"@types/deep-eql@4.0.2":{"resolution":{"integrity":"sha512-c9h9dVVMigMPc4bwTvC5dxqtqJZwQPePsWjPlpSOnojbor6pGqdk541lfA7AqFQr5pB1BRdq0juY9db81BwyFw=="}},"@types/eslint-scope@3.7.7":{"resolution":{"integrity":"sha512-MzMFlSLBqNF2gcHWO0G1vP/YQyfvrxZ0bF+u7mzUdZ1/xK4A4sru+nraZz5i3iEIk1l1uyicaDVTB4QbbEkAYg=="}},"@types/eslint@9.6.1":{"resolution":{"integrity":"sha512-FXx2pKgId/WyYo2jXw63kk7/+TY7u7AziEJxJAnSFzHlqTAS3Ync6SvgYAN/k4/PQpnnVuzoMuVnByKK2qp0ag=="}},"@types/estree@1.0.8":{"resolution":{"integrity":"sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w=="}},"@types/estree@1.0.9":{"resolution":{"integrity":"sha512-GhdPgy1el4/ImP05X05Uw4cw2/M93BCUmnEvWZNStlCzEKME4Fkk+YpoA5OiHNQmoS7Cafb8Xa3Pya8m1Qrzeg=="}},"@types/geojson@7946.0.15":{"resolution":{"integrity":"sha5
12-9oSxFzDCT2Rj6DfcHF8G++jxBKS7mBqXl5xrRW+Kbvjry6Uduya2iiwqHPhVXpasAVMBYKkEPGgKhd3+/HZ6xA=="}},"@types/hast@3.0.4":{"resolution":{"integrity":"sha512-WPs+bbQw5aCj+x6laNGWLH3wviHtoCv/P3+otBhbOhJgG8qtpdAMlTCxLtsTWA7LH1Oh/bFCHsBn0TPS5m30EQ=="}},"@types/json-schema@7.0.15":{"resolution":{"integrity":"sha512-5+fP8P8MFNC+AyZCDxrB2pkZFPGzqQWUzpSeuuVLvm8VMcorNYavBqoFcxK8bQz4Qsbn4oUEEem4wDLfcysGHA=="}},"@types/mdast@4.0.4":{"resolution":{"integrity":"sha512-kGaNbPh1k7AFzgpud/gMdvIm5xuECykRR+JnWKQno9TAXVa6WIVCGTPvYGekIDL4uwCZQSYbUxNBSb1aUo79oA=="}},"@types/ms@2.1.0":{"resolution":{"integrity":"sha512-GsCCIZDE/p3i96vtEqx+7dBUGXrc7zeSK3wwPHIaRThS+9OhWIXRqzs4d6k1SVU8g91DrNRWxWUGhp5KXQb2VA=="}},"@types/node@12.20.55":{"resolution":{"integrity":"sha512-J8xLz7q2OFulZ2cyGTLE1TbbZcjpno7FaN6zdJNrgAdrJ+DZzh/uFR6YrTb4C+nXakvud8Q4+rbhoIWlYQbUFQ=="}},"@types/node@20.19.39":{"resolution":{"integrity":"sha512-orrrD74MBUyK8jOAD/r0+lfa1I2MO6I+vAkmAWzMYbCcgrN4lCrmK52gRFQq/JRxfYPfonkr4b0jcY7Olqdqbw=="}},"@types/node@22.15.33":{"resolution":{"integrity":"sha512-wzoocdnnpSxZ+6CjW4ADCK1jVmd1S/J3ArNWfn8FDDQtRm8dkDg7TA+mvek2wNrfCgwuZxqEOiB9B1XCJ6+dbw=="}},"@types/node@22.19.17":{"resolution":{"integrity":"sha512-wGdMcf+vPYM6jikpS/qhg6WiqSV/OhG+jeeHT/KlVqxYfD40iYJf9/AE1uQxVWFvU7MipKRkRv8NSHiCGgPr8Q=="}},"@types/node@24.10.2":{"resolution":{"integrity":"sha512-WOhQTZ4G8xZ1tjJTvKOpyEVSGgOTvJAfDK3FNFgELyaTpzhdgHVHeqW8V+UJvzF5BT+/B54T/1S2K6gd9c7bbA=="}},"@types/react-dom@19.2.3":{"resolution":{"integrity":"sha512-jp2L/eY6fn+KgVVQAOqYItbF0VY/YApe5Mz2F0aykSO8gx31bYCZyvSeYxCHKvzHG5eZjc+zyaS5BrBWya2+kQ=="},"peerDependencies":{"@types/react":"^19.2.0"}},"@types/react@19.2.7":{"resolution":{"integrity":"sha512-MWtvHrGZLFttgeEj28VXHxpmwYbor/ATPYbBfSFZEIRK0ecCFLl2Qo55z52Hss+UV9CRN7trSeq1zbgx7YDWWg=="}},"@types/sinonjs__fake-timers@8.1.5":{"resolution":{"integrity":"sha512-mQkU2jY8jJEF7YHjHvsQO8+3ughTL1mcnn96igfhONmR+fUPSKIkefQYpSe8bsly2Ep7oQbn/6VG5/9/0qcArQ=="}},"@types/statuses@2.0.6":{"resolution":{"integrity":"sha512-xMAgYwceFhRA2zY+XbEA7mxYbA093wdiW8Vu6gZPGWy9cmOyU9XesH1tNcEWsKFd5Vzrqx5T3D38PWx1FIIXkA=="}},"@types/tough-cookie@4.0.5":{"resolution":{"integrity":"sha512-/Ad8+nIOV7Rl++6f1BdKxFSMgmoqEoYbHRpPcx3JEfv8VRsQe9Z4mCXeJBzxs7mbHY/XOZZuXlRNfhpVPbs6ZA=="}},"@types/trusted-types@2.0.7":{"resolution":{"integrity":"sha512-ScaPdn1dQczgbl0QFTeTOmVHFULt394XJgOQNoyVhZ6r2vLnMLJfBPd53SB52T/3G36VI1/g2MZaX0cwDuXsfw=="}},"@types/unist@3.0.3":{"resolution":{"integrity":"sha512-ko/gIFJRv177XgZsZcBwnqJN5x/Gien8qNOn0D5bQU/zAzVf9Zt3BlcUiLqhV9y4ARk0GbT3tnUiPNgnTXzc/Q=="}},"@types/whatwg-mimetype@3.0.2":{"resolution":{"integrity":"sha512-c2AKvDT8ToxLIOUlN51gTiHXflsfIFisS4pO7pDPoKouJCESkhZnEy623gwP9laCy5lnLDAw1vAzu2vM2YLOrA=="}},"@types/which@2.0.2":{"resolution":{"integrity":"sha512-113D3mDkZDjo+EeUEHCFy0qniNc1ZpecGiAU7WSo7YDoSzolZIQKpYFHrPpjkB2nuyahcKfrmLXeQlh7gqJYdw=="}},"@types/ws@8.18.1":{"resolution":{"integrity":"sha512-ThVF6DCVhA8kUGy+aazFQ4kXQ7E1Ty7A3ypFOe0IcJV8O/M511G99AW24irKrW56Wt44yG9+ij8FaqoBGkuBXg=="}},"@types/yauzl@2.10.3":{"resolution":{"integrity":"sha512-oJoftv0LSuaDZE3Le4DbKX+KS9G36NzOeSap90UIK0yMA/NhKJhqlSGtNDORNRaIbQfzjXDrQa0ytJ6mNRGz/Q=="}},"@ungap/structured-clone@1.2.1":{"resolution":{"integrity":"sha512-fEzPV3hSkSMltkw152tJKNARhOupqbH96MZWyRjNaYZOMIzbrTeQDG+MTc6Mr2pgzFQzFxAfmhGDNP5QK++2ZA=="},"deprecated":"Potential CWE-502 - Update to 1.3.1 or 
higher"},"@vitejs/plugin-react@6.0.1":{"resolution":{"integrity":"sha512-l9X/E3cDb+xY3SWzlG1MOGt2usfEHGMNIaegaUGFsLkb3RCn/k8/TOXBcab+OndDI4TBtktT8/9BwwW8Vi9KUQ=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"peerDependencies":{"@rolldown/plugin-babel":"^0.1.7 || ^0.2.0","babel-plugin-react-compiler":"^1.0.0","vite":"^8.0.0"},"peerDependenciesMeta":{"@rolldown/plugin-babel":{"optional":true},"babel-plugin-react-compiler":{"optional":true}}},"@vitest/browser@3.2.4":{"resolution":{"integrity":"sha512-tJxiPrWmzH8a+w9nLKlQMzAKX/7VjFs50MWgcAj7p9XQ7AQ9/35fByFYptgPELyLw+0aixTnC4pUWV+APcZ/kw=="},"peerDependencies":{"playwright":"*","safaridriver":"*","vitest":"3.2.4","webdriverio":"^7.0.0 || ^8.0.0 || ^9.0.0"},"peerDependenciesMeta":{"playwright":{"optional":true},"safaridriver":{"optional":true},"webdriverio":{"optional":true}}},"@vitest/browser@4.1.5":{"resolution":{"integrity":"sha512-iCDGI8c4yg+xmjUg2VsygdAUSIIB4x5Rht/P68OXy1hPELKXHDkzh87lkuTcdYmemRChDkEpB426MmDjzC0ziA=="},"peerDependencies":{"vitest":"4.1.5"}},"@vitest/coverage-v8@3.2.4":{"resolution":{"integrity":"sha512-EyF9SXU6kS5Ku/U82E259WSnvg6c8KTjppUncuNdm5QHpe17mwREHnjDzozC8x9MZ0xfBUFSaLkRv4TMA75ALQ=="},"peerDependencies":{"@vitest/browser":"3.2.4","vitest":"3.2.4"},"peerDependenciesMeta":{"@vitest/browser":{"optional":true}}},"@vitest/coverage-v8@4.1.5":{"resolution":{"integrity":"sha512-38C0/Ddb7HcRG0Z4/DUem8x57d2p9jYgp18mkaYswEOQBGsI1CG4f/hjm0ZCeaJfWhSZ4k7jgs29V1Zom7Ki9A=="},"peerDependencies":{"@vitest/browser":"4.1.5","vitest":"4.1.5"},"peerDependenciesMeta":{"@vitest/browser":{"optional":true}}},"@vitest/expect@3.2.4":{"resolution":{"integrity":"sha512-Io0yyORnB6sikFlt8QW5K7slY4OjqNX9jmJQ02QDda8lyM6B5oNgVWoSoKPac8/kgnCUzuHQKrSLtu/uOqqrig=="}},"@vitest/expect@4.0.18":{"resolution":{"integrity":"sha512-8sCWUyckXXYvx4opfzVY03EOiYVxyNrHS5QxX3DAIi5dpJAAkyJezHCP77VMX4HKA2LDT/Jpfo8i2r5BE3GnQQ=="}},"@vitest/expect@4.1.5":{"resolution":{"integrity":"sha512-PWBaRY5JoKuRnHlUHfpV/KohFylaDZTupcXN1H9vYryNLOnitSw60Mw9IAE2r67NbwwzBw/Cc/8q9BK3kIX8Kw=="}},"@vitest/mocker@3.2.4":{"resolution":{"integrity":"sha512-46ryTE9RZO/rfDd7pEqFl7etuyzekzEhUbTW3BvmeO/BcCMEgq59BKhek3dXDWgAj4oMK6OZi+vRr1wPW6qjEQ=="},"peerDependencies":{"msw":"^2.4.9","vite":"^5.0.0 || ^6.0.0 || ^7.0.0-0"},"peerDependenciesMeta":{"msw":{"optional":true},"vite":{"optional":true}}},"@vitest/mocker@4.0.18":{"resolution":{"integrity":"sha512-HhVd0MDnzzsgevnOWCBj5Otnzobjy5wLBe4EdeeFGv8luMsGcYqDuFRMcttKWZA5vVO8RFjexVovXvAM4JoJDQ=="},"peerDependencies":{"msw":"^2.4.9","vite":"^6.0.0 || ^7.0.0-0"},"peerDependenciesMeta":{"msw":{"optional":true},"vite":{"optional":true}}},"@vitest/mocker@4.1.5":{"resolution":{"integrity":"sha512-/x2EmFC4mT4NNzqvC3fmesuV97w5FC903KPmey4gsnJiMQ3Be1IlDKVaDaG8iqaLFHqJ2FVEkxZk5VmeLjIItw=="},"peerDependencies":{"msw":"^2.4.9","vite":"^6.0.0 || ^7.0.0 || 
^8.0.0"},"peerDependenciesMeta":{"msw":{"optional":true},"vite":{"optional":true}}},"@vitest/pretty-format@3.2.4":{"resolution":{"integrity":"sha512-IVNZik8IVRJRTr9fxlitMKeJeXFFFN0JaB9PHPGQ8NKQbGpfjlTx9zO4RefN8gp7eqjNy8nyK3NZmBzOPeIxtA=="}},"@vitest/pretty-format@4.0.18":{"resolution":{"integrity":"sha512-P24GK3GulZWC5tz87ux0m8OADrQIUVDPIjjj65vBXYG17ZeU3qD7r+MNZ1RNv4l8CGU2vtTRqixrOi9fYk/yKw=="}},"@vitest/pretty-format@4.1.5":{"resolution":{"integrity":"sha512-7I3q6l5qr03dVfMX2wCo9FxwSJbPdwKjy2uu/YPpU3wfHvIL4QHwVRp57OfGrDFeUJ8/8QdfBKIV12FTtLn00g=="}},"@vitest/runner@3.2.4":{"resolution":{"integrity":"sha512-oukfKT9Mk41LreEW09vt45f8wx7DordoWUZMYdY/cyAk7w5TWkTRCNZYF7sX7n2wB7jyGAl74OxgwhPgKaqDMQ=="}},"@vitest/runner@4.0.18":{"resolution":{"integrity":"sha512-rpk9y12PGa22Jg6g5M3UVVnTS7+zycIGk9ZNGN+m6tZHKQb7jrP7/77WfZy13Y/EUDd52NDsLRQhYKtv7XfPQw=="}},"@vitest/runner@4.1.5":{"resolution":{"integrity":"sha512-2D+o7Pr82IEO46YPpoA/YU0neeyr6FTerQb5Ro7BUnBuv6NQtT/kmVnczngiMEBhzgqz2UZYl5gArejsyERDSQ=="}},"@vitest/snapshot@3.2.4":{"resolution":{"integrity":"sha512-dEYtS7qQP2CjU27QBC5oUOxLE/v5eLkGqPE0ZKEIDGMs4vKWe7IjgLOeauHsR0D5YuuycGRO5oSRXnwnmA78fQ=="}},"@vitest/snapshot@4.0.18":{"resolution":{"integrity":"sha512-PCiV0rcl7jKQjbgYqjtakly6T1uwv/5BQ9SwBLekVg/EaYeQFPiXcgrC2Y7vDMA8dM1SUEAEV82kgSQIlXNMvA=="}},"@vitest/snapshot@4.1.5":{"resolution":{"integrity":"sha512-zypXEt4KH/XgKGPUz4eC2AvErYx0My5hfL8oDb1HzGFpEk1P62bxSohdyOmvz+d9UJwanI68MKwr2EquOaOgMQ=="}},"@vitest/spy@3.2.4":{"resolution":{"integrity":"sha512-vAfasCOe6AIK70iP5UD11Ac4siNUNJ9i/9PZ3NKx07sG6sUxeag1LWdNrMWeKKYBLlzuK+Gn65Yd5nyL6ds+nw=="}},"@vitest/spy@4.0.18":{"resolution":{"integrity":"sha512-cbQt3PTSD7P2OARdVW3qWER5EGq7PHlvE+QfzSC0lbwO+xnt7+XH06ZzFjFRgzUX//JmpxrCu92VdwvEPlWSNw=="}},"@vitest/spy@4.1.5":{"resolution":{"integrity":"sha512-2lNOsh6+R2Idnf1TCZqSwYlKN2E/iDlD8sgU59kYVl+OMDmvldO1VDk39smRfpUNwYpNRVn3w4YfuC7KfbBnkQ=="}},"@vitest/utils@3.2.4":{"resolution":{"integrity":"sha512-fB2V0JFrQSMsCo9HiSq3Ezpdv4iYaXRG1Sx8edX3MwxfyNn83mKiGzOcH+Fkxt4MHxr3y42fQi1oeAInqgX2QA=="}},"@vitest/utils@4.0.18":{"resolution":{"integrity":"sha512-msMRKLMVLWygpK3u2Hybgi4MNjcYJvwTb0Ru09+fOyCXIgT5raYP041DRRdiJiI3k/2U6SEbAETB3YtBrUkCFA=="}},"@vitest/utils@4.1.5":{"resolution":{"integrity":"sha512-76wdkrmfXfqGjueGgnb45ITPyUi1ycZ4IHgC2bhPDUfWHklY/q3MdLOAB+TF1e6xfl8NxNY0ZYaPCFNWSsw3Ug=="}},"@wdio/config@9.1.3":{"resolution":{"integrity":"sha512-fozjb5Jl26QqQoZ2lJc8uZwzK2iKKmIfNIdNvx5JmQt78ybShiPuWWgu/EcHYDvAiZwH76K59R1Gp4lNmmEDew=="},"engines":{"node":">=18.20.0"}},"@wdio/logger@8.38.0":{"resolution":{"integrity":"sha512-kcHL86RmNbcQP+Gq/vQUGlArfU6IIcbbnNp32rRIraitomZow+iEoc519rdQmSVusDozMS5DZthkgDdxK+vz6Q=="},"engines":{"node":"^16.13 || 
>=18"}},"@wdio/logger@9.1.3":{"resolution":{"integrity":"sha512-cumRMK/gE1uedBUw3WmWXOQ7HtB6DR8EyKQioUz2P0IJtRRpglMBdZV7Svr3b++WWawOuzZHMfbTkJQmaVt8Gw=="},"engines":{"node":">=18.20.0"}},"@wdio/protocols@9.2.0":{"resolution":{"integrity":"sha512-lSdKCwLtqMxSIW+cl8au21GlNkvmLNGgyuGYdV/lFdWflmMYH1zusruM6Km6Kpv2VUlWySjjGknYhe7XVTOeMw=="}},"@wdio/repl@9.0.8":{"resolution":{"integrity":"sha512-3iubjl4JX5zD21aFxZwQghqC3lgu+mSs8c3NaiYYNCC+IT5cI/8QuKlgh9s59bu+N3gG988jqMJeCYlKuUv/iw=="},"engines":{"node":">=18.20.0"}},"@wdio/types@9.1.3":{"resolution":{"integrity":"sha512-oQrzLQBqn/+HXSJJo01NEfeKhzwuDdic7L8PDNxv5ySKezvmLDYVboQfoSDRtpAdfAZCcxuU9L4Jw7iTf6WV3g=="},"engines":{"node":">=18.20.0"}},"@wdio/utils@9.1.3":{"resolution":{"integrity":"sha512-dYeOzq9MTh8jYRZhzo/DYyn+cKrhw7h0/5hgyXkbyk/wHwF/uLjhATPmfaCr9+MARSEdiF7wwU8iRy/V0jfsLg=="},"engines":{"node":">=18.20.0"}},"@webassemblyjs/ast@1.14.1":{"resolution":{"integrity":"sha512-nuBEDgQfm1ccRp/8bCQrx1frohyufl4JlbMMZ4P1wpeOfDhF6FQkxZJ1b/e+PLwr6X1Nhw6OLme5usuBWYBvuQ=="}},"@webassemblyjs/floating-point-hex-parser@1.13.2":{"resolution":{"integrity":"sha512-6oXyTOzbKxGH4steLbLNOu71Oj+C8Lg34n6CqRvqfS2O71BxY6ByfMDRhBytzknj9yGUPVJ1qIKhRlAwO1AovA=="}},"@webassemblyjs/helper-api-error@1.13.2":{"resolution":{"integrity":"sha512-U56GMYxy4ZQCbDZd6JuvvNV/WFildOjsaWD3Tzzvmw/mas3cXzRJPMjP83JqEsgSbyrmaGjBfDtV7KDXV9UzFQ=="}},"@webassemblyjs/helper-buffer@1.14.1":{"resolution":{"integrity":"sha512-jyH7wtcHiKssDtFPRB+iQdxlDf96m0E39yb0k5uJVhFGleZFoNw1c4aeIcVUPPbXUVJ94wwnMOAqUHyzoEPVMA=="}},"@webassemblyjs/helper-numbers@1.13.2":{"resolution":{"integrity":"sha512-FE8aCmS5Q6eQYcV3gI35O4J789wlQA+7JrqTTpJqn5emA4U2hvwJmvFRC0HODS+3Ye6WioDklgd6scJ3+PLnEA=="}},"@webassemblyjs/helper-wasm-bytecode@1.13.2":{"resolution":{"integrity":"sha512-3QbLKy93F0EAIXLh0ogEVR6rOubA9AoZ+WRYhNbFyuB70j3dRdwH9g+qXhLAO0kiYGlg3TxDV+I4rQTr/YNXkA=="}},"@webassemblyjs/helper-wasm-section@1.14.1":{"resolution":{"integrity":"sha512-ds5mXEqTJ6oxRoqjhWDU83OgzAYjwsCV8Lo/N+oRsNDmx/ZDpqalmrtgOMkHwxsG0iI//3BwWAErYRHtgn0dZw=="}},"@webassemblyjs/ieee754@1.13.2":{"resolution":{"integrity":"sha512-4LtOzh58S/5lX4ITKxnAK2USuNEvpdVV9AlgGQb8rJDHaLeHciwG4zlGr0j/SNWlr7x3vO1lDEsuePvtcDNCkw=="}},"@webassemblyjs/leb128@1.13.2":{"resolution":{"integrity":"sha512-Lde1oNoIdzVzdkNEAWZ1dZ5orIbff80YPdHx20mrHwHrVNNTjNr8E3xz9BdpcGqRQbAEa+fkrCb+fRFTl/6sQw=="}},"@webassemblyjs/utf8@1.13.2":{"resolution":{"integrity":"sha512-3NQWGjKTASY1xV5m7Hr0iPeXD9+RDobLll3T9d2AO+g3my8xy5peVyjSag4I50mR1bBSN/Ct12lo+R9tJk0NZQ=="}},"@webassemblyjs/wasm-edit@1.14.1":{"resolution":{"integrity":"sha512-RNJUIQH/J8iA/1NzlE4N7KtyZNHi3w7at7hDjvRNm5rcUXa00z1vRz3glZoULfJ5mpvYhLybmVcwcjGrC1pRrQ=="}},"@webassemblyjs/wasm-gen@1.14.1":{"resolution":{"integrity":"sha512-AmomSIjP8ZbfGQhumkNvgC33AY7qtMCXnN6bL2u2Js4gVCg8fp735aEiMSBbDR7UQIj90n4wKAFUSEd0QN2Ukg=="}},"@webassemblyjs/wasm-opt@1.14.1":{"resolution":{"integrity":"sha512-PTcKLUNvBqnY2U6E5bdOQcSM+oVP/PmrDY9NzowJjislEjwP/C4an2303MCVS2Mg9d3AJpIGdUFIQQWbPds0Sw=="}},"@webassemblyjs/wasm-parser@1.14.1":{"resolution":{"integrity":"sha512-JLBl+KZ0R5qB7mCnud/yyX08jWFw5MsoalJ1pQ4EdFlgj9VdXKGuENGsiCIjegI1W7p91rUlcB/LB5yRJKNTcQ=="}},"@webassemblyjs/wast-printer@1.14.1":{"resolution":{"integrity":"sha512-kPSSXE6De1XOR820C90RIo2ogvZG+c3KiHzqUoO/F34Y2shGzesfqv7o57xrxovZJH/MetF5UjroJ/R/3isoiw=="}},"@xtuc/ieee754@1.2.0":{"resolution":{"integrity":"sha512-DX8nKgqcGwsc0eJSqYt5lwP4DH5FlHnmuWWBRy7X0NcaGR0ZtuyeESgMwTYVEtxmsNGY+qit4QYT/MIYTOTPeA=="}},"@xtuc/long@4.2.2":{"resolution":{"integrity":"sha512-NuHqBY1PB/D8xU6s/thBgOAiAP
7HOYDQ32+BFZILJ8ivkUkAHQnWfn6WhL79Owj1qmUnoN/YPhktdIoucipkAQ=="}},"@yarnpkg/lockfile@1.1.0":{"resolution":{"integrity":"sha512-GpSwvyXOcOOlV70vbnzjj4fW5xW/FdUF6nQEt1ENy7m4ZCczi1+/buVUPAqmGfqznsORNFzUMjctTIp8a9tuCQ=="}},"@yarnpkg/parsers@3.0.2":{"resolution":{"integrity":"sha512-/HcYgtUSiJiot/XWGLOlGxPYUG65+/31V8oqk17vZLW1xlCoR4PampyePljOxY2n8/3jz9+tIFzICsyGujJZoA=="},"engines":{"node":">=18.12.0"}},"@zip.js/zip.js@2.8.26":{"resolution":{"integrity":"sha512-RQ4h9F6DOiHxpdocUDrOl6xBM+yOtz+LkUol47AVWcfebGBDpZ7w7Xvz9PS24JgXvLGiXXzSAfdCdVy1tPlaFA=="},"engines":{"bun":">=0.7.0","deno":">=1.0.0","node":">=18.0.0"}},"@zkochan/js-yaml@0.0.7":{"resolution":{"integrity":"sha512-nrUSn7hzt7J6JWgWGz78ZYI8wj+gdIJdk0Ynjpp8l+trkn58Uqsf6RYrYkEK+3X18EX+TNdtJI0WxAtc+L84SQ=="},"hasBin":true},"abort-controller@3.0.0":{"resolution":{"integrity":"sha512-h8lQ8tacZYnR3vNQTgibj+tODHI5/+l06Au2Pcriv/Gmet0eaj4TwWH41sO9wnHDiQsEj19q0drzdWdeAHtweg=="},"engines":{"node":">=6.5"}},"acorn@8.16.0":{"resolution":{"integrity":"sha512-UVJyE9MttOsBQIDKw1skb9nAwQuR5wuGD3+82K6JgJlm/Y+KI92oNsMNGZCYdDsVtRHSak0pcV5Dno5+4jh9sw=="},"engines":{"node":">=0.4.0"},"hasBin":true},"agent-base@7.1.3":{"resolution":{"integrity":"sha512-jRR5wdylq8CkOe6hei19GGZnxM6rBGwFl3Bg0YItGDimvjGtAvdZk4Pu6Cl4u4Igsws4a1fd1Vq3ezrhn4KmFw=="},"engines":{"node":">= 14"}},"agent-base@7.1.4":{"resolution":{"integrity":"sha512-MnA+YT8fwfJPgBx3m60MNqakm30XOkyIoH1y6huTQvC0PwZG7ki8NacLBcrPbNoo8vEZy7Jpuk7+jMO+CUovTQ=="},"engines":{"node":">= 14"}},"ajv-formats@2.1.1":{"resolution":{"integrity":"sha512-Wx0Kx52hxE7C18hkMEggYlEifqWZtYaRgouJor+WMdPnQyEK13vgEWyVNup7SoeeoLMsr4kf5h6dOW11I15MUA=="},"peerDependencies":{"ajv":"^8.0.0"},"peerDependenciesMeta":{"ajv":{"optional":true}}},"ajv-keywords@5.1.0":{"resolution":{"integrity":"sha512-YCS/JNFAUyr5vAuhk1DWm1CBxRHW9LbJ2ozWeemrIqpbsqKjHVxYPyi5GC0rjZIT5JxJ3virVTS8wk4i/Z+krw=="},"peerDependencies":{"ajv":"^8.8.2"}},"ajv@8.17.1":{"resolution":{"integrity":"sha512-B/gBuNg5SiMTrPkC+A2+cW0RszwxYmn6VYxB/inlBStS5nx6xHIt/ehKRhIMhqusl7a8LjQoZnjCs5vhwxOQ1g=="}},"ajv@8.20.0":{"resolution":{"integrity":"sha512-Thbli+OlOj+iMPYFBVBfJ3OmCAnaSyNn4M1vz9T6Gka5Jt9ba/HIR56joy65tY6kx/FCF5VXNB819Y7/GUrBGA=="}},"ansi-colors@4.1.3":{"resolution":{"integrity":"sha512-/6w/C21Pm1A7aZitlI5Ni/2J6FFQN8i1Cvz3kHABAAbw93v/NlvKdVOqz7CCWz/3iv/JplRSEEZ83XION15ovw=="},"engines":{"node":">=6"}},"ansi-regex@5.0.1":{"resolution":{"integrity":"sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="},"engines":{"node":">=8"}},"ansi-regex@6.1.0":{"resolution":{"integrity":"sha512-7HSX4QQb4CspciLpVFwyRe79O3xsIZDDLER21kERQ71oaPodF8jL725AgJMFAYbooIqolJoRLuM81SpeUkpkvA=="},"engines":{"node":">=12"}},"ansi-regex@6.2.2":{"resolution":{"integrity":"sha512-Bq3SmSpyFHaWjPk8If9yc6svM8c56dB5BAtW4Qbw5jHTwwXXcTLoRMkpDJp6VL0XzlWaCHTXrkFURMYmD0sLqg=="},"engines":{"node":">=12"}},"ansi-styles@4.3.0":{"resolution":{"integrity":"sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg=="},"engines":{"node":">=8"}},"ansi-styles@5.2.0":{"resolution":{"integrity":"sha512-Cxwpt2SfTzTtXcfOlzGEee8O+c+MmUgGrNiBcXnuWxuFJHe6a5Hz7qwhwe5OgaSYI0IJvkLqWX1ASG+cJOkEiA=="},"engines":{"node":">=10"}},"ansi-styles@6.2.1":{"resolution":{"integrity":"sha512-bN798gFfQX+viw3R7yrGWRqnrN2oRkEkUjjl4JNn4E8GxxbjtG3FbrEIIY3l8/hrwUwIeCZvi4QuOTP4MErVug=="},"engines":{"node":">=12"}},"ansis@4.1.0":{"resolution":{"integrity":"sha512-BGcItUBWSMRgOCe+SVZJ+S7yTRG0eGt9cXAHev72yuGcY23hnLA7Bky5L/xLyPINoSN95geovfBkqoTlNZYa7w=="},"engines":{"node":">=1
4"}},"anymatch@3.1.3":{"resolution":{"integrity":"sha512-KMReFUr0B4t+D+OBkjR3KYqvocp2XaSzO55UcB6mgQMd3KbcE+mWTyvVV7D/zsdEbNnV6acZUutkiHQXvTr1Rw=="},"engines":{"node":">= 8"}},"archiver-utils@5.0.2":{"resolution":{"integrity":"sha512-wuLJMmIBQYCsGZgYLTy5FIB2pF6Lfb6cXMSF8Qywwk3t20zWnAi7zLcQFdKQmIB8wyZpY5ER38x08GbwtR2cLA=="},"engines":{"node":">= 14"}},"archiver@7.0.1":{"resolution":{"integrity":"sha512-ZcbTaIqJOfCc03QwD468Unz/5Ir8ATtvAHsK+FdXbDIbGfihqh9mrvdcYunQzqn4HrvWWaFyaxJhGZagaJJpPQ=="},"engines":{"node":">= 14"}},"argparse@1.0.10":{"resolution":{"integrity":"sha512-o5Roy6tNG4SL/FOkCAN6RzjiakZS25RLYFrcMttJqbdd8BWrnA+fGz57iN5Pb06pvBGvl5gQ0B48dJlslXvoTg=="}},"argparse@2.0.1":{"resolution":{"integrity":"sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q=="}},"aria-query@5.3.0":{"resolution":{"integrity":"sha512-b0P0sZPKtyu8HkeRAfCq0IfURZK+SuwMjY1UXGBU27wpAiTwQAIlq56IbIO+ytk/JjS1fMR14ee5WBBfKi5J6A=="}},"aria-query@5.3.2":{"resolution":{"integrity":"sha512-COROpnaoap1E2F000S62r6A60uHZnmlvomhfyT2DlTcrY1OrBKn2UhH7qn5wTC9zMvD0AY7csdPSNwKP+7WiQw=="},"engines":{"node":">= 0.4"}},"array-union@2.1.0":{"resolution":{"integrity":"sha512-HGyxoOTYUyCM6stUe6EJgnd4EoewAI7zMdfqO+kGjnlZmBDz/cR5pf8r/cR4Wq60sL/p0IkcjUEEPwS3GFrIyw=="},"engines":{"node":">=8"}},"assertion-error@2.0.1":{"resolution":{"integrity":"sha512-Izi8RQcffqCeNVgFigKli1ssklIbpHnCYc6AknXGYoB6grJqyeby7jv12JUQgmTAnIDnbck1uxksT4dzN3PWBA=="},"engines":{"node":">=12"}},"ast-types@0.13.4":{"resolution":{"integrity":"sha512-x1FCFnFifvYDDzTaLII71vG5uvDwgtmDTEVWAxrgeiR8VjMONcCXJx7E+USjDtHlwFmt9MysbqgF9b9Vjr6w+w=="},"engines":{"node":">=4"}},"ast-v8-to-istanbul@0.3.4":{"resolution":{"integrity":"sha512-cxrAnZNLBnQwBPByK4CeDaw5sWZtMilJE/Q3iDA0aamgaIVNDF9T6K2/8DfYDZEejZ2jNnDrG9m8MY72HFd0KA=="}},"ast-v8-to-istanbul@1.0.0":{"resolution":{"integrity":"sha512-1fSfIwuDICFA4LKkCzRPO7F0hzFf0B7+Xqrl27ynQaa+Rh0e1Es0v6kWHPott3lU10AyAr7oKHa65OppjLn3Rg=="}},"async@3.2.6":{"resolution":{"integrity":"sha512-htCUDlxyyCLMgaM3xXg0C0LW2xqfuQ6p05pCEIsXuyQ+a1koYKTuBMzRNwmybfLgvJDMd0r1LTn4+E0Ti6C2AA=="}},"asynckit@0.4.0":{"resolution":{"integrity":"sha512-Oei9OH4tRh0YqU3GxhX79dM/mwVgvbZJaSNaRk+bshkj0S5cfHcgYakreBjrHwatXKbz+IoIdYLxrKim2MjW0Q=="}},"axios@1.11.0":{"resolution":{"integrity":"sha512-1Lx3WLFQWm3ooKDYZD1eXmoGO9fxYQjrycfHFC8P0sCfQVXyROp0p9PFWBehewBOdCwHc+f/b8I0fMto5eSfwA=="}},"b4a@1.8.1":{"resolution":{"integrity":"sha512-aiqre1Nr0B/6DgE2N5vwTc+2/oQZ4Wh1t4NznYY4E00y8LCt6NqdRv81so00oo27D8MVKTpUa/MwUUtBLXCoDw=="},"peerDependencies":{"react-native-b4a":"*"},"peerDependenciesMeta":{"react-native-b4a":{"optional":true}}},"babel-dead-code-elimination@1.0.12":{"resolution":{"integrity":"sha512-GERT7L2TiYcYDtYk1IpD+ASAYXjKbLTDPhBtYj7X1NuRMDTMtAx9kyBenub1Ev41lo91OHCKdmP+egTDmfQ7Ig=="}},"bail@2.0.2":{"resolution":{"integrity":"sha512-0xO6mYd7JB2YesxDKplafRpsiOzPt9V02ddPCLbY1xYGPOX24NTyN50qnUxgCPcSoYMhKpAuBTjQoRZCAkUDRw=="}},"balanced-match@1.0.2":{"resolution":{"integrity":"sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw=="}},"bare-events@2.8.2":{"resolution":{"integrity":"sha512-riJjyv1/mHLIPX4RwiK+oW9/4c3TEUeORHKefKAKnZ5kyslbN+HXowtbaVEqt4IMUB7OXlfixcs6gsFeo/jhiQ=="},"peerDependencies":{"bare-abort-controller":"*"},"peerDependenciesMeta":{"bare-abort-controller":{"optional":true}}},"bare-fs@4.7.1":{"resolution":{"integrity":"sha512-WDRsyVN52eAx/lBamKD6uyw8H4228h/x0sGGGegOamM2cd7Pag88GfMQalobXI+HaEUxpCkbKQUDOQqt9wawRw=="},"engines":{"bare":">=1.16.0"},"peerDependencies":{"bare-buffer":
"*"},"peerDependenciesMeta":{"bare-buffer":{"optional":true}}},"bare-os@3.9.1":{"resolution":{"integrity":"sha512-6M5XjcnsygQNPMCMPXSK379xrJFiZ/AEMNBmFEmQW8d/789VQATvriyi5r0HYTL9TkQ26rn3kgdTG3aisbrXkQ=="},"engines":{"bare":">=1.14.0"}},"bare-path@3.0.0":{"resolution":{"integrity":"sha512-tyfW2cQcB5NN8Saijrhqn0Zh7AnFNsnczRcuWODH0eYAXBsJ5gVxAUuNr7tsHSC6IZ77cA0SitzT+s47kot8Mw=="}},"bare-stream@2.13.1":{"resolution":{"integrity":"sha512-Vp0cnjYyrEC4whYTymQ+YZi6pBpfiICZO3cfRG8sy67ZNWe951urv1x4eW1BKNngw3U+3fPYb5JQvHbCtxH7Ow=="},"peerDependencies":{"bare-abort-controller":"*","bare-buffer":"*","bare-events":"*"},"peerDependenciesMeta":{"bare-abort-controller":{"optional":true},"bare-buffer":{"optional":true},"bare-events":{"optional":true}}},"bare-url@2.4.3":{"resolution":{"integrity":"sha512-Kccpc7ACfXaxfeInfqKcZtW4pT5YBn1mesc4sCsun6sRwtbJ4h+sNOaksUpYEJUKfN65YWC6Bw2OJEFiKxq8nQ=="}},"base64-js@1.5.1":{"resolution":{"integrity":"sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA=="}},"baseline-browser-mapping@2.10.27":{"resolution":{"integrity":"sha512-zEs/ufmZoUd7WftKpKyXaT6RFxpQ5Qm9xytKRHvJfxFV9DFJkZph9RvJ1LcOUi0Z1ZVijMte65JbILeV+8QQEA=="},"engines":{"node":">=6.0.0"},"hasBin":true},"basic-ftp@5.3.1":{"resolution":{"integrity":"sha512-bopVNp6ugyA150DDuZfPFdt1KZ5a94ZDiwX4hMgZDzF+GttD80lEy8kj98kbyhLXnPvhtIo93mdnLIjpCAeeOw=="},"engines":{"node":">=10.0.0"}},"better-path-resolve@1.0.0":{"resolution":{"integrity":"sha512-pbnl5XzGBdrFU/wT4jqmJVPn2B6UHPBOhzMQkY/SPUPB6QtUXtmBHBIwCbXJol93mOpGMnQyP/+BB19q04xj7g=="},"engines":{"node":">=4"}},"better-sqlite3@12.9.0":{"resolution":{"integrity":"sha512-wqUv4Gm3toFpHDQmaKD4QhZm3g1DjUBI0yzS4UBl6lElUmXFYdTQmmEDpAFa5o8FiFiymURypEnfVHzILKaxqQ=="},"engines":{"node":"20.x || 22.x || 23.x || 24.x || 25.x"}},"bidi-js@1.0.3":{"resolution":{"integrity":"sha512-RKshQI1R3YQ+n9YJz2QQ147P66ELpa1FQEg20Dk8oW9t2KgLbpDLLp9aGZ7y8WHSshDknG0bknqGw5/tyCs5tw=="}},"binary-extensions@2.3.0":{"resolution":{"integrity":"sha512-Ceh+7ox5qe7LJuLHoY0feh3pHuUDHAcRUeyL2VYghZwfpkNIy/+8Ocg0a3UuSoYzavmylwuLWQOf3hl0jjMMIw=="},"engines":{"node":">=8"}},"bindings@1.5.0":{"resolution":{"integrity":"sha512-p2q/t/mhvuOj/UeLlV6566GD/guowlr0hHxClI0W9m7MWYkL1F0hLo+0Aexs9HSPCtR1SXQ0TD3MMKrXZajbiQ=="}},"bl@4.1.0":{"resolution":{"integrity":"sha512-1W07cM9gS6DcLperZfFSj+bWLtaPGSOHWhPiGzXmvVJbRLdG82sH/Kn8EtW1VqWVA54AKf2h5k5BbnIbwF3h6w=="}},"blake3-wasm@2.1.5":{"resolution":{"integrity":"sha512-F1+K8EbfOZE49dtoPtmxUQrpXaBIl3ICvasLh+nJta0xkz+9kF/7uet9fLnwKqhDrmj6g+6K3Tw9yQPUg2ka5g=="}},"boolbase@1.0.0":{"resolution":{"integrity":"sha512-JZOSA7Mo9sNGB8+UjSgzdLtokWAky1zbztM3WRLCbZ70/3cTANmQmOdR7y2g+J0e2WXywy1yS468tY+IruqEww=="}},"brace-expansion@2.0.2":{"resolution":{"integrity":"sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="}},"brace-expansion@2.1.0":{"resolution":{"integrity":"sha512-TN1kCZAgdgweJhWWpgKYrQaMNHcDULHkWwQIspdtjV4Y5aurRdZpjAqn6yX3FPqTA9ngHCc4hJxMAMgGfve85w=="}},"braces@3.0.3":{"resolution":{"integrity":"sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA=="},"engines":{"node":">=8"}},"browserslist@4.25.3":{"resolution":{"integrity":"sha512-cDGv1kkDI4/0e5yON9yM5G/0A5u8sf5TnmdX5C9qHzI9PPu++sQ9zjm1k9NiOrf3riY4OkK0zSGqfvJyJsgCBQ=="},"engines":{"node":"^6 || ^7 || ^8 || ^9 || ^10 || ^11 || ^12 || 
>=13.7"},"hasBin":true},"browserslist@4.28.2":{"resolution":{"integrity":"sha512-48xSriZYYg+8qXna9kwqjIVzuQxi+KYWp2+5nCYnYKPTr0LvD89Jqk2Or5ogxz0NUMfIjhh2lIUX/LyX9B4oIg=="},"engines":{"node":"^6 || ^7 || ^8 || ^9 || ^10 || ^11 || ^12 || >=13.7"},"hasBin":true},"buffer-builder@0.2.0":{"resolution":{"integrity":"sha512-7VPMEPuYznPSoR21NE1zvd2Xna6c/CloiZCfcMXR1Jny6PjX0N4Nsa38zcBFo/FMK+BlA+FLKbJCQ0i2yxp+Xg=="}},"buffer-crc32@0.2.13":{"resolution":{"integrity":"sha512-VO9Ht/+p3SN7SKWqcrgEzjGbRSJYTx+Q1pTQC0wrWqHx0vpJraQ6GtHx8tvcg1rlK1byhU5gccxgOgj7B0TDkQ=="}},"buffer-crc32@1.0.0":{"resolution":{"integrity":"sha512-Db1SbgBS/fg/392AblrMJk97KggmvYhr4pB5ZIMTWtaivCPMWLkmb7m21cJvpvgK+J3nsU2CmmixNBZx4vFj/w=="},"engines":{"node":">=8.0.0"}},"buffer-from@1.1.2":{"resolution":{"integrity":"sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ=="}},"buffer@5.7.1":{"resolution":{"integrity":"sha512-EHcyIPBQ4BSGlvjB16k5KgAJ27CIsHY/2JBmCRReo48y9rQ3MaUzWX3KVlBa4U7MyX02HdVj0K7C3WaB3ju7FQ=="}},"buffer@6.0.3":{"resolution":{"integrity":"sha512-FTiCpNxtwiZZHEZbcbTIcZjERVICn9yq/pDFkTl95/AxzD1naBctN7YO68riM/gLSDY7sdrMby8hofADYuuqOA=="}},"cac@6.7.14":{"resolution":{"integrity":"sha512-b6Ilus+c3RrdDk+JhLKUAQfzzgLEPy6wcXqS7f/xe1EETvsDP6GORG7SFuOs6cID5YkqchW/LXZbX5bc8j7ZcQ=="},"engines":{"node":">=8"}},"call-bind-apply-helpers@1.0.2":{"resolution":{"integrity":"sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ=="},"engines":{"node":">= 0.4"}},"caniuse-lite@1.0.30001737":{"resolution":{"integrity":"sha512-BiloLiXtQNrY5UyF0+1nSJLXUENuhka2pzy2Fx5pGxqavdrxSCW4U6Pn/PoG3Efspi2frRbHpBV2XsrPE6EDlw=="}},"caniuse-lite@1.0.30001792":{"resolution":{"integrity":"sha512-hVLMUZFgR4JJ6ACt1uEESvQN1/dBVqPAKY0hgrV70eN3391K6juAfTjKZLKvOMsx8PxA7gsY1/tLMMTcfFLLpw=="}},"ccount@2.0.1":{"resolution":{"integrity":"sha512-eyrF0jiFpY+3drT6383f1qhkbGsLSifNAjA61IUjZjmLCWjItY6LB9ft9YhoDgwfmclB2zhu51Lc7+95b8NRAg=="}},"chai@5.3.3":{"resolution":{"integrity":"sha512-4zNhdJD/iOjSH0A05ea+Ke6MU5mmpQcbQsSOkgdaUMJ9zTlDTD/GYlwohmIE2u0gaxHYiVHEn1Fw9mZ/ktJWgw=="},"engines":{"node":">=18"}},"chai@6.2.2":{"resolution":{"integrity":"sha512-NUPRluOfOiTKBKvWPtSD4PhFvWCqOi0BGStNWs57X9js7XGTprSmFoz5F0tWhR4WPjNeR9jXqdC7/UpSJTnlRg=="},"engines":{"node":">=18"}},"chalk@4.1.2":{"resolution":{"integrity":"sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA=="},"engines":{"node":">=10"}},"chalk@5.6.2":{"resolution":{"integrity":"sha512-7NzBL0rN6fMUW+f7A6Io4h40qQlG+xGmtMxfbnH/K7TAtt8JQWVQK+6g0UXKMeVJoyV5EkkNsErQ8pVD3bLHbA=="},"engines":{"node":"^12.17.0 || ^14.13 || >=16.0.0"}},"character-entities-html4@2.1.0":{"resolution":{"integrity":"sha512-1v7fgQRj6hnSwFpq1Eu0ynr/CDEw0rXo2B61qXrLNdHZmPKgb7fqS1a2JwF0rISo9q77jDI8VMEHoApn8qDoZA=="}},"character-entities-legacy@3.0.0":{"resolution":{"integrity":"sha512-RpPp0asT/6ufRm//AJVwpViZbGM/MkjQFxJccQRHmISF/22NBtsHqAWmL+/pmkPWoIUJdWyeVleTl1wydHATVQ=="}},"character-entities@2.0.2":{"resolution":{"integrity":"sha512-shx7oQ0Awen/BRIdkjkvz54PnEEI/EjwXDSIZp86/KKdbafHh1Df/RYGBhn4hbe2+uKC9FnT5UCEdyPz3ai9hQ=="}},"chardet@2.1.0":{"resolution":{"integrity":"sha512-bNFETTG/pM5ryzQ9Ad0lJOTa6HWD/YsScAR3EnCPZRPlQh77JocYktSHOUHelyhm8IARL+o4c4F1bP5KVOjiRA=="}},"check-error@2.1.1":{"resolution":{"integrity":"sha512-OAlb+T7V4Op9OwdkjmguYRqncdlx5JiofwOAUkmTF+jNdHwzTaTs4sRAGpzLF3oOz5xAyDGrPgeIDFQmDOTiJw=="},"engines":{"node":">= 
16"}},"cheerio-select@2.1.0":{"resolution":{"integrity":"sha512-9v9kG0LvzrlcungtnJtpGNxY+fzECQKhK4EGJX2vByejiMX84MFNQw4UxPJl3bFbTMw+Dfs37XaIkCwTZfLh4g=="}},"cheerio@1.1.2":{"resolution":{"integrity":"sha512-IkxPpb5rS/d1IiLbHMgfPuS0FgiWTtFIm/Nj+2woXDLTZ7fOT2eqzgYbdMlLweqlHbsZjxEChoVK+7iph7jyQg=="},"engines":{"node":">=20.18.1"}},"cheerio@1.2.0":{"resolution":{"integrity":"sha512-WDrybc/gKFpTYQutKIK6UvfcuxijIZfMfXaYm8NMsPQxSYvf+13fXUJ4rztGGbJcBQ/GF55gvrZ0Bc0bj/mqvg=="},"engines":{"node":">=20.18.1"}},"chevrotain-allstar@0.3.1":{"resolution":{"integrity":"sha512-b7g+y9A0v4mxCW1qUhf3BSVPg+/NvGErk/dOkrDaHA0nQIQGAtrOjlX//9OQtRlSCy+x9rfB5N8yC71lH1nvMw=="},"peerDependencies":{"chevrotain":"^11.0.0"}},"chevrotain@11.0.3":{"resolution":{"integrity":"sha512-ci2iJH6LeIkvP9eJW6gpueU8cnZhv85ELY8w8WiFtNjMHA5ad6pQLaJo9mEly/9qUyCpvqX8/POVUTf18/HFdw=="}},"chokidar@3.6.0":{"resolution":{"integrity":"sha512-7VT13fmjotKpGipCW9JEQAusEPE+Ei8nl6/g4FBAmIm0GOOLMua9NDDo/DWp0ZAxCr3cPq5ZpBqmPAQgDda2Pw=="},"engines":{"node":">= 8.10.0"}},"chownr@1.1.4":{"resolution":{"integrity":"sha512-jJ0bqzaylmJtVnNgzTeSOs8DPavpbYgEr/b0YL8/2GO3xJEhInFmhKMUnEJQjZumK7KXGFhUy89PrsJWlakBVg=="}},"chownr@2.0.0":{"resolution":{"integrity":"sha512-bIomtDF5KGpdogkLd9VspvFzk9KfpyyGlS8YFVZl7TGPBHL5snIOnxeshwVgPteQ9b4Eydl+pVbIyE1DcvCWgQ=="},"engines":{"node":">=10"}},"chrome-trace-event@1.0.4":{"resolution":{"integrity":"sha512-rNjApaLzuwaOTjCiT8lSDdGN1APCiqkChLMJxJPWLunPAt5fy8xgU9/jNOchV84wfIxrA0lRQB7oCT8jrn/wrQ=="},"engines":{"node":">=6.0"}},"ci-info@3.9.0":{"resolution":{"integrity":"sha512-NIxF55hv4nSqQswkAeiOi1r83xy8JldOFDTWiug55KBu9Jnblncd2U6ViHmYgHf01TPZS77NJBhBMKdWj9HQMQ=="},"engines":{"node":">=8"}},"cli-cursor@3.1.0":{"resolution":{"integrity":"sha512-I/zHAwsKf9FqGoXM4WWRACob9+SNukZTd94DWF57E4toouRulbCxcUh6RKUEOQlYTHJnzkPMySvPNaaSLNfLZw=="},"engines":{"node":">=8"}},"cli-spinners@2.6.1":{"resolution":{"integrity":"sha512-x/5fWmGMnbKQAaNwN+UZlV79qBLM9JFnJuJ03gIi5whrob0xV0ofNVHy9DhwGdsMJQc2OKv0oGmLzvaqvAVv+g=="},"engines":{"node":">=6"}},"cli-spinners@2.9.2":{"resolution":{"integrity":"sha512-ywqV+5MmyL4E7ybXgKys4DugZbX0FC6LnwrhjuykIjnK9k8OQacQ7axGKnjDXWNhns0xot3bZI5h55H8yo9cJg=="},"engines":{"node":">=6"}},"cli-width@4.1.0":{"resolution":{"integrity":"sha512-ouuZd4/dm2Sw5Gmqy6bGyNNNe1qt9RpmxveLSO7KcgsTnU7RXfsw+/bukWGo1abgBiMAic068rclZsO4IWmmxQ=="},"engines":{"node":">= 12"}},"cliui@8.0.1":{"resolution":{"integrity":"sha512-BSeNnyus75C4//NQ9gQt1/csTXyo/8Sb+afLAkzAptFuMsod9HFokGNudZpi/oQV73hnVK+sR+5PVRMd+Dr7YQ=="},"engines":{"node":">=12"}},"clone@1.0.4":{"resolution":{"integrity":"sha512-JQHZ2QMW6l3aH/j6xCqQThY/9OH4D/9ls34cgkUBiEeocRTU04tHfKPBsUK1PqZCUQM7GiA0IIXJSuXHI64Kbg=="},"engines":{"node":">=0.8"}},"color-convert@2.0.1":{"resolution":{"integrity":"sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ=="},"engines":{"node":">=7.0.0"}},"color-name@1.1.4":{"resolution":{"integrity":"sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA=="}},"colorjs.io@0.5.2":{"resolution":{"integrity":"sha512-twmVoizEW7ylZSN32OgKdXRmo1qg+wT5/6C3xu5b9QsWzSFAhHLn2xd8ro0diCsKfCj1RdaTP/nrcW+vAoQPIw=="}},"combined-stream@1.0.8":{"resolution":{"integrity":"sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg=="},"engines":{"node":">= 
0.8"}},"comma-separated-tokens@2.0.3":{"resolution":{"integrity":"sha512-Fu4hJdvzeylCfQPp9SGWidpzrMs7tTrlu6Vb8XGaRGck8QSNZJJp538Wrb60Lax4fPwR64ViY468OIUTbRlGZg=="}},"commander@2.20.3":{"resolution":{"integrity":"sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ=="}},"commander@7.2.0":{"resolution":{"integrity":"sha512-QrWXB+ZQSVPmIWIhtEO9H+gwHaMGYiF5ChvoJ+K9ZGHG/sVsa6yiesAD1GC/x46sET00Xlwo1u49RVVVzvcSkw=="},"engines":{"node":">= 10"}},"commander@8.3.0":{"resolution":{"integrity":"sha512-OkTL9umf+He2DZkUq8f8J9of7yL6RJKI24dVITBmNfZBmri9zYZQrKkuXiKhyfPSu8tUhnVBB1iKXevvnlR4Ww=="},"engines":{"node":">= 12"}},"commander@9.5.0":{"resolution":{"integrity":"sha512-KRs7WVDKg86PWiuAqhDrAQnTXZKraVcCc6vFdL14qrZ/DcWwuRo7VoiYXalXO7S5GKpqYiVEwCbgFDfxNHKJBQ=="},"engines":{"node":"^12.20.0 || >=14"}},"compress-commons@6.0.2":{"resolution":{"integrity":"sha512-6FqVXeETqWPoGcfzrXb37E50NP0LXT8kAMu5ooZayhWWdgEY4lBEEcbQNXtkuKQsGduxiIcI4gOTsxTmuq/bSg=="},"engines":{"node":">= 14"}},"confbox@0.1.8":{"resolution":{"integrity":"sha512-RMtmw0iFkeR4YV+fUOSucriAQNb9g8zFR52MWCtl+cCZOFRNL6zeB395vPzFhEjjn4fMxXudmELnl/KF/WrK6w=="}},"confbox@0.2.2":{"resolution":{"integrity":"sha512-1NB+BKqhtNipMsov4xI/NnhCKp9XG9NamYp5PVm9klAT0fsrNPjaFICsCFhNhwZJKNh7zB/3q8qXz0E9oaMNtQ=="}},"convert-source-map@2.0.0":{"resolution":{"integrity":"sha512-Kvp459HrV2FEJ1CAsi1Ku+MY3kasH19TFykTz2xWmMeq6bk2NU3XXvfJ+Q61m0xktWwt+1HSYf3JZsTms3aRJg=="}},"cookie-es@3.1.1":{"resolution":{"integrity":"sha512-UaXxwISYJPTr9hwQxMFYZ7kNhSXboMXP+Z3TRX6f1/NyaGPfuNUZOWP1pUEb75B2HjfklIYLVRfWiFZJyC6Npg=="}},"cookie@0.7.2":{"resolution":{"integrity":"sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w=="},"engines":{"node":">= 0.6"}},"cookie@1.0.2":{"resolution":{"integrity":"sha512-9Kr/j4O16ISv8zBBhJoi4bXOYNTkFLOqSL3UDB0njXxCXNezjeyVrJyGOWtgfs/q2km1gwBcfH8q1yEGoMYunA=="},"engines":{"node":">=18"}},"core-js@3.46.0":{"resolution":{"integrity":"sha512-vDMm9B0xnqqZ8uSBpZ8sNtRtOdmfShrvT6h2TuQGLs0Is+cR0DYbj/KWP6ALVNbWPpqA/qPLoOuppJN07humpA=="}},"core-util-is@1.0.3":{"resolution":{"integrity":"sha512-ZQBvi1DcpJ4GDqanjucZ2Hj3wEO5pZDS89BWbkcrvdxksJorwUDDZamX9ldFkp9aw2lmBDLgkObEA4DWNJ9FYQ=="}},"cose-base@1.0.3":{"resolution":{"integrity":"sha512-s9whTXInMSgAp/NVXVNuVxVKzGH2qck3aQlVHxDCdAEPgtMKwc4Wq6/QKhgdEdgbLSi9rBTAcPoRa6JpiG4ksg=="}},"cose-base@2.2.0":{"resolution":{"integrity":"sha512-AzlgcsCbUMymkADOJtQm3wO9S3ltPfYOFD5033keQn9NJzIbtnZj+UdBJe7DYml/8TdbtHJW3j58SOnKhWY/5g=="}},"crc-32@1.2.2":{"resolution":{"integrity":"sha512-ROmzCKrTnOwybPcJApAA6WBWij23HVfGVNKqqrZpuyZOHqK2CwHSvpGuyt/UNNvaIjEd8X5IFGp4Mh+Ie1IHJQ=="},"engines":{"node":">=0.8"},"hasBin":true},"crc32-stream@6.0.0":{"resolution":{"integrity":"sha512-piICUB6ei4IlTv1+653yq5+KoqfBYmj9bw6LqXoOneTMDXk5nM1qt12mFW1caG3LlJXEKW1Bp0WggEmIfQB34g=="},"engines":{"node":">= 14"}},"cross-spawn@7.0.6":{"resolution":{"integrity":"sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA=="},"engines":{"node":">= 8"}},"css-select@5.1.0":{"resolution":{"integrity":"sha512-nwoRF1rvRRnnCqqY7updORDsuqKzqYJ28+oSMaJMMgOauh3fvwHqMS7EZpIPqK8GL+g9mKxF1vP/ZjSeNjEVHg=="}},"css-shorthand-properties@1.1.2":{"resolution":{"integrity":"sha512-C2AugXIpRGQTxaCW0N7n5jD/p5irUmCrwl03TrnMFBHDbdq44CFWR2zO7rK9xPN4Eo3pUxC4vQzQgbIpzrD1PQ=="}},"css-tree@3.1.0":{"resolution":{"integrity":"sha512-0eW44TGN5SQXU1mWSkKwFstI/22X2bG1nYzZTYMAWjylYURhse752YgbE4Cx46AC+bAvI+/dYTPRk1LqSUnu6w=="},"engines":{"node":"^10 || ^12.20.0 || ^14.13.0 || 
>=15.0.0"}},"css-value@0.0.1":{"resolution":{"integrity":"sha512-FUV3xaJ63buRLgHrLQVlVgQnQdR4yqdLGaDu7g8CQcWjInDfM9plBTPI9FRfpahju1UBSaMckeb2/46ApS/V1Q=="}},"css-what@6.1.0":{"resolution":{"integrity":"sha512-HTUrgRJ7r4dsZKU6GjmpfRK1O76h97Z8MfS1G0FozR+oF2kG6Vfe8JE6zwrkbxigziPHinCJ+gCPjA9EaBDtRw=="},"engines":{"node":">= 6"}},"cssstyle@4.3.1":{"resolution":{"integrity":"sha512-ZgW+Jgdd7i52AaLYCriF8Mxqft0gD/R9i9wi6RWBhs1pqdPEzPjym7rvRKi397WmQFf3SlyUsszhw+VVCbx79Q=="},"engines":{"node":">=18"}},"cssstyle@5.3.4":{"resolution":{"integrity":"sha512-KyOS/kJMEq5O9GdPnaf82noigg5X5DYn0kZPJTaAsCUaBizp6Xa1y9D4Qoqf/JazEXWuruErHgVXwjN5391ZJw=="},"engines":{"node":">=20"}},"csstype@3.2.3":{"resolution":{"integrity":"sha512-z1HGKcYy2xA8AGQfwrn0PAy+PB7X/GSj3UVJW9qKyn43xWa+gl5nXmU4qqLMRzWVLFC8KusUX8T/0kCiOYpAIQ=="}},"cytoscape-cose-bilkent@4.1.0":{"resolution":{"integrity":"sha512-wgQlVIUJF13Quxiv5e1gstZ08rnZj2XaLHGoFMYXz7SkNfCDOOteKBE6SYRfA9WxxI/iBc3ajfDoc6hb/MRAHQ=="},"peerDependencies":{"cytoscape":"^3.2.0"}},"cytoscape-fcose@2.2.0":{"resolution":{"integrity":"sha512-ki1/VuRIHFCzxWNrsshHYPs6L7TvLu3DL+TyIGEsRcvVERmxokbf5Gdk7mFxZnTdiGtnA4cfSmjZJMviqSuZrQ=="},"peerDependencies":{"cytoscape":"^3.2.0"}},"cytoscape@3.30.4":{"resolution":{"integrity":"sha512-OxtlZwQl1WbwMmLiyPSEBuzeTIQnwZhJYYWFzZ2PhEHVFwpeaqNIkUzSiso00D98qk60l8Gwon2RP304d3BJ1A=="},"engines":{"node":">=0.10"}},"d3-array@2.12.1":{"resolution":{"integrity":"sha512-B0ErZK/66mHtEsR1TkPEEkwdy+WDesimkM5gpZr5Dsg54BiTA5RXtYW5qTLIAcekaS9xfZrzBLF/OAkB3Qn1YQ=="}},"d3-array@3.2.4":{"resolution":{"integrity":"sha512-tdQAmyA18i4J7wprpYq8ClcxZy3SC31QMeByyCFyRt7BVHdREQZ5lpzoe5mFEYZUWe+oq8HBvk9JjpibyEV4Jg=="},"engines":{"node":">=12"}},"d3-axis@3.0.0":{"resolution":{"integrity":"sha512-IH5tgjV4jE/GhHkRV0HiVYPDtvfjHQlQfJHs0usq7M30XcSBvOotpmH1IgkcXsO/5gEQZD43B//fc7SRT5S+xw=="},"engines":{"node":">=12"}},"d3-brush@3.0.0":{"resolution":{"integrity":"sha512-ALnjWlVYkXsVIGlOsuWH1+3udkYFI48Ljihfnh8FZPF2QS9o+PzGLBslO0PjzVoHLZ2KCVgAM8NVkXPJB2aNnQ=="},"engines":{"node":">=12"}},"d3-chord@3.0.1":{"resolution":{"integrity":"sha512-VE5S6TNa+j8msksl7HwjxMHDM2yNK3XCkusIlpX5kwauBfXuyLAtNg9jCp/iHH61tgI4sb6R/EIMWCqEIdjT/g=="},"engines":{"node":">=12"}},"d3-color@3.1.0":{"resolution":{"integrity":"sha512-zg/chbXyeBtMQ1LbD/WSoW2DpC3I0mpmPdW+ynRTj/x2DAWYrIY7qeZIHidozwV24m4iavr15lNwIwLxRmOxhA=="},"engines":{"node":">=12"}},"d3-contour@4.0.2":{"resolution":{"integrity":"sha512-4EzFTRIikzs47RGmdxbeUvLWtGedDUNkTcmzoeyg4sP/dvCexO47AaQL7VKy/gul85TOxw+IBgA8US2xwbToNA=="},"engines":{"node":">=12"}},"d3-delaunay@6.0.4":{"resolution":{"integrity":"sha512-mdjtIZ1XLAM8bm/hx3WwjfHt6Sggek7qH043O8KEjDXN40xi3vx/6pYSVTwLjEgiXQTbvaouWKynLBiUZ6SK6A=="},"engines":{"node":">=12"}},"d3-dispatch@3.0.1":{"resolution":{"integrity":"sha512-rzUyPU/S7rwUflMyLc1ETDeBj0NRuHKKAcvukozwhshr6g6c5d8zh4c2gQjY2bZ0dXeGLWc1PF174P2tVvKhfg=="},"engines":{"node":">=12"}},"d3-drag@3.0.0":{"resolution":{"integrity":"sha512-pWbUJLdETVA8lQNJecMxoXfH6x+mO2UQo8rSmZ+QqxcbyA3hfeprFgIT//HW2nlHChWeIIMwS2Fq+gEARkhTkg=="},"engines":{"node":">=12"}},"d3-dsv@3.0.1":{"resolution":{"integrity":"sha512-UG6OvdI5afDIFP9w4G0mNq50dSOsXHJaRE8arAS5o9ApWnIElp8GZw1Dun8vP8OyHOZ/QJUKUJwxiiCCnUwm+Q=="},"engines":{"node":">=12"},"hasBin":true},"d3-ease@3.0.1":{"resolution":{"integrity":"sha512-wR/XK3D3XcLIZwpbvQwQ5fK+8Ykds1ip7A2Txe0yxncXSdq1L9skcG7blcedkOX+ZcgxGAmLX1FrRGbADwzi0w=="},"engines":{"node":">=12"}},"d3-fetch@3.0.1":{"resolution":{"integrity":"sha512-kpkQIM20n3oLVBKGg6oHrUchHM3xODkTzjMoj7aWQFq5QEM+R6E4WkzT5+tojDY7yjez8KgCBRoj4aEr99Fdqw=="},"en
gines":{"node":">=12"}},"d3-force@3.0.0":{"resolution":{"integrity":"sha512-zxV/SsA+U4yte8051P4ECydjD/S+qeYtnaIyAs9tgHCqfguma/aAQDjo85A9Z6EKhBirHRJHXIgJUlffT4wdLg=="},"engines":{"node":">=12"}},"d3-format@3.1.0":{"resolution":{"integrity":"sha512-YyUI6AEuY/Wpt8KWLgZHsIU86atmikuoOmCfommt0LYHiQSPjvX2AcFc38PX0CBpr2RCyZhjex+NS/LPOv6YqA=="},"engines":{"node":">=12"}},"d3-geo@3.1.1":{"resolution":{"integrity":"sha512-637ln3gXKXOwhalDzinUgY83KzNWZRKbYubaG+fGVuc/dxO64RRljtCTnf5ecMyE1RIdtqpkVcq0IbtU2S8j2Q=="},"engines":{"node":">=12"}},"d3-hierarchy@3.1.2":{"resolution":{"integrity":"sha512-FX/9frcub54beBdugHjDCdikxThEqjnR93Qt7PvQTOHxyiNCAlvMrHhclk3cD5VeAaq9fxmfRp+CnWw9rEMBuA=="},"engines":{"node":">=12"}},"d3-interpolate@3.0.1":{"resolution":{"integrity":"sha512-3bYs1rOD33uo8aqJfKP3JWPAibgw8Zm2+L9vBKEHJ2Rg+viTR7o5Mmv5mZcieN+FRYaAOWX5SJATX6k1PWz72g=="},"engines":{"node":">=12"}},"d3-path@1.0.9":{"resolution":{"integrity":"sha512-VLaYcn81dtHVTjEHd8B+pbe9yHWpXKZUC87PzoFmsFrJqgFwDe/qxfp5MlfsfM1V5E/iVt0MmEbWQ7FVIXh/bg=="}},"d3-path@3.1.0":{"resolution":{"integrity":"sha512-p3KP5HCf/bvjBSSKuXid6Zqijx7wIfNW+J/maPs+iwR35at5JCbLUT0LzF1cnjbCHWhqzQTIN2Jpe8pRebIEFQ=="},"engines":{"node":">=12"}},"d3-polygon@3.0.1":{"resolution":{"integrity":"sha512-3vbA7vXYwfe1SYhED++fPUQlWSYTTGmFmQiany/gdbiWgU/iEyQzyymwL9SkJjFFuCS4902BSzewVGsHHmHtXg=="},"engines":{"node":">=12"}},"d3-quadtree@3.0.1":{"resolution":{"integrity":"sha512-04xDrxQTDTCFwP5H6hRhsRcb9xxv2RzkcsygFzmkSIOJy3PeRJP7sNk3VRIbKXcog561P9oU0/rVH6vDROAgUw=="},"engines":{"node":">=12"}},"d3-random@3.0.1":{"resolution":{"integrity":"sha512-FXMe9GfxTxqd5D6jFsQ+DJ8BJS4E/fT5mqqdjovykEB2oFbTMDVdg1MGFxfQW+FBOGoB++k8swBrgwSHT1cUXQ=="},"engines":{"node":">=12"}},"d3-sankey@0.12.3":{"resolution":{"integrity":"sha512-nQhsBRmM19Ax5xEIPLMY9ZmJ/cDvd1BG3UVvt5h3WRxKg5zGRbvnteTyWAbzeSvlh3tW7ZEmq4VwR5mB3tutmQ=="}},"d3-scale-chromatic@3.1.0":{"resolution":{"integrity":"sha512-A3s5PWiZ9YCXFye1o246KoscMWqf8BsD9eRiJ3He7C9OBaxKhAd5TFCdEx/7VbKtxxTsu//1mMJFrEt572cEyQ=="},"engines":{"node":">=12"}},"d3-scale@4.0.2":{"resolution":{"integrity":"sha512-GZW464g1SH7ag3Y7hXjf8RoUuAFIqklOAq3MRl4OaWabTFJY9PN/E1YklhXLh+OQ3fM9yS2nOkCoS+WLZ6kvxQ=="},"engines":{"node":">=12"}},"d3-selection@3.0.0":{"resolution":{"integrity":"sha512-fmTRWbNMmsmWq6xJV8D19U/gw/bwrHfNXxrIN+HfZgnzqTHp9jOmKMhsTUjXOJnZOdZY9Q28y4yebKzqDKlxlQ=="},"engines":{"node":">=12"}},"d3-shape@1.3.7":{"resolution":{"integrity":"sha512-EUkvKjqPFUAZyOlhY5gzCxCeI0Aep04LwIRpsZ/mLFelJiUfnK56jo5JMDSE7yyP2kLSb6LtF+S5chMk7uqPqw=="}},"d3-shape@3.2.0":{"resolution":{"integrity":"sha512-SaLBuwGm3MOViRq2ABk3eLoxwZELpH6zhl3FbAoJ7Vm1gofKx6El1Ib5z23NUEhF9AsGl7y+dzLe5Cw2AArGTA=="},"engines":{"node":">=12"}},"d3-time-format@4.1.0":{"resolution":{"integrity":"sha512-dJxPBlzC7NugB2PDLwo9Q8JiTR3M3e4/XANkreKSUxF8vvXKqm1Yfq4Q5dl8budlunRVlUUaDUgFt7eA8D6NLg=="},"engines":{"node":">=12"}},"d3-time@3.1.0":{"resolution":{"integrity":"sha512-VqKjzBLejbSMT4IgbmVgDjpkYrNWUYJnbCGo874u7MMKIWsILRX+OpX/gTk8MqjpT1A/c6HY2dCA77ZN0lkQ2Q=="},"engines":{"node":">=12"}},"d3-timer@3.0.1":{"resolution":{"integrity":"sha512-ndfJ/JxxMd3nw31uyKoY2naivF+r29V+Lc0svZxe1JvvIRmi8hUsrMvdOwgS1o6uBHmiz91geQ0ylPP0aj1VUA=="},"engines":{"node":">=12"}},"d3-transition@3.0.1":{"resolution":{"integrity":"sha512-ApKvfjsSR6tg06xrL434C0WydLr7JewBB3V+/39RMHsaXTOG0zmt/OAXeng5M5LBm0ojmxJrpomQVZ1aPvBL4w=="},"engines":{"node":">=12"},"peerDependencies":{"d3-selection":"2 - 
3"}},"d3-zoom@3.0.0":{"resolution":{"integrity":"sha512-b8AmV3kfQaqWAuacbPuNbL6vahnOJflOhexLzMMNLga62+/nh0JzvJ0aO/5a5MVgUFGS7Hu1P9P03o3fJkDCyw=="},"engines":{"node":">=12"}},"d3@7.9.0":{"resolution":{"integrity":"sha512-e1U46jVP+w7Iut8Jt8ri1YsPOvFpg46k+K8TpCb0P+zjCkjkPnV7WzfDJzMHy1LnA+wj5pLT1wjO901gLXeEhA=="},"engines":{"node":">=12"}},"dagre-d3-es@7.0.13":{"resolution":{"integrity":"sha512-efEhnxpSuwpYOKRm/L5KbqoZmNNukHa/Flty4Wp62JRvgH2ojwVgPgdYyr4twpieZnyRDdIH7PY2mopX26+j2Q=="}},"data-uri-to-buffer@4.0.1":{"resolution":{"integrity":"sha512-0R9ikRb668HB7QDxT1vkpuUBtqc53YyAwMwGeUFKRojY/NWKvdZ+9UYtRfGmhqNbRkTSVpMbmyhXipFFv2cb/A=="},"engines":{"node":">= 12"}},"data-uri-to-buffer@6.0.2":{"resolution":{"integrity":"sha512-7hvf7/GW8e86rW0ptuwS3OcBGDjIi6SZva7hCyWC0yYry2cOPmLIjXAUHI6DK2HsnwJd9ifmt57i8eV2n4YNpw=="},"engines":{"node":">= 14"}},"data-urls@5.0.0":{"resolution":{"integrity":"sha512-ZYP5VBHshaDAiVZxjbRVcFJpc+4xGgT0bK3vzy1HLN8jTO975HEbuYzZJcHoQEY5K1a0z8YayJkyVETa08eNTg=="},"engines":{"node":">=18"}},"data-urls@6.0.0":{"resolution":{"integrity":"sha512-BnBS08aLUM+DKamupXs3w2tJJoqU+AkaE/+6vQxi/G/DPmIZFJJp9Dkb1kM03AZx8ADehDUZgsNxju3mPXZYIA=="},"engines":{"node":">=20"}},"dayjs@1.11.19":{"resolution":{"integrity":"sha512-t5EcLVS6QPBNqM2z8fakk/NKel+Xzshgt8FFKAn+qwlD1pzZWxh0nVCrvFK7ZDb6XucZeF9z8C7CBWTRIVApAw=="}},"debug@4.4.1":{"resolution":{"integrity":"sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ=="},"engines":{"node":">=6.0"},"peerDependencies":{"supports-color":"*"},"peerDependenciesMeta":{"supports-color":{"optional":true}}},"debug@4.4.3":{"resolution":{"integrity":"sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA=="},"engines":{"node":">=6.0"},"peerDependencies":{"supports-color":"*"},"peerDependenciesMeta":{"supports-color":{"optional":true}}},"decamelize@6.0.1":{"resolution":{"integrity":"sha512-G7Cqgaelq68XHJNGlZ7lrNQyhZGsFqpwtGFexqUv4IQdjKoSYF7ipZ9UuTJZUSQXFj/XaoBLuEVIVqr8EJngEQ=="},"engines":{"node":"^12.20.0 || ^14.13.1 || 
>=16.0.0"}},"decimal.js@10.6.0":{"resolution":{"integrity":"sha512-YpgQiITW3JXGntzdUmyUR1V812Hn8T1YVXhCu+wO3OpS4eU9l4YdD3qjyiKdV6mvV29zapkMeD390UVEf2lkUg=="}},"decode-named-character-reference@1.0.2":{"resolution":{"integrity":"sha512-O8x12RzrUF8xyVcY0KJowWsmaJxQbmy0/EtnNtHRpsOcT7dFk5W598coHqBVpmWo1oQQfsCqfCmkZN5DJrZVdg=="}},"decompress-response@6.0.0":{"resolution":{"integrity":"sha512-aW35yZM6Bb/4oJlZncMH2LCoZtJXTRxES17vE3hoRiowU2kWHaJKFkSBDnDR+cm9J+9QhXmREyIfv0pji9ejCQ=="},"engines":{"node":">=10"}},"deep-eql@5.0.2":{"resolution":{"integrity":"sha512-h5k/5U50IJJFpzfL6nO9jaaumfjO/f2NjK/oYB2Djzm4p9L+3T9qWpZqZ2hAbLPuuYq9wrU08WQyBTL5GbPk5Q=="},"engines":{"node":">=6"}},"deep-extend@0.6.0":{"resolution":{"integrity":"sha512-LOHxIOaPYdHlJRtCQfDIVZtfw/ufM8+rVj649RIHzcm/vGwQRXFt6OPqIFWsm2XEMrNIEtWR64sY1LEKD2vAOA=="},"engines":{"node":">=4.0.0"}},"deepmerge-ts@7.1.5":{"resolution":{"integrity":"sha512-HOJkrhaYsweh+W+e74Yn7YStZOilkoPb6fycpwNLKzSPtruFs48nYis0zy5yJz1+ktUhHxoRDJ27RQAWLIJVJw=="},"engines":{"node":">=16.0.0"}},"defaults@1.0.4":{"resolution":{"integrity":"sha512-eFuaLoy/Rxalv2kr+lqMlUnrDWV+3j4pljOIJgLIhI058IQfWJ7vXhyEIHu+HtC738klGALYxOKDO0bQP3tg8A=="}},"define-lazy-prop@2.0.0":{"resolution":{"integrity":"sha512-Ds09qNh8yw3khSjiJjiUInaGX9xlqZDY7JVryGxdxV7NPeuqQfplOpQ66yJFZut3jLa5zOwkXw1g9EI2uKh4Og=="},"engines":{"node":">=8"}},"degenerator@5.0.1":{"resolution":{"integrity":"sha512-TllpMR/t0M5sqCXfj85i4XaAzxmS5tVA16dqvdkMwGmzI+dXLXnw3J+3Vdv7VKw+ThlTMboK6i9rnZ6Nntj5CQ=="},"engines":{"node":">= 14"}},"delaunator@5.0.1":{"resolution":{"integrity":"sha512-8nvh+XBe96aCESrGOqMp/84b13H9cdKbG5P2ejQCh4d4sK9RL4371qou9drQjMhvnPmhWl5hnmqbEE0fXr9Xnw=="}},"delayed-stream@1.0.0":{"resolution":{"integrity":"sha512-ZySD7Nf91aLB0RxL4KGrKHBXl7Eds1DAmEdcoVawXnLD7SDhpNgtuII2aAkg7a7QS41jxPSZ17p4VdGnMHk3MQ=="},"engines":{"node":">=0.4.0"}},"dequal@2.0.3":{"resolution":{"integrity":"sha512-0je+qPKHEMohvfRTCEo3CrPG6cAzAYgmzKyxRiYSSDkS6eGJdyVJm7WaYA5ECaAD9wLB2T4EEeymA5aFVcYXCA=="},"engines":{"node":">=6"}},"detect-indent@6.1.0":{"resolution":{"integrity":"sha512-reYkTUJAZb9gUuZ2RvVCNhVHdg62RHnJ7WJl8ftMi4diZ6NWlciOzQN88pUhSELEwflJht4oQDv0F0BMlwaYtA=="},"engines":{"node":">=8"}},"detect-libc@2.1.2":{"resolution":{"integrity":"sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ=="},"engines":{"node":">=8"}},"devlop@1.1.0":{"resolution":{"integrity":"sha512-RWmIqhcFf1lRYBvNmr7qTNuyCt/7/ns2jbpp1+PalgE/rDQcBT0fioSMUpJ93irlUhC5hrg4cYqe6U+0ImW0rA=="}},"diff@8.0.2":{"resolution":{"integrity":"sha512-sSuxWU5j5SR9QQji/o2qMvqRNYRDOcBTgsJ/DeCf4iSN4gW+gNMXM7wFIP+fdXZxoNiAnHUTGjCr+TSWXdRDKg=="},"engines":{"node":">=0.3.1"}},"dir-glob@3.0.1":{"resolution":{"integrity":"sha512-WkrWp9GR4KXfKGYzOLmTuGVi1UWFfws377n9cc55/tb6DuqyF6pcQ5AbiHEshaDpY9v6oaSr2XCDidGmMwdzIA=="},"engines":{"node":">=8"}},"dom-accessibility-api@0.5.16":{"resolution":{"integrity":"sha512-X7BJ2yElsnOJ30pZF4uIIDfBEVgF4XEBxL9Bxhy6dnrm5hkzqmsWHGTiHqRiITNhMyFLyAiWndIJP7Z1NTteDg=="}},"dom-serializer@2.0.0":{"resolution":{"integrity":"sha512-wIkAryiqt/nV5EQKqQpo3SToSOV9J0DnbJqwK7Wv/Trc92zIAYZ4FlMu+JPFW1DfGFt81ZTCGgDEabffXeLyJg=="}},"domelementtype@2.3.0":{"resolution":{"integrity":"sha512-OLETBj6w0OsagBwdXnPdN0cnMfF9opN69co+7ZrbfPGrdpPVNBUj02spi6B1N7wChLQiPn4CSH/zJvXw56gmHw=="}},"domhandler@5.0.3":{"resolution":{"integrity":"sha512-cgwlv/1iFQiFnU96XXgROh8xTeetsnJiDsTc7TYCLFd9+/WNkIqPTxiM/8pSd8VIrhXGTf1Ny1q1hquVqDJB5w=="},"engines":{"node":">= 
4"}},"dompurify@3.3.1":{"resolution":{"integrity":"sha512-qkdCKzLNtrgPFP1Vo+98FRzJnBRGe4ffyCea9IwHB1fyxPOeNTHpLKYGd4Uk9xvNoH0ZoOjwZxNptyMwqrId1Q=="}},"domutils@3.2.2":{"resolution":{"integrity":"sha512-6kZKyUajlDuqlHKVX1w7gyslj9MPIXzIFiz/rGu35uC1wMi+kMhQwGhl4lt9unC9Vb9INnY9Z3/ZA3+FhASLaw=="}},"dotenv-expand@11.0.7":{"resolution":{"integrity":"sha512-zIHwmZPRshsCdpMDyVsqGmgyP0yT8GAgXUnkdAoJisxvf33k7yO6OuoKmcTGuXPWSsm8Oh88nZicRLA9Y0rUeA=="},"engines":{"node":">=12"}},"dotenv@10.0.0":{"resolution":{"integrity":"sha512-rlBi9d8jpv9Sf1klPjNfFAuWDjKLwTIJJ/VxtoTwIR6hnZxcEOQCZg2oIL3MWBYw5GpUDKOEnND7LXTbIpQ03Q=="},"engines":{"node":">=10"}},"dotenv@16.4.7":{"resolution":{"integrity":"sha512-47qPchRCykZC03FhkYAhrvwU4xDBFIj1QPqaarj6mdM/hgUzfPHcpkHJOn3mJAufFeeAxAzeGsr5X0M4k6fLZQ=="},"engines":{"node":">=12"}},"dotenv@16.5.0":{"resolution":{"integrity":"sha512-m/C+AwOAr9/W1UOIZUo232ejMNnJAJtYQjUbHoNTBNTJSvqzzDh7vnrei3o3r3m9blf6ZoDkvcw0VmozNRFJxg=="},"engines":{"node":">=12"}},"dunder-proto@1.0.1":{"resolution":{"integrity":"sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A=="},"engines":{"node":">= 0.4"}},"eastasianwidth@0.2.0":{"resolution":{"integrity":"sha512-I88TYZWc9XiYHRQ4/3c5rjjfgkjhLyW2luGIheGERbNQ6OY7yTybanSpDXZa8y7VUP9YmDcYa+eyq4ca7iLqWA=="}},"edge-paths@3.0.5":{"resolution":{"integrity":"sha512-sB7vSrDnFa4ezWQk9nZ/n0FdpdUuC6R1EOrlU3DL+bovcNFK28rqu2emmAUjujYEJTWIgQGqgVVWUZXMnc8iWg=="},"engines":{"node":">=14.0.0"}},"edgedriver@5.6.1":{"resolution":{"integrity":"sha512-3Ve9cd5ziLByUdigw6zovVeWJjVs8QHVmqOB0sJ0WNeVPcwf4p18GnxMmVvlFmYRloUwf5suNuorea4QzwBIOA=="},"hasBin":true},"electron-to-chromium@1.5.211":{"resolution":{"integrity":"sha512-IGBvimJkotaLzFnwIVgW9/UD/AOJ2tByUmeOrtqBfACSbAw5b1G0XpvdaieKyc7ULmbwXVx+4e4Be8pOPBrYkw=="}},"electron-to-chromium@1.5.352":{"resolution":{"integrity":"sha512-9wHk8x6dyuimoe18EdiDPWKExNdxYqo4fn4FwOVVper6RxT3cmpBwBkWWfSOCYJjQdIco/nPhJhNLmn4Ufg1Yg=="}},"emoji-regex@8.0.0":{"resolution":{"integrity":"sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="}},"emoji-regex@9.2.2":{"resolution":{"integrity":"sha512-L18DaJsXSUk2+42pv8mLs5jJT2hqFkFE4j21wOmgbUqsZ2hL72NsUU785g9RXgo3s0ZNgVl42TiHp3ZtOv/Vyg=="}},"encoding-sniffer@0.2.1":{"resolution":{"integrity":"sha512-5gvq20T6vfpekVtqrYQsSCFZ1wEg5+wW0/QaZMWkFr6BqD3NfKs0rLCx4rrVlSWJeZb5NBJgVLswK/w2MWU+Gw=="}},"end-of-stream@1.4.5":{"resolution":{"integrity":"sha512-ooEGc6HP26xXq/N+GCGOT0JKCLDGrq2bQUZrQ7gyrJiZANJ/8YDTxTpQBXGMn+WbIQXNVpyWymm7KYVICQnyOg=="}},"enhanced-resolve@5.21.0":{"resolution":{"integrity":"sha512-otxSQPw4lkOZWkHpB3zaEQs6gWYEsmX4xQF68ElXC/TWvGxGMSGOvoNbaLXm6/cS/fSfHtsEdw90y20PCd+sCA=="},"engines":{"node":">=10.13.0"}},"enquirer@2.3.6":{"resolution":{"integrity":"sha512-yjNnPr315/FjS4zIsUxYguYUPP2e1NK4d7E7ZOLiyYCcbFBiTMyID+2wvm2w6+pZ/odMA7cRkjhsPbltwBOrLg=="},"engines":{"node":">=8.6"}},"enquirer@2.4.1":{"resolution":{"integrity":"sha512-rRqJg/6gd538VHvR3PSrdRBb/1Vy2YfzHqzvbhGIQpDRKIa4FgV/54b5Q1xYSxOOwKvjXweS26E0Q+nAMwp2pQ=="},"engines":{"node":">=8.6"}},"entities@4.5.0":{"resolution":{"integrity":"sha512-V0hjH4dGPh9Ao5p0MoRY6BVqtwCjhz6vI5LT8AJ55H+4g9/4vbHx1I54fS0XuclLhDHArPQCiMjDxjaL8fPxhw=="},"engines":{"node":">=0.12"}},"entities@6.0.1":{"resolution":{"integrity":"sha512-aN97NXWF6AWBTahfVOIrB/NShkzi5H7F9r1s9mD3cDj4Ko5f2qhhVoYMibXF7GlLveb/D2ioWay8lxI97Ven3g=="},"engines":{"node":">=0.12"}},"entities@7.0.1":{"resolution":{"integrity":"sha512-TWrgLOFUQTH994YUyl1yT4uyavY5nNB5muff+RtWaqNVCAK408b5ZnnbNAUEWLTCpum9w6arT70i1XdQ4Ue
OPA=="},"engines":{"node":">=0.12"}},"error-stack-parser-es@1.0.5":{"resolution":{"integrity":"sha512-5qucVt2XcuGMcEGgWI7i+yZpmpByQ8J1lHhcL7PwqCwu9FPP3VUXzT4ltHe5i2z9dePwEHcDVOAfSnHsOlCXRA=="}},"es-define-property@1.0.1":{"resolution":{"integrity":"sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g=="},"engines":{"node":">= 0.4"}},"es-errors@1.3.0":{"resolution":{"integrity":"sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw=="},"engines":{"node":">= 0.4"}},"es-module-lexer@1.7.0":{"resolution":{"integrity":"sha512-jEQoCwk8hyb2AZziIOLhDqpm5+2ww5uIE6lkO/6jcOCusfk6LhMHpXXfBLXTZ7Ydyt0j4VoUQv6uGNYbdW+kBA=="}},"es-module-lexer@2.1.0":{"resolution":{"integrity":"sha512-n27zTYMjYu1aj4MjCWzSP7G9r75utsaoc8m61weK+W8JMBGGQybd43GstCXZ3WNmSFtGT9wi59qQTW6mhTR5LQ=="}},"es-object-atoms@1.1.1":{"resolution":{"integrity":"sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA=="},"engines":{"node":">= 0.4"}},"es-set-tostringtag@2.1.0":{"resolution":{"integrity":"sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA=="},"engines":{"node":">= 0.4"}},"esbuild@0.25.12":{"resolution":{"integrity":"sha512-bbPBYYrtZbkt6Os6FiTLCTFxvq4tt3JKall1vRwshA3fdVztsLAatFaZobhkBC8/BrPetoa0oksYoKXoG4ryJg=="},"engines":{"node":">=18"},"hasBin":true},"esbuild@0.27.3":{"resolution":{"integrity":"sha512-8VwMnyGCONIs6cWue2IdpHxHnAjzxnw2Zr7MkVxB2vjmQ2ivqGFb4LEG3SMnv0Gb2F/G/2yA8zUaiL1gywDCCg=="},"engines":{"node":">=18"},"hasBin":true},"escalade@3.2.0":{"resolution":{"integrity":"sha512-WUj2qlxaQtO4g6Pq5c29GTcWGDyd8itL8zTlipgECz3JesAiiOKotd8JU6otB3PACgG6xkJUyVhboMS+bje/jA=="},"engines":{"node":">=6"}},"escape-string-regexp@1.0.5":{"resolution":{"integrity":"sha512-vbRorB5FUQWvla16U8R/qgaFIya2qGzwDrNmCZuYKrbdSUMG6I1ZCGQRefkRVhuOkIGVne7BQ35DSfo1qvJqFg=="},"engines":{"node":">=0.8.0"}},"escape-string-regexp@5.0.0":{"resolution":{"integrity":"sha512-/veY75JbMK4j1yjvuUxuVsiS/hr/4iHs9FTT6cgTexxdE0Ly/glccBAkloH/DofkjRbZU3bnoj38mOmhkZ0lHw=="},"engines":{"node":">=12"}},"escodegen@2.1.0":{"resolution":{"integrity":"sha512-2NlIDTwUWJN0mRPQOdtQBzbUHvdGY2P1VXSyU83Q3xKxM7WHX2Ql8dKq782Q9TgQUNOLEzEYu9bzLNj1q88I5w=="},"engines":{"node":">=6.0"},"hasBin":true},"eslint-scope@5.1.1":{"resolution":{"integrity":"sha512-2NxwbF/hZ0KpepYN0cNbo+FN6XoK7GaHlQhgx/hIZl6Va0bF45RQOOwhLIy8lQDbuCiadSLCBnH2CFYquit5bw=="},"engines":{"node":">=8.0.0"}},"esprima@4.0.1":{"resolution":{"integrity":"sha512-eGuFFw7Upda+g4p+QHvnW0RyTX/SVeJBDM/gCtMARO0cLuT2HcEKnTPvhjV6aGeqrCB/sbNop0Kszm0jsaWU4A=="},"engines":{"node":">=4"},"hasBin":true},"esrecurse@4.3.0":{"resolution":{"integrity":"sha512-KmfKL3b6G+RXvP8N1vr3Tq1kL/oCFgn2NYXEtqP8/L3pKapUA4G8cFVaoF3SU323CD4XypR/ffioHmkti6/Tag=="},"engines":{"node":">=4.0"}},"estraverse@4.3.0":{"resolution":{"integrity":"sha512-39nnKffWz8xN1BU/2c79n9nB9HDzo0niYUqx6xyqUnyoAnQyyWpOTdZEeiCch8BBu515t4wp9ZmgVfVhn9EBpw=="},"engines":{"node":">=4.0"}},"estraverse@5.3.0":{"resolution":{"integrity":"sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA=="},"engines":{"node":">=4.0"}},"estree-walker@3.0.3":{"resolution":{"integrity":"sha512-7RUKfXgSMMkzt6ZuXmqapOurLGPPfgj6l9uRZ7lRGolvk0y2yocc35LdcxKC5PQZdn2DMqioAQ2NoWcrTKmm6g=="}},"esutils@2.0.3":{"resolution":{"integrity":"sha512-kVscqXk4OCp68SZ0dkgEKVi6/8ij300KBWTJq32P/dYeWTSwK41WyTxalN1eRmA5Z9UU/LX9D7FWSmV9SAYx6g=="},"engines":{"node":">=0.10.0"}},"event-target-shim@5.0.1":{"resolution":{"integrity":
"sha512-i/2XbnSz/uxRCU6+NdVJgKWDTM427+MqYbkQzD321DuCQJUqOuJKIA0IM2+W2xtYHdKOmZ4dR6fExsd4SXL+WQ=="},"engines":{"node":">=6"}},"events-universal@1.0.1":{"resolution":{"integrity":"sha512-LUd5euvbMLpwOF8m6ivPCbhQeSiYVNb8Vs0fQ8QjXo0JTkEHpz8pxdQf0gStltaPpw0Cca8b39KxvK9cfKRiAw=="}},"events@3.3.0":{"resolution":{"integrity":"sha512-mQw+2fkQbALzQ7V0MY0IqdnXNOeTtP4r0lN9z7AAawCXgqea7bDii20AYrIBrFd/Hx0M2Ocz6S111CaFkUcb0Q=="},"engines":{"node":">=0.8.x"}},"expand-template@2.0.3":{"resolution":{"integrity":"sha512-XYfuKMvj4O35f/pOXLObndIRvyQ+/+6AhODh+OKWj9S9498pHHn/IMszH+gt0fBCRWMNfk1ZSp5x3AifmnI2vg=="},"engines":{"node":">=6"}},"expect-type@1.2.2":{"resolution":{"integrity":"sha512-JhFGDVJ7tmDJItKhYgJCGLOWjuK9vPxiXoUFLwLDc99NlmklilbiQJwoctZtt13+xMw91MCk/REan6MWHqDjyA=="},"engines":{"node":">=12.0.0"}},"expect-type@1.3.0":{"resolution":{"integrity":"sha512-knvyeauYhqjOYvQ66MznSMs83wmHrCycNEN6Ao+2AeYEfxUIkuiVxdEa1qlGEPK+We3n0THiDciYSsCcgW/DoA=="},"engines":{"node":">=12.0.0"}},"exsolve@1.0.8":{"resolution":{"integrity":"sha512-LmDxfWXwcTArk8fUEnOfSZpHOJ6zOMUJKOtFLFqJLoKJetuQG874Uc7/Kki7zFLzYybmZhp1M7+98pfMqeX8yA=="}},"extend@3.0.2":{"resolution":{"integrity":"sha512-fjquC59cD7CyW6urNXK0FBufkZcoiGG80wTuPujX590cB5Ttln20E2UB4S/WARVqhXffZl2LNgS+gQdPIIim/g=="}},"extendable-error@0.1.7":{"resolution":{"integrity":"sha512-UOiS2in6/Q0FK0R0q6UY9vYpQ21mr/Qn1KOnte7vsACuNJf514WvCCUHSRCPcgjPT2bAhNIJdlE6bVap1GKmeg=="}},"extract-zip@2.0.1":{"resolution":{"integrity":"sha512-GDhU9ntwuKyGXdZBUgTIe+vXnWj0fppUEtMDL0+idd5Sta8TGpHssn/eusA9mrPr9qNDym6SxAYZjNvCn/9RBg=="},"engines":{"node":">= 10.17.0"},"hasBin":true},"fast-deep-equal@2.0.1":{"resolution":{"integrity":"sha512-bCK/2Z4zLidyB4ReuIsvALH6w31YfAQDmXMqMx6FyfHqvBxtjC0eRumeSu4Bs3XtXwpyIywtSTrVT99BxY1f9w=="}},"fast-deep-equal@3.1.3":{"resolution":{"integrity":"sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q=="}},"fast-fifo@1.3.2":{"resolution":{"integrity":"sha512-/d9sfos4yxzpwkDkuN7k2SqFKtYNmCTzgfEpz82x34IM9/zc8KGxQoXg1liNC/izpRM/MBdt44Nmx41ZWqk+FQ=="}},"fast-glob@3.3.3":{"resolution":{"integrity":"sha512-7MptL8U0cqcFdzIzwOTHoilX9x5BrNqye7Z/LuC7kCMRio1EMSyqRK3BEAUD7sXRq4iT4AzTVuZdhgQ2TCvYLg=="},"engines":{"node":">=8.6.0"}},"fast-uri@3.0.3":{"resolution":{"integrity":"sha512-aLrHthzCjH5He4Z2H9YZ+v6Ujb9ocRuW6ZzkJQOrTxleEijANq4v1TsaPaVG1PZcuurEzrLcWRyYBYXD5cEiaw=="}},"fast-uri@3.1.2":{"resolution":{"integrity":"sha512-rVjf7ArG3LTk+FS6Yw81V1DLuZl1bRbNrev6Tmd/9RaroeeRRJhAt7jg/6YFxbvAQXUCavSoZhPPj6oOx+5KjQ=="}},"fast-xml-parser@4.5.6":{"resolution":{"integrity":"sha512-Yd4vkROfJf8AuJrDIVMVmYfULKmIJszVsMv7Vo71aocsKgFxpdlpSHXSaInvyYfgw2PRuObQSW2GFpVMUjxu9A=="},"hasBin":true},"fastq@1.17.1":{"resolution":{"integrity":"sha512-sRVD3lWVIXWg6By68ZN7vho9a1pQcN/WBFaAAsDDFzlJjvoGx0P8z7V1t72grFJfJhu3YPZBuu25f7Kaw2jN1w=="}},"fault@2.0.1":{"resolution":{"integrity":"sha512-WtySTkS4OKev5JtpHXnib4Gxiurzh5NCGvWrFaZ34m6JehfTUhKZvn9njTfw48t6JumVQOmrKqpmGcdwxnhqBQ=="}},"fd-slicer@1.1.0":{"resolution":{"integrity":"sha512-cE1qsB/VwyQozZ+q1dGxR8LBYNZeofhEdUNGSMbQD3Gw2lAzX9Zb3uIU6Ebc/Fmyjo9AWWfnn0AUCHqtevs/8g=="}},"fdir@6.5.0":{"resolution":{"integrity":"sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg=="},"engines":{"node":">=12.0.0"},"peerDependencies":{"picomatch":"^3 || ^4"},"peerDependenciesMeta":{"picomatch":{"optional":true}}},"fetch-blob@3.2.0":{"resolution":{"integrity":"sha512-7yAQpD2UMJzLi1Dqv7qFYnPbaPx7ZfFK6PiIxQ4PfkGPyNyl2Ugx+a/umUonmKqjhM4DnfbMvdX6otXq83soQQ=="},"engines":{"node":"^12.20 || 
>= 14.13"}},"fetchdts@0.1.7":{"resolution":{"integrity":"sha512-YoZjBdafyLIop9lSxXVI33oLD5kN31q4Td+CasofLLYeLXRFeOsuOw0Uo+XNRi9PZlbfdlN2GmRtm4tCEQ9/KA=="}},"fflate@0.4.8":{"resolution":{"integrity":"sha512-FJqqoDBR00Mdj9ppamLa/Y7vxm+PRmNWA67N846RvsoYVMKB4q3y/de5PA7gUmRMYK/8CMz2GDZQmCRN1wBcWA=="}},"figures@3.2.0":{"resolution":{"integrity":"sha512-yaduQFRKLXYOGgEn6AZau90j3ggSOyiqXU0F9JZfeXYhNa+Jk4X+s45A2zg5jns87GAFa34BBm2kXw4XpNcbdg=="},"engines":{"node":">=8"}},"file-uri-to-path@1.0.0":{"resolution":{"integrity":"sha512-0Zt+s3L7Vf1biwWZ29aARiVYLx7iMGnEUl9x33fbB/j3jR81u/O2LbqK+Bm1CDSNDKVtJ/YjwY7TUd5SkeLQLw=="}},"fill-range@7.1.1":{"resolution":{"integrity":"sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg=="},"engines":{"node":">=8"}},"find-up@4.1.0":{"resolution":{"integrity":"sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw=="},"engines":{"node":">=8"}},"flat@5.0.2":{"resolution":{"integrity":"sha512-b6suED+5/3rTpUBdG1gupIl8MPFCAMA0QXwmljLhvCUKcUvdE4gWky9zpuGCcXHOsz4J9wPGNWq6OKpmIzz3hQ=="},"hasBin":true},"follow-redirects@1.15.11":{"resolution":{"integrity":"sha512-deG2P0JfjrTxl50XGCDyfI97ZGVCxIpfKYmfyrQ54n5FO/0gfIES8C/Psl6kWVDolizcaaxZJnTS0QSMxvnsBQ=="},"engines":{"node":">=4.0"},"peerDependencies":{"debug":"*"},"peerDependenciesMeta":{"debug":{"optional":true}}},"foreground-child@3.3.1":{"resolution":{"integrity":"sha512-gIXjKqtFuWEgzFRJA9WCQeSJLZDjgJUOMCMzxtvFq/37KojM1BFGufqsCy0r4qSQmYLsZYMeyRqzIWOMup03sw=="},"engines":{"node":">=14"}},"form-data@4.0.4":{"resolution":{"integrity":"sha512-KrGhL9Q4zjj0kiUt5OO4Mr/A/jlI2jDYs5eHBpYHPcBEVSiipAvn2Ko2HnPe20rmcuuvMHNdZFp+4IlGTMF0Ow=="},"engines":{"node":">= 6"}},"format@0.2.2":{"resolution":{"integrity":"sha512-wzsgA6WOq+09wrU1tsJ09udeR/YZRaeArL9e1wPbFg3GG2yDnC2ldKpxs4xunpFF9DgqCqOIra3bc1HWrJ37Ww=="},"engines":{"node":">=0.4.x"}},"formdata-polyfill@4.0.10":{"resolution":{"integrity":"sha512-buewHzMvYL29jdeQTVILecSaZKnt/RJWjoZCF5OW60Z67/GmSLBkOFM7qh1PI3zFNtJbaZL5eQu1vLfazOwj4g=="},"engines":{"node":">=12.20.0"}},"front-matter@4.0.2":{"resolution":{"integrity":"sha512-I8ZuJ/qG92NWX8i5x1Y8qyj3vizhXS31OxjKDu3LKP+7/qBgfIKValiZIEwoVoJKUHlhWtYrktkxV1XsX+pPlg=="}},"fs-constants@1.0.0":{"resolution":{"integrity":"sha512-y6OAwoSIf7FyjMIv94u+b5rdheZEjzR63GTyZJm5qh4Bi+2YgwLCcI/fPFZkL5PSixOt6ZNKm+w+Hfp/Bciwow=="}},"fs-extra@11.3.1":{"resolution":{"integrity":"sha512-eXvGGwZ5CL17ZSwHWd3bbgk7UUpF6IFHtP57NYYakPvHOs8GDgDe5KJI36jIJzDkJ6eJjuzRA8eBQb6SkKue0g=="},"engines":{"node":">=14.14"}},"fs-extra@7.0.1":{"resolution":{"integrity":"sha512-YJDaCJZEnBmcbw13fvdAM9AwNOJwOzrE4pqMqBq5nFiEqXUqHwlK4B+3pUw6JNvfSPtX05xFHtYy/1ni01eGCw=="},"engines":{"node":">=6 <7 || >=8"}},"fs-extra@8.1.0":{"resolution":{"integrity":"sha512-yhlQgA6mnOJUKOsRUFsgJdQCvkKhcz8tlZG5HBQfReYZy46OwLcY+Zia0mtdHsOo9y/hP+CxMN0TU9QxoOtG4g=="},"engines":{"node":">=6 <7 || >=8"}},"fs-minipass@2.1.0":{"resolution":{"integrity":"sha512-V/JgOLFCS+R6Vcq0slCuaeWEdNC3ouDlJMNIsacH2VtALiu9mV4LPrHc5cDl8k5aw6J8jwgWWpiTo5RYhmIzvg=="},"engines":{"node":">= 8"}},"fsevents@2.3.2":{"resolution":{"integrity":"sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA=="},"engines":{"node":"^8.16.0 || ^10.6.0 || >=11.0.0"},"os":["darwin"]},"fsevents@2.3.3":{"resolution":{"integrity":"sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw=="},"engines":{"node":"^8.16.0 || ^10.6.0 || 
>=11.0.0"},"os":["darwin"]},"function-bind@1.1.2":{"resolution":{"integrity":"sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA=="}},"geckodriver@4.5.1":{"resolution":{"integrity":"sha512-lGCRqPMuzbRNDWJOQcUqhNqPvNsIFu6yzXF8J/6K3WCYFd2r5ckbeF7h1cxsnjA7YLSEiWzERCt6/gjZ3tW0ug=="},"engines":{"node":"^16.13 || >=18 || >=20"},"hasBin":true},"gensync@1.0.0-beta.2":{"resolution":{"integrity":"sha512-3hN7NaskYvMDLQY55gnW3NQ+mesEAepTqlg+VEbj7zzqEMBVNhzcGYYeqFo/TlYz6eQiFcp1HcsCZO+nGgS8zg=="},"engines":{"node":">=6.9.0"}},"get-caller-file@2.0.5":{"resolution":{"integrity":"sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg=="},"engines":{"node":"6.* || 8.* || >= 10.*"}},"get-intrinsic@1.3.0":{"resolution":{"integrity":"sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ=="},"engines":{"node":">= 0.4"}},"get-port@7.2.0":{"resolution":{"integrity":"sha512-afP4W205ONCuMoPBqcR6PSXnzX35KTcJygfJfcp+QY+uwm3p20p1YczWXhlICIzGMCxYBQcySEcOgsJcrkyobg=="},"engines":{"node":">=16"}},"get-proto@1.0.1":{"resolution":{"integrity":"sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g=="},"engines":{"node":">= 0.4"}},"get-stream@5.2.0":{"resolution":{"integrity":"sha512-nBF+F1rAZVCu/p7rjzgA+Yb4lfYXrpl7a6VmJrU8wF9I1CKvP/QwPNZHnOlwbTkY6dvtFIzFMSyQXbLoTQPRpA=="},"engines":{"node":">=8"}},"get-tsconfig@4.14.0":{"resolution":{"integrity":"sha512-yTb+8DXzDREzgvYmh6s9vHsSVCHeC0G3PI5bEXNBHtmshPnO+S5O7qgLEOn0I5QvMy6kpZN8K1NKGyilLb93wA=="}},"get-uri@6.0.5":{"resolution":{"integrity":"sha512-b1O07XYq8eRuVzBNgJLstU6FYc1tS6wnMtF1I1D9lE8LxZSOGZ7LhxN54yPP6mGw5f2CkXY2BQUL9Fx41qvcIg=="},"engines":{"node":">= 14"}},"github-from-package@0.0.0":{"resolution":{"integrity":"sha512-SyHy3T1v2NUXn29OsWdxmK6RwHD+vkj3v8en8AOBZ1wBQ/hCAQ5bAQTD02kW4W9tUp/3Qh6J8r9EvntiyCmOOw=="}},"github-slugger@2.0.0":{"resolution":{"integrity":"sha512-IaOQ9puYtjrkq7Y0Ygl9KDZnrf/aiUJYUpVf89y8kyaxbRG7Y1SrX/jaumrv81vc61+kiMempujsM3Yw7w5qcw=="}},"glob-parent@5.1.2":{"resolution":{"integrity":"sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow=="},"engines":{"node":">= 6"}},"glob-to-regexp@0.4.1":{"resolution":{"integrity":"sha512-lkX1HJXwyMcprw/5YUZc2s7DrpAiHB21/V+E1rHUrVNokkvB6bqMzT0VfV6/86ZNabt1k14YOIaT7nDvOX3Iiw=="}},"glob@10.4.5":{"resolution":{"integrity":"sha512-7Bv8RF0k6xjo7d4A/PxYLbUCfb6c+Vpd2/mB2yRDlew7Jb5hEXiCD9ibfO7wpk8i4sevK6DFny9h7EYbM3/sHg=="},"deprecated":"Old versions of glob are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me","hasBin":true},"glob@10.5.0":{"resolution":{"integrity":"sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg=="},"deprecated":"Old versions of glob are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. 
Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me","hasBin":true},"globals@15.15.0":{"resolution":{"integrity":"sha512-7ACyT3wmyp3I61S4fG682L0VA2RGD9otkqGJIwNUMF1SWUombIIk+af1unuDYgMm082aHYwD+mzJvv9Iu8dsgg=="},"engines":{"node":">=18"}},"globby@11.1.0":{"resolution":{"integrity":"sha512-jhIXaOzy1sb8IyocaruWSn1TjmnBVs8Ayhcy83rmxNJ8q2uWKCAj3CnJY+KpGSXCueAPc0i05kVvVKtP1t9S3g=="},"engines":{"node":">=10"}},"gopd@1.2.0":{"resolution":{"integrity":"sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg=="},"engines":{"node":">= 0.4"}},"graceful-fs@4.2.11":{"resolution":{"integrity":"sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ=="}},"grapheme-splitter@1.0.4":{"resolution":{"integrity":"sha512-bzh50DW9kTPM00T8y4o8vQg89Di9oLJVLW/KaOGIXJWP/iqCN6WKYkbNOF04vFLJhwcpYUh9ydh/+5vpOqV4YQ=="}},"graphql@16.14.0":{"resolution":{"integrity":"sha512-BBvQ/406p+4CZbTpCbVPSxfzrZrbnuWSP1ELYgyS6B+hNeKzgrdB4JczCa5VZUBQrDa9hUngm0KnexY6pJRN5Q=="},"engines":{"node":"^12.22.0 || ^14.16.0 || ^16.0.0 || >=17.0.0"}},"h3@2.0.1-rc.20":{"resolution":{"integrity":"sha512-28ljodXuUp0fZovdiSRq4G9OgrxCztrJe5VdYzXAB7ueRvI7pIUqLU14Xi3XqdYJ/khXjfpUOOD2EQa6CmBgsg=="},"engines":{"node":">=20.11.1"},"hasBin":true,"peerDependencies":{"crossws":"^0.4.1"},"peerDependenciesMeta":{"crossws":{"optional":true}}},"hachure-fill@0.5.2":{"resolution":{"integrity":"sha512-3GKBOn+m2LX9iq+JC1064cSFprJY4jL1jCXTcpnfER5HYE2l/4EfWSGzkPa/ZDBmYI0ZOEj5VHV/eKnPGkHuOg=="}},"happy-dom@18.0.1":{"resolution":{"integrity":"sha512-qn+rKOW7KWpVTtgIUi6RVmTBZJSe2k0Db0vh1f7CWrWclkkc7/Q+FrOfkZIb2eiErLyqu5AXEzE7XthO9JVxRA=="},"engines":{"node":">=20.0.0"}},"has-flag@4.0.0":{"resolution":{"integrity":"sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ=="},"engines":{"node":">=8"}},"has-symbols@1.1.0":{"resolution":{"integrity":"sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ=="},"engines":{"node":">= 0.4"}},"has-tostringtag@1.0.2":{"resolution":{"integrity":"sha512-NqADB8VjPFLM2V0VvHUewwwsw0ZWBaIdgo+ieHtK3hasLz4qeCRjYcqfB6AQrBggRKppKF8L52/VqdVsO47Dlw=="},"engines":{"node":">= 0.4"}},"hasown@2.0.2":{"resolution":{"integrity":"sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ=="},"engines":{"node":">= 
0.4"}},"hast-util-embedded@3.0.0":{"resolution":{"integrity":"sha512-naH8sld4Pe2ep03qqULEtvYr7EjrLK2QHY8KJR6RJkTUjPGObe1vnx585uzem2hGra+s1q08DZZpfgDVYRbaXA=="}},"hast-util-from-html@2.0.3":{"resolution":{"integrity":"sha512-CUSRHXyKjzHov8yKsQjGOElXy/3EKpyX56ELnkHH34vDVw1N1XSQ1ZcAvTyAPtGqLTuKP/uxM+aLkSPqF/EtMw=="}},"hast-util-from-parse5@8.0.3":{"resolution":{"integrity":"sha512-3kxEVkEKt0zvcZ3hCRYI8rqrgwtlIOFMWkbclACvjlDw8Li9S2hk/d51OI0nr/gIpdMHNepwgOKqZ/sy0Clpyg=="}},"hast-util-has-property@3.0.0":{"resolution":{"integrity":"sha512-MNilsvEKLFpV604hwfhVStK0usFY/QmM5zX16bo7EjnAEGofr5YyI37kzopBlZJkHD4t887i+q/C8/tr5Q94cA=="}},"hast-util-heading-rank@3.0.0":{"resolution":{"integrity":"sha512-EJKb8oMUXVHcWZTDepnr+WNbfnXKFNf9duMesmr4S8SXTJBJ9M4Yok08pu9vxdJwdlGRhVumk9mEhkEvKGifwA=="}},"hast-util-is-body-ok-link@3.0.1":{"resolution":{"integrity":"sha512-0qpnzOBLztXHbHQenVB8uNuxTnm/QBFUOmdOSsEn7GnBtyY07+ENTWVFBAnXd/zEgd9/SUG3lRY7hSIBWRgGpQ=="}},"hast-util-is-element@3.0.0":{"resolution":{"integrity":"sha512-Val9mnv2IWpLbNPqc/pUem+a7Ipj2aHacCwgNfTiK0vJKl0LF+4Ba4+v1oPHFpf3bLYmreq0/l3Gud9S5OH42g=="}},"hast-util-minify-whitespace@1.0.1":{"resolution":{"integrity":"sha512-L96fPOVpnclQE0xzdWb/D12VT5FabA7SnZOUMtL1DbXmYiHJMXZvFkIZfiMmTCNJHUeO2K9UYNXoVyfz+QHuOw=="}},"hast-util-parse-selector@4.0.0":{"resolution":{"integrity":"sha512-wkQCkSYoOGCRKERFWcxMVMOcYE2K1AaNLU8DXS9arxnLOUEWbOXKXiJUNzEpqZ3JOKpnha3jkFrumEjVliDe7A=="}},"hast-util-phrasing@3.0.1":{"resolution":{"integrity":"sha512-6h60VfI3uBQUxHqTyMymMZnEbNl1XmEGtOxxKYL7stY2o601COo62AWAYBQR9lZbYXYSBoxag8UpPRXK+9fqSQ=="}},"hast-util-raw@9.1.0":{"resolution":{"integrity":"sha512-Y8/SBAHkZGoNkpzqqfCldijcuUKh7/su31kEBp67cFY09Wy0mTRgtsLYsiIxMJxlu0f6AA5SUTbDR8K0rxnbUw=="}},"hast-util-sanitize@5.0.2":{"resolution":{"integrity":"sha512-3yTWghByc50aGS7JlGhk61SPenfE/p1oaFeNwkOOyrscaOkMGrcW9+Cy/QAIOBpZxP1yqDIzFMR0+Np0i0+usg=="}},"hast-util-to-html@9.0.5":{"resolution":{"integrity":"sha512-OguPdidb+fbHQSU4Q4ZiLKnzWo8Wwsf5bZfbvu7//a9oTYoqD/fWpe96NuHkoS9h0ccGOTe0C4NGXdtS0iObOw=="}},"hast-util-to-mdast@10.1.2":{"resolution":{"integrity":"sha512-FiCRI7NmOvM4y+f5w32jPRzcxDIz+PUqDwEqn1A+1q2cdp3B8Gx7aVrXORdOKjMNDQsD1ogOr896+0jJHW1EFQ=="}},"hast-util-to-parse5@8.0.0":{"resolution":{"integrity":"sha512-3KKrV5ZVI8if87DVSi1vDeByYrkGzg4mEfeu4alwgmmIeARiBLKCZS2uw5Gb6nU9x9Yufyj3iudm6i7nl52PFw=="}},"hast-util-to-string@3.0.1":{"resolution":{"integrity":"sha512-XelQVTDWvqcl3axRfI0xSeoVKzyIFPwsAGSLIsKdJKQMXDYJS4WYrBNF/8J7RdhIcFI2BOHgAifggsvsxp/3+A=="}},"hast-util-to-text@4.0.2":{"resolution":{"integrity":"sha512-KK6y/BN8lbaq654j7JgBydev7wuNMcID54lkRav1P0CaE1e47P72AWWPiGKXTJU271ooYzcvTAn/Zt0REnvc7A=="}},"hast-util-whitespace@3.0.0":{"resolution":{"integrity":"sha512-88JUN06ipLwsnv+dVn+OIYOvAuvBMy/Qoi6O7mQHxdPXpjy+Cd6xRkWwux7DKO+4sYILtLBRIKgsdpS2gQc7qw=="}},"hastscript@9.0.1":{"resolution":{"integrity":"sha512-g7df9rMFX/SPi34tyGCyUBREQoKkapwdY/T04Qn9TDWfHhAYt4/I0gMVirzK5wEzeUqIjEB+LXC/ypb7Aqno5w=="}},"headers-polyfill@4.0.3":{"resolution":{"integrity":"sha512-IScLbePpkvO846sIwOtOTDjutRMWdXdJmXdMvk6gCBHxFO8d+QKOQedyZSxFTTFYRSmlgSTDtXqqq4pcenBXLQ=="}},"highlight.js@11.11.1":{"resolution":{"integrity":"sha512-Xwwo44whKBVCYoliBQwaPvtd/2tYFkRQtXDWj1nackaV2JPXx3L0+Jvd8/qCJ2p+ML0/XVkJ2q+Mr+UVdpJK5w=="},"engines":{"node":">=12.0.0"}},"html-encoding-sniffer@4.0.0":{"resolution":{"integrity":"sha512-Y22oTqIU4uuPgEemfz7NDJz6OeKf12Lsu+QC+s3BVpda64lTiMYCyGwg5ki4vFxkMwQdeZDl2adZoqUgdFuTgQ=="},"engines":{"node":">=18"}},"html-escaper@2.0.2":{"resolution":{"integrity":"sha512-H2iMtd0I4Mt5eYiapRdIDjp+X
zelXQ0tFE4JS7YFwFevXXMmOp9myNrUvCg0D6ws8iqkRPBfKHgbwig1SmlLfg=="}},"html-void-elements@3.0.0":{"resolution":{"integrity":"sha512-bEqo66MRXsUGxWHV5IP0PUiAWwoEjba4VCzg0LjFJBpchPaTfyfCKTG6bc5F8ucKec3q5y6qOdGyYTSBEvhCrg=="}},"htmlfy@0.3.2":{"resolution":{"integrity":"sha512-FsxzfpeDYRqn1emox9VpxMPfGjADoUmmup8D604q497R0VNxiXs4ZZTN2QzkaMA5C9aHGUoe1iQRVSm+HK9xuA=="}},"htmlparser2@10.0.0":{"resolution":{"integrity":"sha512-TwAZM+zE5Tq3lrEHvOlvwgj1XLWQCtaaibSN11Q+gGBAS7Y1uZSWwXXRe4iF6OXnaq1riyQAPFOBtYc77Mxq0g=="}},"htmlparser2@10.1.0":{"resolution":{"integrity":"sha512-VTZkM9GWRAtEpveh7MSF6SjjrpNVNNVJfFup7xTY3UpFtm67foy9HDVXneLtFVt4pMz5kZtgNcvCniNFb1hlEQ=="}},"http-proxy-agent@7.0.2":{"resolution":{"integrity":"sha512-T1gkAiYYDWYx3V5Bmyu7HcfcvL7mUrTWiM6yOfa3PIphViJ/gFPbvidQ+veqSOHci/PxBcDabeUNCzpOODJZig=="},"engines":{"node":">= 14"}},"https-proxy-agent@7.0.2":{"resolution":{"integrity":"sha512-NmLNjm6ucYwtcUmL7JQC1ZQ57LmHP4lT15FQ8D61nak1rO6DH+fz5qNK2Ap5UN4ZapYICE3/0KodcLYSPsPbaA=="},"engines":{"node":">= 14"}},"https-proxy-agent@7.0.6":{"resolution":{"integrity":"sha512-vK9P5/iUfdl95AI+JVyUuIcVtd4ofvtrOr3HNtM2yxC9bnMbEdp3x01OhQNnjb8IJYi38VlTE3mBXwcfvywuSw=="},"engines":{"node":">= 14"}},"human-id@4.1.1":{"resolution":{"integrity":"sha512-3gKm/gCSUipeLsRYZbbdA1BD83lBoWUkZ7G9VFrhWPAU76KwYo5KR8V28bpoPm/ygy0x5/GCbpRQdY7VLYCoIg=="},"hasBin":true},"iconv-lite@0.6.3":{"resolution":{"integrity":"sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw=="},"engines":{"node":">=0.10.0"}},"ieee754@1.2.1":{"resolution":{"integrity":"sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA=="}},"ignore@5.3.2":{"resolution":{"integrity":"sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g=="},"engines":{"node":">= 4"}},"immediate@3.0.6":{"resolution":{"integrity":"sha512-XXOFtyqDjNDAQxVfYxuF7g9Il/IbWmmlQg2MYKOH8ExIT1qg6xc4zyS3HaEEATgs1btfzxq15ciUiY7gjSXRGQ=="}},"immutable@5.1.5":{"resolution":{"integrity":"sha512-t7xcm2siw+hlUM68I+UEOK+z84RzmN59as9DZ7P1l0994DKUWV7UXBMQZVxaoMSRQ+PBZbHCOoBt7a2wxOMt+A=="}},"import-meta-resolve@4.2.0":{"resolution":{"integrity":"sha512-Iqv2fzaTQN28s/FwZAoFq0ZSs/7hMAHJVX+w8PZl3cY19Pxk6jFFalxQoIfW2826i/fDLXv8IiEZRIT0lDuWcg=="}},"inherits@2.0.4":{"resolution":{"integrity":"sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ=="}},"ini@1.3.8":{"resolution":{"integrity":"sha512-JV/yugV2uzW5iMRSiZAyDtQd+nxtUnjeLt0acNdw98kKLrvuRVyB80tsREOE7yvGVgalhZ6RNXCmEHkUKBKxew=="}},"ini@4.1.3":{"resolution":{"integrity":"sha512-X7rqawQBvfdjS10YU1y1YVreA3SsLrW9dX2CewP2EbBJM4ypVNLDkO5y04gejPwKIY9lR+7r9gn3rFPt/kmWFg=="},"engines":{"node":"^14.17.0 || ^16.13.0 || >=18.0.0"}},"internmap@1.0.1":{"resolution":{"integrity":"sha512-lDB5YccMydFBtasVtxnZ3MRBHuaoE8GKsppq+EchKL2U4nK/DmEpPHNH8MZe5HkMtpSiTSOZwfN0tzYjO/lJEw=="}},"internmap@2.0.3":{"resolution":{"integrity":"sha512-5Hh7Y1wQbvY5ooGgPbDaL5iYLAPzMTUrjMulskHLH6wnv/A+1q5rgEaiuqEjB+oxGXIVZs1FF+R/KPN3ZSQYYg=="},"engines":{"node":">=12"}},"ip-address@10.2.0":{"resolution":{"integrity":"sha512-/+S6j4E9AHvW9SWMSEY9Xfy66O5PWvVEJ08O0y5JGyEKQpojb0K0GKpz/v5HJ/G0vi3D2sjGK78119oXZeE0qA=="},"engines":{"node":">= 
12"}},"is-binary-path@2.1.0":{"resolution":{"integrity":"sha512-ZMERYes6pDydyuGidse7OsHxtbI7WVeUEozgR/g7rd0xUimYNlvZRE/K2MgZTjWy725IfelLeVcEM97mmtRGXw=="},"engines":{"node":">=8"}},"is-docker@2.2.1":{"resolution":{"integrity":"sha512-F+i2BKsFrH66iaUFc0woD8sLy8getkwTwtOBjvs56Cx4CgJDeKQeqfz8wAYiSb8JOprWhHH5p77PbmYCvvUuXQ=="},"engines":{"node":">=8"},"hasBin":true},"is-extglob@2.1.1":{"resolution":{"integrity":"sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ=="},"engines":{"node":">=0.10.0"}},"is-fullwidth-code-point@3.0.0":{"resolution":{"integrity":"sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg=="},"engines":{"node":">=8"}},"is-glob@4.0.3":{"resolution":{"integrity":"sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg=="},"engines":{"node":">=0.10.0"}},"is-interactive@1.0.0":{"resolution":{"integrity":"sha512-2HvIEKRoqS62guEC+qBjpvRubdX910WCMuJTZ+I9yvqKU2/12eSL549HMwtabb4oupdj2sMP50k+XJfB/8JE6w=="},"engines":{"node":">=8"}},"is-node-process@1.2.0":{"resolution":{"integrity":"sha512-Vg4o6/fqPxIjtxgUH5QLJhwZ7gW5diGCVlXpuUfELC62CuxM1iHcRe51f2W1FDy04Ai4KJkagKjx3XaqyfRKXw=="}},"is-number@7.0.0":{"resolution":{"integrity":"sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng=="},"engines":{"node":">=0.12.0"}},"is-plain-obj@4.1.0":{"resolution":{"integrity":"sha512-+Pgi+vMuUNkJyExiMBt5IlFoMyKnr5zhJ4Uspz58WOhBF5QoIZkFyNHIbBAtHwzVAgk5RtndVNsDRN61/mmDqg=="},"engines":{"node":">=12"}},"is-potential-custom-element-name@1.0.1":{"resolution":{"integrity":"sha512-bCYeRA2rVibKZd+s2625gGnGF/t7DSqDs4dP7CrLA1m7jKWz6pps0LpYLJN8Q64HtmPKJ1hrN3nzPNKFEKOUiQ=="}},"is-stream@2.0.1":{"resolution":{"integrity":"sha512-hFoiJiTl63nn+kstHGBtewWSKnQLpyb155KHheA1l39uvtO9nWIop1p3udqPcUd/xbF1VLMO4n7OI6p7RbngDg=="},"engines":{"node":">=8"}},"is-subdir@1.2.0":{"resolution":{"integrity":"sha512-2AT6j+gXe/1ueqbW6fLZJiIw3F8iXGJtt0yDrZaBhAZEG1raiTxKWU+IPqMCzQAXOUCKdA4UDMgacKH25XG2Cw=="},"engines":{"node":">=4"}},"is-unicode-supported@0.1.0":{"resolution":{"integrity":"sha512-knxG2q4UC3u8stRGyAVJCOdxFmv5DZiRcdlIaAQXAbSfJya+OhopNotLQrstBhququ4ZpuKbDc/8S6mgXgPFPw=="},"engines":{"node":">=10"}},"is-windows@1.0.2":{"resolution":{"integrity":"sha512-eXK1UInq2bPmjyX6e3VHIzMLobc4J94i4AWn+Hpq3OU5KkrRC96OAcR3PRJ/pGu6m8TRnBHP9dkXQVsT/COVIA=="},"engines":{"node":">=0.10.0"}},"is-wsl@2.2.0":{"resolution":{"integrity":"sha512-fKzAra0rGJUUBwGBgNkHZuToZcn+TtXHpeCgmkMJMMYx1sQDYaCSyjJBSCa2nH1DGm7s3n1oBnohoVTBaN7Lww=="},"engines":{"node":">=8"}},"isarray@1.0.0":{"resolution":{"integrity":"sha512-VLghIWNM6ELQzo7zwmcg0NmTVyWKYjvIeM83yjp0wRDTmUnrM678fQbcKBo6n2CJEF0szoG//ytg+TKla89ALQ=="}},"isbot@5.1.28":{"resolution":{"integrity":"sha512-qrOp4g3xj8YNse4biorv6O5ZShwsJM0trsoda4y7j/Su7ZtTTfVXFzbKkpgcSoDrHS8FcTuUwcU04YimZlZOxw=="},"engines":{"node":">=18"}},"isexe@2.0.0":{"resolution":{"integrity":"sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw=="}},"isexe@3.1.5":{"resolution":{"integrity":"sha512-6B3tLtFqtQS4ekarvLVMZ+X+VlvQekbe4taUkf/rhVO3d/h0M2rfARm/pXLcPEsjjMsFgrFgSrhQIxcSVrBz8w=="},"engines":{"node":">=18"}},"istanbul-lib-coverage@3.2.2":{"resolution":{"integrity":"sha512-O8dpsF+r0WV/8MNRKfnmrtCWhuKjxrq2w+jpzBL5UZKTi2LeVWnWOmWRxFlesJONmc+wLAGvKQZEOanko0LFTg=="},"engines":{"node":">=8"}},"istanbul-lib-report@3.0.1":{"resolution":{"integrity":"sha512-GCfE1mtsHGOELCU8e/Z7YWzpmybrx/+dSTfLrvY8qRmaY6zXTKWn6WQIjaAFw069icm6GVMNkgu0NzI4iPZUN
w=="},"engines":{"node":">=10"}},"istanbul-lib-source-maps@5.0.6":{"resolution":{"integrity":"sha512-yg2d+Em4KizZC5niWhQaIomgf5WlL4vOOjZ5xGCmF8SnPE/mDWWXgvRExdcpCgh9lLRRa1/fSYp2ymmbJ1pI+A=="},"engines":{"node":">=10"}},"istanbul-reports@3.2.0":{"resolution":{"integrity":"sha512-HGYWWS/ehqTV3xN10i23tkPkpH46MLCIMFNCaaKNavAXTF1RkqxawEPtnjnGZ6XKSInBKkiOA5BKS+aZiY3AvA=="},"engines":{"node":">=8"}},"jackspeak@3.4.3":{"resolution":{"integrity":"sha512-OGlZQpz2yfahA/Rd1Y8Cd9SIEsqvXkLVoSw/cgwhnhFMDbsQFeZYoJJ7bIZBS9BcamUW96asq/npPWugM+RQBw=="}},"jest-diff@30.1.1":{"resolution":{"integrity":"sha512-LUU2Gx8EhYxpdzTR6BmjL1ifgOAQJQELTHOiPv9KITaKjZvJ9Jmgigx01tuZ49id37LorpGc9dPBPlXTboXScw=="},"engines":{"node":"^18.14.0 || ^20.0.0 || ^22.0.0 || >=24.0.0"}},"jest-worker@27.5.1":{"resolution":{"integrity":"sha512-7vuh85V5cdDofPyxn58nrPjBktZo0u9x1g8WtjQol+jZDaE+fhN+cIvTj11GndBnMnyfrUOG1sZQxCdjKh+DKg=="},"engines":{"node":">= 10.13.0"}},"jiti@2.6.1":{"resolution":{"integrity":"sha512-ekilCSN1jwRvIbgeg/57YFh8qQDNbwDb9xT/qu2DAHbFFZUicIl4ygVaAvzveMhMVr3LnpSKTNnwt8PoOfmKhQ=="},"hasBin":true},"js-tokens@10.0.0":{"resolution":{"integrity":"sha512-lM/UBzQmfJRo9ABXbPWemivdCW8V2G8FHaHdypQaIy523snUjog0W71ayWXTjiR+ixeMyVHN2XcpnTd/liPg/Q=="}},"js-tokens@4.0.0":{"resolution":{"integrity":"sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ=="}},"js-tokens@9.0.1":{"resolution":{"integrity":"sha512-mxa9E9ITFOt0ban3j6L5MpjwegGz6lBQmM1IJkWeBZGcMxto50+eWdjC/52xDbS2vy0k7vIMK0Fe2wfL9OQSpQ=="}},"js-yaml@3.14.1":{"resolution":{"integrity":"sha512-okMH7OXXJ7YrN9Ok3/SXrnu4iX9yOk+25nqX4imS2npuvTYDmo/QEZoqwZkYaIDk3jVvBOTOIEgEhaLOynBS9g=="},"hasBin":true},"js-yaml@4.1.1":{"resolution":{"integrity":"sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA=="},"hasBin":true},"jsdom@26.1.0":{"resolution":{"integrity":"sha512-Cvc9WUhxSMEo4McES3P7oK3QaXldCfNWp7pl2NNeiIFlCoLr3kfq9kb1fxftiwk1FLV7CvpvDfonxtzUDeSOPg=="},"engines":{"node":">=18"},"peerDependencies":{"canvas":"^3.0.0"},"peerDependenciesMeta":{"canvas":{"optional":true}}},"jsdom@27.3.0":{"resolution":{"integrity":"sha512-GtldT42B8+jefDUC4yUKAvsaOrH7PDHmZxZXNgF2xMmymjUbRYJvpAybZAKEmXDGTM0mCsz8duOa4vTm5AY2Kg=="},"engines":{"node":"^20.19.0 || ^22.12.0 || 
>=24.0.0"},"peerDependencies":{"canvas":"^3.0.0"},"peerDependenciesMeta":{"canvas":{"optional":true}}},"jsesc@3.1.0":{"resolution":{"integrity":"sha512-/sM3dO2FOzXjKQhJuo0Q173wf2KOo8t4I8vHy6lF9poUp7bKT0/NHE8fPX23PwfhnykfqnC2xRxOnVw5XuGIaA=="},"engines":{"node":">=6"},"hasBin":true},"json-parse-even-better-errors@2.3.1":{"resolution":{"integrity":"sha512-xyFwyhro/JEof6Ghe2iz2NcXoj2sloNsWr/XsERDK/oiPCfaNhl5ONfp+jQdAZRQQ0IJWNzH9zIZF7li91kh2w=="}},"json-schema-to-ts@3.1.1":{"resolution":{"integrity":"sha512-+DWg8jCJG2TEnpy7kOm/7/AxaYoaRbjVB4LFZLySZlWn8exGs3A4OLJR966cVvU26N7X9TWxl+Jsw7dzAqKT6g=="},"engines":{"node":">=16"}},"json-schema-traverse@1.0.0":{"resolution":{"integrity":"sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug=="}},"json5@2.2.3":{"resolution":{"integrity":"sha512-XmOWe7eyHYH14cLdVPoyg+GOH3rYX++KpzrylJwSW98t3Nk+U8XOl8FWKOgwtzdb8lXGf6zYwDUzeHMWfxasyg=="},"engines":{"node":">=6"},"hasBin":true},"jsonc-parser@3.2.0":{"resolution":{"integrity":"sha512-gfFQZrcTc8CnKXp6Y4/CBT3fTc0OVuDofpre4aEeEpSBPV5X5v4+Vmx+8snU7RLPrNHPKSgLxGo9YuQzz20o+w=="}},"jsonfile@4.0.0":{"resolution":{"integrity":"sha512-m6F1R3z8jjlf2imQHS2Qez5sjKWQzbuuhuJ/FKYFRZvPE3PuHcSMVZzfsLhGVOkfd20obL5SWEBew5ShlquNxg=="}},"jsonfile@6.2.0":{"resolution":{"integrity":"sha512-FGuPw30AdOIUTRMC2OMRtQV+jkVj2cfPqSeWXv1NEAJ1qZ5zb1X6z1mFhbfOB/iy3ssJCD+3KuZ8r8C3uVFlAg=="}},"jszip@3.10.1":{"resolution":{"integrity":"sha512-xXDvecyTpGLrqFrvkrUSoxxfJI5AH7U8zxxtVclpsUtMCq4JQ290LY8AW5c7Ggnr/Y/oK+bQMbqK2qmtk3pN4g=="}},"katex@0.16.22":{"resolution":{"integrity":"sha512-XCHRdUw4lf3SKBaJe4EvgqIuWwkPSo9XoeO8GjQW94Bp7TWv9hNhzZjZ+OH9yf1UmLygb7DIT5GSFQiyt16zYg=="},"hasBin":true},"khroma@2.1.0":{"resolution":{"integrity":"sha512-Ls993zuzfayK269Svk9hzpeGUKob/sIgZzyHYdjQoAdQetRKpOLj+k/QQQ/6Qi0Yz65mlROrfd+Ev+1+7dz9Kw=="}},"kleur@4.1.5":{"resolution":{"integrity":"sha512-o+NO+8WrRiQEE4/7nwRJhN1HWpVmJm511pBHUxPLtp0BUISzlBplORYSmTclCnJvQq2tKu/sgl3xVpkc7ZWuQQ=="},"engines":{"node":">=6"}},"kolorist@1.8.0":{"resolution":{"integrity":"sha512-Y+60/zizpJ3HRH8DCss+q95yr6145JXZo46OTpFvDZWLfRCE4qChOyk1b26nMaNpfHHgxagk9dXT5OP0Tfe+dQ=="}},"kysely@0.28.7":{"resolution":{"integrity":"sha512-u/cAuTL4DRIiO2/g4vNGRgklEKNIj5Q3CG7RoUB5DV5SfEC2hMvPxKi0GWPmnzwL2ryIeud2VTcEEmqzTzEPNw=="},"engines":{"node":">=20.0.0"}},"langium@3.3.1":{"resolution":{"integrity":"sha512-QJv/h939gDpvT+9SiLVlY7tZC3xB2qK57v0J04Sh9wpMb6MP1q8gB21L3WIo8T5P1MSMg3Ep14L7KkDCFG3y4w=="},"engines":{"node":">=16.0.0"}},"layout-base@1.0.2":{"resolution":{"integrity":"sha512-8h2oVEZNktL4BH2JCOI90iD1yXwL6iNW7KcCKT2QZgQJR2vbqDsldCTPRU9NifTCqHZci57XvQQ15YTu+sTYPg=="}},"layout-base@2.0.1":{"resolution":{"integrity":"sha512-dp3s92+uNI1hWIpPGH3jK2kxE2lMjdXdr+DH8ynZHpd6PUlH6x6cbuXnoMmiNumznqaNO31xu9e79F0uuZ0JFg=="}},"lazystream@1.0.1":{"resolution":{"integrity":"sha512-b94GiNHQNy6JNTrt5w6zNyffMrNkXZb3KTkCZJb2V1xaEGCk093vkZ2jk3tpaeP33/OiXC+WvK9AxUebnf5nbw=="},"engines":{"node":">= 0.6.3"}},"lie@3.3.0":{"resolution":{"integrity":"sha512-UaiMJzeWRlEujzAuw5LokY1L5ecNQYZKfmyZ9L7wDHb/p5etKaxXhohBcrw0EYby+G/NA52vRSN4N39dxHAIwQ=="}},"lightningcss-android-arm64@1.32.0":{"resolution":{"integrity":"sha512-YK7/ClTt4kAK0vo6w3X+Pnm0D2cf2vPHbhOXdoNti1Ga0al1P4TBZhwjATvjNwLEBCnKvjJc2jQgHXH0NEwlAg=="},"engines":{"node":">= 12.0.0"},"cpu":["arm64"],"os":["android"]},"lightningcss-darwin-arm64@1.32.0":{"resolution":{"integrity":"sha512-RzeG9Ju5bag2Bv1/lwlVJvBE3q6TtXskdZLLCyfg5pt+HLz9BqlICO7LZM7VHNTTn/5PRhHFBSjk5lc4cmscPQ=="},"engines":{"node":">= 
12.0.0"},"cpu":["arm64"],"os":["darwin"]},"lightningcss-darwin-x64@1.32.0":{"resolution":{"integrity":"sha512-U+QsBp2m/s2wqpUYT/6wnlagdZbtZdndSmut/NJqlCcMLTWp5muCrID+K5UJ6jqD2BFshejCYXniPDbNh73V8w=="},"engines":{"node":">= 12.0.0"},"cpu":["x64"],"os":["darwin"]},"lightningcss-freebsd-x64@1.32.0":{"resolution":{"integrity":"sha512-JCTigedEksZk3tHTTthnMdVfGf61Fky8Ji2E4YjUTEQX14xiy/lTzXnu1vwiZe3bYe0q+SpsSH/CTeDXK6WHig=="},"engines":{"node":">= 12.0.0"},"cpu":["x64"],"os":["freebsd"]},"lightningcss-linux-arm-gnueabihf@1.32.0":{"resolution":{"integrity":"sha512-x6rnnpRa2GL0zQOkt6rts3YDPzduLpWvwAF6EMhXFVZXD4tPrBkEFqzGowzCsIWsPjqSK+tyNEODUBXeeVHSkw=="},"engines":{"node":">= 12.0.0"},"cpu":["arm"],"os":["linux"]},"lightningcss-linux-arm64-gnu@1.32.0":{"resolution":{"integrity":"sha512-0nnMyoyOLRJXfbMOilaSRcLH3Jw5z9HDNGfT/gwCPgaDjnx0i8w7vBzFLFR1f6CMLKF8gVbebmkUN3fa/kQJpQ=="},"engines":{"node":">= 12.0.0"},"cpu":["arm64"],"os":["linux"]},"lightningcss-linux-arm64-musl@1.32.0":{"resolution":{"integrity":"sha512-UpQkoenr4UJEzgVIYpI80lDFvRmPVg6oqboNHfoH4CQIfNA+HOrZ7Mo7KZP02dC6LjghPQJeBsvXhJod/wnIBg=="},"engines":{"node":">= 12.0.0"},"cpu":["arm64"],"os":["linux"]},"lightningcss-linux-x64-gnu@1.32.0":{"resolution":{"integrity":"sha512-V7Qr52IhZmdKPVr+Vtw8o+WLsQJYCTd8loIfpDaMRWGUZfBOYEJeyJIkqGIDMZPwPx24pUMfwSxxI8phr/MbOA=="},"engines":{"node":">= 12.0.0"},"cpu":["x64"],"os":["linux"]},"lightningcss-linux-x64-musl@1.32.0":{"resolution":{"integrity":"sha512-bYcLp+Vb0awsiXg/80uCRezCYHNg1/l3mt0gzHnWV9XP1W5sKa5/TCdGWaR/zBM2PeF/HbsQv/j2URNOiVuxWg=="},"engines":{"node":">= 12.0.0"},"cpu":["x64"],"os":["linux"]},"lightningcss-win32-arm64-msvc@1.32.0":{"resolution":{"integrity":"sha512-8SbC8BR40pS6baCM8sbtYDSwEVQd4JlFTOlaD3gWGHfThTcABnNDBda6eTZeqbofalIJhFx0qKzgHJmcPTnGdw=="},"engines":{"node":">= 12.0.0"},"cpu":["arm64"],"os":["win32"]},"lightningcss-win32-x64-msvc@1.32.0":{"resolution":{"integrity":"sha512-Amq9B/SoZYdDi1kFrojnoqPLxYhQ4Wo5XiL8EVJrVsB8ARoC1PWW6VGtT0WKCemjy8aC+louJnjS7U18x3b06Q=="},"engines":{"node":">= 12.0.0"},"cpu":["x64"],"os":["win32"]},"lightningcss@1.32.0":{"resolution":{"integrity":"sha512-NXYBzinNrblfraPGyrbPoD19C1h9lfI/1mzgWYvXUTe414Gz/X1FD2XBZSZM7rRTrMA8JL3OtAaGifrIKhQ5yQ=="},"engines":{"node":">= 12.0.0"}},"lines-and-columns@2.0.3":{"resolution":{"integrity":"sha512-cNOjgCnLB+FnvWWtyRTzmB3POJ+cXxTA81LoW7u8JdmhfXzriropYwpjShnz1QLLWsQwY7nIxoDmcPTwphDK9w=="},"engines":{"node":"^12.20.0 || ^14.13.1 || 
>=16.0.0"}},"loader-runner@4.3.2":{"resolution":{"integrity":"sha512-DFEqQ3ihfS9blba08cLfYf1NRAIEm+dDjic073DRDc3/JspI/8wYmtDsHwd3+4hwvdxSK7PGaElfTmm0awWJ4w=="},"engines":{"node":">=6.11.5"}},"local-pkg@1.1.2":{"resolution":{"integrity":"sha512-arhlxbFRmoQHl33a0Zkle/YWlmNwoyt6QNZEIJcqNbdrsix5Lvc4HyyI3EnwxTYlZYc32EbYrQ8SzEZ7dqgg9A=="},"engines":{"node":">=14"}},"locate-app@2.5.0":{"resolution":{"integrity":"sha512-xIqbzPMBYArJRmPGUZD9CzV9wOqmVtQnaAn3wrj3s6WYW0bQvPI7x+sPYUGmDTYMHefVK//zc6HEYZ1qnxIK+Q=="}},"locate-path@5.0.0":{"resolution":{"integrity":"sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g=="},"engines":{"node":">=8"}},"lodash-es@4.17.21":{"resolution":{"integrity":"sha512-mKnC+QJ9pWVzv+C4/U3rRsHapFfHvQFoFB92e52xeyGMcX6/OlIl78je1u8vePzYZSkkogMPJ2yjxxsb89cxyw=="}},"lodash.clonedeep@4.5.0":{"resolution":{"integrity":"sha512-H5ZhCF25riFd9uB5UCkVKo61m3S/xZk1x4wA6yp/L3RFP6Z/eHH1ymQcGLo7J3GMPfm0V/7m1tryHuGVxpqEBQ=="}},"lodash.startcase@4.4.0":{"resolution":{"integrity":"sha512-+WKqsK294HMSc2jEbNgpHpd0JfIBhp7rEV4aqXWqFr6AlXov+SlcgB1Fv01y2kGe3Gc8nMW7VA0SrGuSkRfIEg=="}},"lodash.zip@4.2.0":{"resolution":{"integrity":"sha512-C7IOaBBK/0gMORRBd8OETNx3kmOkgIWIPvyDpZSCTwUrpYmgZwJkjZeOD8ww4xbOUOs4/attY+pciKvadNfFbg=="}},"lodash@4.18.1":{"resolution":{"integrity":"sha512-dMInicTPVE8d1e5otfwmmjlxkZoUpiVLwyeTdUsi/Caj/gfzzblBcCE5sRHV/AsjuCmxWrte2TNGSYuCeCq+0Q=="}},"log-symbols@4.1.0":{"resolution":{"integrity":"sha512-8XPvpAA8uyhfteu8pIvQxpJZ7SYYdpUivZpGy6sFsBuKRY/7rQGavedeB8aK+Zkyq6upMFVL/9AW6vOYzfRyLg=="},"engines":{"node":">=10"}},"loglevel-plugin-prefix@0.8.4":{"resolution":{"integrity":"sha512-WpG9CcFAOjz/FtNht+QJeGpvVl/cdR6P0z6OcXSkr8wFJOsV2GRj2j10JLfjuA4aYkcKCNIEqRGCyTife9R8/g=="}},"loglevel@1.9.2":{"resolution":{"integrity":"sha512-HgMmCqIJSAKqo68l0rS2AanEWfkxaZ5wNiEFb5ggm08lDs9Xl2KxBlX3PTcaD2chBM1gXAYf491/M2Rv8Jwayg=="},"engines":{"node":">= 0.6.0"}},"long@5.3.2":{"resolution":{"integrity":"sha512-mNAgZ1GmyNhD7AuqnTG3/VQ26o760+ZYBPKjPvugO8+nLbYfX6TVpJPseBvopbdY+qpZ/lKUnmEc1LeZYS3QAA=="}},"longest-streak@3.1.0":{"resolution":{"integrity":"sha512-9Ri+o0JYgehTaVBBDoMqIl8GXtbWg711O3srftcHhZ0dqnETqLaoIK0x17fUw9rFSlK/0NlsKe0Ahhyl5pXE2g=="}},"loupe@3.2.1":{"resolution":{"integrity":"sha512-CdzqowRJCeLU72bHvWqwRBBlLcMEtIvGrlvef74kMnV2AolS9Y8xUv1I0U/MNAWMhBlKIoyuEgoJ0t/bbwHbLQ=="}},"lowlight@3.3.0":{"resolution":{"integrity":"sha512-0JNhgFoPvP6U6lE/UdVsSq99tn6DhjjpAj5MxG49ewd2mOBVtwWYIT8ClyABhq198aXXODMU6Ox8DrGy/CpTZQ=="}},"lru-cache@10.4.3":{"resolution":{"integrity":"sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ=="}},"lru-cache@11.2.4":{"resolution":{"integrity":"sha512-B5Y16Jr9LB9dHVkh6ZevG+vAbOsNOYCX+sXvFWFu7B3Iz5mijW3zdbMyhsh8ANd2mSWBYdJgnqi+mL7/LrOPYg=="},"engines":{"node":"20 || >=22"}},"lru-cache@5.1.1":{"resolution":{"integrity":"sha512-KpNARQA3Iwv+jTA0utUVVbrh+Jlrr1Fv0e56GGzAFOXN7dk/FviaDW8LHmK52DlcH4WP2n6gI8vN1aesBFgo9w=="}},"lru-cache@7.18.3":{"resolution":{"integrity":"sha512-jumlc0BIUrS3qJGgIkWZsyfAM7NCWiBcCDhnd+3NNM5KbBmLTgHVfWBcg6W+rLUsIpzpERPsvwUP7CckAQSOoA=="},"engines":{"node":">=12"}},"lucide-react@0.544.0":{"resolution":{"integrity":"sha512-t5tS44bqd825zAW45UQxpG2CvcC4urOwn2TrwSH8u+MjeE+1NnWl6QqeQ/6NdjMqdOygyiT9p3Ev0p1NJykxjw=="},"peerDependencies":{"react":"^16.5.1 || ^17.0.0 || ^18.0.0 || 
^19.0.0"}},"lz-string@1.5.0":{"resolution":{"integrity":"sha512-h5bgJWpxJNswbU7qCrV0tIKQCaS3blPDrqKWx+QxzuzL1zGUzij9XCWLrSLsJPu5t+eWA/ycetzYAO5IOMcWAQ=="},"hasBin":true},"magic-string@0.30.18":{"resolution":{"integrity":"sha512-yi8swmWbO17qHhwIBNeeZxTceJMeBvWJaId6dyvTSOwTipqeHhMhOrz6513r1sOKnpvQ7zkhlG8tPrpilwTxHQ=="}},"magic-string@0.30.21":{"resolution":{"integrity":"sha512-vd2F4YUyEXKGcLHoq+TEyCjxueSeHnFxyyjNp80yg0XV4vUhnDer/lvvlqM/arB5bXQN5K2/3oinyCRyx8T2CQ=="}},"magicast@0.3.5":{"resolution":{"integrity":"sha512-L0WhttDl+2BOsybvEOLK7fW3UA0OQ0IQ2d6Zl2x/a6vVRs3bAY0ECOSHHeL5jD+SbOpOCUEi0y1DgHEn9Qn1AQ=="}},"magicast@0.5.2":{"resolution":{"integrity":"sha512-E3ZJh4J3S9KfwdjZhe2afj6R9lGIN5Pher1pF39UGrXRqq/VDaGVIGN13BjHd2u8B61hArAGOnso7nBOouW3TQ=="}},"make-dir@4.0.0":{"resolution":{"integrity":"sha512-hXdUTZYIVOt1Ex//jAQi+wTZZpUpwBj/0QsOzqegb3rGMMeJiSEu5xLHnYfBrRV4RH2+OCSOO95Is/7x1WJ4bw=="},"engines":{"node":">=10"}},"markdown-table@3.0.4":{"resolution":{"integrity":"sha512-wiYz4+JrLyb/DqW2hkFJxP7Vd7JuTDm77fvbM8VfEQdmSMqcImWeeRbHwZjBjIFki/VaMK2BhFi7oUUZeM5bqw=="}},"marked@16.4.2":{"resolution":{"integrity":"sha512-TI3V8YYWvkVf3KJe1dRkpnjs68JUPyEa5vjKrp1XEEJUAOaQc+Qj+L1qWbPd0SJuAdQkFU0h73sXXqwDYxsiDA=="},"engines":{"node":">= 20"},"hasBin":true},"math-intrinsics@1.1.0":{"resolution":{"integrity":"sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g=="},"engines":{"node":">= 0.4"}},"mdast-util-find-and-replace@3.0.2":{"resolution":{"integrity":"sha512-Tmd1Vg/m3Xz43afeNxDIhWRtFZgM2VLyaf4vSTYwudTyeuTneoL3qtWMA5jeLyz/O1vDJmmV4QuScFCA2tBPwg=="}},"mdast-util-from-markdown@2.0.2":{"resolution":{"integrity":"sha512-uZhTV/8NBuw0WHkPTrCqDOl0zVe1BIng5ZtHoDk49ME1qqcjYmmLmOf0gELgcRMxN4w2iuIeVso5/6QymSrgmA=="}},"mdast-util-frontmatter@2.0.1":{"resolution":{"integrity":"sha512-LRqI9+wdgC25P0URIJY9vwocIzCcksduHQ9OF2joxQoyTNVduwLAFUzjoopuRJbJAReaKrNQKAZKL3uCMugWJA=="}},"mdast-util-gfm-autolink-literal@2.0.1":{"resolution":{"integrity":"sha512-5HVP2MKaP6L+G6YaxPNjuL0BPrq9orG3TsrZ9YXbA3vDw/ACI4MEsnoDpn6ZNm7GnZgtAcONJyPhOP8tNJQavQ=="}},"mdast-util-gfm-footnote@2.0.0":{"resolution":{"integrity":"sha512-5jOT2boTSVkMnQ7LTrd6n/18kqwjmuYqo7JUPe+tRCY6O7dAuTFMtTPauYYrMPpox9hlN0uOx/FL8XvEfG9/mQ=="}},"mdast-util-gfm-strikethrough@2.0.0":{"resolution":{"integrity":"sha512-mKKb915TF+OC5ptj5bJ7WFRPdYtuHv0yTRxK2tJvi+BDqbkiG7h7u/9SI89nRAYcmap2xHQL9D+QG/6wSrTtXg=="}},"mdast-util-gfm-table@2.0.0":{"resolution":{"integrity":"sha512-78UEvebzz/rJIxLvE7ZtDd/vIQ0RHv+3Mh5DR96p7cS7HsBhYIICDBCu8csTNWNO6tBWfqXPWekRuj2FNOGOZg=="}},"mdast-util-gfm-task-list-item@2.0.0":{"resolution":{"integrity":"sha512-IrtvNvjxC1o06taBAVJznEnkiHxLFTzgonUdy8hzFVeDun0uTjxxrRGVaNFqkU1wJR3RBPEfsxmU6jDWPofrTQ=="}},"mdast-util-gfm@3.0.0":{"resolution":{"integrity":"sha512-dgQEX5Amaq+DuUqf26jJqSK9qgixgd6rYDHAv4aTBuA92cTknZlKpPfa86Z/s8Dj8xsAQpFfBmPUHWJBWqS4Bw=="}},"mdast-util-phrasing@4.1.0":{"resolution":{"integrity":"sha512-TqICwyvJJpBwvGAMZjj4J2n0X8QWp21b9l0o7eXyVJ25YNWYbJDVIyD1bZXE6WtV6RmKJVYmQAKWa0zWOABz2w=="}},"mdast-util-to-hast@13.2.0":{"resolution":{"integrity":"sha512-QGYKEuUsYT9ykKBCMOEDLsU5JRObWQusAolFMeko/tYPufNkRffBAQjIE+99jbA87xv6FgmjLtwjh9wBWajwAA=="}},"mdast-util-to-markdown@2.1.2":{"resolution":{"integrity":"sha512-xj68wMTvGXVOKonmog6LwyJKrYXZPvlwabaryTjLh9LuvovB/KAH+kvi8Gjj+7rJjsFi23nkUxRQv1KqSroMqA=="}},"mdast-util-to-string@4.0.0":{"resolution":{"integrity":"sha512-0H44vDimn51F0YwvxSJSm0eCDOJTRlmN0R1yBh4HLj9wiV1Dn0QoXGbvFAWj2hSItVTlCmBF1hqKlIyUBVFLPg=="}},"mdn-data@2.12.2":{"resolution":{"integrity":"sh
a512-IEn+pegP1aManZuckezWCO+XZQDplx1366JoVhTpMpBB1sPey/SbveZQUosKiKiGYjg1wH4pMlNgXbCiYgihQA=="}},"merge-stream@2.0.0":{"resolution":{"integrity":"sha512-abv/qOcuPfk3URPfDzmZU1LKmuw8kT+0nIHvKrKgFrwifol/doWcdA4ZqsWQ8ENrFKkd67Mfpo/LovbIUsbt3w=="}},"merge2@1.4.1":{"resolution":{"integrity":"sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg=="},"engines":{"node":">= 8"}},"mermaid@11.12.1":{"resolution":{"integrity":"sha512-UlIZrRariB11TY1RtTgUWp65tphtBv4CSq7vyS2ZZ2TgoMjs2nloq+wFqxiwcxlhHUvs7DPGgMjs2aeQxz5h9g=="}},"micromark-core-commonmark@2.0.2":{"resolution":{"integrity":"sha512-FKjQKbxd1cibWMM1P9N+H8TwlgGgSkWZMmfuVucLCHaYqeSvJ0hFeHsIa65pA2nYbes0f8LDHPMrd9X7Ujxg9w=="}},"micromark-extension-frontmatter@2.0.0":{"resolution":{"integrity":"sha512-C4AkuM3dA58cgZha7zVnuVxBhDsbttIMiytjgsM2XbHAB2faRVaHRle40558FBN+DJcrLNCoqG5mlrpdU4cRtg=="}},"micromark-extension-gfm-autolink-literal@2.1.0":{"resolution":{"integrity":"sha512-oOg7knzhicgQ3t4QCjCWgTmfNhvQbDDnJeVu9v81r7NltNCVmhPy1fJRX27pISafdjL+SVc4d3l48Gb6pbRypw=="}},"micromark-extension-gfm-footnote@2.1.0":{"resolution":{"integrity":"sha512-/yPhxI1ntnDNsiHtzLKYnE3vf9JZ6cAisqVDauhp4CEHxlb4uoOTxOCJ+9s51bIB8U1N1FJ1RXOKTIlD5B/gqw=="}},"micromark-extension-gfm-strikethrough@2.1.0":{"resolution":{"integrity":"sha512-ADVjpOOkjz1hhkZLlBiYA9cR2Anf8F4HqZUO6e5eDcPQd0Txw5fxLzzxnEkSkfnD0wziSGiv7sYhk/ktvbf1uw=="}},"micromark-extension-gfm-table@2.1.1":{"resolution":{"integrity":"sha512-t2OU/dXXioARrC6yWfJ4hqB7rct14e8f7m0cbI5hUmDyyIlwv5vEtooptH8INkbLzOatzKuVbQmAYcbWoyz6Dg=="}},"micromark-extension-gfm-tagfilter@2.0.0":{"resolution":{"integrity":"sha512-xHlTOmuCSotIA8TW1mDIM6X2O1SiX5P9IuDtqGonFhEK0qgRI4yeC6vMxEV2dgyr2TiD+2PQ10o+cOhdVAcwfg=="}},"micromark-extension-gfm-task-list-item@2.1.0":{"resolution":{"integrity":"sha512-qIBZhqxqI6fjLDYFTBIa4eivDMnP+OZqsNwmQ3xNLE4Cxwc+zfQEfbs6tzAo2Hjq+bh6q5F+Z8/cksrLFYWQQw=="}},"micromark-extension-gfm@3.0.0":{"resolution":{"integrity":"sha512-vsKArQsicm7t0z2GugkCKtZehqUm31oeGBV/KVSorWSy8ZlNAv7ytjFhvaryUiCUJYqs+NoE6AFhpQvBTM6Q4w=="}},"micromark-factory-destination@2.0.1":{"resolution":{"integrity":"sha512-Xe6rDdJlkmbFRExpTOmRj9N3MaWmbAgdpSrBQvCFqhezUn4AHqJHbaEnfbVYYiexVSs//tqOdY/DxhjdCiJnIA=="}},"micromark-factory-label@2.0.1":{"resolution":{"integrity":"sha512-VFMekyQExqIW7xIChcXn4ok29YE3rnuyveW3wZQWWqF4Nv9Wk5rgJ99KzPvHjkmPXF93FXIbBp6YdW3t71/7Vg=="}},"micromark-factory-space@2.0.1":{"resolution":{"integrity":"sha512-zRkxjtBxxLd2Sc0d+fbnEunsTj46SWXgXciZmHq0kDYGnck/ZSGj9/wULTV95uoeYiK5hRXP2mJ98Uo4cq/LQg=="}},"micromark-factory-title@2.0.1":{"resolution":{"integrity":"sha512-5bZ+3CjhAd9eChYTHsjy6TGxpOFSKgKKJPJxr293jTbfry2KDoWkhBb6TcPVB4NmzaPhMs1Frm9AZH7OD4Cjzw=="}},"micromark-factory-whitespace@2.0.1":{"resolution":{"integrity":"sha512-Ob0nuZ3PKt/n0hORHyvoD9uZhr+Za8sFoP+OnMcnWK5lngSzALgQYKMr9RJVOWLqQYuyn6ulqGWSXdwf6F80lQ=="}},"micromark-util-character@2.1.1":{"resolution":{"integrity":"sha512-wv8tdUTJ3thSFFFJKtpYKOYiGP2+v96Hvk4Tu8KpCAsTMs6yi+nVmGh1syvSCsaxz45J6Jbw+9DD6g97+NV67Q=="}},"micromark-util-chunked@2.0.1":{"resolution":{"integrity":"sha512-QUNFEOPELfmvv+4xiNg2sRYeS/P84pTW0TCgP5zc9FpXetHY0ab7SxKyAQCNCc1eK0459uoLI1y5oO5Vc1dbhA=="}},"micromark-util-classify-character@2.0.1":{"resolution":{"integrity":"sha512-K0kHzM6afW/MbeWYWLjoHQv1sgg2Q9EccHEDzSkxiP/EaagNzCm7T/WMKZ3rjMbvIpvBiZgwR3dKMygtA4mG1Q=="}},"micromark-util-combine-extensions@2.0.1":{"resolution":{"integrity":"sha512-OnAnH8Ujmy59JcyZw8JSbK9cGpdVY44NKgSM7E9Eh7DiLS2E9RNQf0dONaGDzEG9yjEl5hcqeIsj4hfRkLH/Bg=="}},"micromark-util-decode-
numeric-character-reference@2.0.2":{"resolution":{"integrity":"sha512-ccUbYk6CwVdkmCQMyr64dXz42EfHGkPQlBj5p7YVGzq8I7CtjXZJrubAYezf7Rp+bjPseiROqe7G6foFd+lEuw=="}},"micromark-util-decode-string@2.0.1":{"resolution":{"integrity":"sha512-nDV/77Fj6eH1ynwscYTOsbK7rR//Uj0bZXBwJZRfaLEJ1iGBR6kIfNmlNqaqJf649EP0F3NWNdeJi03elllNUQ=="}},"micromark-util-encode@2.0.1":{"resolution":{"integrity":"sha512-c3cVx2y4KqUnwopcO9b/SCdo2O67LwJJ/UyqGfbigahfegL9myoEFoDYZgkT7f36T0bLrM9hZTAaAyH+PCAXjw=="}},"micromark-util-html-tag-name@2.0.1":{"resolution":{"integrity":"sha512-2cNEiYDhCWKI+Gs9T0Tiysk136SnR13hhO8yW6BGNyhOC4qYFnwF1nKfD3HFAIXA5c45RrIG1ub11GiXeYd1xA=="}},"micromark-util-normalize-identifier@2.0.1":{"resolution":{"integrity":"sha512-sxPqmo70LyARJs0w2UclACPUUEqltCkJ6PhKdMIDuJ3gSf/Q+/GIe3WKl0Ijb/GyH9lOpUkRAO2wp0GVkLvS9Q=="}},"micromark-util-resolve-all@2.0.1":{"resolution":{"integrity":"sha512-VdQyxFWFT2/FGJgwQnJYbe1jjQoNTS4RjglmSjTUlpUMa95Htx9NHeYW4rGDJzbjvCsl9eLjMQwGeElsqmzcHg=="}},"micromark-util-sanitize-uri@2.0.1":{"resolution":{"integrity":"sha512-9N9IomZ/YuGGZZmQec1MbgxtlgougxTodVwDzzEouPKo3qFWvymFHWcnDi2vzV1ff6kas9ucW+o3yzJK9YB1AQ=="}},"micromark-util-subtokenize@2.0.3":{"resolution":{"integrity":"sha512-VXJJuNxYWSoYL6AJ6OQECCFGhIU2GGHMw8tahogePBrjkG8aCCas3ibkp7RnVOSTClg2is05/R7maAhF1XyQMg=="}},"micromark-util-symbol@2.0.1":{"resolution":{"integrity":"sha512-vs5t8Apaud9N28kgCrRUdEed4UJ+wWNvicHLPxCa9ENlYuAY31M0ETy5y1vA33YoNPDFTghEbnh6efaE8h4x0Q=="}},"micromark-util-types@2.0.1":{"resolution":{"integrity":"sha512-534m2WhVTddrcKVepwmVEVnUAmtrx9bfIjNoQHRqfnvdaHQiFytEhJoTgpWJvDEXCO5gLTQh3wYC1PgOJA4NSQ=="}},"micromark@4.0.1":{"resolution":{"integrity":"sha512-eBPdkcoCNvYcxQOAKAlceo5SNdzZWfF+FcSupREAzdAh9rRmE239CEQAiTwIgblwnoM8zzj35sZ5ZwvSEOF6Kw=="}},"micromatch@4.0.8":{"resolution":{"integrity":"sha512-PXwfBhYu0hBCPw8Dn0E+WDYb7af3dSLVWKi3HGv84IdF4TyFoC0ysxFd0Goxw7nSv4T/PzEJQxsYsEiFCKo2BA=="},"engines":{"node":">=8.6"}},"mime-db@1.52.0":{"resolution":{"integrity":"sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg=="},"engines":{"node":">= 0.6"}},"mime-types@2.1.35":{"resolution":{"integrity":"sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw=="},"engines":{"node":">= 0.6"}},"mimic-fn@2.1.0":{"resolution":{"integrity":"sha512-OqbOk5oEQeAZ8WXWydlu9HJjz9WVdEIvamMCcXmuqUYjTknH/sqsWvhQ3vgwKFRR1HpjvNBKQ37nbJgYzGqGcg=="},"engines":{"node":">=6"}},"mimic-response@3.1.0":{"resolution":{"integrity":"sha512-z0yWI+4FDrrweS8Zmt4Ej5HdJmky15+L2e6Wgn3+iK5fWzb6T3fhNFq2+MeTRb064c6Wr4N/wv0DzQTjNzHNGQ=="},"engines":{"node":">=10"}},"miniflare@4.20260504.0":{"resolution":{"integrity":"sha512-HeI/HLx+rbeo/UB4qb6NsNcFdUVD7xDzyCexZJTVtFMlfpfexUKEDmdeTRRpzeHrJseZFGua+v9JO1kfPublUw=="},"engines":{"node":">=22.0.0"},"hasBin":true},"minimatch@5.1.9":{"resolution":{"integrity":"sha512-7o1wEA2RyMP7Iu7GNba9vc0RWWGACJOCZBJX2GJWip0ikV+wcOsgVuY9uE8CPiyQhkGFSlhuSkZPavN7u1c2Fw=="},"engines":{"node":">=10"}},"minimatch@9.0.3":{"resolution":{"integrity":"sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="},"engines":{"node":">=16 || 14 >=14.17"}},"minimatch@9.0.5":{"resolution":{"integrity":"sha512-G6T0ZX48xgozx7587koeX9Ys2NYy6Gmv//P89sEte9V9whIapMNF4idKxnW2QtCcLiTWlb/wfCabAtAFWhhBow=="},"engines":{"node":">=16 || 14 >=14.17"}},"minimatch@9.0.9":{"resolution":{"integrity":"sha512-OBwBN9AL4dqmETlpS2zasx+vTeWclWzkblfZk7KTA5j3jeOONz/tRCnZomUyvNg83wL5Zv9Ss6HMJXAgL8R2Yg=="},"engines":{"node":">=16 || 14 
>=14.17"}},"minimist@1.2.8":{"resolution":{"integrity":"sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA=="}},"minipass@3.3.6":{"resolution":{"integrity":"sha512-DxiNidxSEK+tHG6zOIklvNOwm3hvCrbUrdtzY74U6HKTJxvIDfOUL5W5P2Ghd3DTkhhKPYGqeNUIh5qcM4YBfw=="},"engines":{"node":">=8"}},"minipass@5.0.0":{"resolution":{"integrity":"sha512-3FnjYuehv9k6ovOEbyOswadCDPX1piCfhV8ncmYtHOjuPwylVWsghTLo7rabjC3Rx5xD4HDx8Wm1xnMF7S5qFQ=="},"engines":{"node":">=8"}},"minipass@7.1.2":{"resolution":{"integrity":"sha512-qOOzS1cBTWYF4BH8fVePDBOO9iptMnGUEZwNc/cMWnTV2nVLZ7VoNWEPHkYczZA0pdoA7dl6e7FL659nX9S2aw=="},"engines":{"node":">=16 || 14 >=14.17"}},"minipass@7.1.3":{"resolution":{"integrity":"sha512-tEBHqDnIoM/1rXME1zgka9g6Q2lcoCkxHLuc7ODJ5BxbP5d4c2Z5cGgtXAku59200Cx7diuHTOYfSBD8n6mm8A=="},"engines":{"node":">=16 || 14 >=14.17"}},"minizlib@2.1.2":{"resolution":{"integrity":"sha512-bAxsR8BVfj60DWXHE3u30oHzfl4G7khkSuPW+qvpd7jFRHm7dLxOjUk1EHACJ/hxLY8phGJ0YhYHZo7jil7Qdg=="},"engines":{"node":">= 8"}},"mkdirp-classic@0.5.3":{"resolution":{"integrity":"sha512-gKLcREMhtuZRwRAfqP3RFW+TK4JqApVBtOIftVgjuABpAtpxhPGaDcfvbhNvD0B8iD1oUr/txX35NjcaY6Ns/A=="}},"mkdirp@1.0.4":{"resolution":{"integrity":"sha512-vVqVZQyf3WLx2Shd0qJ9xuvqgAyKPLAiqITEtqW0oIUjzo3PePDd6fW9iFz30ef7Ysp/oiWqbhszeGWW2T6Gzw=="},"engines":{"node":">=10"},"hasBin":true},"mlly@1.8.0":{"resolution":{"integrity":"sha512-l8D9ODSRWLe2KHJSifWGwBqpTZXIXTeo8mlKjY+E2HAakaTeNpqAyBZ8GSqLzHgw4XmHmC8whvpjJNMbFZN7/g=="}},"mri@1.2.0":{"resolution":{"integrity":"sha512-tzzskb3bG8LvYGFF/mDTpq3jpI6Q9wc3LEmBaghu+DdCssd1FakN7Bc0hVNmEyGq1bq3RgfkCb3cmQLpNPOroA=="},"engines":{"node":">=4"}},"mrmime@2.0.1":{"resolution":{"integrity":"sha512-Y3wQdFg2Va6etvQ5I82yUhGdsKrcYox6p7FfL1LbK2J4V01F9TGlepTIhnK24t7koZibmg82KGglhA1XK5IsLQ=="},"engines":{"node":">=10"}},"ms@2.1.3":{"resolution":{"integrity":"sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA=="}},"msw@2.10.2":{"resolution":{"integrity":"sha512-RCKM6IZseZQCWcSWlutdf590M8nVfRHG1ImwzOtwz8IYxgT4zhUO0rfTcTvDGiaFE0Rhcc+h43lcF3Jc9gFtwQ=="},"engines":{"node":">=18"},"hasBin":true,"peerDependencies":{"typescript":">= 4.8.x"},"peerDependenciesMeta":{"typescript":{"optional":true}}},"mute-stream@2.0.0":{"resolution":{"integrity":"sha512-WWdIxpyjEn+FhQJQQv9aQAYlHoNVdzIzUySNV1gHUPDSdZJ3yZn7pAAbQcV7B56Mvu881q9FZV+0Vx2xC44VWA=="},"engines":{"node":"^18.17.0 || >=20.5.0"}},"nanoid@3.3.11":{"resolution":{"integrity":"sha512-N8SpfPUnUp1bK+PMYW8qSWdl9U+wwNWI4QKxOYDy9JAro3WMX7p2OeVRF9v+347pnakNevPmiHhNmZ2HbFA76w=="},"engines":{"node":"^10 || ^12 || ^13.7 || ^14 || >=15.0.1"},"hasBin":true},"napi-build-utils@2.0.0":{"resolution":{"integrity":"sha512-GEbrYkbfF7MoNaoh2iGG84Mnf/WZfB0GdGEsM8wz7Expx/LlWf5U8t9nvJKXSp3qr5IsEbK04cBGhol/KwOsWA=="}},"neo-async@2.6.2":{"resolution":{"integrity":"sha512-Yd3UES5mWCSqR+qNT93S3UoYUkqAZ9lLg8a7g9rimsWmYGK8cVToA4/sF3RrshdyV3sAGMXVUmpMYOw+dLpOuw=="}},"netmask@2.1.1":{"resolution":{"integrity":"sha512-eonl3sLUha+S1GzTPxychyhnUzKyeQkZ7jLjKrBagJgPla13F+uQ71HgpFefyHgqrjEbCPkDArxYsjY8/+gLKA=="},"engines":{"node":">= 0.4.0"}},"node-abi@3.89.0":{"resolution":{"integrity":"sha512-6u9UwL0HlAl21+agMN3YAMXcKByMqwGx+pq+P76vii5f7hTPtKDp08/H9py6DY+cfDw7kQNTGEj/rly3IgbNQA=="},"engines":{"node":">=10"}},"node-domexception@1.0.0":{"resolution":{"integrity":"sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ=="},"engines":{"node":">=10.5.0"},"deprecated":"Use your platform's native DOMException 
instead"},"node-fetch@3.3.2":{"resolution":{"integrity":"sha512-dRB78srN/l6gqWulah9SrxeYnxeddIG30+GOqK/9OlLVyLg3HPnr6SqOWTWOXKRwC2eGYCkZ59NNuSgvSrpgOA=="},"engines":{"node":"^12.20.0 || ^14.13.1 || >=16.0.0"}},"node-machine-id@1.1.12":{"resolution":{"integrity":"sha512-QNABxbrPa3qEIfrE6GOJ7BYIuignnJw7iQ2YPbc3Nla1HzRJjXzZOiikfF8m7eAMfichLt3M4VgLOetqgDmgGQ=="}},"node-releases@2.0.19":{"resolution":{"integrity":"sha512-xxOWJsBKtzAq7DY0J+DTzuz58K8e7sJbdgwkbMWQe8UYB6ekmsQ45q0M/tJDsGaZmbC+l7n57UV8Hl5tHxO9uw=="}},"node-releases@2.0.38":{"resolution":{"integrity":"sha512-3qT/88Y3FbH/Kx4szpQQ4HzUbVrHPKTLVpVocKiLfoYvw9XSGOX2FmD2d6DrXbVYyAQTF2HeF6My8jmzx7/CRw=="}},"normalize-path@3.0.0":{"resolution":{"integrity":"sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA=="},"engines":{"node":">=0.10.0"}},"npm-run-path@4.0.1":{"resolution":{"integrity":"sha512-S48WzZW777zhNIrn7gxOlISNAqi9ZC/uQFnRdbeIHhZhCA6UqpkOT8T1G7BvfdgP4Er8gF4sUbaS0i7QvIfCWw=="},"engines":{"node":">=8"}},"nth-check@2.1.1":{"resolution":{"integrity":"sha512-lqjrjmaOoAnWfMmBPL+XNnynZh2+swxiX3WUE0s4yEHI6m+AwrK2UZOimIRl3X/4QctVqS8AiZjFqyOGrMXb/w=="}},"nwsapi@2.2.20":{"resolution":{"integrity":"sha512-/ieB+mDe4MrrKMT8z+mQL8klXydZWGR5Dowt4RAGKbJ3kIGEx3X4ljUo+6V73IXtUPWgfOlU5B9MlGxFO5T+cA=="}},"nx-cloud@19.1.0":{"resolution":{"integrity":"sha512-f24vd5/57/MFSXNMfkerdDiK0EvScGOKO71iOWgJNgI1xVweDRmOA/EfjnPMRd5m+pnoPs/4A7DzuwSW0jZVyw=="},"hasBin":true},"nx@21.4.1":{"resolution":{"integrity":"sha512-nD8NjJGYk5wcqiATzlsLauvyrSHV2S2YmM2HBIKqTTwVP2sey07MF3wDB9U2BwxIjboahiITQ6pfqFgB79TF2A=="},"hasBin":true,"peerDependencies":{"@swc-node/register":"^1.8.0","@swc/core":"^1.3.85"},"peerDependenciesMeta":{"@swc-node/register":{"optional":true},"@swc/core":{"optional":true}}},"obug@2.1.1":{"resolution":{"integrity":"sha512-uTqF9MuPraAQ+IsnPf366RG4cP9RtUi7MLO1N3KEc+wb0a6yKpeL0lmk2IB1jY5KHPAlTc6T/JRdC/YqxHNwkQ=="}},"once@1.4.0":{"resolution":{"integrity":"sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w=="}},"onetime@5.1.2":{"resolution":{"integrity":"sha512-kbpaSSGJTWdAY5KPVeMOKXSrPtr8C8C7wodJbcsd51jRnmD+GZu8Y0VoU6Dm5Z4vWr0Ig/1NKuWRKf7j5aaYSg=="},"engines":{"node":">=6"}},"oniguruma-parser@0.12.1":{"resolution":{"integrity":"sha512-8Unqkvk1RYc6yq2WBYRj4hdnsAxVze8i7iPfQr8e4uSP3tRv0rpZcbGUDvxfQQcdwHt/e9PrMvGCsa8OqG9X3w=="}},"oniguruma-to-es@4.3.3":{"resolution":{"integrity":"sha512-rPiZhzC3wXwE59YQMRDodUwwT9FZ9nNBwQQfsd1wfdtlKEyCdRV0avrTcSZ5xlIvGRVPd/cx6ZN45ECmS39xvg=="}},"open@8.4.2":{"resolution":{"integrity":"sha512-7x81NCL719oNbsq/3mh+hVrAWmFuEYUqrq/Iw3kUzH8ReypT9QQ0BLoJS7/G9k6N81XjW4qHWtjWwe/9eLy1EQ=="},"engines":{"node":">=12"}},"ora@5.3.0":{"resolution":{"integrity":"sha512-zAKMgGXUim0Jyd6CXK9lraBnD3H5yPGBPPOkC23a2BG6hsm4Zu6OQSjQuEtV0BHDf4aKHcUFvJiGRrFuW3MG8g=="},"engines":{"node":">=10"}},"outdent@0.5.0":{"resolution":{"integrity":"sha512-/jHxFIzoMXdqPzTaCpFzAAWhpkSjZPF4Vsn6jAfNpmbH/ymsmd7Qc6VE9BGn0L6YMj6uwpQLxCECpus4ukKS9Q=="}},"outvariant@1.4.3":{"resolution":{"integrity":"sha512-+Sl2UErvtsoajRDKCE5/dBz4DIvHXQQnAxtQTF04OJxY0+DyZXSo5P5Bb7XYWOh81syohlYL24hbDwxedPUJCA=="}},"oxlint@1.26.0":{"resolution":{"integrity":"sha512-KRpL+SMi07JQyggv5ldIF+wt2pnrKm8NLW0B+8bK+0HZsLmH9/qGA+qMWie5Vf7lnlMBllJmsuzHaKFEGY3rIA=="},"engines":{"node":"^20.19.0 || 
>=22.12.0"},"hasBin":true,"peerDependencies":{"oxlint-tsgolint":">=0.4.0"},"peerDependenciesMeta":{"oxlint-tsgolint":{"optional":true}}},"p-filter@2.1.0":{"resolution":{"integrity":"sha512-ZBxxZ5sL2HghephhpGAQdoskxplTwr7ICaehZwLIlfL6acuVgZPm8yBNuRAFBGEqtD/hmUeq9eqLg2ys9Xr/yw=="},"engines":{"node":">=8"}},"p-limit@2.3.0":{"resolution":{"integrity":"sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w=="},"engines":{"node":">=6"}},"p-locate@4.1.0":{"resolution":{"integrity":"sha512-R79ZZ/0wAxKGu3oYMlz8jy/kbhsNrS7SKZ7PxEHBgJ5+F2mtFW2fK2cOtBh1cHYkQsbzFV7I+EoRKe6Yt0oK7A=="},"engines":{"node":">=8"}},"p-map@2.1.0":{"resolution":{"integrity":"sha512-y3b8Kpd8OAN444hxfBbFfj1FY/RjtTd8tzYwhUqNYXx0fXx2iX4maP4Qr6qhIKbQXI02wTLAda4fYUbDagTUFw=="},"engines":{"node":">=6"}},"p-map@7.0.4":{"resolution":{"integrity":"sha512-tkAQEw8ysMzmkhgw8k+1U/iPhWNhykKnSk4Rd5zLoPJCuJaGRPo6YposrZgaxHKzDHdDWWZvE/Sk7hsL2X/CpQ=="},"engines":{"node":">=18"}},"p-try@2.2.0":{"resolution":{"integrity":"sha512-R4nPAVTAU0B9D35/Gk3uJf/7XYbQcyohSKdvAxIRSNghFl4e71hVoGnBNQz9cWaXxO2I10KTC+3jMdvvoKw6dQ=="},"engines":{"node":">=6"}},"pac-proxy-agent@7.2.0":{"resolution":{"integrity":"sha512-TEB8ESquiLMc0lV8vcd5Ql/JAKAoyzHFXaStwjkzpOpC5Yv+pIzLfHvjTSdf3vpa2bMiUQrg9i6276yn8666aA=="},"engines":{"node":">= 14"}},"pac-resolver@7.0.1":{"resolution":{"integrity":"sha512-5NPgf87AT2STgwa2ntRMr45jTKrYBGkVU36yT0ig/n/GMAa3oPqhZfIQ2kMEimReg0+t9kZViDVZ83qfVUlckg=="},"engines":{"node":">= 14"}},"package-json-from-dist@1.0.1":{"resolution":{"integrity":"sha512-UEZIS3/by4OC8vL3P2dTXRETpebLI2NiI5vIrjaD/5UtrkFX/tNbwjTSRAGC/+7CAo2pIcBaRgWmcBBHcsaCIw=="}},"package-manager-detector@0.2.11":{"resolution":{"integrity":"sha512-BEnLolu+yuz22S56CU1SUKq3XC3PkwD5wv4ikR4MfGvnRVcmzXR9DwSlW2fEamyTPyXHomBJRzgapeuBvRNzJQ=="}},"package-manager-detector@1.5.0":{"resolution":{"integrity":"sha512-uBj69dVlYe/+wxj8JOpr97XfsxH/eumMt6HqjNTmJDf/6NO9s+0uxeOneIz3AsPt2m6y9PqzDzd3ATcU17MNfw=="}},"pako@1.0.11":{"resolution":{"integrity":"sha512-4hLB8Py4zZce5s4yd9XzopqwVv/yGNhV1Bl8NTmCq1763HeK2+EwVTv+leGeL13Dnh2wfbqowVPXCIO0z4taYw=="}},"parse5-htmlparser2-tree-adapter@7.1.0":{"resolution":{"integrity":"sha512-ruw5xyKs6lrpo9x9rCZqZZnIUntICjQAd0Wsmp396Ul9lN/h+ifgVV1x1gZHi8euej6wTfpqX8j+BFQxF0NS/g=="}},"parse5-parser-stream@7.1.2":{"resolution":{"integrity":"sha512-JyeQc9iwFLn5TbvvqACIF/VXG6abODeB3Fwmv/TGdLk2LfbWkaySGY72at4+Ty7EkPZj854u4CrICqNk2qIbow=="}},"parse5@7.3.0":{"resolution":{"integrity":"sha512-IInvU7fabl34qmi9gY8XOVxhYyMyuH2xUNpb2q8/Y+7552KlejkRvqvD19nMoUW/uQGGbqNpA6Tufu5FL5BZgw=="}},"parse5@8.0.0":{"resolution":{"integrity":"sha512-9m4m5GSgXjL4AjumKzq1Fgfp3Z8rsvjRNbnkVwfu2ImRqE5D0LnY2QfDen18FSY9C573YU5XxSapdHZTZ2WolA=="}},"path-data-parser@0.1.0":{"resolution":{"integrity":"sha512-NOnmBpt5Y2RWbuv0LMzsayp3lVylAHLPUTut412ZA3l+C4uw4ZVkQbjShYCQ8TCpUMdPapr4YjUqLYD6v68j+w=="}},"path-exists@4.0.0":{"resolution":{"integrity":"sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w=="},"engines":{"node":">=8"}},"path-key@3.1.1":{"resolution":{"integrity":"sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q=="},"engines":{"node":">=8"}},"path-scurry@1.11.1":{"resolution":{"integrity":"sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA=="},"engines":{"node":">=16 || 14 
>=14.18"}},"path-to-regexp@6.3.0":{"resolution":{"integrity":"sha512-Yhpw4T9C6hPpgPeA28us07OJeqZ5EzQTkbfwuhsUg0c237RomFoETJgmp2sa3F/41gfLE6G5cqcYwznmeEeOlQ=="}},"path-type@4.0.0":{"resolution":{"integrity":"sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw=="},"engines":{"node":">=8"}},"pathe@2.0.3":{"resolution":{"integrity":"sha512-WUjGcAqP1gQacoQe+OBJsFA7Ld4DyXuUIjZ5cc75cLHvJ7dtNsTugphxIADwspS+AraAUePCKrSVtPLFj/F88w=="}},"pathval@2.0.1":{"resolution":{"integrity":"sha512-//nshmD55c46FuFw26xV/xFAaB5HF9Xdap7HJBBnrKdAd6/GxDBaNA1870O79+9ueg61cZLSVc+OaFlfmObYVQ=="},"engines":{"node":">= 14.16"}},"pend@1.2.0":{"resolution":{"integrity":"sha512-F3asv42UuXchdzt+xXqfW1OGlVBe+mxa2mqI0pg5yAHZPvFmY3Y6drSf/GQ1A86WgWEN9Kzh/WrgKa6iGcHXLg=="}},"picocolors@1.1.1":{"resolution":{"integrity":"sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA=="}},"picomatch@2.3.1":{"resolution":{"integrity":"sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA=="},"engines":{"node":">=8.6"}},"picomatch@4.0.3":{"resolution":{"integrity":"sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q=="},"engines":{"node":">=12"}},"picomatch@4.0.4":{"resolution":{"integrity":"sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A=="},"engines":{"node":">=12"}},"pify@4.0.1":{"resolution":{"integrity":"sha512-uB80kBFb/tfd68bVleG9T5GGsGPjJrLAUpR5PZIrhBnIaRTQRjqdJSsIKkOP6OAIFbj7GOrcudc5pNjZ+geV2g=="},"engines":{"node":">=6"}},"pkg-types@1.3.1":{"resolution":{"integrity":"sha512-/Jm5M4RvtBFVkKWRu2BLUTNP8/M2a+UwuAX+ae4770q1qVGtfjG+WTCupoZixokjmHiry8uI+dlY8KXYV5HVVQ=="}},"pkg-types@2.3.0":{"resolution":{"integrity":"sha512-SIqCzDRg0s9npO5XQ3tNZioRY1uK06lA41ynBC1YmFTmnY6FjUjVt6s4LoADmwoig1qqD0oK8h1p/8mlMx8Oig=="}},"playwright-core@1.55.0":{"resolution":{"integrity":"sha512-GvZs4vU3U5ro2nZpeiwyb0zuFaqb9sUiAJuyrWpcGouD8y9/HLgGbNRjIph7zU9D3hnPaisMl9zG9CgFi/biIg=="},"engines":{"node":">=18"},"hasBin":true},"playwright@1.55.0":{"resolution":{"integrity":"sha512-sdCWStblvV1YU909Xqx0DhOjPZE4/5lJsIS84IfN9dAZfcl/CIZ5O8l3o0j7hPMjDvqoTF8ZUcc+i/GL5erstA=="},"engines":{"node":">=18"},"hasBin":true},"pngjs@7.0.0":{"resolution":{"integrity":"sha512-LKWqWJRhstyYo9pGvgor/ivk2w94eSjE3RGVuzLGlr3NmD8bf7RcYGze1mNdEHRP6TRP6rMuDHk5t44hnTRyow=="},"engines":{"node":">=14.19.0"}},"points-on-curve@0.2.0":{"resolution":{"integrity":"sha512-0mYKnYYe9ZcqMCWhUjItv/oHjvgEsfKvnUTg8sAtnHr3GVy7rGkXCb6d5cSyqrWqL4k81b9CPg3urd+T7aop3A=="}},"points-on-path@0.2.1":{"resolution":{"integrity":"sha512-25ClnWWuw7JbWZcgqY/gJ4FQWadKxGWk+3kR/7kD0tCaDtPPMj7oHu2ToLaVhfpnHrZzYby2w6tUA0eOIuUg8g=="}},"postcss@8.5.14":{"resolution":{"integrity":"sha512-SoSL4+OSEtR99LHFZQiJLkT59C5B1amGO1NzTwj7TT1qCUgUO6hxOvzkOYxD+vMrXBM3XJIKzokoERdqQq/Zmg=="},"engines":{"node":"^10 || ^12 || >=14"}},"postcss@8.5.6":{"resolution":{"integrity":"sha512-3Ybi1tAuwAP9s0r1UQ2J4n5Y0G05bJkpUIO0/bI9MhwmD70S5aTWbXGBwxHrelT+XM1k6dM0pk+SwNkpTRN7Pg=="},"engines":{"node":"^10 || ^12 || 
>=14"}},"posthog-js@1.321.2":{"resolution":{"integrity":"sha512-h5852d9lYmSNjKWvjDkrmO9/awUU3jayNBEoEBUuMAdfDPc4yYYdxBJeDBxYnCFm6RjCLy4O+vmcwuCRC67EXA=="}},"preact@10.28.2":{"resolution":{"integrity":"sha512-lbteaWGzGHdlIuiJ0l2Jq454m6kcpI1zNje6d8MlGAFlYvP2GO4ibnat7P74Esfz4sPTdM6UxtTwh/d3pwM9JA=="}},"prebuild-install@7.1.3":{"resolution":{"integrity":"sha512-8Mf2cbV7x1cXPUILADGI3wuhfqWvtiLA1iclTDbFRZkgRQS0NqsPZphna9V+HyTEadheuPmjaJMsbzKQFOzLug=="},"engines":{"node":">=10"},"deprecated":"No longer maintained. Please contact the author of the relevant native addon; alternatives are available.","hasBin":true},"prettier@2.8.8":{"resolution":{"integrity":"sha512-tdN8qQGvNjw4CHbY+XXk0JgCXn9QiF21a55rBe5LJAU+kDyC4WQn4+awm2Xfk2lQMk5fKup9XgzTZtGkjBdP9Q=="},"engines":{"node":">=10.13.0"},"hasBin":true},"prettier@3.6.2":{"resolution":{"integrity":"sha512-I7AIg5boAr5R0FFtJ6rCfD+LFsWHp81dolrFD8S79U9tb8Az2nGrJncnMSnys+bpQJfRUzqs9hnA81OAA3hCuQ=="},"engines":{"node":">=14"},"hasBin":true},"pretty-format@27.5.1":{"resolution":{"integrity":"sha512-Qb1gy5OrP5+zDf2Bvnzdl3jsTf1qXVMazbvCoKhtKqVs4/YK4ozX4gKQJJVyNe+cajNPn0KoC0MC3FUmaHWEmQ=="},"engines":{"node":"^10.13.0 || ^12.13.0 || ^14.15.0 || >=15.0.0"}},"pretty-format@30.0.5":{"resolution":{"integrity":"sha512-D1tKtYvByrBkFLe2wHJl2bwMJIiT8rW+XA+TiataH79/FszLQMrpGEvzUVkzPau7OCO0Qnrhpe87PqtOAIB8Yw=="},"engines":{"node":"^18.14.0 || ^20.0.0 || ^22.0.0 || >=24.0.0"}},"process-nextick-args@2.0.1":{"resolution":{"integrity":"sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag=="}},"process@0.11.10":{"resolution":{"integrity":"sha512-cdGef/drWFoydD1JsMzuFf8100nZl+GT+yacc2bEced5f9Rjk4z+WtFUTBu9PhOi9j/jfmBPu0mMEY4wIdAF8A=="},"engines":{"node":">= 0.6.0"}},"progress@2.0.3":{"resolution":{"integrity":"sha512-7PiHtLll5LdnKIMw100I+8xJXR5gW2QwWYkT6iJva0bXitZKa/XMrSbdmg3r2Xnaidz9Qumd0VPaMrZlF9V9sA=="},"engines":{"node":">=0.4.0"}},"property-information@6.5.0":{"resolution":{"integrity":"sha512-PgTgs/BlvHxOu8QuEN7wi5A0OmXaBcHpmCSTehcs6Uuu9IkDIEo13Hy7n898RHfrQ49vKCoGeWZSaAK01nwVig=="}},"property-information@7.1.0":{"resolution":{"integrity":"sha512-TwEZ+X+yCJmYfL7TPUOcvBZ4QfoT5YenQiJuX//0th53DE6w0xxLEtfK3iyryQFddXuvkIk51EEgrJQ0WJkOmQ=="}},"protobufjs@7.5.4":{"resolution":{"integrity":"sha512-CvexbZtbov6jW2eXAvLukXjXUW1TzFaivC46BpWc/3BpcCysb5Vffu+B3XHMm8lVEuy2Mm4XGex8hBSg1yapPg=="},"engines":{"node":">=12.0.0"}},"proxy-agent@6.5.0":{"resolution":{"integrity":"sha512-TmatMXdr2KlRiA2CyDu8GqR8EjahTG3aY3nXjdzFyoZbmB8hrBsTyMezhULIXKnC0jpfjlmiZ3+EaCzoInSu/A=="},"engines":{"node":">= 
14"}},"proxy-from-env@1.1.0":{"resolution":{"integrity":"sha512-D+zkORCbA9f1tdWRK0RaCR3GPv50cMxcrz4X8k5LTSUD1Dkw47mKJEZQNunItRTkWwgtaUSo1RVFRIG9ZXiFYg=="}},"psl@1.15.0":{"resolution":{"integrity":"sha512-JZd3gMVBAVQkSs6HdNZo9Sdo0LNcQeMNP3CozBJb3JYC/QUYZTnKxP+f8oWRX4rHP5EurWxqAHTSwUCjlNKa1w=="}},"pump@3.0.3":{"resolution":{"integrity":"sha512-todwxLMY7/heScKmntwQG8CXVkWUOdYxIvY2s0VWAAMh/nd8SoYiRaKjlr7+iCs984f2P8zvrfWcDDYVb73NfA=="}},"pump@3.0.4":{"resolution":{"integrity":"sha512-VS7sjc6KR7e1ukRFhQSY5LM2uBWAUPiOPa/A3mkKmiMwSmRFUITt0xuj+/lesgnCv+dPIEYlkzrcyXgquIHMcA=="}},"punycode@2.3.1":{"resolution":{"integrity":"sha512-vYt7UD1U9Wg6138shLtLOvdAu+8DsC/ilFtEVHcH+wydcSpNE20AfSOduf6MkRFahL5FY7X1oU7nKVZFtfq8Fg=="},"engines":{"node":">=6"}},"quansync@0.2.11":{"resolution":{"integrity":"sha512-AifT7QEbW9Nri4tAwR5M/uzpBuqfZf+zwaEM/QkzEjj7NBuFD2rBuy0K3dE+8wltbezDV7JMA0WfnCPYRSYbXA=="}},"query-selector-shadow-dom@1.0.1":{"resolution":{"integrity":"sha512-lT5yCqEBgfoMYpf3F2xQRK7zEr1rhIIZuceDK6+xRkJQ4NMbHTwXqk4NkwDwQMNqXgG9r9fyHnzwNVs6zV5KRw=="}},"querystringify@2.2.0":{"resolution":{"integrity":"sha512-FIqgj2EUvTa7R50u0rGsyTftzjYmv/a3hO345bZNrqabNqjtgiDMgmo4mkUjd+nzU5oF3dClKqFIPUKybUyqoQ=="}},"queue-microtask@1.2.3":{"resolution":{"integrity":"sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A=="}},"rc@1.2.8":{"resolution":{"integrity":"sha512-y3bGgqKj3QBdxLbLkomlohkvsA8gdAiUQlSBJnBhfn+BPxg4bc62d8TcBW15wavDfgexCgccckhcZvywyQYPOw=="},"hasBin":true},"react-dom@19.2.0":{"resolution":{"integrity":"sha512-UlbRu4cAiGaIewkPyiRGJk0imDN2T3JjieT6spoL2UeSf5od4n5LB/mQ4ejmxhCFT1tYe8IvaFulzynWovsEFQ=="},"peerDependencies":{"react":"^19.2.0"}},"react-is@17.0.2":{"resolution":{"integrity":"sha512-w2GsyukL62IJnlaff/nRegPQR94C/XXamvMWmSHRJ4y7Ts/4ocGRmTHvOs8PSE6pB3dWOrD/nueuU5sduBsQ4w=="}},"react-is@18.3.1":{"resolution":{"integrity":"sha512-/LLMVyas0ljjAtoYiPqYiL8VWXzUUdThrmU5+n20DZv+a+ClRoevUzw5JxU+Ieh5/c87ytoTBV9G1FiKfNJdmg=="}},"react@19.2.0":{"resolution":{"integrity":"sha512-tmbWg6W31tQLeB5cdIBOicJDJRR2KzXsV7uSK9iNfLWQ5bIZfxuPEHp7M8wiHyHnn0DD1i7w3Zmin0FtkrwoCQ=="},"engines":{"node":">=0.10.0"}},"read-yaml-file@1.1.0":{"resolution":{"integrity":"sha512-VIMnQi/Z4HT2Fxuwg5KrY174U1VdUIASQVWXXyqtNRtxSr9IYkn1rsI6Tb6HsrHCmB7gVpNwX6JxPTHcH6IoTA=="},"engines":{"node":">=6"}},"readable-stream@2.3.8":{"resolution":{"integrity":"sha512-8p0AUk4XODgIewSi0l8Epjs+EVnWiK7NoDIEGU0HhE7+ZyY8D1IMY7odu5lRrFXGg71L15KG8QrPmum45RTtdA=="}},"readable-stream@3.6.2":{"resolution":{"integrity":"sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA=="},"engines":{"node":">= 6"}},"readable-stream@4.7.0":{"resolution":{"integrity":"sha512-oIGGmcpTLwPga8Bn6/Z75SVaH1z5dUut2ibSyAMVhmUggWpmDn2dapB0n7f8nwaSiRtepAsfJyfXIO5DCVAODg=="},"engines":{"node":"^12.22.0 || ^14.17.0 || 
>=16.0.0"}},"readdir-glob@1.1.3":{"resolution":{"integrity":"sha512-v05I2k7xN8zXvPD9N+z/uhXPaj0sUFCe2rcWZIpBsqxfP7xXFQ0tipAd/wjj1YxWyWtUS5IDJpOG82JKt2EAVA=="}},"readdirp@3.6.0":{"resolution":{"integrity":"sha512-hOS089on8RduqdbhvQ5Z37A0ESjsqz6qnRcffsMU3495FuTdqSm+7bhJ29JvIOsBDEEnan5DPu9t3To9VRlMzA=="},"engines":{"node":">=8.10.0"}},"regex-recursion@6.0.2":{"resolution":{"integrity":"sha512-0YCaSCq2VRIebiaUviZNs0cBz1kg5kVS2UKUfNIx8YVs1cN3AV7NTctO5FOKBA+UT2BPJIWZauYHPqJODG50cg=="}},"regex-utilities@2.3.0":{"resolution":{"integrity":"sha512-8VhliFJAWRaUiVvREIiW2NXXTmHs4vMNnSzuJVhscgmGav3g9VDxLrQndI3dZZVVdp0ZO/5v0xmX516/7M9cng=="}},"regex@6.0.1":{"resolution":{"integrity":"sha512-uorlqlzAKjKQZ5P+kTJr3eeJGSVroLKoHmquUj4zHWuR+hEyNqlXsSKlYYF5F4NI6nl7tWCs0apKJ0lmfsXAPA=="}},"rehype-autolink-headings@7.1.0":{"resolution":{"integrity":"sha512-rItO/pSdvnvsP4QRB1pmPiNHUskikqtPojZKJPPPAVx9Hj8i8TwMBhofrrAYRhYOOBZH9tgmG5lPqDLuIWPWmw=="}},"rehype-highlight@7.0.2":{"resolution":{"integrity":"sha512-k158pK7wdC2qL3M5NcZROZ2tR/l7zOzjxXd5VGdcfIyoijjQqpHd3JKtYSBDpDZ38UI2WJWuFAtkMDxmx5kstA=="}},"rehype-minify-whitespace@6.0.2":{"resolution":{"integrity":"sha512-Zk0pyQ06A3Lyxhe9vGtOtzz3Z0+qZ5+7icZ/PL/2x1SHPbKao5oB/g/rlc6BCTajqBb33JcOe71Ye1oFsuYbnw=="}},"rehype-parse@9.0.1":{"resolution":{"integrity":"sha512-ksCzCD0Fgfh7trPDxr2rSylbwq9iYDkSn8TCDmEJ49ljEUBxDVCzCHv7QNzZOfODanX4+bWQ4WZqLCRWYLfhag=="}},"rehype-raw@7.0.0":{"resolution":{"integrity":"sha512-/aE8hCfKlQeA8LmyeyQvQF3eBiLRGNlfBJEvWH7ivp9sBqs7TNqBL5X3v157rM4IFETqDnIOO+z5M/biZbo9Ww=="}},"rehype-remark@10.0.1":{"resolution":{"integrity":"sha512-EmDndlb5NVwXGfUa4c9GPK+lXeItTilLhE6ADSaQuHr4JUlKw9MidzGzx4HpqZrNCt6vnHmEifXQiiA+CEnjYQ=="}},"rehype-sanitize@6.0.0":{"resolution":{"integrity":"sha512-CsnhKNsyI8Tub6L4sm5ZFsme4puGfc6pYylvXo1AeqaGbjOYyzNv3qZPwvs0oMJ39eryyeOdmxwUIo94IpEhqg=="}},"rehype-slug@6.0.0":{"resolution":{"integrity":"sha512-lWyvf/jwu+oS5+hL5eClVd3hNdmwM1kAC0BUvEGD19pajQMIzcNUd/k9GsfQ+FfECvX+JE+e9/btsKH0EjJT6A=="}},"rehype-stringify@10.0.1":{"resolution":{"integrity":"sha512-k9ecfXHmIPuFVI61B9DeLPN0qFHfawM6RsuX48hoqlaKSF61RskNjSm1lI8PhBEM0MRdLxVVm4WmTqJQccH9mA=="}},"remark-frontmatter@5.0.0":{"resolution":{"integrity":"sha512-XTFYvNASMe5iPN0719nPrdItC9aU0ssC4v14mH1BCi1u0n1gAocqcujWUrByftZTbLhRtiKRyjYTSIOcr69UVQ=="}},"remark-gfm@4.0.1":{"resolution":{"integrity":"sha512-1quofZ2RQ9EWdeN34S79+KExV1764+wCUGop5CPL1WGdD0ocPpu91lzPGbwWMECpEpd42kJGQwzRfyov9j4yNg=="}},"remark-parse@11.0.0":{"resolution":{"integrity":"sha512-FCxlKLNGknS5ba/1lmpYijMUzX2esxW5xQqjWxw2eHFfS2MSdaHVINFmhjo+qN1WhZhNimq0dZATN9pH0IDrpA=="}},"remark-rehype@11.1.2":{"resolution":{"integrity":"sha512-Dh7l57ianaEoIpzbp0PC9UKAdCSVklD8E5Rpw7ETfbTl3FqcOOgq5q2LVDhgGCkaBv7p24JXikPdvhhmHvKMsw=="}},"remark-stringify@11.0.0":{"resolution":{"integrity":"sha512-1OSmLd3awB/t8qdoEOMazZkNsfVTeY4fTsgzcQFdXNq8ToTN4ZGwrMnlda4K6smTFKD+GRV6O48i6Z4iKgPPpw=="}},"require-directory@2.1.1":{"resolution":{"integrity":"sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q=="},"engines":{"node":">=0.10.0"}},"require-from-string@2.0.2":{"resolution":{"integrity":"sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw=="},"engines":{"node":">=0.10.0"}},"requires-port@1.0.0":{"resolution":{"integrity":"sha512-KigOCHcocU3XODJxsu8i/j8T9tzT4adHiecwORRQ0ZZFcp7ahwXuRU1m+yuO90C5ZUyGeGfocHDI14M3L3yDAQ=="}},"resolve-from@5.0.0":{"resolution":{"integrity":"sha512-qYg9KP24dD5qka9J47d0aVky0N+b4fTU89LN9iDnjB5waksiC49rvMB0PrUJQGoTmH50XPiqOvAjDfaijGxYZw
=="},"engines":{"node":">=8"}},"resolve-pkg-maps@1.0.0":{"resolution":{"integrity":"sha512-seS2Tj26TBVOC2NIc2rOe2y2ZO7efxITtLZcGSOnHHNOQ7CkiUBfw0Iw2ck6xkIhPwLhKNLS8BO+hEpngQlqzw=="}},"resolve.exports@2.0.3":{"resolution":{"integrity":"sha512-OcXjMsGdhL4XnbShKpAcSqPMzQoYkYyhbEaeSko47MjRP9NfEQMhZkXL1DoFlt9LWQn4YttrdnV6X2OiyzBi+A=="},"engines":{"node":">=10"}},"resq@1.11.0":{"resolution":{"integrity":"sha512-G10EBz+zAAy3zUd/CDoBbXRL6ia9kOo3xRHrMDsHljI0GDkhYlyjwoCx5+3eCC4swi1uCoZQhskuJkj7Gp57Bw=="}},"restore-cursor@3.1.0":{"resolution":{"integrity":"sha512-l+sSefzHpj5qimhFSE5a8nufZYAM3sBSVMAPtYkmC+4EH2anSGaEMXSD0izRQbu9nfyQ9y5JrVmp7E8oZrUjvA=="},"engines":{"node":">=8"}},"reusify@1.0.4":{"resolution":{"integrity":"sha512-U9nH88a3fc/ekCF1l0/UP1IosiuIjyTh7hBvXVMHYgVcfGvt897Xguj2UOLDeI5BG2m7/uwyaLVT6fbtCwTyzw=="},"engines":{"iojs":">=1.0.0","node":">=0.10.0"}},"rgb2hex@0.2.5":{"resolution":{"integrity":"sha512-22MOP1Rh7sAo1BZpDG6R5RFYzR2lYEgwq7HEmyW2qcsOqR2lQKmn+O//xV3YG/0rrhMC6KVX2hU+ZXuaw9a5bw=="}},"robust-predicates@3.0.2":{"resolution":{"integrity":"sha512-IXgzBWvWQwE6PrDI05OvmXUIruQTcoMDzRsOd5CDvHCVLcLHMTSYvOK5Cm46kWqlV3yAbuSpBZdJ5oP5OUoStg=="}},"rolldown@1.0.0-rc.17":{"resolution":{"integrity":"sha512-ZrT53oAKrtA4+YtBWPQbtPOxIbVDbxT0orcYERKd63VJTF13zPcgXTvD4843L8pcsI7M6MErt8QtON6lrB9tyA=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"hasBin":true},"rollup@4.53.2":{"resolution":{"integrity":"sha512-MHngMYwGJVi6Fmnk6ISmnk7JAHRNF0UkuucA0CUW3N3a4KnONPEZz+vUanQP/ZC/iY1Qkf3bwPWzyY84wEks1g=="},"engines":{"node":">=18.0.0","npm":">=8.0.0"},"hasBin":true},"rou3@0.8.1":{"resolution":{"integrity":"sha512-ePa+XGk00/3HuCqrEnK3LxJW7I0SdNg6EFzKUJG73hMAdDcOUC/i/aSz7LSDwLrGr33kal/rqOGydzwl6U7zBA=="}},"roughjs@4.6.6":{"resolution":{"integrity":"sha512-ZUz/69+SYpFN/g/lUlo2FXcIjRkSu3nDarreVdGGndHEBJ6cXPdKguS8JGxwj5HA5xIbVKSmLgr5b3AWxtRfvQ=="}},"rrweb-cssom@0.8.0":{"resolution":{"integrity":"sha512-guoltQEx+9aMf2gDZ0s62EcV8lsXR+0w8915TC3ITdn2YueuNjdAYh/levpU9nFaoChh9RUS5ZdQMrKfVEN9tw=="}},"run-parallel@1.2.0":{"resolution":{"integrity":"sha512-5l4VyZR86LZ/lDxZTR6jqL8AFE2S0IFLMP26AbjsLVADxHdhB/c0GUsH+y39UfCi3dzz8OlQuPmnaJOMoDHQBA=="}},"rw@1.3.3":{"resolution":{"integrity":"sha512-PdhdWy89SiZogBLaw42zdeqtRJ//zFd2PgQavcICDUgJT5oW10QCRKbJ6bg4r0/UY2M6BWd5tkxuGFRvCkgfHQ=="}},"rxjs@7.8.2":{"resolution":{"integrity":"sha512-dhKf903U/PQZY6boNNtAGdWbG85WAbjT/1xYoZIC7FAY0yWapOBQVsVrDl58W86//e1VpMNBtRV4MaXfdMySFA=="}},"safaridriver@0.1.2":{"resolution":{"integrity":"sha512-4R309+gWflJktzPXBQCobbWEHlzC4aK3a+Ov3tz2Ib2aBxiwd11phkdIBH1l0EO22x24CJMUQkpKFumRriCSRg=="}},"safe-buffer@5.1.2":{"resolution":{"integrity":"sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g=="}},"safe-buffer@5.2.1":{"resolution":{"integrity":"sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ=="}},"safer-buffer@2.1.2":{"resolution":{"integrity":"sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg=="}},"sass-embedded-android-arm64@1.89.2":{"resolution":{"integrity":"sha512-+pq7a7AUpItNyPu61sRlP6G2A8pSPpyazASb+8AK2pVlFayCSPAEgpwpCE9A2/Xj86xJZeMizzKUHxM2CBCUxA=="},"engines":{"node":">=14.0.0"},"cpu":["arm64"],"os":["android"]},"sass-embedded-android-arm@1.89.2":{"resolution":{"integrity":"sha512-oHAPTboBHRZlDBhyRB6dvDKh4KvFs+DZibDHXbkSI6dBZxMTT+Yb2ivocHnctVGucKTLQeT7+OM5DjWHyynL/A=="},"engines":{"node":">=14.0.0"},"cpu":["arm"],"os":["android"]},"sass-embedded-android-riscv64@1.89.2":{"resolution":{"integrity":"sha512-HfJJWp/S6XSYv
lGAqNdakeEMPOdhBkj2s2lN6SHnON54rahKem+z9pUbCriUJfM65Z90lakdGuOfidY61R9TYg=="},"engines":{"node":">=14.0.0"},"cpu":["riscv64"],"os":["android"]},"sass-embedded-android-x64@1.89.2":{"resolution":{"integrity":"sha512-BGPzq53VH5z5HN8de6jfMqJjnRe1E6sfnCWFd4pK+CAiuM7iw5Fx6BQZu3ikfI1l2GY0y6pRXzsVLdp/j4EKEA=="},"engines":{"node":">=14.0.0"},"cpu":["x64"],"os":["android"]},"sass-embedded-darwin-arm64@1.89.2":{"resolution":{"integrity":"sha512-UCm3RL/tzMpG7DsubARsvGUNXC5pgfQvP+RRFJo9XPIi6elopY5B6H4m9dRYDpHA+scjVthdiDwkPYr9+S/KGw=="},"engines":{"node":">=14.0.0"},"cpu":["arm64"],"os":["darwin"]},"sass-embedded-darwin-x64@1.89.2":{"resolution":{"integrity":"sha512-D9WxtDY5VYtMApXRuhQK9VkPHB8R79NIIR6xxVlN2MIdEid/TZWi1MHNweieETXhWGrKhRKglwnHxxyKdJYMnA=="},"engines":{"node":">=14.0.0"},"cpu":["x64"],"os":["darwin"]},"sass-embedded-linux-arm64@1.89.2":{"resolution":{"integrity":"sha512-2N4WW5LLsbtrWUJ7iTpjvhajGIbmDR18ZzYRywHdMLpfdPApuHPMDF5CYzHbS+LLx2UAx7CFKBnj5LLjY6eFgQ=="},"engines":{"node":">=14.0.0"},"cpu":["arm64"],"os":["linux"]},"sass-embedded-linux-arm@1.89.2":{"resolution":{"integrity":"sha512-leP0t5U4r95dc90o8TCWfxNXwMAsQhpWxTkdtySDpngoqtTy3miMd7EYNYd1znI0FN1CBaUvbdCMbnbPwygDlA=="},"engines":{"node":">=14.0.0"},"cpu":["arm"],"os":["linux"]},"sass-embedded-linux-musl-arm64@1.89.2":{"resolution":{"integrity":"sha512-nTyuaBX6U1A/cG7WJh0pKD1gY8hbg1m2SnzsyoFG+exQ0lBX/lwTLHq3nyhF+0atv7YYhYKbmfz+sjPP8CZ9lw=="},"engines":{"node":">=14.0.0"},"cpu":["arm64"],"os":["linux"]},"sass-embedded-linux-musl-arm@1.89.2":{"resolution":{"integrity":"sha512-Z6gG2FiVEEdxYHRi2sS5VIYBmp17351bWtOCUZ/thBM66+e70yiN6Eyqjz80DjL8haRUegNQgy9ZJqsLAAmr9g=="},"engines":{"node":">=14.0.0"},"cpu":["arm"],"os":["linux"]},"sass-embedded-linux-musl-riscv64@1.89.2":{"resolution":{"integrity":"sha512-N6oul+qALO0SwGY8JW7H/Vs0oZIMrRMBM4GqX3AjM/6y8JsJRxkAwnfd0fDyK+aICMFarDqQonQNIx99gdTZqw=="},"engines":{"node":">=14.0.0"},"cpu":["riscv64"],"os":["linux"]},"sass-embedded-linux-musl-x64@1.89.2":{"resolution":{"integrity":"sha512-K+FmWcdj/uyP8GiG9foxOCPfb5OAZG0uSVq80DKgVSC0U44AdGjvAvVZkrgFEcZ6cCqlNC2JfYmslB5iqdL7tg=="},"engines":{"node":">=14.0.0"},"cpu":["x64"],"os":["linux"]},"sass-embedded-linux-riscv64@1.89.2":{"resolution":{"integrity":"sha512-g9nTbnD/3yhOaskeqeBQETbtfDQWRgsjHok6bn7DdAuwBsyrR3JlSFyqKc46pn9Xxd9SQQZU8AzM4IR+sY0A0w=="},"engines":{"node":">=14.0.0"},"cpu":["riscv64"],"os":["linux"]},"sass-embedded-linux-x64@1.89.2":{"resolution":{"integrity":"sha512-Ax7dKvzncyQzIl4r7012KCMBvJzOz4uwSNoyoM5IV6y5I1f5hEwI25+U4WfuTqdkv42taCMgpjZbh9ERr6JVMQ=="},"engines":{"node":">=14.0.0"},"cpu":["x64"],"os":["linux"]},"sass-embedded-win32-arm64@1.89.2":{"resolution":{"integrity":"sha512-j96iJni50ZUsfD6tRxDQE2QSYQ2WrfHxeiyAXf41Kw0V4w5KYR/Sf6rCZQLMTUOHnD16qTMVpQi20LQSqf4WGg=="},"engines":{"node":">=14.0.0"},"cpu":["arm64"],"os":["win32"]},"sass-embedded-win32-x64@1.89.2":{"resolution":{"integrity":"sha512-cS2j5ljdkQsb4PaORiClaVYynE9OAPZG/XjbOMxpQmjRIf7UroY4PEIH+Waf+y47PfXFX9SyxhYuw2NIKGbEng=="},"engines":{"node":">=14.0.0"},"cpu":["x64"],"os":["win32"]},"sass-embedded@1.89.2":{"resolution":{"integrity":"sha512-Ack2K8rc57kCFcYlf3HXpZEJFNUX8xd8DILldksREmYXQkRHI879yy8q4mRDJgrojkySMZqmmmW1NxrFxMsYaA=="},"engines":{"node":">=16.0.0"},"hasBin":true},"saxes@6.0.0":{"resolution":{"integrity":"sha512-xAg7SOnEhrm5zI3puOOKyy1OMcMlIJZYNJY7xLBwSze0UjhPLnWfj2GF2EpT0jmzaJKIWKHLsaSSajf35bcYnA=="},"engines":{"node":">=v12.22.7"}},"scheduler@0.27.0":{"resolution":{"integrity":"sha512-eNv+WrVbKu1f3vbYJT/xtiF5syA5HPIMtf9IgY/nKg0sWqzAUEvqY/xm7OcZc/qafLx/iO9FgOmeSAp4v5t
i/Q=="}},"schema-utils@4.3.3":{"resolution":{"integrity":"sha512-eflK8wEtyOE6+hsaRVPxvUKYCpRgzLqDTb8krvAsRIwOGlHoSgYLgBXoubGgLd2fT41/OUYdb48v4k4WWHQurA=="},"engines":{"node":">= 10.13.0"}},"semver@6.3.1":{"resolution":{"integrity":"sha512-BR7VvDCVHO+q2xBEWskxS6DJE1qRnb7DxzUrogb71CWoSficBxYsiAGd+Kl0mmq/MprG9yArRkyrQxTO6XjMzA=="},"hasBin":true},"semver@7.7.2":{"resolution":{"integrity":"sha512-RF0Fw+rO5AMf9MAyaRXI4AV0Ulj5lMHqVxxdSgiVbixSCXoEmmX/jk0CuJw4+3SqroYO9VoUh+HcuJivvtJemA=="},"engines":{"node":">=10"},"hasBin":true},"semver@7.7.3":{"resolution":{"integrity":"sha512-SdsKMrI9TdgjdweUSR9MweHA4EJ8YxHn8DFaDisvhVlUOe4BF1tLD7GAj0lIqWVl+dPb/rExr0Btby5loQm20Q=="},"engines":{"node":">=10"},"hasBin":true},"semver@7.7.4":{"resolution":{"integrity":"sha512-vFKC2IEtQnVhpT78h1Yp8wzwrf8CM+MzKMHGJZfBtzhZNycRFnXsHk6E5TxIkkMsgNS7mdX3AGB7x2QM2di4lA=="},"engines":{"node":">=10"},"hasBin":true},"serialize-error@11.0.3":{"resolution":{"integrity":"sha512-2G2y++21dhj2R7iHAdd0FIzjGwuKZld+7Pl/bTU6YIkrC2ZMbVUjm+luj6A6V34Rv9XfKJDKpTWu9W4Gse1D9g=="},"engines":{"node":">=14.16"}},"seroval-plugins@1.5.4":{"resolution":{"integrity":"sha512-S0xQPhUTefAhNvNWFg0c1J8qJArHt5KdtJ/cFAofo06KD1MVSeFWyl4iiu+ApDIuw0WhjpOfCdgConOfAnLgkw=="},"engines":{"node":">=10"},"peerDependencies":{"seroval":"^1.0"}},"seroval@1.5.4":{"resolution":{"integrity":"sha512-46uFvgrXTVxZcUorgSSRZ4y+ieqLLQRMlG4bnCZKW3qI6BZm7Rg4ntMW4p1mILEEBZWrFlcpp0AyIIlM6jD9iw=="},"engines":{"node":">=10"}},"setimmediate@1.0.5":{"resolution":{"integrity":"sha512-MATJdZp8sLqDl/68LfQmbP8zKPLQNV6BIZoIgrscFDQ+RsvK/BxeDQOgyxKKoh0y/8h3BqVFnCqQ/gd+reiIXA=="}},"sharp@0.34.5":{"resolution":{"integrity":"sha512-Ou9I5Ft9WNcCbXrU9cMgPBcCK8LiwLqcbywW3t4oDV37n1pzpuNLsYiAV8eODnjbtQlSDwZ2cUEeQz4E54Hltg=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"}},"shebang-command@2.0.0":{"resolution":{"integrity":"sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA=="},"engines":{"node":">=8"}},"shebang-regex@3.0.0":{"resolution":{"integrity":"sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A=="},"engines":{"node":">=8"}},"shiki@3.15.0":{"resolution":{"integrity":"sha512-kLdkY6iV3dYbtPwS9KXU7mjfmDm25f5m0IPNFnaXO7TBPcvbUOY72PYXSuSqDzwp+vlH/d7MXpHlKO/x+QoLXw=="}},"siginfo@2.0.0":{"resolution":{"integrity":"sha512-ybx0WO1/8bSBLEWXZvEd7gMW3Sn3JFlW3TvX1nREbDLRNQNaeNN8WK0meBwPdAaOI7TtRRRJn/Es1zhrrCHu7g=="}},"signal-exit@3.0.7":{"resolution":{"integrity":"sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ=="}},"signal-exit@4.1.0":{"resolution":{"integrity":"sha512-bzyZ1e88w9O1iNJbKnOlvYTrWPDl46O1bG0D3XInv+9tkPrxrN8jUUTiFlDkkmKWgn1M6CfIA13SuGqOa9Korw=="},"engines":{"node":">=14"}},"simple-concat@1.0.1":{"resolution":{"integrity":"sha512-cSFtAPtRhljv69IK0hTVZQ+OfE9nePi/rtJmw5UjHeVyVroEqJXP1sFztKUy1qU+xvz3u/sfYJLa947b7nAN2Q=="}},"simple-get@4.0.1":{"resolution":{"integrity":"sha512-brv7p5WgH0jmQJr1ZDDfKDOSeWWg+OVypG99A/5vYGPqJ6pxiaHLy8nxtFjBA7oMa01ebA9gfh1uMCFqOuXxvA=="}},"sirv@3.0.2":{"resolution":{"integrity":"sha512-2wcC/oGxHis/BoHkkPwldgiPSYcpZK3JU28WoMVv55yHJgcZ8rlXvuG9iZggz+sU1d4bRgIGASwyWqjxu3FM0g=="},"engines":{"node":">=18"}},"slash@3.0.0":{"resolution":{"integrity":"sha512-g9Q1haeby36OSStwb4ntCGGGaKsaVSjQ68fBxoQcutl5fS1vuY18H3wSt3jFyFtrkx+Kz0V1G85A4MyAdDMi2Q=="},"engines":{"node":">=8"}},"smart-buffer@4.2.0":{"resolution":{"integrity":"sha512-94hK0Hh8rPqQl2xXc3HsaBoOXKV20MToPkcXvwbISWLEs+64sBq5kFgn2kJDHb1Pry9yrP0dxrCI9RRci7RXKg=="},"engines":{"node":">= 
6.0.0","npm":">= 3.0.0"}},"socks-proxy-agent@8.0.5":{"resolution":{"integrity":"sha512-HehCEsotFqbPW9sJ8WVYB6UbmIMv7kUUORIF2Nncq4VQvBfNBLibW9YZR5dlYCSUhwcD628pRllm7n+E+YTzJw=="},"engines":{"node":">= 14"}},"socks@2.8.8":{"resolution":{"integrity":"sha512-NlGELfPrgX2f1TAAcz0WawlLn+0r3FyhhCRpFFK2CemXenPYvzMWWZINv3eDNo9ucdwme7oCHRY0Jnbs4aIkog=="},"engines":{"node":">= 10.0.0","npm":">= 3.0.0"}},"source-map-js@1.2.1":{"resolution":{"integrity":"sha512-UXWMKhLOwVKb728IUtQPXxfYU+usdybtUrK/8uGE8CQMvrhOpwvzDBwj0QhSL7MQc7vIsISBG8VQ8+IDQxpfQA=="},"engines":{"node":">=0.10.0"}},"source-map-support@0.5.21":{"resolution":{"integrity":"sha512-uBHU3L3czsIyYXKX88fdrGovxdSCoTGDRZ6SYXtSRxLZUzHg5P/66Ht6uoUlHu9EZod+inXhKo3qQgwXUT/y1w=="}},"source-map@0.6.1":{"resolution":{"integrity":"sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g=="},"engines":{"node":">=0.10.0"}},"source-map@0.7.6":{"resolution":{"integrity":"sha512-i5uvt8C3ikiWeNZSVZNWcfZPItFQOsYTUAOkcUPGd8DqDy1uOUikjt5dG+uRlwyvR108Fb9DOd4GvXfT0N2/uQ=="},"engines":{"node":">= 12"}},"space-separated-tokens@2.0.2":{"resolution":{"integrity":"sha512-PEGlAwrG8yXGXRjW32fGbg66JAlOAwbObuqVoJpv/mRgoWDQfgH1wDPvtzWyUSNAXBGSk8h755YDbbcEy3SH2Q=="}},"spacetrim@0.11.59":{"resolution":{"integrity":"sha512-lLYsktklSRKprreOm7NXReW8YiX2VBjbgmXYEziOoGf/qsJqAEACaDvoTtUOycwjpaSh+bT8eu0KrJn7UNxiCg=="}},"spawndamnit@3.0.1":{"resolution":{"integrity":"sha512-MmnduQUuHCoFckZoWnXsTg7JaiLBJrKFj9UI2MbRPGaJeVpsLcVBu6P/IGZovziM/YBsellCmsprgNA+w0CzVg=="}},"split2@4.2.0":{"resolution":{"integrity":"sha512-UcjcJOWknrNkF6PLX83qcHM6KHgVKNkV62Y8a5uYDVv9ydGQVwAHMKqHdJje1VTWpljG0WYpCDhrCdAOYH4TWg=="},"engines":{"node":">= 10.x"}},"sprintf-js@1.0.3":{"resolution":{"integrity":"sha512-D9cPgkvLlV3t3IzL0D0YLvGA9Ahk4PcvVwUbN0dSGr1aP0Nrt4AEnTUbuGvquEC0mA64Gqt1fzirlRs5ibXx8g=="}},"srvx@0.11.15":{"resolution":{"integrity":"sha512-iXsux0UcOjdvs0LCMa2Ws3WwcDUozA3JN3BquNXkaFPP7TpRqgunKdEgoZ/uwb1J6xaYHfxtz9Twlh6yzwM6Tg=="},"engines":{"node":">=20.16.0"},"hasBin":true},"stackback@0.0.2":{"resolution":{"integrity":"sha512-1XMJE5fQo1jGH6Y/7ebnwPOBEkIEnT4QF32d5R1+VXdXveM0IBMJt8zfaxX1P3QhVwrYe+576+jkANtSS2mBbw=="}},"statuses@2.0.2":{"resolution":{"integrity":"sha512-DvEy55V3DB7uknRo+4iOGT5fP1slR8wQohVdknigZPMpMstaKJQWhwiYBACJE3Ul2pTnATihhBYnRhZQHGBiRw=="},"engines":{"node":">= 
0.8"}},"std-env@3.10.0":{"resolution":{"integrity":"sha512-5GS12FdOZNliM5mAOxFRg7Ir0pWz8MdpYm6AY6VPkGpbA7ZzmbzNcBJQ0GPvvyWgcY7QAhCgf9Uy89I03faLkg=="}},"std-env@3.9.0":{"resolution":{"integrity":"sha512-UGvjygr6F6tpH7o2qyqR6QYpwraIjKSdtzyBdyytFOHmPZY917kwdwLG0RbOjWOnKmnm3PeHjaoLLMie7kPLQw=="}},"std-env@4.1.0":{"resolution":{"integrity":"sha512-Rq7ybcX2RuC55r9oaPVEW7/xu3tj8u4GeBYHBWCychFtzMIr86A7e3PPEBPT37sHStKX3+TiX/Fr/ACmJLVlLQ=="}},"streamx@2.25.0":{"resolution":{"integrity":"sha512-0nQuG6jf1w+wddNEEXCF4nTg3LtufWINB5eFEN+5TNZW7KWJp6x87+JFL43vaAUPyCfH1wID+mNVyW6OHtFamg=="}},"strict-event-emitter@0.5.1":{"resolution":{"integrity":"sha512-vMgjE/GGEPEFnhFub6pa4FmJBRBVOLpIII2hvCZ8Kzb7K0hlHo7mQv6xYrBvCL2LtAIBwFUK8wvuJgTVSQ5MFQ=="}},"string-width@4.2.3":{"resolution":{"integrity":"sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g=="},"engines":{"node":">=8"}},"string-width@5.1.2":{"resolution":{"integrity":"sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA=="},"engines":{"node":">=12"}},"string_decoder@1.1.1":{"resolution":{"integrity":"sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg=="}},"string_decoder@1.3.0":{"resolution":{"integrity":"sha512-hkRX8U1WjJFd8LsDJ2yQ/wWWxaopEsABU1XfkM8A+j0+85JAGppt16cr1Whg6KIbb4okU6Mql6BOj+uup/wKeA=="}},"stringify-entities@4.0.4":{"resolution":{"integrity":"sha512-IwfBptatlO+QCJUo19AqvrPNqlVMpW9YEL2LIVY+Rpv2qsjCGxaDLNRgeGsQWJhfItebuJhsGSLjaBbNSQ+ieg=="}},"strip-ansi@6.0.1":{"resolution":{"integrity":"sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A=="},"engines":{"node":">=8"}},"strip-ansi@7.1.2":{"resolution":{"integrity":"sha512-gmBGslpoQJtgnMAvOVqGZpEz9dyoKTCzy2nfz/n8aIFhN/jCE/rCmcxabB6jOOHV+0WNnylOxaxBQPSvcWklhA=="},"engines":{"node":">=12"}},"strip-ansi@7.2.0":{"resolution":{"integrity":"sha512-yDPMNjp4WyfYBkHnjIRLfca1i6KMyGCtsVgoKe/z1+6vukgaENdgGBZt+ZmKPc4gavvEZ5OgHfHdrazhgNyG7w=="},"engines":{"node":">=12"}},"strip-bom@3.0.0":{"resolution":{"integrity":"sha512-vavAMRXOgBVNF6nyEEmL3DBK19iRpDcoIwW+swQ+CbGiu7lju6t+JklA1MHweoWtadgt4ISVUsXLyDq34ddcwA=="},"engines":{"node":">=4"}},"strip-json-comments@2.0.1":{"resolution":{"integrity":"sha512-4gB8na07fecVVkOI6Rs4e7T6NOTki5EmL7TUduTs6bu3EdnSycntVJ4re8kgZA+wx9IueI2Y11bfbgwtzuE0KQ=="},"engines":{"node":">=0.10.0"}},"strip-literal@3.0.0":{"resolution":{"integrity":"sha512-TcccoMhJOM3OebGhSBEmp3UZ2SfDMZUEBdRA/9ynfLi8yYajyWX3JiXArcJt4Umh4vISpspkQIY8ZZoCqjbviA=="}},"strnum@1.1.2":{"resolution":{"integrity":"sha512-vrN+B7DBIoTTZjnPNewwhx6cBA/H+IS7rfW68n7XxC1y7uoiGQBxaKzqucGUgavX15dJgiGztLJ8vxuEzwqBdA=="}},"stylis@4.3.6":{"resolution":{"integrity":"sha512-yQ3rwFWRfwNUY7H5vpU0wfdkNSnvnJinhF9830Swlaxl03zsOjCfmX0ugac+3LtK0lYSgwL/KXc8oYL3mG4YFQ=="}},"supports-color@10.2.2":{"resolution":{"integrity":"sha512-SS+jx45GF1QjgEXQx4NJZV9ImqmO2NPz5FNsIHrsDjh2YsHnawpan7SNQ1o8NuhrbHZy9AZhIoCUiCeaW/C80g=="},"engines":{"node":">=18"}},"supports-color@7.2.0":{"resolution":{"integrity":"sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw=="},"engines":{"node":">=8"}},"supports-color@8.1.1":{"resolution":{"integrity":"sha512-MpUEN2OodtUzxvKQl72cUF7RQ5EiHsGvSsVG0ia9c5RbWGL2CI4C7EpPS8UTBIplnlzZiNuV56w+FuNxy3ty2Q=="},"engines":{"node":">=10"}},"symbol-tree@3.2.4":{"resolution":{"integrity":"sha512-9QNk5KwDF+Bvz+PyObkmSYjI5ksVUYtjW7AU22r2NKcfLJcXp96hkDWU3+XndOsUb+AQ9QhfzfCT2O+CNWT5Tw=="}},"sync-child-process@1.0.2":{"resolution
":{"integrity":"sha512-8lD+t2KrrScJ/7KXCSyfhT3/hRq78rC0wBFqNJXv3mZyn6hW2ypM05JmlSvtqRbeq6jqA94oHbxAr2vYsJ8vDA=="},"engines":{"node":">=16.0.0"}},"sync-message-port@1.2.0":{"resolution":{"integrity":"sha512-gAQ9qrUN/UCypHtGFbbe7Rc/f9bzO88IwrG8TDo/aMKAApKyD6E3W4Cm0EfhfBb6Z6SKt59tTCTfD+n1xmAvMg=="},"engines":{"node":">=16.0.0"}},"tailwindcss@4.2.4":{"resolution":{"integrity":"sha512-HhKppgO81FQof5m6TEnuBWCZGgfRAWbaeOaGT00KOy/Pf/j6oUihdvBpA7ltCeAvZpFhW3j0PTclkxsd4IXYDA=="}},"tapable@2.3.3":{"resolution":{"integrity":"sha512-uxc/zpqFg6x7C8vOE7lh6Lbda8eEL9zmVm/PLeTPBRhh1xCgdWaQ+J1CUieGpIfm2HdtsUpRv+HshiasBMcc6A=="},"engines":{"node":">=6"}},"tar-fs@2.1.4":{"resolution":{"integrity":"sha512-mDAjwmZdh7LTT6pNleZ05Yt65HC3E+NiQzl672vQG38jIrehtJk/J3mNwIg+vShQPcLF/LV7CMnDW6vjj6sfYQ=="}},"tar-fs@3.1.2":{"resolution":{"integrity":"sha512-QGxxTxxyleAdyM3kpFs14ymbYmNFrfY+pHj7Z8FgtbZ7w2//VAgLMac7sT6nRpIHjppXO2AwwEOg0bPFVRcmXw=="}},"tar-stream@2.2.0":{"resolution":{"integrity":"sha512-ujeqbceABgwMZxEJnk2HDY2DlnUZ+9oEcb1KzTVfYHio0UE6dG71n60d8D2I4qNvleWrrXpmjpt7vZeF1LnMZQ=="},"engines":{"node":">=6"}},"tar-stream@3.2.0":{"resolution":{"integrity":"sha512-ojzvCvVaNp6aOTFmG7jaRD0meowIAuPc3cMMhSgKiVWws1GyHbGd/xvnyuRKcKlMpt3qvxx6r0hreCNITP9hIg=="}},"tar@6.2.1":{"resolution":{"integrity":"sha512-DZ4yORTwrbTj/7MZYq2w+/ZFdI6OZ/f9SFHR+71gIVUZhOQPHzVCLpvRnPgyaMpfWxxk/4ONva3GQSyNIKRv6A=="},"engines":{"node":">=10"},"deprecated":"Old versions of tar are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me"},"teex@1.0.1":{"resolution":{"integrity":"sha512-eYE6iEI62Ni1H8oIa7KlDU6uQBtqr4Eajni3wX7rpfXD8ysFx8z0+dri+KWEPWpBsxXfxu58x/0jvTVT1ekOSg=="}},"term-size@2.2.1":{"resolution":{"integrity":"sha512-wK0Ri4fOGjv/XPy8SBHZChl8CM7uMc5VML7SqiQ0zG7+J5Vr+RMQDoHa2CNT6KHUnTGIXH34UDMkPzAUyapBZg=="},"engines":{"node":">=8"}},"terser-webpack-plugin@5.5.0":{"resolution":{"integrity":"sha512-UYhptBwhWvfIjKd/UuFo6D8uq9xpGLDK+z8EDsj/zWhrTaH34cKEbrkMKfV5YWqGBvAYA3tlzZbs2R+qYrbQJA=="},"engines":{"node":">= 
10.13.0"},"peerDependencies":{"@swc/core":"*","esbuild":"*","uglify-js":"*","webpack":"^5.1.0"},"peerDependenciesMeta":{"@swc/core":{"optional":true},"esbuild":{"optional":true},"uglify-js":{"optional":true}}},"terser@5.36.0":{"resolution":{"integrity":"sha512-IYV9eNMuFAV4THUspIRXkLakHnV6XO7FEdtKjf/mDyrnqUg9LnlOn6/RwRvM9SZjR4GUq8Nk8zj67FzVARr74w=="},"engines":{"node":">=10"},"hasBin":true},"test-exclude@7.0.1":{"resolution":{"integrity":"sha512-pFYqmTw68LXVjeWJMST4+borgQP2AyMNbg1BpZh9LbyhUeNkeaPF9gzfPGUAnSMV3qPYdWUwDIjjCLiSDOl7vg=="},"engines":{"node":">=18"}},"text-decoder@1.2.7":{"resolution":{"integrity":"sha512-vlLytXkeP4xvEq2otHeJfSQIRyWxo/oZGEbXrtEEF9Hnmrdly59sUbzZ/QgyWuLYHctCHxFF4tRQZNQ9k60ExQ=="}},"tinybench@2.9.0":{"resolution":{"integrity":"sha512-0+DUvqWMValLmha6lr4kD8iAMK1HzV0/aKnCtWb9v9641TnP/MFb7Pc2bxoxQjTXAErryXVgUOfv2YqNllqGeg=="}},"tinyexec@0.3.2":{"resolution":{"integrity":"sha512-KQQR9yN7R5+OSwaK0XQoj22pwHoTlgYqmUscPYoknOoWCWfj/5/ABTMRi69FrKU5ffPVh5QcFikpWJI/P1ocHA=="}},"tinyexec@1.0.2":{"resolution":{"integrity":"sha512-W/KYk+NFhkmsYpuHq5JykngiOCnxeVL8v8dFnqxSD8qEEdRfXk1SDM6JzNqcERbcGYj9tMrDQBYV9cjgnunFIg=="},"engines":{"node":">=18"}},"tinyglobby@0.2.14":{"resolution":{"integrity":"sha512-tX5e7OM1HnYr2+a2C/4V0htOcSQcoSTH9KgJnVvNm5zm/cyEWKJ7j7YutsH9CxMdtOkkLFy2AHrMci9IM8IPZQ=="},"engines":{"node":">=12.0.0"}},"tinyglobby@0.2.15":{"resolution":{"integrity":"sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ=="},"engines":{"node":">=12.0.0"}},"tinyglobby@0.2.16":{"resolution":{"integrity":"sha512-pn99VhoACYR8nFHhxqix+uvsbXineAasWm5ojXoN8xEwK5Kd3/TrhNn1wByuD52UxWRLy8pu+kRMniEi6Eq9Zg=="},"engines":{"node":">=12.0.0"}},"tinypool@1.1.1":{"resolution":{"integrity":"sha512-Zba82s87IFq9A9XmjiX5uZA/ARWDrB03OHlq+Vw1fSdt0I+4/Kutwy8BP4Y/y/aORMo61FQ0vIb5j44vSo5Pkg=="},"engines":{"node":"^18.0.0 || 
>=20.0.0"}},"tinyrainbow@2.0.0":{"resolution":{"integrity":"sha512-op4nsTR47R6p0vMUUoYl/a+ljLFVtlfaXkLQmqfLR1qHma1h/ysYk4hEXZ880bf2CYgTskvTa/e196Vd5dDQXw=="},"engines":{"node":">=14.0.0"}},"tinyrainbow@3.0.3":{"resolution":{"integrity":"sha512-PSkbLUoxOFRzJYjjxHJt9xro7D+iilgMX/C9lawzVuYiIdcihh9DXmVibBe8lmcFrRi/VzlPjBxbN7rH24q8/Q=="},"engines":{"node":">=14.0.0"}},"tinyrainbow@3.1.0":{"resolution":{"integrity":"sha512-Bf+ILmBgretUrdJxzXM0SgXLZ3XfiaUuOj/IKQHuTXip+05Xn+uyEYdVg0kYDipTBcLrCVyUzAPz7QmArb0mmw=="},"engines":{"node":">=14.0.0"}},"tinyspy@4.0.3":{"resolution":{"integrity":"sha512-t2T/WLB2WRgZ9EpE4jgPJ9w+i66UZfDc8wHh0xrwiRNN+UwH98GIJkTeZqX9rg0i0ptwzqW+uYeIF0T4F8LR7A=="},"engines":{"node":">=14.0.0"}},"tldts-core@6.1.52":{"resolution":{"integrity":"sha512-j4OxQI5rc1Ve/4m/9o2WhWSC4jGc4uVbCINdOEJRAraCi0YqTqgMcxUx7DbmuP0G3PCixoof/RZB0Q5Kh9tagw=="}},"tldts-core@7.0.19":{"resolution":{"integrity":"sha512-lJX2dEWx0SGH4O6p+7FPwYmJ/bu1JbcGJ8RLaG9b7liIgZ85itUVEPbMtWRVrde/0fnDPEPHW10ZsKW3kVsE9A=="}},"tldts@6.1.52":{"resolution":{"integrity":"sha512-fgrDJXDjbAverY6XnIt0lNfv8A0cf7maTEaZxNykLGsLG7XP+5xhjBTrt/ieAsFjAlZ+G5nmXomLcZDkxXnDzw=="},"hasBin":true},"tldts@7.0.19":{"resolution":{"integrity":"sha512-8PWx8tvC4jDB39BQw1m4x8y5MH1BcQ5xHeL2n7UVFulMPH/3Q0uiamahFJ3lXA0zO2SUyRXuVVbWSDmstlt9YA=="},"hasBin":true},"tmp@0.2.5":{"resolution":{"integrity":"sha512-voyz6MApa1rQGUxT3E+BK7/ROe8itEx7vD8/HEvt4xwXucvQ5G5oeEiHkmHZJuBO21RpOf+YYm9MOivj709jow=="},"engines":{"node":">=14.14"}},"to-regex-range@5.0.1":{"resolution":{"integrity":"sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ=="},"engines":{"node":">=8.0"}},"totalist@3.0.1":{"resolution":{"integrity":"sha512-sf4i37nQ2LBx4m3wB74y+ubopq6W/dIzXg0FDGjsYnZHVa1Da8FH853wlL2gtUhg+xJXjfk3kUZS3BRoQeoQBQ=="},"engines":{"node":">=6"}},"tough-cookie@4.1.4":{"resolution":{"integrity":"sha512-Loo5UUvLD9ScZ6jh8beX1T6sO1w2/MpCRpEP7V280GKMVUQ0Jzar2U3UJPsrdbziLEMMhu3Ujnq//rhiFuIeag=="},"engines":{"node":">=6"}},"tough-cookie@5.1.2":{"resolution":{"integrity":"sha512-FVDYdxtnj0G6Qm/DhNPSb8Ju59ULcup3tuJxkFb5K8Bv2pUXILbf0xZWU8PX8Ov19OXljbUyveOFwRMwkXzO+A=="},"engines":{"node":">=16"}},"tough-cookie@6.0.0":{"resolution":{"integrity":"sha512-kXuRi1mtaKMrsLUxz3sQYvVl37B0Ns6MzfrtV5DvJceE9bPyspOqk9xxv7XbZWcfLWbFmm997vl83qUWVJA64w=="},"engines":{"node":">=16"}},"tr46@5.1.1":{"resolution":{"integrity":"sha512-hdF5ZgjTqgAntKkklYw0R03MG2x/bSzTtkxmIRw/sTNV8YXsCJ1tfLAX23lhxhHJlEf3CRCOCGGWw3vI3GaSPw=="},"engines":{"node":">=18"}},"tr46@6.0.0":{"resolution":{"integrity":"sha512-bLVMLPtstlZ4iMQHpFHTR7GAGj2jxi8Dg0s2h2MafAE4uSWF98FC/3MomU51iQAMf8/qDUbKWf5GxuvvVcXEhw=="},"engines":{"node":">=20"}},"tree-kill@1.2.2":{"resolution":{"integrity":"sha512-L0Orpi8qGpRG//Nd+H90vFB+3iHnue1zSSGmNOOCh1GLJ7rUKVwV2HvijphGQS2UmhUZewS9VgvxYIdgr+fG1A=="},"hasBin":true},"trim-lines@3.0.1":{"resolution":{"integrity":"sha512-kRj8B+YHZCc9kQYdWfJB2/oUl9rA99qbowYYBtr4ui4mZyAQ2JpvVBd/6U2YloATfqBhBTSMhTpgBHtU0Mf3Rg=="}},"trim-trailing-lines@2.1.0":{"resolution":{"integrity":"sha512-5UR5Biq4VlVOtzqkm2AZlgvSlDJtME46uV0br0gENbwN4l5+mMKT4b9gJKqWtuL2zAIqajGJGuvbCbcAJUZqBg=="}},"trough@2.2.0":{"resolution":{"integrity":"sha512-tmMpK00BjZiUyVyvrBK7knerNgmgvcV/KLVyuma/SC+TQN167GrMRciANTz09+k3zW8L8t60jWO1GpfkZdjTaw=="}},"ts-algebra@2.0.0":{"resolution":{"integrity":"sha512-FPAhNPFMrkwz76P7cdjdmiShwMynZYN6SgOujD1urY4oNm80Ou9oMdmbR45LotcKOXoy7wSmHkRFE6Mxbrhefw=="}},"ts-dedent@2.2.0":{"resolution":{"integrity":"sha512-q5W7tVM71e2xjHZTlgfTDoPF/SmqKG5hddq9SzR49CH2hayqRKJtQ4mtRlSxKaJlR
/+9rEM+mnBHf7I2/BQcpQ=="},"engines":{"node":">=6.10"}},"tsconfig-paths@4.2.0":{"resolution":{"integrity":"sha512-NoZ4roiN7LnbKn9QqE1amc9DJfzvZXxF4xDavcOWt1BPkdx+m+0gJuPM+S0vCe7zTJMYUP0R8pO2XMr+Y8oLIg=="},"engines":{"node":">=6"}},"tslib@2.8.1":{"resolution":{"integrity":"sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w=="}},"tsx@4.20.5":{"resolution":{"integrity":"sha512-+wKjMNU9w/EaQayHXb7WA7ZaHY6hN8WgfvHNQ3t1PnU91/7O8TcTnIhCDYTZwnt8JsO9IBqZ30Ln1r7pPF52Aw=="},"engines":{"node":">=18.0.0"},"hasBin":true},"tunnel-agent@0.6.0":{"resolution":{"integrity":"sha512-McnNiV1l8RYeY8tBgEpuodCC1mLUdbSN+CYBL7kJsJNInOP8UjDDEwdk6Mw60vdLLrr5NHKZhMAOSrR2NZuQ+w=="}},"type-fest@2.19.0":{"resolution":{"integrity":"sha512-RAH822pAdBgcNMAfWnCBU3CFZcfZ/i1eZjwFU/dsLKumyuuP3niueg2UAukXYF0E2AAoc82ZSSf9J0WQBinzHA=="},"engines":{"node":">=12.20"}},"type-fest@4.26.0":{"resolution":{"integrity":"sha512-OduNjVJsFbifKb57UqZ2EMP1i4u64Xwow3NYXUtBbD4vIwJdQd4+xl8YDou1dlm4DVrtwT/7Ky8z8WyCULVfxw=="},"engines":{"node":">=16"}},"type-fest@4.41.0":{"resolution":{"integrity":"sha512-TeTSQ6H5YHvpqVwBRcnLDCBnDOHWYu7IvGbHT6N8AOymcr9PJGjc1GTtiWZTYg0NCgYwvnYWEkVChQAr9bjfwA=="},"engines":{"node":">=16"}},"typescript@5.8.3":{"resolution":{"integrity":"sha512-p1diW6TqL9L07nNxvRMM7hMMw4c5XOo/1ibL4aAIGmSAt9slTE1Xgw5KWuof2uTOvCg9BY7ZRi+GaF+7sfgPeQ=="},"engines":{"node":">=14.17"},"hasBin":true},"typescript@5.9.3":{"resolution":{"integrity":"sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw=="},"engines":{"node":">=14.17"},"hasBin":true},"ufo@1.6.1":{"resolution":{"integrity":"sha512-9a4/uxlTWJ4+a5i0ooc1rU7C7YOw3wT+UGqdeNNHWnOF9qcMBgLRS+4IYUqbczewFx4mLEig6gawh7X6mFlEkA=="}},"undici-types@6.21.0":{"resolution":{"integrity":"sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ=="}},"undici-types@7.16.0":{"resolution":{"integrity":"sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw=="}},"undici@7.16.0":{"resolution":{"integrity":"sha512-QEg3HPMll0o3t2ourKwOeUAZ159Kn9mx5pnzHRQO8+Wixmh88YdZRiIwat0iNzNNXn0yoEtXJqFpyW7eM8BV7g=="},"engines":{"node":">=20.18.1"}},"undici@7.24.8":{"resolution":{"integrity":"sha512-6KQ/+QxK49Z/p3HO6E5ZCZWNnCasyZLa5ExaVYyvPxUwKtbCPMKELJOqh7EqOle0t9cH/7d2TaaTRRa6Nhs4YQ=="},"engines":{"node":">=20.18.1"}},"undici@7.25.0":{"resolution":{"integrity":"sha512-xXnp4kTyor2Zq+J1FfPI6Eq3ew5h6Vl0F/8d9XU5zZQf1tX9s2Su1/3PiMmUANFULpmksxkClamIZcaUqryHsQ=="},"engines":{"node":">=20.18.1"}},"unenv@2.0.0-rc.24":{"resolution":{"integrity":"sha512-i7qRCmY42zmCwnYlh9H2SvLEypEFGye5iRmEMKjcGi7zk9UquigRjFtTLz0TYqr0ZGLZhaMHl/foy1bZR+Cwlw=="}},"unified@11.0.5":{"resolution":{"integrity":"sha512-xKvGhPWw3k84Qjh8bI3ZeJjqnyadK+GEFtazSfZv/rKeTkTjOJho6mFqh2SM96iIcZokxiOpg78GazTSg8+KHA=="}},"unist-util-find-after@5.0.0":{"resolution":{"integrity":"sha512-amQa0Ep2m6hE2g72AugUItjbuM8X8cGQnFoHk0pGfrFeT9GZhzN5SW8nRsiGKK7Aif4CrACPENkA6P/Lw6fHGQ=="}},"unist-util-is@6.0.0":{"resolution":{"integrity":"sha512-2qCTHimwdxLfz+YzdGfkqNlH0tLi9xjTnHddPmJwtIG9MGsdbutfTc4P+haPD7l7Cjxf/WZj+we5qfVPvvxfYw=="}},"unist-util-position@5.0.0":{"resolution":{"integrity":"sha512-fucsC7HjXvkB5R3kTCO7kUjRdrS0BJt3M/FPxmHMBOm8JQi2BsHAHFsy27E0EolP8rp0NzXsJ+jNPyDWvOJZPA=="}},"unist-util-stringify-position@4.0.0":{"resolution":{"integrity":"sha512-0ASV06AAoKCDkS2+xw5RXJywruurpbC4JZSm7nr7MOt1ojAzvyyaO+UxZf18j8FCF6kmzCZKcAgN/yu2gm2XgQ=="}},"unist-util-visit-parents@6.0.1":{"resolution":{"integrity":"sha512-L/PqWzfTP9l
zzEa6CKs0k2nARxTdZduw3zyh8d2NVBnsyvHjSX4TWse388YrrQKbvI8w20fGjGlhgT96WwKykw=="}},"unist-util-visit@5.0.0":{"resolution":{"integrity":"sha512-MR04uvD+07cwl/yhVuVWAtw+3GOR/knlL55Nd/wAdblk27GCVt3lqpTivy/tkJcZoNPzTwS1Y+KMojlLDhoTzg=="}},"universalify@0.1.2":{"resolution":{"integrity":"sha512-rBJeI5CXAlmy1pV+617WB9J63U6XcazHHF2f2dbJix4XzpUF0RS3Zbj0FGIOCAva5P/d/GBOYaACQ1w+0azUkg=="},"engines":{"node":">= 4.0.0"}},"universalify@0.2.0":{"resolution":{"integrity":"sha512-CJ1QgKmNg3CwvAv/kOFmtnEN05f0D/cn9QntgNOQlQF9dgvVTHj3t+8JPdjqawCHk7V/KA+fbUqzZ9XWhcqPUg=="},"engines":{"node":">= 4.0.0"}},"universalify@2.0.1":{"resolution":{"integrity":"sha512-gptHNQghINnc/vTGIk0SOFGFNXw7JVrlRUtConJRlvaw6DuX0wO5Jeko9sWrMBhh+PsYAZ7oXAiOnf/UKogyiw=="},"engines":{"node":">= 10.0.0"}},"unplugin@3.0.0":{"resolution":{"integrity":"sha512-0Mqk3AT2TZCXWKdcoaufeXNukv2mTrEZExeXlHIOZXdqYoHHr4n51pymnwV8x2BOVxwXbK2HLlI7usrqMpycdg=="},"engines":{"node":"^20.19.0 || >=22.12.0"}},"update-browserslist-db@1.1.3":{"resolution":{"integrity":"sha512-UxhIZQ+QInVdunkDAaiazvvT/+fXL5Osr0JZlJulepYu6Jd7qJtDZjlur0emRlT71EN3ScPoE7gvsuIKKNavKw=="},"hasBin":true,"peerDependencies":{"browserslist":">= 4.21.0"}},"update-browserslist-db@1.2.3":{"resolution":{"integrity":"sha512-Js0m9cx+qOgDxo0eMiFGEueWztz+d4+M3rGlmKPT+T4IS/jP4ylw3Nwpu6cpTTP8R1MAC1kF4VbdLt3ARf209w=="},"hasBin":true,"peerDependencies":{"browserslist":">= 4.21.0"}},"url-parse@1.5.10":{"resolution":{"integrity":"sha512-WypcfiRhfeUP9vvF0j6rw0J3hrWrw6iZv3+22h6iRMJ/8z1Tj6XfLP4DsUix5MhMPnXpiHDoKyoZ/bdCkwBCiQ=="}},"urlpattern-polyfill@10.1.0":{"resolution":{"integrity":"sha512-IGjKp/o0NL3Bso1PymYURCJxMPNAf/ILOpendP9f5B6e1rTJgdgiOvgfoT8VxCAdY+Wisb9uhGaJJf3yZ2V9nw=="}},"use-sync-external-store@1.6.0":{"resolution":{"integrity":"sha512-Pp6GSwGP/NrPIrxVFAIkOQeyw8lFenOHijQWkUTrDvrF4ALqylP2C/KCkeS9dpUM3KvYRQhna5vt7IL95+ZQ9w=="},"peerDependencies":{"react":"^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"}},"userhome@1.0.1":{"resolution":{"integrity":"sha512-5cnLm4gseXjAclKowC4IjByaGsjtAoV6PrOQOljplNB54ReUYJP8HdAFq2muHinSDAh09PPX/uXDPfdxRHvuSA=="},"engines":{"node":">= 0.8.0"}},"util-deprecate@1.0.2":{"resolution":{"integrity":"sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw=="}},"uuid@11.1.0":{"resolution":{"integrity":"sha512-0/A9rDy9P7cJ+8w1c9WD9V//9Wj15Ce2MPz8Ri6032usz+NfePxx5AcN3bN+r6ZL6jEo066/yNYB3tn4pQEx+A=="},"hasBin":true},"varint@6.0.0":{"resolution":{"integrity":"sha512-cXEIW6cfr15lFv563k4GuVuW/fiwjknytD37jIOLSdSWuOI6WnO/oKwmP2FQTU2l01LP8/M5TSAJpzUaGe3uWg=="}},"vfile-location@5.0.3":{"resolution":{"integrity":"sha512-5yXvWDEgqeiYiBe1lbxYF7UMAIm/IcopxMHrMQDq3nvKcjPKIhZklUKL+AE7J7uApI4kwe2snsK+eI6UTj9EHg=="}},"vfile-message@4.0.2":{"resolution":{"integrity":"sha512-jRDZ1IMLttGj41KcZvlrYAaI3CfqpLpfpf+Mfig13viT6NKvRzWZ+lXz0Y5D60w6uJIBAOGq9mSHf0gktF0duw=="}},"vfile@6.0.3":{"resolution":{"integrity":"sha512-KzIbH/9tXat2u30jf+smMwFCsno4wHVdNmzFyL+T/L3UGqqk6JKfVqOFOZEpZSHADH1k40ab6NUIXZq422ov3Q=="}},"vite-node@3.2.4":{"resolution":{"integrity":"sha512-EbKSKh+bh1E1IFxeO0pg1n4dvoOTt0UDiXMd/qn++r98+jPO1xtJilvXldeuQ8giIB5IkpjCgMleHMNEsGH6pg=="},"engines":{"node":"^18.0.0 || ^20.0.0 || >=22.0.0"},"hasBin":true},"vite-plugin-static-copy@4.1.0":{"resolution":{"integrity":"sha512-9XOarNV7LgP0KBB7AApxdgFikLXx3daZdqjC3AevYsL6MrUH62zphonLUs2a6LZc1HN1GY+vQdheZ8VVJb6dQQ=="},"engines":{"node":"^22.0.0 || >=24.0.0"},"peerDependencies":{"vite":"^6.0.0 || ^7.0.0 || 
^8.0.0"}},"vite@7.2.7":{"resolution":{"integrity":"sha512-ITcnkFeR3+fI8P1wMgItjGrR10170d8auB4EpMLPqmx6uxElH3a/hHGQabSHKdqd4FXWO1nFIp9rRn7JQ34ACQ=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"hasBin":true,"peerDependencies":{"@types/node":"^20.19.0 || >=22.12.0","jiti":">=1.21.0","less":"^4.0.0","lightningcss":"^1.21.0","sass":"^1.70.0","sass-embedded":"^1.70.0","stylus":">=0.54.8","sugarss":"^5.0.0","terser":"^5.16.0","tsx":"^4.8.1","yaml":"^2.4.2"},"peerDependenciesMeta":{"@types/node":{"optional":true},"jiti":{"optional":true},"less":{"optional":true},"lightningcss":{"optional":true},"sass":{"optional":true},"sass-embedded":{"optional":true},"stylus":{"optional":true},"sugarss":{"optional":true},"terser":{"optional":true},"tsx":{"optional":true},"yaml":{"optional":true}}},"vite@8.0.10":{"resolution":{"integrity":"sha512-rZuUu9j6J5uotLDs+cAA4O5H4K1SfPliUlQwqa6YEwSrWDZzP4rhm00oJR5snMewjxF5V/K3D4kctsUTsIU9Mw=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"hasBin":true,"peerDependencies":{"@types/node":"^20.19.0 || >=22.12.0","@vitejs/devtools":"^0.1.0","esbuild":"^0.27.0 || ^0.28.0","jiti":">=1.21.0","less":"^4.0.0","sass":"^1.70.0","sass-embedded":"^1.70.0","stylus":">=0.54.8","sugarss":"^5.0.0","terser":"^5.16.0","tsx":"^4.8.1","yaml":"^2.4.2"},"peerDependenciesMeta":{"@types/node":{"optional":true},"@vitejs/devtools":{"optional":true},"esbuild":{"optional":true},"jiti":{"optional":true},"less":{"optional":true},"sass":{"optional":true},"sass-embedded":{"optional":true},"stylus":{"optional":true},"sugarss":{"optional":true},"terser":{"optional":true},"tsx":{"optional":true},"yaml":{"optional":true}}},"vitefu@1.1.1":{"resolution":{"integrity":"sha512-B/Fegf3i8zh0yFbpzZ21amWzHmuNlLlmJT6n7bu5e+pCHUKQIfXSYokrqOBGEMMe9UG2sostKQF9mml/vYaWJQ=="},"peerDependencies":{"vite":"^3.0.0 || ^4.0.0 || ^5.0.0 || ^6.0.0 || ^7.0.0-beta.0"},"peerDependenciesMeta":{"vite":{"optional":true}}},"vitest@3.2.4":{"resolution":{"integrity":"sha512-LUCP5ev3GURDysTWiP47wRRUpLKMOfPh+yKTx3kVIEiu5KOMeqzpnYNsKyOoVrULivR8tLcks4+lga33Whn90A=="},"engines":{"node":"^18.0.0 || ^20.0.0 || >=22.0.0"},"hasBin":true,"peerDependencies":{"@edge-runtime/vm":"*","@types/debug":"^4.1.12","@types/node":"^18.0.0 || ^20.0.0 || >=22.0.0","@vitest/browser":"3.2.4","@vitest/ui":"3.2.4","happy-dom":"*","jsdom":"*"},"peerDependenciesMeta":{"@edge-runtime/vm":{"optional":true},"@types/debug":{"optional":true},"@types/node":{"optional":true},"@vitest/browser":{"optional":true},"@vitest/ui":{"optional":true},"happy-dom":{"optional":true},"jsdom":{"optional":true}}},"vitest@4.0.18":{"resolution":{"integrity":"sha512-hOQuK7h0FGKgBAas7v0mSAsnvrIgAvWmRFjmzpJ7SwFHH3g1k2u37JtYwOwmEKhK6ZO3v9ggDBBm0La1LCK4uQ=="},"engines":{"node":"^20.0.0 || ^22.0.0 || >=24.0.0"},"hasBin":true,"peerDependencies":{"@edge-runtime/vm":"*","@opentelemetry/api":"^1.9.0","@types/node":"^20.0.0 || ^22.0.0 || 
>=24.0.0","@vitest/browser-playwright":"4.0.18","@vitest/browser-preview":"4.0.18","@vitest/browser-webdriverio":"4.0.18","@vitest/ui":"4.0.18","happy-dom":"*","jsdom":"*"},"peerDependenciesMeta":{"@edge-runtime/vm":{"optional":true},"@opentelemetry/api":{"optional":true},"@types/node":{"optional":true},"@vitest/browser-playwright":{"optional":true},"@vitest/browser-preview":{"optional":true},"@vitest/browser-webdriverio":{"optional":true},"@vitest/ui":{"optional":true},"happy-dom":{"optional":true},"jsdom":{"optional":true}}},"vitest@4.1.5":{"resolution":{"integrity":"sha512-9Xx1v3/ih3m9hN+SbfkUyy0JAs72ap3r7joc87XL6jwF0jGg6mFBvQ1SrwaX+h8BlkX6Hz9shdd1uo6AF+ZGpg=="},"engines":{"node":"^20.0.0 || ^22.0.0 || >=24.0.0"},"hasBin":true,"peerDependencies":{"@edge-runtime/vm":"*","@opentelemetry/api":"^1.9.0","@types/node":"^20.0.0 || ^22.0.0 || >=24.0.0","@vitest/browser-playwright":"4.1.5","@vitest/browser-preview":"4.1.5","@vitest/browser-webdriverio":"4.1.5","@vitest/coverage-istanbul":"4.1.5","@vitest/coverage-v8":"4.1.5","@vitest/ui":"4.1.5","happy-dom":"*","jsdom":"*","vite":"^6.0.0 || ^7.0.0 || ^8.0.0"},"peerDependenciesMeta":{"@edge-runtime/vm":{"optional":true},"@opentelemetry/api":{"optional":true},"@types/node":{"optional":true},"@vitest/browser-playwright":{"optional":true},"@vitest/browser-preview":{"optional":true},"@vitest/browser-webdriverio":{"optional":true},"@vitest/coverage-istanbul":{"optional":true},"@vitest/coverage-v8":{"optional":true},"@vitest/ui":{"optional":true},"happy-dom":{"optional":true},"jsdom":{"optional":true}}},"vscode-jsonrpc@8.2.0":{"resolution":{"integrity":"sha512-C+r0eKJUIfiDIfwJhria30+TYWPtuHJXHtI7J0YlOmKAo7ogxP20T0zxB7HZQIFhIyvoBPwWskjxrvAtfjyZfA=="},"engines":{"node":">=14.0.0"}},"vscode-languageserver-protocol@3.17.5":{"resolution":{"integrity":"sha512-mb1bvRJN8SVznADSGWM9u/b07H7Ecg0I3OgXDuLdn307rl/J3A9YD6/eYOssqhecL27hK1IPZAsaqh00i/Jljg=="}},"vscode-languageserver-textdocument@1.0.12":{"resolution":{"integrity":"sha512-cxWNPesCnQCcMPeenjKKsOCKQZ/L6Tv19DTRIGuLWe32lyzWhihGVJ/rcckZXJxfdKCFvRLS3fpBIsV/ZGX4zA=="}},"vscode-languageserver-types@3.17.5":{"resolution":{"integrity":"sha512-Ld1VelNuX9pdF39h2Hgaeb5hEZM2Z3jUrrMgWQAu82jMtZp7p3vJT3BzToKtZI7NgQssZje5o0zryOrhQvzQAg=="}},"vscode-languageserver@9.0.1":{"resolution":{"integrity":"sha512-woByF3PDpkHFUreUa7Hos7+pUWdeWMXRd26+ZX2A8cFx6v/JPTtd4/uN0/jB6XQHYaOlHbio03NTHCqrgG5n7g=="},"hasBin":true},"vscode-uri@3.0.8":{"resolution":{"integrity":"sha512-AyFQ0EVmsOZOlAnxoFOGOq1SQDWAB7C6aqMGS23svWAllfOaxbuFvcT8D1i8z3Gyn8fraVeZNNmN6e9bxxXkKw=="}},"w3c-xmlserializer@5.0.0":{"resolution":{"integrity":"sha512-o8qghlI8NZHU1lLPrpi2+Uq7abh4GGPpYANlalzWxyWteJOCsr/P+oPBA49TOLu5FTZO4d3F9MnWJfiMo4BkmA=="},"engines":{"node":">=18"}},"wait-port@1.1.0":{"resolution":{"integrity":"sha512-3e04qkoN3LxTMLakdqeWth8nih8usyg+sf1Bgdf9wwUkp05iuK1eSY/QpLvscT/+F/gA89+LpUmmgBtesbqI2Q=="},"engines":{"node":">=10"},"hasBin":true},"watchpack@2.5.1":{"resolution":{"integrity":"sha512-Zn5uXdcFNIA1+1Ei5McRd+iRzfhENPCe7LeABkJtNulSxjma+l7ltNx55BWZkRlwRnpOgHqxnjyaDgJnNXnqzg=="},"engines":{"node":">=10.13.0"}},"wcwidth@1.0.1":{"resolution":{"integrity":"sha512-XHPEwS0q6TaxcvG85+8EYkbiCux2XtWG2mkc47Ng2A77BQu9+DqIOJldST4HgPkuea7dvKSj5VgX3P1d4rW8Tg=="}},"web-namespaces@2.0.1":{"resolution":{"integrity":"sha512-bKr1DkiNa2krS7qxNtdrtHAmzuYGFQLiQ13TsorsdT6ULTkPLKuu5+GsFpDlg6JFjUTwX2DyhMPG2be8uPrqsQ=="}},"web-streams-polyfill@3.3.3":{"resolution":{"integrity":"sha512-d2JWLCivmZYTSIoge9MsgFCZrt571BikcWGYkjC1khllbTeDlGqZ2D8vD8E/lJa8WGWbb7Plm8/XJYV7IJHZZw=="},"
engines":{"node":">= 8"}},"web-vitals@4.2.4":{"resolution":{"integrity":"sha512-r4DIlprAGwJ7YM11VZp4R884m0Vmgr6EAKe3P+kO0PPj3Unqyvv59rczf6UiGcb9Z8QxZVcqKNwv/g0WNdWwsw=="}},"web-vitals@5.1.0":{"resolution":{"integrity":"sha512-ArI3kx5jI0atlTtmV0fWU3fjpLmq/nD3Zr1iFFlJLaqa5wLBkUSzINwBPySCX/8jRyjlmy1Volw1kz1g9XE4Jg=="}},"webdriver@9.2.0":{"resolution":{"integrity":"sha512-UrhuHSLq4m3OgncvX75vShfl5w3gmjAy8LvLb6/L6V+a+xcqMRelFx/DQ72Mr84F4m8Li6wjtebrOH1t9V/uOQ=="},"engines":{"node":">=18.20.0"}},"webdriverio@9.2.1":{"resolution":{"integrity":"sha512-AI7xzqTmFiU7oAx4fpEF1U1MA7smhCPVDeM0gxPqG5qWepzib3WDX2SsRtcmhdVW+vLJ3m4bf8rAXxZ2M1msWA=="},"engines":{"node":">=18.20.0"},"peerDependencies":{"puppeteer-core":"^22.3.0"},"peerDependenciesMeta":{"puppeteer-core":{"optional":true}}},"webidl-conversions@7.0.0":{"resolution":{"integrity":"sha512-VwddBukDzu71offAQR975unBIGqfKZpM+8ZX6ySk8nYhVoo5CYaZyzt3YBvYtRtO+aoGlqxPg/B87NGVZ/fu6g=="},"engines":{"node":">=12"}},"webidl-conversions@8.0.0":{"resolution":{"integrity":"sha512-n4W4YFyz5JzOfQeA8oN7dUYpR+MBP3PIUsn2jLjWXwK5ASUzt0Jc/A5sAUZoCYFJRGF0FBKJ+1JjN43rNdsQzA=="},"engines":{"node":">=20"}},"webpack-sources@3.4.1":{"resolution":{"integrity":"sha512-eACpxRN02yaawnt+uUNIF7Qje6A9zArxBbcAJjK1PK3S9Ycg5jIuJ8pW4q8EMnwNZCEGltcjkRx1QzOxOkKD8A=="},"engines":{"node":">=10.13.0"}},"webpack-virtual-modules@0.6.2":{"resolution":{"integrity":"sha512-66/V2i5hQanC51vBQKPH4aI8NMAcBW59FVBs+rC7eGHupMyfn34q7rZIE+ETlJ+XTevqfUhVVBgSUNSW2flEUQ=="}},"webpack@5.99.9":{"resolution":{"integrity":"sha512-brOPwM3JnmOa+7kd3NsmOUOwbDAj8FT9xDsG3IW0MgbN9yZV7Oi/s/+MNQ/EcSMqw7qfoRyXPoeEWT8zLVdVGg=="},"engines":{"node":">=10.13.0"},"hasBin":true,"peerDependencies":{"webpack-cli":"*"},"peerDependenciesMeta":{"webpack-cli":{"optional":true}}},"whatwg-encoding@3.1.1":{"resolution":{"integrity":"sha512-6qN4hJdMwfYBtE3YBTTHhoeuUrDBPZmbQaxWAqSALV/MeEnR5z1xd8UKud2RAkFoPkmB+hli1TZSnyi84xz1vQ=="},"engines":{"node":">=18"},"deprecated":"Use @exodus/bytes instead for a more spec-conformant and faster implementation"},"whatwg-mimetype@3.0.0":{"resolution":{"integrity":"sha512-nt+N2dzIutVRxARx1nghPKGv1xHikU7HKdfafKkLNLindmPU/ch3U31NOCGGA/dmPcmb1VlofO0vnKAcsm0o/Q=="},"engines":{"node":">=12"}},"whatwg-mimetype@4.0.0":{"resolution":{"integrity":"sha512-QaKxh0eNIi2mE9p2vEdzfagOKHCcj1pJ56EEHGQOVxp8r9/iszLUUV7v89x9O1p/T+NlTM5W7jW6+cz4Fq1YVg=="},"engines":{"node":">=18"}},"whatwg-url@14.2.0":{"resolution":{"integrity":"sha512-De72GdQZzNTUBBChsXueQUnPKDkg/5A5zp7pFDuQAj5UFoENpiACU0wlCvzpAGnTkj++ihpKwKyYewn/XNUbKw=="},"engines":{"node":">=18"}},"whatwg-url@15.1.0":{"resolution":{"integrity":"sha512-2ytDk0kiEj/yu90JOAp44PVPUkO9+jVhyf+SybKlRHSDlvOOZhdPIrr7xTH64l4WixO2cP+wQIcgujkGBPPz6g=="},"engines":{"node":">=20"}},"which@2.0.2":{"resolution":{"integrity":"sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA=="},"engines":{"node":">= 8"},"hasBin":true},"which@4.0.0":{"resolution":{"integrity":"sha512-GlaYyEb07DPxYCKhKzplCWBJtvxZcZMrL+4UkrTSJHHPyZU4mYYTv3qaOe77H7EODLSSopAUFAc6W8U4yqvscg=="},"engines":{"node":"^16.13.0 || 
>=18.0.0"},"hasBin":true},"why-is-node-running@2.3.0":{"resolution":{"integrity":"sha512-hUrmaWBdVDcxvYqnyh09zunKzROWjbZTiNy8dBEjkS7ehEDQibXJ7XvlmtbwuTclUiIyN+CyXQD4Vmko8fNm8w=="},"engines":{"node":">=8"},"hasBin":true},"workerd@1.20260504.1":{"resolution":{"integrity":"sha512-AQTXSHbYNP9tLPgJNn0TmizyE4aDh2VuZZXlTAL0uu4fbCY436NAnQSJIzZbaFHM3DnAtVs9G8tkiJztSdYqDg=="},"engines":{"node":">=16"},"hasBin":true},"wrangler@4.88.0":{"resolution":{"integrity":"sha512-f470QwbeT/JM1S0duq+sLtkss7UBxIFDtYHgujv9tdQUyA/dLGDq51am0rqrsuFtCi97lTM1P5sqtt8xra1AlA=="},"engines":{"node":">=22.0.0"},"hasBin":true,"peerDependencies":{"@cloudflare/workers-types":"^4.20260504.1"},"peerDependenciesMeta":{"@cloudflare/workers-types":{"optional":true}}},"wrap-ansi@6.2.0":{"resolution":{"integrity":"sha512-r6lPcBGxZXlIcymEu7InxDMhdW0KDxpLgoFLcguasxCaJ/SOIZwINatK9KY/tf+ZrlywOKU0UDj3ATXUBfxJXA=="},"engines":{"node":">=8"}},"wrap-ansi@7.0.0":{"resolution":{"integrity":"sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q=="},"engines":{"node":">=10"}},"wrap-ansi@8.1.0":{"resolution":{"integrity":"sha512-si7QWI6zUMq56bESFvagtmzMdGOtoxfR+Sez11Mobfc7tm+VkUckk9bW2UeffTGVUbOksxmSw0AA2gs8g71NCQ=="},"engines":{"node":">=12"}},"wrappy@1.0.2":{"resolution":{"integrity":"sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ=="}},"ws@8.18.0":{"resolution":{"integrity":"sha512-8VbfWfHLbbwu3+N6OKsOMpBdT4kXPDDB9cJk2bJ6mh9ucxdlnNvH1e+roYkKmN9Nxw2yjz7VzeO9oOz2zJ04Pw=="},"engines":{"node":">=10.0.0"},"peerDependencies":{"bufferutil":"^4.0.1","utf-8-validate":">=5.0.2"},"peerDependenciesMeta":{"bufferutil":{"optional":true},"utf-8-validate":{"optional":true}}},"ws@8.18.3":{"resolution":{"integrity":"sha512-PEIGCY5tSlUt50cqyMXfCzX+oOPqN0vuGqWzbcJ2xvnkzkq46oOpz7dQaTDBdfICb4N14+GARUDw2XV2N4tvzg=="},"engines":{"node":">=10.0.0"},"peerDependencies":{"bufferutil":"^4.0.1","utf-8-validate":">=5.0.2"},"peerDependenciesMeta":{"bufferutil":{"optional":true},"utf-8-validate":{"optional":true}}},"ws@8.20.0":{"resolution":{"integrity":"sha512-sAt8BhgNbzCtgGbt2OxmpuryO63ZoDk/sqaB/znQm94T4fCEsy/yV+7CdC1kJhOU9lboAEU7R3kquuycDoibVA=="},"engines":{"node":">=10.0.0"},"peerDependencies":{"bufferutil":"^4.0.1","utf-8-validate":">=5.0.2"},"peerDependenciesMeta":{"bufferutil":{"optional":true},"utf-8-validate":{"optional":true}}},"xml-name-validator@5.0.0":{"resolution":{"integrity":"sha512-EvGK8EJ3DhaHfbRlETOWAS5pO9MZITeauHKJyb8wyajUfQUenkIg2MvLDTZ4T/TgIcm3HU0TFBgWWboAZ30UHg=="},"engines":{"node":">=18"}},"xmlbuilder2@4.0.3":{"resolution":{"integrity":"sha512-bx8Q1STctnNaaDymWnkfQLKofs0mGNN7rLLapJlGuV3VlvegD7Ls4ggMjE3aUSWItCCzU0PEv45lI87iSigiCA=="},"engines":{"node":">=20.0"}},"xmlchars@2.2.0":{"resolution":{"integrity":"sha512-JZnDKK8B0RCDw84FNdDAIpZK+JuJw+s7Lz8nksI7SIuU3UXJJslUthsi+uWBUYOwPFwW7W7PRLRfUKpxjtjFCw=="}},"y18n@5.0.8":{"resolution":{"integrity":"sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA=="},"engines":{"node":">=10"}},"yallist@3.1.1":{"resolution":{"integrity":"sha512-a4UGQaWPH59mOXUYnAG2ewncQS4i4F43Tv3JoAM+s2VDAmS9NsK8GpDMLrCHPksFT7h3K6TOoUNn2pb7RoXx4g=="}},"yallist@4.0.0":{"resolution":{"integrity":"sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A=="}},"yaml@2.8.1":{"resolution":{"integrity":"sha512-lcYcMxX2PO9XMGvAJkJ3OsNMw+/7FKes7/hgerGUYWIoWu5j/+YQqcZr5JnPZWzOsEBgMbSbiSTn/dv/69Mkpw=="},"engines":{"node":">= 
14.6"},"hasBin":true},"yargs-parser@21.1.1":{"resolution":{"integrity":"sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw=="},"engines":{"node":">=12"}},"yargs-parser@22.0.0":{"resolution":{"integrity":"sha512-rwu/ClNdSMpkSrUb+d6BRsSkLUq1fmfsY6TOpYzTwvwkg1/NRG85KBy3kq++A8LKQwX6lsu+aWad+2khvuXrqw=="},"engines":{"node":"^20.19.0 || ^22.12.0 || >=23"}},"yargs@17.7.2":{"resolution":{"integrity":"sha512-7dSzzRQ++CKnNI/krKnYRV7JKKPUXMEh61soaHKg9mrWEhzFWhFnxPxGl+69cD1Ou63C13NUPCnmIcrvqCuM6w=="},"engines":{"node":">=12"}},"yauzl@2.10.0":{"resolution":{"integrity":"sha512-p4a9I6X6nu6IhoGmBqAcbJy1mlC4j27vEPZX9F4L4/vZT3Lyq1VkFHw/V/PUcB9Buo+DG3iHkT0x3Qya58zc3g=="}},"yoctocolors-cjs@2.1.3":{"resolution":{"integrity":"sha512-U/PBtDf35ff0D8X8D0jfdzHYEPFxAI7jJlxZXwCSez5M3190m+QobIfh+sWDWSHMCWWJN2AWamkegn6vr6YBTw=="},"engines":{"node":">=18"}},"youch-core@0.3.3":{"resolution":{"integrity":"sha512-ho7XuGjLaJ2hWHoK8yFnsUGy2Y5uDpqSTq1FkHLK4/oqKtyUU1AFbOOxY4IpC9f0fTLjwYbslUz0Po5BpD1wrA=="}},"youch@4.1.0-beta.10":{"resolution":{"integrity":"sha512-rLfVLB4FgQneDr0dv1oddCVZmKjcJ6yX6mS4pU82Mq/Dt9a3cLZQ62pDBL4AUO+uVrCvtWz3ZFUL2HFAFJ/BXQ=="}},"zip-stream@6.0.1":{"resolution":{"integrity":"sha512-zK7YHHz4ZXpW89AHXUPbQVGKI7uvkd3hzusTdotCg1UxyaVtg0zFJSTfW/Dq5f7OBBVnq6cZIaC8Ti4hb6dtCA=="},"engines":{"node":">= 14"}},"zod@3.25.76":{"resolution":{"integrity":"sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ=="}},"zwitch@2.0.4":{"resolution":{"integrity":"sha512-bXE4cR/kVZhKZX/RjPEflHaKVhUVl85noU3v6b8apfQEc1x4A+zBxjZ4lN8LqGd6WZ3dl98pY4o717VFmoPp+A=="}}},"snapshots":{"@acemir/cssom@0.9.28":{},"@ampproject/remapping@2.3.0":{"dependencies":{"@jridgewell/gen-mapping":"0.3.13","@jridgewell/trace-mapping":"0.3.30"}},"@antfu/install-pkg@1.1.0":{"dependencies":{"package-manager-detector":"1.5.0","tinyexec":"1.0.2"}},"@antfu/utils@9.3.0":{},"@asamuzakjp/css-color@3.1.4":{"dependencies":{"@csstools/css-calc":"2.1.4(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)","@csstools/css-color-parser":"3.1.0(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)","@csstools/css-parser-algorithms":"3.0.5(@csstools/css-tokenizer@3.0.4)","@csstools/css-tokenizer":"3.0.4","lru-cache":"10.4.3"}},"@asamuzakjp/css-color@4.1.0":{"dependencies":{"@csstools/css-calc":"2.1.4(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)","@csstools/css-color-parser":"3.1.0(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)","@csstools/css-parser-algorithms":"3.0.5(@csstools/css-tokenizer@3.0.4)","@csstools/css-tokenizer":"3.0.4","lru-cache":"11.2.4"}},"@asamuzakjp/dom-selector@6.7.6":{"dependencies":{"@asamuzakjp/nwsapi":"2.3.9","bidi-js":"1.0.3","css-tree":"3.1.0","is-potential-custom-element-name":"1.0.1","lru-cache":"11.2.4"}},"@asamuzakjp/nwsapi@2.3.9":{},"@babel/code-frame@7.27.1":{"dependencies":{"@babel/helper-validator-identifier":"7.28.5","js-tokens":"4.0.0","picocolors":"1.1.1"}},"@babel/compat-data@7.28.0":{},"@babel/core@7.28.5":{"dependencies":{"@babel/code-frame":"7.27.1","@babel/generator":"7.28.5","@babel/helper-compilation-targets":"7.27.2","@babel/helper-module-transforms":"7.28.3(@babel/core@7.28.5)","@babel/helpers":"7.28.4","@babel/parser":"7.28.5","@babel/template":"7.27.2","@babel/traverse":"7.28.5","@babel/types":"7.28.5","@jridgewell/remapping":"2.3.5","conve
rt-source-map":"2.0.0","debug":"4.4.3","gensync":"1.0.0-beta.2","json5":"2.2.3","semver":"6.3.1"},"transitivePeerDependencies":["supports-color"]},"@babel/generator@7.28.5":{"dependencies":{"@babel/parser":"7.28.5","@babel/types":"7.28.5","@jridgewell/gen-mapping":"0.3.13","@jridgewell/trace-mapping":"0.3.31","jsesc":"3.1.0"}},"@babel/helper-compilation-targets@7.27.2":{"dependencies":{"@babel/compat-data":"7.28.0","@babel/helper-validator-option":"7.27.1","browserslist":"4.25.3","lru-cache":"5.1.1","semver":"6.3.1"}},"@babel/helper-globals@7.28.0":{},"@babel/helper-module-imports@7.27.1":{"dependencies":{"@babel/traverse":"7.28.5","@babel/types":"7.28.5"},"transitivePeerDependencies":["supports-color"]},"@babel/helper-module-transforms@7.28.3(@babel/core@7.28.5)":{"dependencies":{"@babel/core":"7.28.5","@babel/helper-module-imports":"7.27.1","@babel/helper-validator-identifier":"7.28.5","@babel/traverse":"7.28.5"},"transitivePeerDependencies":["supports-color"]},"@babel/helper-plugin-utils@7.27.1":{},"@babel/helper-string-parser@7.27.1":{},"@babel/helper-validator-identifier@7.28.5":{},"@babel/helper-validator-option@7.27.1":{},"@babel/helpers@7.28.4":{"dependencies":{"@babel/template":"7.27.2","@babel/types":"7.28.5"}},"@babel/parser@7.28.5":{"dependencies":{"@babel/types":"7.28.5"}},"@babel/parser@7.29.3":{"dependencies":{"@babel/types":"7.29.0"}},"@babel/plugin-syntax-jsx@7.27.1(@babel/core@7.28.5)":{"dependencies":{"@babel/core":"7.28.5","@babel/helper-plugin-utils":"7.27.1"}},"@babel/plugin-syntax-typescript@7.27.1(@babel/core@7.28.5)":{"dependencies":{"@babel/core":"7.28.5","@babel/helper-plugin-utils":"7.27.1"}},"@babel/runtime@7.28.4":{},"@babel/template@7.27.2":{"dependencies":{"@babel/code-frame":"7.27.1","@babel/parser":"7.28.5","@babel/types":"7.28.5"}},"@babel/traverse@7.28.5":{"dependencies":{"@babel/code-frame":"7.27.1","@babel/generator":"7.28.5","@babel/helper-globals":"7.28.0","@babel/parser":"7.28.5","@babel/template":"7.27.2","@babel/types":"7.28.5","debug":"4.4.3"},"transitivePeerDependencies":["supports-color"]},"@babel/types@7.28.5":{"dependencies":{"@babel/helper-string-parser":"7.27.1","@babel/helper-validator-identifier":"7.28.5"}},"@babel/types@7.29.0":{"dependencies":{"@babel/helper-string-parser":"7.27.1","@babel/helper-validator-identifier":"7.28.5"}},"@bcoe/v8-coverage@1.0.2":{},"@blazediff/core@1.9.1":{},"@braintree/sanitize-url@7.1.1":{},"@bufbuild/protobuf@2.12.0":{"optional":true},"@bundled-es-modules/cookie@2.0.1":{"dependencies":{"cookie":"0.7.2"},"optional":true},"@bundled-es-modules/statuses@1.0.1":{"dependencies":{"statuses":"2.0.2"},"optional":true},"@bundled-es-modules/tough-cookie@0.1.6":{"dependencies":{"@types/tough-cookie":"4.0.5","tough-cookie":"4.1.4"},"optional":true},"@changesets/apply-release-plan@7.0.13":{"dependencies":{"@changesets/config":"3.1.1","@changesets/get-version-range-type":"0.4.0","@changesets/git":"3.0.4","@changesets/should-skip-package":"0.1.2","@changesets/types":"6.1.0","@manypkg/get-packages":"1.1.3","detect-indent":"6.1.0","fs-extra":"7.0.1","lodash.startcase":"4.4.0","outdent":"0.5.0","prettier":"2.8.8","resolve-from":"5.0.0","semver":"7.7.3"}},"@changesets/assemble-release-plan@6.0.9":{"dependencies":{"@changesets/errors":"0.2.0","@changesets/get-dependents-graph":"2.1.3","@changesets/should-skip-package":"0.1.2","@changesets/types":"6.1.0","@manypkg/get-packages":"1.1.3","semver":"7.7.3"}},"@changesets/changelog-git@0.2.1":{"dependencies":{"@changesets/types":"6.1.0"}},"@changesets/cli@2.29.7(@types/node@24.10.2)":{
"dependencies":{"@changesets/apply-release-plan":"7.0.13","@changesets/assemble-release-plan":"6.0.9","@changesets/changelog-git":"0.2.1","@changesets/config":"3.1.1","@changesets/errors":"0.2.0","@changesets/get-dependents-graph":"2.1.3","@changesets/get-release-plan":"4.0.13","@changesets/git":"3.0.4","@changesets/logger":"0.1.1","@changesets/pre":"2.0.2","@changesets/read":"0.6.5","@changesets/should-skip-package":"0.1.2","@changesets/types":"6.1.0","@changesets/write":"0.4.0","@inquirer/external-editor":"1.0.1(@types/node@24.10.2)","@manypkg/get-packages":"1.1.3","ansi-colors":"4.1.3","ci-info":"3.9.0","enquirer":"2.4.1","fs-extra":"7.0.1","mri":"1.2.0","p-limit":"2.3.0","package-manager-detector":"0.2.11","picocolors":"1.1.1","resolve-from":"5.0.0","semver":"7.7.3","spawndamnit":"3.0.1","term-size":"2.2.1"},"transitivePeerDependencies":["@types/node"]},"@changesets/config@3.1.1":{"dependencies":{"@changesets/errors":"0.2.0","@changesets/get-dependents-graph":"2.1.3","@changesets/logger":"0.1.1","@changesets/types":"6.1.0","@manypkg/get-packages":"1.1.3","fs-extra":"7.0.1","micromatch":"4.0.8"}},"@changesets/errors@0.2.0":{"dependencies":{"extendable-error":"0.1.7"}},"@changesets/get-dependents-graph@2.1.3":{"dependencies":{"@changesets/types":"6.1.0","@manypkg/get-packages":"1.1.3","picocolors":"1.1.1","semver":"7.7.3"}},"@changesets/get-release-plan@4.0.13":{"dependencies":{"@changesets/assemble-release-plan":"6.0.9","@changesets/config":"3.1.1","@changesets/pre":"2.0.2","@changesets/read":"0.6.5","@changesets/types":"6.1.0","@manypkg/get-packages":"1.1.3"}},"@changesets/get-version-range-type@0.4.0":{},"@changesets/git@3.0.4":{"dependencies":{"@changesets/errors":"0.2.0","@manypkg/get-packages":"1.1.3","is-subdir":"1.2.0","micromatch":"4.0.8","spawndamnit":"3.0.1"}},"@changesets/logger@0.1.1":{"dependencies":{"picocolors":"1.1.1"}},"@changesets/parse@0.4.1":{"dependencies":{"@changesets/types":"6.1.0","js-yaml":"3.14.1"}},"@changesets/pre@2.0.2":{"dependencies":{"@changesets/errors":"0.2.0","@changesets/types":"6.1.0","@manypkg/get-packages":"1.1.3","fs-extra":"7.0.1"}},"@changesets/read@0.6.5":{"dependencies":{"@changesets/git":"3.0.4","@changesets/logger":"0.1.1","@changesets/parse":"0.4.1","@changesets/types":"6.1.0","fs-extra":"7.0.1","p-filter":"2.1.0","picocolors":"1.1.1"}},"@changesets/should-skip-package@0.1.2":{"dependencies":{"@changesets/types":"6.1.0","@manypkg/get-packages":"1.1.3"}},"@changesets/types@4.1.0":{},"@changesets/types@6.1.0":{},"@changesets/write@0.4.0":{"dependencies":{"@changesets/types":"6.1.0","fs-extra":"7.0.1","human-id":"4.1.1","prettier":"2.8.8"}},"@chevrotain/cst-dts-gen@11.0.3":{"dependencies":{"@chevrotain/gast":"11.0.3","@chevrotain/types":"11.0.3","lodash-es":"4.17.21"}},"@chevrotain/gast@11.0.3":{"dependencies":{"@chevrotain/types":"11.0.3","lodash-es":"4.17.21"}},"@chevrotain/regexp-to-ast@11.0.3":{},"@chevrotain/types@11.0.3":{},"@chevrotain/utils@11.0.3":{},"@cloudflare/kv-asset-handler@0.5.0":{},"@cloudflare/unenv-preset@2.16.1(unenv@2.0.0-rc.24)(workerd@1.20260504.1)":{"dependencies":{"unenv":"2.0.0-rc.24"},"optionalDependencies":{"workerd":"1.20260504.1"}},"@cloudflare/vite-plugin@1.36.0(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(workerd@1.20260504.1)(wrangler@4.88.0)":{"dependencies":{"@cloudflare/unenv-preset":"2.16.1(unenv@2.0.0-rc.24)(workerd@1.20260504.1)","miniflare":"4.20260504.0","unenv":"2.0.0-rc.24","vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.
3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)","wrangler":"4.88.0","ws":"8.18.0"},"transitivePeerDependencies":["bufferutil","utf-8-validate","workerd"]},"@cloudflare/workerd-darwin-64@1.20260504.1":{"optional":true},"@cloudflare/workerd-darwin-arm64@1.20260504.1":{"optional":true},"@cloudflare/workerd-linux-64@1.20260504.1":{"optional":true},"@cloudflare/workerd-linux-arm64@1.20260504.1":{"optional":true},"@cloudflare/workerd-windows-64@1.20260504.1":{"optional":true},"@cspotcode/source-map-support@0.8.1":{"dependencies":{"@jridgewell/trace-mapping":"0.3.9"}},"@csstools/color-helpers@5.1.0":{},"@csstools/css-calc@2.1.4(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)":{"dependencies":{"@csstools/css-parser-algorithms":"3.0.5(@csstools/css-tokenizer@3.0.4)","@csstools/css-tokenizer":"3.0.4"}},"@csstools/css-color-parser@3.1.0(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)":{"dependencies":{"@csstools/color-helpers":"5.1.0","@csstools/css-calc":"2.1.4(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)","@csstools/css-parser-algorithms":"3.0.5(@csstools/css-tokenizer@3.0.4)","@csstools/css-tokenizer":"3.0.4"}},"@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4)":{"dependencies":{"@csstools/css-tokenizer":"3.0.4"}},"@csstools/css-syntax-patches-for-csstree@1.0.14(postcss@8.5.14)":{"dependencies":{"postcss":"8.5.14"}},"@csstools/css-tokenizer@3.0.4":{},"@emnapi/core@1.10.0":{"dependencies":{"@emnapi/wasi-threads":"1.2.1","tslib":"2.8.1"},"optional":true},"@emnapi/core@1.4.5":{"dependencies":{"@emnapi/wasi-threads":"1.0.4","tslib":"2.8.1"}},"@emnapi/runtime@1.10.0":{"dependencies":{"tslib":"2.8.1"},"optional":true},"@emnapi/runtime@1.4.5":{"dependencies":{"tslib":"2.8.1"}},"@emnapi/wasi-threads@1.0.4":{"dependencies":{"tslib":"2.8.1"}},"@emnapi/wasi-threads@1.2.1":{"dependencies":{"tslib":"2.8.1"},"optional":true},"@esbuild/aix-ppc64@0.25.12":{"optional":true},"@esbuild/aix-ppc64@0.27.3":{"optional":true},"@esbuild/android-arm64@0.25.12":{"optional":true},"@esbuild/android-arm64@0.27.3":{"optional":true},"@esbuild/android-arm@0.25.12":{"optional":true},"@esbuild/android-arm@0.27.3":{"optional":true},"@esbuild/android-x64@0.25.12":{"optional":true},"@esbuild/android-x64@0.27.3":{"optional":true},"@esbuild/darwin-arm64@0.25.12":{"optional":true},"@esbuild/darwin-arm64@0.27.3":{"optional":true},"@esbuild/darwin-x64@0.25.12":{"optional":true},"@esbuild/darwin-x64@0.27.3":{"optional":true},"@esbuild/freebsd-arm64@0.25.12":{"optional":true},"@esbuild/freebsd-arm64@0.27.3":{"optional":true},"@esbuild/freebsd-x64@0.25.12":{"optional":true},"@esbuild/freebsd-x64@0.27.3":{"optional":true},"@esbuild/linux-arm64@0.25.12":{"optional":true},"@esbuild/linux-arm64@0.27.3":{"optional":true},"@esbuild/linux-arm@0.25.12":{"optional":true},"@esbuild/linux-arm@0.27.3":{"optional":true},"@esbuild/linux-ia32@0.25.12":{"optional":true},"@esbuild/linux-ia32@0.27.3":{"optional":true},"@esbuild/linux-loong64@0.25.12":{"optional":true},"@esbuild/linux-loong64@0.27.3":{"optional":true},"@esbuild/linux-mips64el@0.25.12":{"optional":true},"@esbuild/linux-mips64el@0.27.3":{"optional":true},"@esbuild/linux-ppc64@0.25.12":{"optional":true},"@esbuild/linux-ppc64@0.27.3":{"optional":true},"@esbuild/linux-riscv64@0.25.12":{"optional":true},"@esbuild/linux-riscv64@0.27.3":{"optional":true},"@esbuild/linux-s390x@0.25.12":{"optional":
true},"@esbuild/linux-s390x@0.27.3":{"optional":true},"@esbuild/linux-x64@0.25.12":{"optional":true},"@esbuild/linux-x64@0.27.3":{"optional":true},"@esbuild/netbsd-arm64@0.25.12":{"optional":true},"@esbuild/netbsd-arm64@0.27.3":{"optional":true},"@esbuild/netbsd-x64@0.25.12":{"optional":true},"@esbuild/netbsd-x64@0.27.3":{"optional":true},"@esbuild/openbsd-arm64@0.25.12":{"optional":true},"@esbuild/openbsd-arm64@0.27.3":{"optional":true},"@esbuild/openbsd-x64@0.25.12":{"optional":true},"@esbuild/openbsd-x64@0.27.3":{"optional":true},"@esbuild/openharmony-arm64@0.25.12":{"optional":true},"@esbuild/openharmony-arm64@0.27.3":{"optional":true},"@esbuild/sunos-x64@0.25.12":{"optional":true},"@esbuild/sunos-x64@0.27.3":{"optional":true},"@esbuild/win32-arm64@0.25.12":{"optional":true},"@esbuild/win32-arm64@0.27.3":{"optional":true},"@esbuild/win32-ia32@0.25.12":{"optional":true},"@esbuild/win32-ia32@0.27.3":{"optional":true},"@esbuild/win32-x64@0.25.12":{"optional":true},"@esbuild/win32-x64@0.27.3":{"optional":true},"@iconify/types@2.0.0":{},"@iconify/utils@3.0.2":{"dependencies":{"@antfu/install-pkg":"1.1.0","@antfu/utils":"9.3.0","@iconify/types":"2.0.0","debug":"4.4.3","globals":"15.15.0","kolorist":"1.8.0","local-pkg":"1.1.2","mlly":"1.8.0"},"transitivePeerDependencies":["supports-color"]},"@img/colour@1.1.0":{},"@img/sharp-darwin-arm64@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-darwin-arm64":"1.2.4"},"optional":true},"@img/sharp-darwin-x64@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-darwin-x64":"1.2.4"},"optional":true},"@img/sharp-libvips-darwin-arm64@1.2.4":{"optional":true},"@img/sharp-libvips-darwin-x64@1.2.4":{"optional":true},"@img/sharp-libvips-linux-arm64@1.2.4":{"optional":true},"@img/sharp-libvips-linux-arm@1.2.4":{"optional":true},"@img/sharp-libvips-linux-ppc64@1.2.4":{"optional":true},"@img/sharp-libvips-linux-riscv64@1.2.4":{"optional":true},"@img/sharp-libvips-linux-s390x@1.2.4":{"optional":true},"@img/sharp-libvips-linux-x64@1.2.4":{"optional":true},"@img/sharp-libvips-linuxmusl-arm64@1.2.4":{"optional":true},"@img/sharp-libvips-linuxmusl-x64@1.2.4":{"optional":true},"@img/sharp-linux-arm64@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-linux-arm64":"1.2.4"},"optional":true},"@img/sharp-linux-arm@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-linux-arm":"1.2.4"},"optional":true},"@img/sharp-linux-ppc64@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-linux-ppc64":"1.2.4"},"optional":true},"@img/sharp-linux-riscv64@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-linux-riscv64":"1.2.4"},"optional":true},"@img/sharp-linux-s390x@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-linux-s390x":"1.2.4"},"optional":true},"@img/sharp-linux-x64@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-linux-x64":"1.2.4"},"optional":true},"@img/sharp-linuxmusl-arm64@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-linuxmusl-arm64":"1.2.4"},"optional":true},"@img/sharp-linuxmusl-x64@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-linuxmusl-x64":"1.2.4"},"optional":true},"@img/sharp-wasm32@0.34.5":{"dependencies":{"@emnapi/runtime":"1.10.0"},"optional":true},"@img/sharp-win32-arm64@0.34.5":{"optional":true},"@img/sharp-win32-ia32@0.34.5":{"optional":true},"@img/sharp-win32-x64@0.34.5":{"optional":true},"@inquirer/ansi@1.0.2":{"optional":true},"@inquirer/confirm@5.1.21(@types/node@22.15.33)":{"dependencies":{"@inquirer/core":"10.3.2(@types/node@22.15.33)","@inquirer/type":"3.0.10(@types/node@22.15.33)"},"optionalDependencies":{"@types/node":"22
.15.33"},"optional":true},"@inquirer/confirm@5.1.21(@types/node@24.10.2)":{"dependencies":{"@inquirer/core":"10.3.2(@types/node@24.10.2)","@inquirer/type":"3.0.10(@types/node@24.10.2)"},"optionalDependencies":{"@types/node":"24.10.2"},"optional":true},"@inquirer/core@10.3.2(@types/node@22.15.33)":{"dependencies":{"@inquirer/ansi":"1.0.2","@inquirer/figures":"1.0.15","@inquirer/type":"3.0.10(@types/node@22.15.33)","cli-width":"4.1.0","mute-stream":"2.0.0","signal-exit":"4.1.0","wrap-ansi":"6.2.0","yoctocolors-cjs":"2.1.3"},"optionalDependencies":{"@types/node":"22.15.33"},"optional":true},"@inquirer/core@10.3.2(@types/node@24.10.2)":{"dependencies":{"@inquirer/ansi":"1.0.2","@inquirer/figures":"1.0.15","@inquirer/type":"3.0.10(@types/node@24.10.2)","cli-width":"4.1.0","mute-stream":"2.0.0","signal-exit":"4.1.0","wrap-ansi":"6.2.0","yoctocolors-cjs":"2.1.3"},"optionalDependencies":{"@types/node":"24.10.2"},"optional":true},"@inquirer/external-editor@1.0.1(@types/node@24.10.2)":{"dependencies":{"chardet":"2.1.0","iconv-lite":"0.6.3"},"optionalDependencies":{"@types/node":"24.10.2"}},"@inquirer/figures@1.0.15":{"optional":true},"@inquirer/type@3.0.10(@types/node@22.15.33)":{"optionalDependencies":{"@types/node":"22.15.33"},"optional":true},"@inquirer/type@3.0.10(@types/node@24.10.2)":{"optionalDependencies":{"@types/node":"24.10.2"},"optional":true},"@isaacs/cliui@8.0.2":{"dependencies":{"string-width":"5.1.2","string-width-cjs":"string-width@4.2.3","strip-ansi":"7.1.2","strip-ansi-cjs":"strip-ansi@6.0.1","wrap-ansi":"8.1.0","wrap-ansi-cjs":"wrap-ansi@7.0.0"}},"@istanbuljs/schema@0.1.3":{},"@jest/diff-sequences@30.0.1":{},"@jest/get-type@30.1.0":{},"@jest/schemas@30.0.5":{"dependencies":{"@sinclair/typebox":"0.34.40"}},"@jridgewell/gen-mapping@0.3.13":{"dependencies":{"@jridgewell/sourcemap-codec":"1.5.5","@jridgewell/trace-mapping":"0.3.31"}},"@jridgewell/remapping@2.3.5":{"dependencies":{"@jridgewell/gen-mapping":"0.3.13","@jridgewell/trace-mapping":"0.3.31"}},"@jridgewell/resolve-uri@3.1.2":{},"@jridgewell/source-map@0.3.11":{"dependencies":{"@jridgewell/gen-mapping":"0.3.13","@jridgewell/trace-mapping":"0.3.31"},"optional":true},"@jridgewell/sourcemap-codec@1.5.5":{},"@jridgewell/trace-mapping@0.3.30":{"dependencies":{"@jridgewell/resolve-uri":"3.1.2","@jridgewell/sourcemap-codec":"1.5.5"}},"@jridgewell/trace-mapping@0.3.31":{"dependencies":{"@jridgewell/resolve-uri":"3.1.2","@jridgewell/sourcemap-codec":"1.5.5"}},"@jridgewell/trace-mapping@0.3.9":{"dependencies":{"@jridgewell/resolve-uri":"3.1.2","@jridgewell/sourcemap-codec":"1.5.5"}},"@jsonjoy.com/buffers@17.63.0(tslib@2.8.1)":{"dependencies":{"tslib":"2.8.1"}},"@jsonjoy.com/codegen@17.63.0(tslib@2.8.1)":{"dependencies":{"tslib":"2.8.1"}},"@jsonjoy.com/json-pointer@17.63.0(tslib@2.8.1)":{"dependencies":{"@jsonjoy.com/util":"17.63.0(tslib@2.8.1)","tslib":"2.8.1"}},"@jsonjoy.com/util@17.63.0(tslib@2.8.1)":{"dependencies":{"@jsonjoy.com/buffers":"17.63.0(tslib@2.8.1)","@jsonjoy.com/codegen":"17.63.0(tslib@2.8.1)","tslib":"2.8.1"}},"@lix-js/plugin-json@1.0.1(tslib@2.8.1)":{"dependencies":{"@jsonjoy.com/json-pointer":"17.63.0(tslib@2.8.1)","@lix-js/sdk":"0.5.1"},"transitivePeerDependencies":["tslib"]},"@lix-js/sdk@0.5.1":{"dependencies":{"@lix-js/server-protocol-schema":"0.1.1","@marcbachmann/cel-js":"2.5.2","@opral/zettel-ast":"0.1.0","@sqlite.org/sqlite-wasm":"3.50.4-build1","ajv":"8.17.1","chevrotain":"11.0.3","kysely":"0.28.7","uuid":"11.1.0"}},"@lix-js/server-protocol-schema@0.1.1":{},"@manypkg/find-root@1.1.0":{"dependencies":{"@babel/r
untime":"7.28.4","@types/node":"12.20.55","find-up":"4.1.0","fs-extra":"8.1.0"}},"@manypkg/get-packages@1.1.3":{"dependencies":{"@babel/runtime":"7.28.4","@changesets/types":"4.1.0","@manypkg/find-root":"1.1.0","fs-extra":"8.1.0","globby":"11.1.0","read-yaml-file":"1.1.0"}},"@marcbachmann/cel-js@2.5.2":{},"@mermaid-js/parser@0.6.3":{"dependencies":{"langium":"3.3.1"}},"@mswjs/interceptors@0.39.8":{"dependencies":{"@open-draft/deferred-promise":"2.2.0","@open-draft/logger":"0.3.0","@open-draft/until":"2.1.0","is-node-process":"1.2.0","outvariant":"1.4.3","strict-event-emitter":"0.5.1"},"optional":true},"@napi-rs/wasm-runtime@0.2.4":{"dependencies":{"@emnapi/core":"1.4.5","@emnapi/runtime":"1.4.5","@tybys/wasm-util":"0.9.0"}},"@napi-rs/wasm-runtime@1.1.4(@emnapi/core@1.10.0)(@emnapi/runtime@1.10.0)":{"dependencies":{"@emnapi/core":"1.10.0","@emnapi/runtime":"1.10.0","@tybys/wasm-util":"0.10.2"},"optional":true},"@nodelib/fs.scandir@2.1.5":{"dependencies":{"@nodelib/fs.stat":"2.0.5","run-parallel":"1.2.0"}},"@nodelib/fs.stat@2.0.5":{},"@nodelib/fs.walk@1.2.8":{"dependencies":{"@nodelib/fs.scandir":"2.1.5","fastq":"1.17.1"}},"@nrwl/nx-cloud@19.1.0":{"dependencies":{"nx-cloud":"19.1.0"},"transitivePeerDependencies":["debug"]},"@nx/nx-darwin-arm64@21.4.1":{"optional":true},"@nx/nx-darwin-x64@21.4.1":{"optional":true},"@nx/nx-freebsd-x64@21.4.1":{"optional":true},"@nx/nx-linux-arm-gnueabihf@21.4.1":{"optional":true},"@nx/nx-linux-arm64-gnu@21.4.1":{"optional":true},"@nx/nx-linux-arm64-musl@21.4.1":{"optional":true},"@nx/nx-linux-x64-gnu@21.4.1":{"optional":true},"@nx/nx-linux-x64-musl@21.4.1":{"optional":true},"@nx/nx-win32-arm64-msvc@21.4.1":{"optional":true},"@nx/nx-win32-x64-msvc@21.4.1":{"optional":true},"@oozcitak/dom@2.0.2":{"dependencies":{"@oozcitak/infra":"2.0.2","@oozcitak/url":"3.0.0","@oozcitak/util":"10.0.0"}},"@oozcitak/infra@2.0.2":{"dependencies":{"@oozcitak/util":"10.0.0"}},"@oozcitak/url@3.0.0":{"dependencies":{"@oozcitak/infra":"2.0.2","@oozcitak/util":"10.0.0"}},"@oozcitak/util@10.0.0":{},"@open-draft/deferred-promise@2.2.0":{"optional":true},"@open-draft/logger@0.3.0":{"dependencies":{"is-node-process":"1.2.0","outvariant":"1.4.3"},"optional":true},"@open-draft/until@2.1.0":{"optional":true},"@opentelemetry/api-logs@0.208.0":{"dependencies":{"@opentelemetry/api":"1.9.0"}},"@opentelemetry/api@1.9.0":{},"@opentelemetry/core@2.2.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/semantic-conventions":"1.38.0"}},"@opentelemetry/core@2.4.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/semantic-conventions":"1.38.0"}},"@opentelemetry/exporter-logs-otlp-http@0.208.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/api-logs":"0.208.0","@opentelemetry/core":"2.2.0(@opentelemetry/api@1.9.0)","@opentelemetry/otlp-exporter-base":"0.208.0(@opentelemetry/api@1.9.0)","@opentelemetry/otlp-transformer":"0.208.0(@opentelemetry/api@1.9.0)","@opentelemetry/sdk-logs":"0.208.0(@opentelemetry/api@1.9.0)"}},"@opentelemetry/otlp-exporter-base@0.208.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/core":"2.2.0(@opentelemetry/api@1.9.0)","@opentelemetry/otlp-transformer":"0.208.0(@opentelemetry/api@1.9.0)"}},"@opentelemetry/otlp-transformer@0.208.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/api-logs":"0.208.0","@opentelemetry/core":"2.2.0(@opentelemetry/api@1.9.0)","@opentelemetry/resources":"2.2
.0(@opentelemetry/api@1.9.0)","@opentelemetry/sdk-logs":"0.208.0(@opentelemetry/api@1.9.0)","@opentelemetry/sdk-metrics":"2.2.0(@opentelemetry/api@1.9.0)","@opentelemetry/sdk-trace-base":"2.2.0(@opentelemetry/api@1.9.0)","protobufjs":"7.5.4"}},"@opentelemetry/resources@2.2.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/core":"2.2.0(@opentelemetry/api@1.9.0)","@opentelemetry/semantic-conventions":"1.38.0"}},"@opentelemetry/resources@2.4.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/core":"2.4.0(@opentelemetry/api@1.9.0)","@opentelemetry/semantic-conventions":"1.38.0"}},"@opentelemetry/sdk-logs@0.208.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/api-logs":"0.208.0","@opentelemetry/core":"2.2.0(@opentelemetry/api@1.9.0)","@opentelemetry/resources":"2.2.0(@opentelemetry/api@1.9.0)"}},"@opentelemetry/sdk-metrics@2.2.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/core":"2.2.0(@opentelemetry/api@1.9.0)","@opentelemetry/resources":"2.2.0(@opentelemetry/api@1.9.0)"}},"@opentelemetry/sdk-trace-base@2.2.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/core":"2.2.0(@opentelemetry/api@1.9.0)","@opentelemetry/resources":"2.2.0(@opentelemetry/api@1.9.0)","@opentelemetry/semantic-conventions":"1.38.0"}},"@opentelemetry/semantic-conventions@1.38.0":{},"@opral/markdown-wc@0.9.0":{"dependencies":{"mermaid":"11.12.1","rehype-autolink-headings":"7.1.0","rehype-highlight":"7.0.2","rehype-parse":"9.0.1","rehype-raw":"7.0.0","rehype-remark":"10.0.1","rehype-sanitize":"6.0.0","rehype-slug":"6.0.0","rehype-stringify":"10.0.1","remark-frontmatter":"5.0.0","remark-gfm":"4.0.1","remark-parse":"11.0.0","remark-rehype":"11.1.2","remark-stringify":"11.0.0","unified":"11.0.5","unist-util-visit":"5.0.0","yaml":"2.8.1"},"transitivePeerDependencies":["supports-color"]},"@opral/zettel-ast@0.1.0":{"dependencies":{"@sinclair/typebox":"0.34.40"}},"@oxc-project/types@0.127.0":{},"@oxlint/darwin-arm64@1.26.0":{"optional":true},"@oxlint/darwin-x64@1.26.0":{"optional":true},"@oxlint/linux-arm64-gnu@1.26.0":{"optional":true},"@oxlint/linux-arm64-musl@1.26.0":{"optional":true},"@oxlint/linux-x64-gnu@1.26.0":{"optional":true},"@oxlint/linux-x64-musl@1.26.0":{"optional":true},"@oxlint/win32-arm64@1.26.0":{"optional":true},"@oxlint/win32-x64@1.26.0":{"optional":true},"@pkgjs/parseargs@0.11.0":{"optional":true},"@polka/url@1.0.0-next.29":{},"@poppinss/colors@4.1.5":{"dependencies":{"kleur":"4.1.5"}},"@poppinss/dumper@0.6.5":{"dependencies":{"@poppinss/colors":"4.1.5","@sindresorhus/is":"7.1.1","supports-color":"10.2.2"}},"@poppinss/exception@1.2.2":{},"@posthog/core@1.9.1":{"dependencies":{"cross-spawn":"7.0.6"}},"@posthog/types@1.321.2":{},"@promptbook/utils@0.69.5":{"dependencies":{"spacetrim":"0.11.59"},"optional":true},"@protobufjs/aspromise@1.1.2":{},"@protobufjs/base64@1.1.2":{},"@protobufjs/codegen@2.0.4":{},"@protobufjs/eventemitter@1.1.0":{},"@protobufjs/fetch@1.1.0":{"dependencies":{"@protobufjs/aspromise":"1.1.2","@protobufjs/inquire":"1.1.0"}},"@protobufjs/float@1.0.2":{},"@protobufjs/inquire@1.1.0":{},"@protobufjs/path@1.1.2":{},"@protobufjs/pool@1.1.0":{},"@protobufjs/utf8@1.1.0":{},"@puppeteer/browsers@2.13.1":{"dependencies":{"debug":"4.4.3","extract-zip":"2.0.1","progress":"2.0.3","proxy-agent":"6.5.0","semver":"7.7.4","tar-fs":"3.1.2","yargs":"17.7.2"},"transitivePeerDependencies":["bare-abort-controller",
"bare-buffer","react-native-b4a","supports-color"],"optional":true},"@rolldown/binding-android-arm64@1.0.0-rc.17":{"optional":true},"@rolldown/binding-darwin-arm64@1.0.0-rc.17":{"optional":true},"@rolldown/binding-darwin-x64@1.0.0-rc.17":{"optional":true},"@rolldown/binding-freebsd-x64@1.0.0-rc.17":{"optional":true},"@rolldown/binding-linux-arm-gnueabihf@1.0.0-rc.17":{"optional":true},"@rolldown/binding-linux-arm64-gnu@1.0.0-rc.17":{"optional":true},"@rolldown/binding-linux-arm64-musl@1.0.0-rc.17":{"optional":true},"@rolldown/binding-linux-ppc64-gnu@1.0.0-rc.17":{"optional":true},"@rolldown/binding-linux-s390x-gnu@1.0.0-rc.17":{"optional":true},"@rolldown/binding-linux-x64-gnu@1.0.0-rc.17":{"optional":true},"@rolldown/binding-linux-x64-musl@1.0.0-rc.17":{"optional":true},"@rolldown/binding-openharmony-arm64@1.0.0-rc.17":{"optional":true},"@rolldown/binding-wasm32-wasi@1.0.0-rc.17":{"dependencies":{"@emnapi/core":"1.10.0","@emnapi/runtime":"1.10.0","@napi-rs/wasm-runtime":"1.1.4(@emnapi/core@1.10.0)(@emnapi/runtime@1.10.0)"},"optional":true},"@rolldown/binding-win32-arm64-msvc@1.0.0-rc.17":{"optional":true},"@rolldown/binding-win32-x64-msvc@1.0.0-rc.17":{"optional":true},"@rolldown/pluginutils@1.0.0-beta.40":{},"@rolldown/pluginutils@1.0.0-rc.17":{},"@rolldown/pluginutils@1.0.0-rc.7":{},"@rollup/rollup-android-arm-eabi@4.53.2":{"optional":true},"@rollup/rollup-android-arm64@4.53.2":{"optional":true},"@rollup/rollup-darwin-arm64@4.53.2":{"optional":true},"@rollup/rollup-darwin-x64@4.53.2":{"optional":true},"@rollup/rollup-freebsd-arm64@4.53.2":{"optional":true},"@rollup/rollup-freebsd-x64@4.53.2":{"optional":true},"@rollup/rollup-linux-arm-gnueabihf@4.53.2":{"optional":true},"@rollup/rollup-linux-arm-musleabihf@4.53.2":{"optional":true},"@rollup/rollup-linux-arm64-gnu@4.53.2":{"optional":true},"@rollup/rollup-linux-arm64-musl@4.53.2":{"optional":true},"@rollup/rollup-linux-loong64-gnu@4.53.2":{"optional":true},"@rollup/rollup-linux-ppc64-gnu@4.53.2":{"optional":true},"@rollup/rollup-linux-riscv64-gnu@4.53.2":{"optional":true},"@rollup/rollup-linux-riscv64-musl@4.53.2":{"optional":true},"@rollup/rollup-linux-s390x-gnu@4.53.2":{"optional":true},"@rollup/rollup-linux-x64-gnu@4.53.2":{"optional":true},"@rollup/rollup-linux-x64-musl@4.53.2":{"optional":true},"@rollup/rollup-openharmony-arm64@4.53.2":{"optional":true},"@rollup/rollup-win32-arm64-msvc@4.53.2":{"optional":true},"@rollup/rollup-win32-ia32-msvc@4.53.2":{"optional":true},"@rollup/rollup-win32-x64-gnu@4.53.2":{"optional":true},"@rollup/rollup-win32-x64-msvc@4.53.2":{"optional":true},"@shikijs/core@3.15.0":{"dependencies":{"@shikijs/types":"3.15.0","@shikijs/vscode-textmate":"10.0.2","@types/hast":"3.0.4","hast-util-to-html":"9.0.5"}},"@shikijs/engine-javascript@3.15.0":{"dependencies":{"@shikijs/types":"3.15.0","@shikijs/vscode-textmate":"10.0.2","oniguruma-to-es":"4.3.3"}},"@shikijs/engine-oniguruma@3.15.0":{"dependencies":{"@shikijs/types":"3.15.0","@shikijs/vscode-textmate":"10.0.2"}},"@shikijs/langs@3.15.0":{"dependencies":{"@shikijs/types":"3.15.0"}},"@shikijs/themes@3.15.0":{"dependencies":{"@shikijs/types":"3.15.0"}},"@shikijs/types@3.15.0":{"dependencies":{"@shikijs/vscode-textmate":"10.0.2","@types/hast":"3.0.4"}},"@shikijs/vscode-textmate@10.0.2":{},"@sinclair/typebox@0.34.40":{},"@sindresorhus/is@7.1.1":{},"@speed-highlight/core@1.2.12":{},"@sqlite.org/sqlite-wasm@3.50.4-build1":{},"@standard-schema/spec@1.0.0":{},"@standard-schema/spec@1.1.0":{},"@tailwindcss/node@4.2.4":{"dependencies":{"@jridgewell/remapping":"2.3.5","enhan
ced-resolve":"5.21.0","jiti":"2.6.1","lightningcss":"1.32.0","magic-string":"0.30.21","source-map-js":"1.2.1","tailwindcss":"4.2.4"}},"@tailwindcss/oxide-android-arm64@4.2.4":{"optional":true},"@tailwindcss/oxide-darwin-arm64@4.2.4":{"optional":true},"@tailwindcss/oxide-darwin-x64@4.2.4":{"optional":true},"@tailwindcss/oxide-freebsd-x64@4.2.4":{"optional":true},"@tailwindcss/oxide-linux-arm-gnueabihf@4.2.4":{"optional":true},"@tailwindcss/oxide-linux-arm64-gnu@4.2.4":{"optional":true},"@tailwindcss/oxide-linux-arm64-musl@4.2.4":{"optional":true},"@tailwindcss/oxide-linux-x64-gnu@4.2.4":{"optional":true},"@tailwindcss/oxide-linux-x64-musl@4.2.4":{"optional":true},"@tailwindcss/oxide-wasm32-wasi@4.2.4":{"optional":true},"@tailwindcss/oxide-win32-arm64-msvc@4.2.4":{"optional":true},"@tailwindcss/oxide-win32-x64-msvc@4.2.4":{"optional":true},"@tailwindcss/oxide@4.2.4":{"optionalDependencies":{"@tailwindcss/oxide-android-arm64":"4.2.4","@tailwindcss/oxide-darwin-arm64":"4.2.4","@tailwindcss/oxide-darwin-x64":"4.2.4","@tailwindcss/oxide-freebsd-x64":"4.2.4","@tailwindcss/oxide-linux-arm-gnueabihf":"4.2.4","@tailwindcss/oxide-linux-arm64-gnu":"4.2.4","@tailwindcss/oxide-linux-arm64-musl":"4.2.4","@tailwindcss/oxide-linux-x64-gnu":"4.2.4","@tailwindcss/oxide-linux-x64-musl":"4.2.4","@tailwindcss/oxide-wasm32-wasi":"4.2.4","@tailwindcss/oxide-win32-arm64-msvc":"4.2.4","@tailwindcss/oxide-win32-x64-msvc":"4.2.4"}},"@tailwindcss/vite@4.2.4(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))":{"dependencies":{"@tailwindcss/node":"4.2.4","@tailwindcss/oxide":"4.2.4","tailwindcss":"4.2.4","vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}},"@tanstack/history@1.161.6":{},"@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)":{"dependencies":{"@tanstack/history":"1.161.6","@tanstack/react-store":"0.9.3(react-dom@19.2.0(react@19.2.0))(react@19.2.0)","@tanstack/router-core":"1.169.2","isbot":"5.1.28","react":"19.2.0","react-dom":"19.2.0(react@19.2.0)"}},"@tanstack/react-start-client@1.166.48(react-dom@19.2.0(react@19.2.0))(react@19.2.0)":{"dependencies":{"@tanstack/react-router":"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)","@tanstack/router-core":"1.169.2","@tanstack/start-client-core":"1.168.2","react":"19.2.0","react-dom":"19.2.0(react@19.2.0)"}},"@tanstack/react-start-rsc@0.0.43(react-dom@19.2.0(react@19.2.0))(react@19.2.0)(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))":{"dependencies":{"@tanstack/react-router":"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)","@tanstack/react-start-server":"1.166.52(react-dom@19.2.0(react@19.2.0))(react@19.2.0)","@tanstack/router-core":"1.169.2","@tanstack/router-utils":"1.161.8","@tanstack/start-client-core":"1.168.2","@tanstack/start-fn-stubs":"1.161.6","@tanstack/start-plugin-core":"1.169.19(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))","@tanstack/start-server-core":"1.167.30","@tanstack/start-storage-context":"1.166.35","pathe":"2.0.3","react":"19.2.0","react-dom":"19.2.0(react@19.2.0)"},"transitivePeerDependencies":["@rsbuild/core","crossws","supports-color","vite","vite-plugin-solid","
webpack"]},"@tanstack/react-start-server@1.166.52(react-dom@19.2.0(react@19.2.0))(react@19.2.0)":{"dependencies":{"@tanstack/history":"1.161.6","@tanstack/react-router":"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)","@tanstack/router-core":"1.169.2","@tanstack/start-client-core":"1.168.2","@tanstack/start-server-core":"1.167.30","react":"19.2.0","react-dom":"19.2.0(react@19.2.0)"},"transitivePeerDependencies":["crossws"]},"@tanstack/react-start@1.167.64(react-dom@19.2.0(react@19.2.0))(react@19.2.0)(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))":{"dependencies":{"@tanstack/react-router":"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)","@tanstack/react-start-client":"1.166.48(react-dom@19.2.0(react@19.2.0))(react@19.2.0)","@tanstack/react-start-rsc":"0.0.43(react-dom@19.2.0(react@19.2.0))(react@19.2.0)(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))","@tanstack/react-start-server":"1.166.52(react-dom@19.2.0(react@19.2.0))(react@19.2.0)","@tanstack/router-utils":"1.161.8","@tanstack/start-client-core":"1.168.2","@tanstack/start-plugin-core":"1.169.19(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))","@tanstack/start-server-core":"1.167.30","pathe":"2.0.3","react":"19.2.0","react-dom":"19.2.0(react@19.2.0)"},"optionalDependencies":{"vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"},"transitivePeerDependencies":["@rspack/core","crossws","react-server-dom-rspack","supports-color","vite-plugin-solid","webpack"]},"@tanstack/react-store@0.9.3(react-dom@19.2.0(react@19.2.0))(react@19.2.0)":{"dependencies":{"@tanstack/store":"0.9.3","react":"19.2.0","react-dom":"19.2.0(react@19.2.0)","use-sync-external-store":"1.6.0(react@19.2.0)"}},"@tanstack/router-core@1.169.2":{"dependencies":{"@tanstack/history":"1.161.6","cookie-es":"3.1.1","seroval":"1.5.4","seroval-plugins":"1.5.4(seroval@1.5.4)"}},"@tanstack/router-generator@1.166.41":{"dependencies":{"@babel/types":"7.28.5","@tanstack/router-core":"1.169.2","@tanstack/router-utils":"1.161.8","@tanstack/virtual-file-routes":"1.161.7","jiti":"2.6.1","magic-string":"0.30.21","prettier":"3.6.2","zod":"3.25.76"},"transitivePeerDependencies":["supports-color"]},"@tanstack/router-plugin@1.167.34(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))":{"dependencies":{"@babel/core":"7.28.5","@babel/plugin-syntax-jsx":"7.27.1(@babel/core@7.28.5)","@babel/plugin-syntax-typescript":"7.27.1(@babel/core@7.28.5)","@babel/template":"7.27.2","@babel/traverse":"7.28.5","@babel/types":"7.28.5","@tanstack/router-core":"1.169.2","@tanstack/router-generator":"1.166.41","@tanstack/router-utils":"1.161.8","@tanstack/virtual-file-routes":"1.161.7","chokidar":"3.6.0","unplugin":"3.0.0","zod":"3.25.76"},"optionalDependencies":{"@tanstack/react-router":"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)","vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yam
l@2.8.1)","webpack":"5.99.9(esbuild@0.27.3)"},"transitivePeerDependencies":["supports-color"]},"@tanstack/router-utils@1.161.8":{"dependencies":{"@babel/core":"7.28.5","@babel/generator":"7.28.5","@babel/parser":"7.28.5","@babel/types":"7.28.5","ansis":"4.1.0","babel-dead-code-elimination":"1.0.12","diff":"8.0.2","pathe":"2.0.3","tinyglobby":"0.2.16"},"transitivePeerDependencies":["supports-color"]},"@tanstack/start-client-core@1.168.2":{"dependencies":{"@tanstack/router-core":"1.169.2","@tanstack/start-fn-stubs":"1.161.6","@tanstack/start-storage-context":"1.166.35","seroval":"1.5.4"}},"@tanstack/start-fn-stubs@1.161.6":{},"@tanstack/start-plugin-core@1.169.19(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))":{"dependencies":{"@babel/code-frame":"7.27.1","@babel/core":"7.28.5","@babel/types":"7.28.5","@rolldown/pluginutils":"1.0.0-beta.40","@tanstack/router-core":"1.169.2","@tanstack/router-generator":"1.166.41","@tanstack/router-plugin":"1.167.34(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))","@tanstack/router-utils":"1.161.8","@tanstack/start-client-core":"1.168.2","@tanstack/start-server-core":"1.167.30","cheerio":"1.1.2","exsolve":"1.0.8","lightningcss":"1.32.0","pathe":"2.0.3","picomatch":"4.0.3","seroval":"1.5.4","source-map":"0.7.6","srvx":"0.11.15","tinyglobby":"0.2.16","ufo":"1.6.1","vitefu":"1.1.1(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))","xmlbuilder2":"4.0.3","zod":"3.25.76"},"optionalDependencies":{"vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"},"transitivePeerDependencies":["@tanstack/react-router","crossws","supports-color","vite-plugin-solid","webpack"]},"@tanstack/start-server-core@1.167.30":{"dependencies":{"@tanstack/history":"1.161.6","@tanstack/router-core":"1.169.2","@tanstack/start-client-core":"1.168.2","@tanstack/start-storage-context":"1.166.35","fetchdts":"0.1.7","h3-v2":"h3@2.0.1-rc.20","seroval":"1.5.4"},"transitivePeerDependencies":["crossws"]},"@tanstack/start-storage-context@1.166.35":{"dependencies":{"@tanstack/router-core":"1.169.2"}},"@tanstack/store@0.9.3":{},"@tanstack/virtual-file-routes@1.161.7":{},"@testing-library/dom@10.4.1":{"dependencies":{"@babel/code-frame":"7.27.1","@babel/runtime":"7.28.4","@types/aria-query":"5.0.4","aria-query":"5.3.0","dom-accessibility-api":"0.5.16","lz-string":"1.5.0","picocolors":"1.1.1","pretty-format":"27.5.1"}},"@testing-library/react@16.3.0(@testing-library/dom@10.4.1)(@types/react-dom@19.2.3(@types/react@19.2.7))(@types/react@19.2.7)(react-dom@19.2.0(react@19.2.0))(react@19.2.0)":{"dependencies":{"@babel/runtime":"7.28.4","@testing-library/dom":"10.4.1","react":"19.2.0","react-dom":"19.2.0(react@19.2.0)"},"optionalDependencies":{"@types/react":"19.2.7","@types/react-dom":"19.2.3(@types/react@19.2.7)"}},"@testing-library/user-event@14.6.1(@testing-library/dom@10.4.1)":{"dependencies":{"@testing-library/dom":"10.4.1"},"optional":true},"@tootallnate/quickjs-emscripten@0.23.0":{"optional":true},"@tybys/wasm-util@0.10.2":{"dependencies":{"tslib":"2.8.1"},"optional":true},"@tybys/wasm-util@0.9.0"
:{"dependencies":{"tslib":"2.8.1"}},"@types/aria-query@5.0.4":{},"@types/chai@5.2.2":{"dependencies":{"@types/deep-eql":"4.0.2"}},"@types/chai@5.2.3":{"dependencies":{"@types/deep-eql":"4.0.2","assertion-error":"2.0.1"}},"@types/cookie@0.6.0":{"optional":true},"@types/d3-array@3.2.1":{},"@types/d3-axis@3.0.6":{"dependencies":{"@types/d3-selection":"3.0.11"}},"@types/d3-brush@3.0.6":{"dependencies":{"@types/d3-selection":"3.0.11"}},"@types/d3-chord@3.0.6":{},"@types/d3-color@3.1.3":{},"@types/d3-contour@3.0.6":{"dependencies":{"@types/d3-array":"3.2.1","@types/geojson":"7946.0.15"}},"@types/d3-delaunay@6.0.4":{},"@types/d3-dispatch@3.0.6":{},"@types/d3-drag@3.0.7":{"dependencies":{"@types/d3-selection":"3.0.11"}},"@types/d3-dsv@3.0.7":{},"@types/d3-ease@3.0.2":{},"@types/d3-fetch@3.0.7":{"dependencies":{"@types/d3-dsv":"3.0.7"}},"@types/d3-force@3.0.10":{},"@types/d3-format@3.0.4":{},"@types/d3-geo@3.1.0":{"dependencies":{"@types/geojson":"7946.0.15"}},"@types/d3-hierarchy@3.1.7":{},"@types/d3-interpolate@3.0.4":{"dependencies":{"@types/d3-color":"3.1.3"}},"@types/d3-path@3.1.0":{},"@types/d3-polygon@3.0.2":{},"@types/d3-quadtree@3.0.6":{},"@types/d3-random@3.0.3":{},"@types/d3-scale-chromatic@3.1.0":{},"@types/d3-scale@4.0.8":{"dependencies":{"@types/d3-time":"3.0.4"}},"@types/d3-selection@3.0.11":{},"@types/d3-shape@3.1.7":{"dependencies":{"@types/d3-path":"3.1.0"}},"@types/d3-time-format@4.0.3":{},"@types/d3-time@3.0.4":{},"@types/d3-timer@3.0.2":{},"@types/d3-transition@3.0.9":{"dependencies":{"@types/d3-selection":"3.0.11"}},"@types/d3-zoom@3.0.8":{"dependencies":{"@types/d3-interpolate":"3.0.4","@types/d3-selection":"3.0.11"}},"@types/d3@7.4.3":{"dependencies":{"@types/d3-array":"3.2.1","@types/d3-axis":"3.0.6","@types/d3-brush":"3.0.6","@types/d3-chord":"3.0.6","@types/d3-color":"3.1.3","@types/d3-contour":"3.0.6","@types/d3-delaunay":"6.0.4","@types/d3-dispatch":"3.0.6","@types/d3-drag":"3.0.7","@types/d3-dsv":"3.0.7","@types/d3-ease":"3.0.2","@types/d3-fetch":"3.0.7","@types/d3-force":"3.0.10","@types/d3-format":"3.0.4","@types/d3-geo":"3.1.0","@types/d3-hierarchy":"3.1.7","@types/d3-interpolate":"3.0.4","@types/d3-path":"3.1.0","@types/d3-polygon":"3.0.2","@types/d3-quadtree":"3.0.6","@types/d3-random":"3.0.3","@types/d3-scale":"4.0.8","@types/d3-scale-chromatic":"3.1.0","@types/d3-selection":"3.0.11","@types/d3-shape":"3.1.7","@types/d3-time":"3.0.4","@types/d3-time-format":"4.0.3","@types/d3-timer":"3.0.2","@types/d3-transition":"3.0.9","@types/d3-zoom":"3.0.8"}},"@types/debug@4.1.12":{"dependencies":{"@types/ms":"2.1.0"}},"@types/deep-eql@4.0.2":{},"@types/eslint-scope@3.7.7":{"dependencies":{"@types/eslint":"9.6.1","@types/estree":"1.0.9"},"optional":true},"@types/eslint@9.6.1":{"dependencies":{"@types/estree":"1.0.9","@types/json-schema":"7.0.15"},"optional":true},"@types/estree@1.0.8":{},"@types/estree@1.0.9":{"optional":true},"@types/geojson@7946.0.15":{},"@types/hast@3.0.4":{"dependencies":{"@types/unist":"3.0.3"}},"@types/json-schema@7.0.15":{"optional":true},"@types/mdast@4.0.4":{"dependencies":{"@types/unist":"3.0.3"}},"@types/ms@2.1.0":{},"@types/node@12.20.55":{},"@types/node@20.19.39":{"dependencies":{"undici-types":"6.21.0"},"optional":true},"@types/node@22.15.33":{"dependencies":{"undici-types":"6.21.0"}},"@types/node@22.19.17":{"dependencies":{"undici-types":"6.21.0"},"optional":true},"@types/node@24.10.2":{"dependencies":{"undici-types":"7.16.0"},"optional":true},"@types/react-dom@19.2.3(@types/react@19.2.7)":{"dependencies":{"@types/react":"19.2.7"}},"@types/reac
t@19.2.7":{"dependencies":{"csstype":"3.2.3"}},"@types/sinonjs__fake-timers@8.1.5":{"optional":true},"@types/statuses@2.0.6":{"optional":true},"@types/tough-cookie@4.0.5":{"optional":true},"@types/trusted-types@2.0.7":{"optional":true},"@types/unist@3.0.3":{},"@types/whatwg-mimetype@3.0.2":{"optional":true},"@types/which@2.0.2":{"optional":true},"@types/ws@8.18.1":{"dependencies":{"@types/node":"22.19.17"},"optional":true},"@types/yauzl@2.10.3":{"dependencies":{"@types/node":"22.19.17"},"optional":true},"@ungap/structured-clone@1.2.1":{},"@vitejs/plugin-react@6.0.1(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))":{"dependencies":{"@rolldown/pluginutils":"1.0.0-rc.7","vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}},"@vitest/browser@3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)":{"dependencies":{"@testing-library/dom":"10.4.1","@testing-library/user-event":"14.6.1(@testing-library/dom@10.4.1)","@vitest/mocker":"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))","@vitest/utils":"3.2.4","magic-string":"0.30.21","sirv":"3.0.2","tinyrainbow":"2.0.0","vitest":"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@26.1.0)(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)","ws":"8.20.0"},"optionalDependencies":{"playwright":"1.55.0","webdriverio":"9.2.1"},"transitivePeerDependencies":["bufferutil","msw","utf-8-validate","vite"],"optional":true},"@vitest/browser@3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)":{"dependencies":{"@testing-library/dom":"10.4.1","@testing-library/user-event":"14.6.1(@testing-library/dom@10.4.1)","@vitest/mocker":"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))","@vitest/utils":"3.2.4","magic-string":"0.30.21","sirv":"3.0.2","tinyrainbow":"2.0.0","vitest":"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)","ws":"8.20.0"},"optionalDependencies":{"playwright":"1.55.0","webdriverio":"9.2.1"},"transitivePeerDependencies":["bufferutil","msw","utf-8-validate","vite"],"optional":true},"@vitest/browser@4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@4.1.5)":{"dependencies":{"@blazediff/core":"1.9.1","@vitest/mocker":"4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))","@vitest/util
s":"4.1.5","magic-string":"0.30.21","pngjs":"7.0.0","sirv":"3.0.2","tinyrainbow":"3.1.0","vitest":"4.1.5(@opentelemetry/api@1.9.0)(@types/node@22.15.33)(@vitest/coverage-v8@4.1.5)(happy-dom@18.0.1)(jsdom@27.3.0(postcss@8.5.14))(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))","ws":"8.20.0"},"transitivePeerDependencies":["bufferutil","msw","utf-8-validate","vite"]},"@vitest/coverage-v8@3.2.4(@vitest/browser@3.2.4)(vitest@3.2.4)":{"dependencies":{"@ampproject/remapping":"2.3.0","@bcoe/v8-coverage":"1.0.2","ast-v8-to-istanbul":"0.3.4","debug":"4.4.1","istanbul-lib-coverage":"3.2.2","istanbul-lib-report":"3.0.1","istanbul-lib-source-maps":"5.0.6","istanbul-reports":"3.2.0","magic-string":"0.30.18","magicast":"0.3.5","std-env":"3.9.0","test-exclude":"7.0.1","tinyrainbow":"2.0.0","vitest":"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"},"optionalDependencies":{"@vitest/browser":"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)"},"transitivePeerDependencies":["supports-color"]},"@vitest/coverage-v8@4.1.5(@vitest/browser@4.1.5)(vitest@4.1.5)":{"dependencies":{"@bcoe/v8-coverage":"1.0.2","@vitest/utils":"4.1.5","ast-v8-to-istanbul":"1.0.0","istanbul-lib-coverage":"3.2.2","istanbul-lib-report":"3.0.1","istanbul-reports":"3.2.0","magicast":"0.5.2","obug":"2.1.1","std-env":"4.1.0","tinyrainbow":"3.1.0","vitest":"4.1.5(@opentelemetry/api@1.9.0)(@types/node@22.15.33)(@vitest/coverage-v8@4.1.5)(happy-dom@18.0.1)(jsdom@27.3.0(postcss@8.5.14))(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))"},"optionalDependencies":{"@vitest/browser":"4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@4.1.5)"}},"@vitest/expect@3.2.4":{"dependencies":{"@types/chai":"5.2.2","@vitest/spy":"3.2.4","@vitest/utils":"3.2.4","chai":"5.3.3","tinyrainbow":"2.0.0"}},"@vitest/expect@4.0.18":{"dependencies":{"@standard-schema/spec":"1.0.0","@types/chai":"5.2.3","@vitest/spy":"4.0.18","@vitest/utils":"4.0.18","chai":"6.2.2","tinyrainbow":"3.1.0"}},"@vitest/expect@4.1.5":{"dependencies":{"@standard-schema/spec":"1.1.0","@types/chai":"5.2.3","@vitest/spy":"4.1.5","@vitest/utils":"4.1.5","chai":"6.2.2","tinyrainbow":"3.1.0"}},"@vitest/mocker@3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))":{"dependencies":{"@vitest/spy":"3.2.4","estree-walker":"3.0.3","magic-string":"0.30.21"},"optionalDependencies":{"msw":"2.10.2(@types/node@24.10.2)(typescript@5.8.3)","vite":"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}},"@vitest/mocker@3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-
embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))":{"dependencies":{"@vitest/spy":"3.2.4","estree-walker":"3.0.3","magic-string":"0.30.21"},"optionalDependencies":{"msw":"2.10.2(@types/node@24.10.2)(typescript@5.9.3)","vite":"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}},"@vitest/mocker@4.0.18(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))":{"dependencies":{"@vitest/spy":"4.0.18","estree-walker":"3.0.3","magic-string":"0.30.21"},"optionalDependencies":{"msw":"2.10.2(@types/node@24.10.2)(typescript@5.9.3)","vite":"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}},"@vitest/mocker@4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))":{"dependencies":{"@vitest/spy":"4.1.5","estree-walker":"3.0.3","magic-string":"0.30.21"},"optionalDependencies":{"msw":"2.10.2(@types/node@22.15.33)(typescript@5.8.3)","vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}},"@vitest/pretty-format@3.2.4":{"dependencies":{"tinyrainbow":"2.0.0"}},"@vitest/pretty-format@4.0.18":{"dependencies":{"tinyrainbow":"3.1.0"}},"@vitest/pretty-format@4.1.5":{"dependencies":{"tinyrainbow":"3.1.0"}},"@vitest/runner@3.2.4":{"dependencies":{"@vitest/utils":"3.2.4","pathe":"2.0.3","strip-literal":"3.0.0"}},"@vitest/runner@4.0.18":{"dependencies":{"@vitest/utils":"4.0.18","pathe":"2.0.3"}},"@vitest/runner@4.1.5":{"dependencies":{"@vitest/utils":"4.1.5","pathe":"2.0.3"}},"@vitest/snapshot@3.2.4":{"dependencies":{"@vitest/pretty-format":"3.2.4","magic-string":"0.30.21","pathe":"2.0.3"}},"@vitest/snapshot@4.0.18":{"dependencies":{"@vitest/pretty-format":"4.0.18","magic-string":"0.30.21","pathe":"2.0.3"}},"@vitest/snapshot@4.1.5":{"dependencies":{"@vitest/pretty-format":"4.1.5","@vitest/utils":"4.1.5","magic-string":"0.30.21","pathe":"2.0.3"}},"@vitest/spy@3.2.4":{"dependencies":{"tinyspy":"4.0.3"}},"@vitest/spy@4.0.18":{},"@vitest/spy@4.1.5":{},"@vitest/utils@3.2.4":{"dependencies":{"@vitest/pretty-format":"3.2.4","loupe":"3.2.1","tinyrainbow":"2.0.0"}},"@vitest/utils@4.0.18":{"dependencies":{"@vitest/pretty-format":"4.0.18","tinyrainbow":"3.1.0"}},"@vitest/utils@4.1.5":{"dependencies":{"@vitest/pretty-format":"4.1.5","convert-source-map":"2.0.0","tinyrainbow":"3.1.0"}},"@wdio/config@9.1.3":{"dependencies":{"@wdio/logger":"9.1.3","@wdio/types":"9.1.3","@wdio/utils":"9.1.3","decamelize":"6.0.1","deepmerge-ts":"7.1.5","glob":"10.5.0","import-meta-resolve":"4.2.0"},"transitivePeerDependencies":["bare-abort-controller","bare-buffer","react-native-b4a","supports-color"],"optional":true},"@wdio/logger@8.38.0":{"dependencies":{"chalk":"5.6.2","loglevel":"1.9.2","loglevel-plugin-prefix":"0.8.4","strip-ansi":"7.2.0"},"optional":true},"@wdio/logger@9.1.3":{"dependencies":{"chalk":"5.6.2","loglevel":"1.9.2","loglevel-plugin-prefix":"0.8.4","strip-ansi":"7.2.0"},"optional":true},"@wdio/protocols@9.2.0":{"optional":true},"@wdio/repl@9.0.8":{"dependencies":{"@types/node":"20.19.39"},"optional":true},"@wdio/types@9.1.3":{"dependencies":{"@types/node":"20.19.39"},"optional":true},"@wdio/utils@9.1.3":{"dependencies":{"@puppeteer/browsers":"2.13.1","@wdio/logger":"9.1.3",
"@wdio/types":"9.1.3","decamelize":"6.0.1","deepmerge-ts":"7.1.5","edgedriver":"5.6.1","geckodriver":"4.5.1","get-port":"7.2.0","import-meta-resolve":"4.2.0","locate-app":"2.5.0","safaridriver":"0.1.2","split2":"4.2.0","wait-port":"1.1.0"},"transitivePeerDependencies":["bare-abort-controller","bare-buffer","react-native-b4a","supports-color"],"optional":true},"@webassemblyjs/ast@1.14.1":{"dependencies":{"@webassemblyjs/helper-numbers":"1.13.2","@webassemblyjs/helper-wasm-bytecode":"1.13.2"},"optional":true},"@webassemblyjs/floating-point-hex-parser@1.13.2":{"optional":true},"@webassemblyjs/helper-api-error@1.13.2":{"optional":true},"@webassemblyjs/helper-buffer@1.14.1":{"optional":true},"@webassemblyjs/helper-numbers@1.13.2":{"dependencies":{"@webassemblyjs/floating-point-hex-parser":"1.13.2","@webassemblyjs/helper-api-error":"1.13.2","@xtuc/long":"4.2.2"},"optional":true},"@webassemblyjs/helper-wasm-bytecode@1.13.2":{"optional":true},"@webassemblyjs/helper-wasm-section@1.14.1":{"dependencies":{"@webassemblyjs/ast":"1.14.1","@webassemblyjs/helper-buffer":"1.14.1","@webassemblyjs/helper-wasm-bytecode":"1.13.2","@webassemblyjs/wasm-gen":"1.14.1"},"optional":true},"@webassemblyjs/ieee754@1.13.2":{"dependencies":{"@xtuc/ieee754":"1.2.0"},"optional":true},"@webassemblyjs/leb128@1.13.2":{"dependencies":{"@xtuc/long":"4.2.2"},"optional":true},"@webassemblyjs/utf8@1.13.2":{"optional":true},"@webassemblyjs/wasm-edit@1.14.1":{"dependencies":{"@webassemblyjs/ast":"1.14.1","@webassemblyjs/helper-buffer":"1.14.1","@webassemblyjs/helper-wasm-bytecode":"1.13.2","@webassemblyjs/helper-wasm-section":"1.14.1","@webassemblyjs/wasm-gen":"1.14.1","@webassemblyjs/wasm-opt":"1.14.1","@webassemblyjs/wasm-parser":"1.14.1","@webassemblyjs/wast-printer":"1.14.1"},"optional":true},"@webassemblyjs/wasm-gen@1.14.1":{"dependencies":{"@webassemblyjs/ast":"1.14.1","@webassemblyjs/helper-wasm-bytecode":"1.13.2","@webassemblyjs/ieee754":"1.13.2","@webassemblyjs/leb128":"1.13.2","@webassemblyjs/utf8":"1.13.2"},"optional":true},"@webassemblyjs/wasm-opt@1.14.1":{"dependencies":{"@webassemblyjs/ast":"1.14.1","@webassemblyjs/helper-buffer":"1.14.1","@webassemblyjs/wasm-gen":"1.14.1","@webassemblyjs/wasm-parser":"1.14.1"},"optional":true},"@webassemblyjs/wasm-parser@1.14.1":{"dependencies":{"@webassemblyjs/ast":"1.14.1","@webassemblyjs/helper-api-error":"1.13.2","@webassemblyjs/helper-wasm-bytecode":"1.13.2","@webassemblyjs/ieee754":"1.13.2","@webassemblyjs/leb128":"1.13.2","@webassemblyjs/utf8":"1.13.2"},"optional":true},"@webassemblyjs/wast-printer@1.14.1":{"dependencies":{"@webassemblyjs/ast":"1.14.1","@xtuc/long":"4.2.2"},"optional":true},"@xtuc/ieee754@1.2.0":{"optional":true},"@xtuc/long@4.2.2":{"optional":true},"@yarnpkg/lockfile@1.1.0":{},"@yarnpkg/parsers@3.0.2":{"dependencies":{"js-yaml":"3.14.1","tslib":"2.8.1"}},"@zip.js/zip.js@2.8.26":{"optional":true},"@zkochan/js-yaml@0.0.7":{"dependencies":{"argparse":"2.0.1"}},"abort-controller@3.0.0":{"dependencies":{"event-target-shim":"5.0.1"},"optional":true},"acorn@8.16.0":{},"agent-base@7.1.3":{},"agent-base@7.1.4":{"optional":true},"ajv-formats@2.1.1(ajv@8.20.0)":{"optionalDependencies":{"ajv":"8.20.0"},"optional":true},"ajv-keywords@5.1.0(ajv@8.20.0)":{"dependencies":{"ajv":"8.20.0","fast-deep-equal":"3.1.3"},"optional":true},"ajv@8.17.1":{"dependencies":{"fast-deep-equal":"3.1.3","fast-uri":"3.0.3","json-schema-traverse":"1.0.0","require-from-string":"2.0.2"}},"ajv@8.20.0":{"dependencies":{"fast-deep-equal":"3.1.3","fast-uri":"3.1.2","json-schema-traverse":"1.0.0","requir
e-from-string":"2.0.2"},"optional":true},"ansi-colors@4.1.3":{},"ansi-regex@5.0.1":{},"ansi-regex@6.1.0":{},"ansi-regex@6.2.2":{"optional":true},"ansi-styles@4.3.0":{"dependencies":{"color-convert":"2.0.1"}},"ansi-styles@5.2.0":{},"ansi-styles@6.2.1":{},"ansis@4.1.0":{},"anymatch@3.1.3":{"dependencies":{"normalize-path":"3.0.0","picomatch":"2.3.1"}},"archiver-utils@5.0.2":{"dependencies":{"glob":"10.5.0","graceful-fs":"4.2.11","is-stream":"2.0.1","lazystream":"1.0.1","lodash":"4.18.1","normalize-path":"3.0.0","readable-stream":"4.7.0"},"optional":true},"archiver@7.0.1":{"dependencies":{"archiver-utils":"5.0.2","async":"3.2.6","buffer-crc32":"1.0.0","readable-stream":"4.7.0","readdir-glob":"1.1.3","tar-stream":"3.2.0","zip-stream":"6.0.1"},"transitivePeerDependencies":["bare-abort-controller","bare-buffer","react-native-b4a"],"optional":true},"argparse@1.0.10":{"dependencies":{"sprintf-js":"1.0.3"}},"argparse@2.0.1":{},"aria-query@5.3.0":{"dependencies":{"dequal":"2.0.3"}},"aria-query@5.3.2":{"optional":true},"array-union@2.1.0":{},"assertion-error@2.0.1":{},"ast-types@0.13.4":{"dependencies":{"tslib":"2.8.1"},"optional":true},"ast-v8-to-istanbul@0.3.4":{"dependencies":{"@jridgewell/trace-mapping":"0.3.30","estree-walker":"3.0.3","js-tokens":"9.0.1"}},"ast-v8-to-istanbul@1.0.0":{"dependencies":{"@jridgewell/trace-mapping":"0.3.31","estree-walker":"3.0.3","js-tokens":"10.0.0"}},"async@3.2.6":{"optional":true},"asynckit@0.4.0":{},"axios@1.11.0":{"dependencies":{"follow-redirects":"1.15.11","form-data":"4.0.4","proxy-from-env":"1.1.0"},"transitivePeerDependencies":["debug"]},"b4a@1.8.1":{"optional":true},"babel-dead-code-elimination@1.0.12":{"dependencies":{"@babel/core":"7.28.5","@babel/parser":"7.28.5","@babel/traverse":"7.28.5","@babel/types":"7.28.5"},"transitivePeerDependencies":["supports-color"]},"bail@2.0.2":{},"balanced-match@1.0.2":{},"bare-events@2.8.2":{"optional":true},"bare-fs@4.7.1":{"dependencies":{"bare-events":"2.8.2","bare-path":"3.0.0","bare-stream":"2.13.1(bare-events@2.8.2)","bare-url":"2.4.3","fast-fifo":"1.3.2"},"transitivePeerDependencies":["bare-abort-controller","react-native-b4a"],"optional":true},"bare-os@3.9.1":{"optional":true},"bare-path@3.0.0":{"dependencies":{"bare-os":"3.9.1"},"optional":true},"bare-stream@2.13.1(bare-events@2.8.2)":{"dependencies":{"streamx":"2.25.0","teex":"1.0.1"},"optionalDependencies":{"bare-events":"2.8.2"},"transitivePeerDependencies":["react-native-b4a"],"optional":true},"bare-url@2.4.3":{"dependencies":{"bare-path":"3.0.0"},"optional":true},"base64-js@1.5.1":{},"baseline-browser-mapping@2.10.27":{"optional":true},"basic-ftp@5.3.1":{"optional":true},"better-path-resolve@1.0.0":{"dependencies":{"is-windows":"1.0.2"}},"better-sqlite3@12.9.0":{"dependencies":{"bindings":"1.5.0","prebuild-install":"7.1.3"}},"bidi-js@1.0.3":{"dependencies":{"require-from-string":"2.0.2"}},"binary-extensions@2.3.0":{},"bindings@1.5.0":{"dependencies":{"file-uri-to-path":"1.0.0"}},"bl@4.1.0":{"dependencies":{"buffer":"5.7.1","inherits":"2.0.4","readable-stream":"3.6.2"}},"blake3-wasm@2.1.5":{},"boolbase@1.0.0":{},"brace-expansion@2.0.2":{"dependencies":{"balanced-match":"1.0.2"}},"brace-expansion@2.1.0":{"dependencies":{"balanced-match":"1.0.2"},"optional":true},"braces@3.0.3":{"dependencies":{"fill-range":"7.1.1"}},"browserslist@4.25.3":{"dependencies":{"caniuse-lite":"1.0.30001737","electron-to-chromium":"1.5.211","node-releases":"2.0.19","update-browserslist-db":"1.1.3(browserslist@4.25.3)"}},"browserslist@4.28.2":{"dependencies":{"baseline-browser-mapping
":"2.10.27","caniuse-lite":"1.0.30001792","electron-to-chromium":"1.5.352","node-releases":"2.0.38","update-browserslist-db":"1.2.3(browserslist@4.28.2)"},"optional":true},"buffer-builder@0.2.0":{"optional":true},"buffer-crc32@0.2.13":{"optional":true},"buffer-crc32@1.0.0":{"optional":true},"buffer-from@1.1.2":{"optional":true},"buffer@5.7.1":{"dependencies":{"base64-js":"1.5.1","ieee754":"1.2.1"}},"buffer@6.0.3":{"dependencies":{"base64-js":"1.5.1","ieee754":"1.2.1"},"optional":true},"cac@6.7.14":{},"call-bind-apply-helpers@1.0.2":{"dependencies":{"es-errors":"1.3.0","function-bind":"1.1.2"}},"caniuse-lite@1.0.30001737":{},"caniuse-lite@1.0.30001792":{"optional":true},"ccount@2.0.1":{},"chai@5.3.3":{"dependencies":{"assertion-error":"2.0.1","check-error":"2.1.1","deep-eql":"5.0.2","loupe":"3.2.1","pathval":"2.0.1"}},"chai@6.2.2":{},"chalk@4.1.2":{"dependencies":{"ansi-styles":"4.3.0","supports-color":"7.2.0"}},"chalk@5.6.2":{"optional":true},"character-entities-html4@2.1.0":{},"character-entities-legacy@3.0.0":{},"character-entities@2.0.2":{},"chardet@2.1.0":{},"check-error@2.1.1":{},"cheerio-select@2.1.0":{"dependencies":{"boolbase":"1.0.0","css-select":"5.1.0","css-what":"6.1.0","domelementtype":"2.3.0","domhandler":"5.0.3","domutils":"3.2.2"}},"cheerio@1.1.2":{"dependencies":{"cheerio-select":"2.1.0","dom-serializer":"2.0.0","domhandler":"5.0.3","domutils":"3.2.2","encoding-sniffer":"0.2.1","htmlparser2":"10.0.0","parse5":"7.3.0","parse5-htmlparser2-tree-adapter":"7.1.0","parse5-parser-stream":"7.1.2","undici":"7.16.0","whatwg-mimetype":"4.0.0"}},"cheerio@1.2.0":{"dependencies":{"cheerio-select":"2.1.0","dom-serializer":"2.0.0","domhandler":"5.0.3","domutils":"3.2.2","encoding-sniffer":"0.2.1","htmlparser2":"10.1.0","parse5":"7.3.0","parse5-htmlparser2-tree-adapter":"7.1.0","parse5-parser-stream":"7.1.2","undici":"7.25.0","whatwg-mimetype":"4.0.0"},"optional":true},"chevrotain-allstar@0.3.1(chevrotain@11.0.3)":{"dependencies":{"chevrotain":"11.0.3","lodash-es":"4.17.21"}},"chevrotain@11.0.3":{"dependencies":{"@chevrotain/cst-dts-gen":"11.0.3","@chevrotain/gast":"11.0.3","@chevrotain/regexp-to-ast":"11.0.3","@chevrotain/types":"11.0.3","@chevrotain/utils":"11.0.3","lodash-es":"4.17.21"}},"chokidar@3.6.0":{"dependencies":{"anymatch":"3.1.3","braces":"3.0.3","glob-parent":"5.1.2","is-binary-path":"2.1.0","is-glob":"4.0.3","normalize-path":"3.0.0","readdirp":"3.6.0"},"optionalDependencies":{"fsevents":"2.3.3"}},"chownr@1.1.4":{},"chownr@2.0.0":{},"chrome-trace-event@1.0.4":{"optional":true},"ci-info@3.9.0":{},"cli-cursor@3.1.0":{"dependencies":{"restore-cursor":"3.1.0"}},"cli-spinners@2.6.1":{},"cli-spinners@2.9.2":{},"cli-width@4.1.0":{"optional":true},"cliui@8.0.1":{"dependencies":{"string-width":"4.2.3","strip-ansi":"6.0.1","wrap-ansi":"7.0.0"}},"clone@1.0.4":{},"color-convert@2.0.1":{"dependencies":{"color-name":"1.1.4"}},"color-name@1.1.4":{},"colorjs.io@0.5.2":{"optional":true},"combined-stream@1.0.8":{"dependencies":{"delayed-stream":"1.0.0"}},"comma-separated-tokens@2.0.3":{},"commander@2.20.3":{"optional":true},"commander@7.2.0":{},"commander@8.3.0":{},"commander@9.5.0":{"optional":true},"compress-commons@6.0.2":{"dependencies":{"crc-32":"1.2.2","crc32-stream":"6.0.0","is-stream":"2.0.1","normalize-path":"3.0.0","readable-stream":"4.7.0"},"optional":true},"confbox@0.1.8":{},"confbox@0.2.2":{},"convert-source-map@2.0.0":{},"cookie-es@3.1.1":{},"cookie@0.7.2":{"optional":true},"cookie@1.0.2":{},"core-js@3.46.0":{},"core-util-is@1.0.3":{"optional":true},"cose-base@1.0.3":{"dependencie
s":{"layout-base":"1.0.2"}},"cose-base@2.2.0":{"dependencies":{"layout-base":"2.0.1"}},"crc-32@1.2.2":{"optional":true},"crc32-stream@6.0.0":{"dependencies":{"crc-32":"1.2.2","readable-stream":"4.7.0"},"optional":true},"cross-spawn@7.0.6":{"dependencies":{"path-key":"3.1.1","shebang-command":"2.0.0","which":"2.0.2"}},"css-select@5.1.0":{"dependencies":{"boolbase":"1.0.0","css-what":"6.1.0","domhandler":"5.0.3","domutils":"3.2.2","nth-check":"2.1.1"}},"css-shorthand-properties@1.1.2":{"optional":true},"css-tree@3.1.0":{"dependencies":{"mdn-data":"2.12.2","source-map-js":"1.2.1"}},"css-value@0.0.1":{"optional":true},"css-what@6.1.0":{},"cssstyle@4.3.1":{"dependencies":{"@asamuzakjp/css-color":"3.1.4","rrweb-cssom":"0.8.0"}},"cssstyle@5.3.4(postcss@8.5.14)":{"dependencies":{"@asamuzakjp/css-color":"4.1.0","@csstools/css-syntax-patches-for-csstree":"1.0.14(postcss@8.5.14)","css-tree":"3.1.0"},"transitivePeerDependencies":["postcss"]},"csstype@3.2.3":{},"cytoscape-cose-bilkent@4.1.0(cytoscape@3.30.4)":{"dependencies":{"cose-base":"1.0.3","cytoscape":"3.30.4"}},"cytoscape-fcose@2.2.0(cytoscape@3.30.4)":{"dependencies":{"cose-base":"2.2.0","cytoscape":"3.30.4"}},"cytoscape@3.30.4":{},"d3-array@2.12.1":{"dependencies":{"internmap":"1.0.1"}},"d3-array@3.2.4":{"dependencies":{"internmap":"2.0.3"}},"d3-axis@3.0.0":{},"d3-brush@3.0.0":{"dependencies":{"d3-dispatch":"3.0.1","d3-drag":"3.0.0","d3-interpolate":"3.0.1","d3-selection":"3.0.0","d3-transition":"3.0.1(d3-selection@3.0.0)"}},"d3-chord@3.0.1":{"dependencies":{"d3-path":"3.1.0"}},"d3-color@3.1.0":{},"d3-contour@4.0.2":{"dependencies":{"d3-array":"3.2.4"}},"d3-delaunay@6.0.4":{"dependencies":{"delaunator":"5.0.1"}},"d3-dispatch@3.0.1":{},"d3-drag@3.0.0":{"dependencies":{"d3-dispatch":"3.0.1","d3-selection":"3.0.0"}},"d3-dsv@3.0.1":{"dependencies":{"commander":"7.2.0","iconv-lite":"0.6.3","rw":"1.3.3"}},"d3-ease@3.0.1":{},"d3-fetch@3.0.1":{"dependencies":{"d3-dsv":"3.0.1"}},"d3-force@3.0.0":{"dependencies":{"d3-dispatch":"3.0.1","d3-quadtree":"3.0.1","d3-timer":"3.0.1"}},"d3-format@3.1.0":{},"d3-geo@3.1.1":{"dependencies":{"d3-array":"3.2.4"}},"d3-hierarchy@3.1.2":{},"d3-interpolate@3.0.1":{"dependencies":{"d3-color":"3.1.0"}},"d3-path@1.0.9":{},"d3-path@3.1.0":{},"d3-polygon@3.0.1":{},"d3-quadtree@3.0.1":{},"d3-random@3.0.1":{},"d3-sankey@0.12.3":{"dependencies":{"d3-array":"2.12.1","d3-shape":"1.3.7"}},"d3-scale-chromatic@3.1.0":{"dependencies":{"d3-color":"3.1.0","d3-interpolate":"3.0.1"}},"d3-scale@4.0.2":{"dependencies":{"d3-array":"3.2.4","d3-format":"3.1.0","d3-interpolate":"3.0.1","d3-time":"3.1.0","d3-time-format":"4.1.0"}},"d3-selection@3.0.0":{},"d3-shape@1.3.7":{"dependencies":{"d3-path":"1.0.9"}},"d3-shape@3.2.0":{"dependencies":{"d3-path":"3.1.0"}},"d3-time-format@4.1.0":{"dependencies":{"d3-time":"3.1.0"}},"d3-time@3.1.0":{"dependencies":{"d3-array":"3.2.4"}},"d3-timer@3.0.1":{},"d3-transition@3.0.1(d3-selection@3.0.0)":{"dependencies":{"d3-color":"3.1.0","d3-dispatch":"3.0.1","d3-ease":"3.0.1","d3-interpolate":"3.0.1","d3-selection":"3.0.0","d3-timer":"3.0.1"}},"d3-zoom@3.0.0":{"dependencies":{"d3-dispatch":"3.0.1","d3-drag":"3.0.0","d3-interpolate":"3.0.1","d3-selection":"3.0.0","d3-transition":"3.0.1(d3-selection@3.0.0)"}},"d3@7.9.0":{"dependencies":{"d3-array":"3.2.4","d3-axis":"3.0.0","d3-brush":"3.0.0","d3-chord":"3.0.1","d3-color":"3.1.0","d3-contour":"4.0.2","d3-delaunay":"6.0.4","d3-dispatch":"3.0.1","d3-drag":"3.0.0","d3-dsv":"3.0.1","d3-ease":"3.0.1","d3-fetch":"3.0.1","d3-force":"3.0.0","d3-format":"3.1.0","d3-geo":"3.1.1
","d3-hierarchy":"3.1.2","d3-interpolate":"3.0.1","d3-path":"3.1.0","d3-polygon":"3.0.1","d3-quadtree":"3.0.1","d3-random":"3.0.1","d3-scale":"4.0.2","d3-scale-chromatic":"3.1.0","d3-selection":"3.0.0","d3-shape":"3.2.0","d3-time":"3.1.0","d3-time-format":"4.1.0","d3-timer":"3.0.1","d3-transition":"3.0.1(d3-selection@3.0.0)","d3-zoom":"3.0.0"}},"dagre-d3-es@7.0.13":{"dependencies":{"d3":"7.9.0","lodash-es":"4.17.21"}},"data-uri-to-buffer@4.0.1":{"optional":true},"data-uri-to-buffer@6.0.2":{"optional":true},"data-urls@5.0.0":{"dependencies":{"whatwg-mimetype":"4.0.0","whatwg-url":"14.2.0"}},"data-urls@6.0.0":{"dependencies":{"whatwg-mimetype":"4.0.0","whatwg-url":"15.1.0"}},"dayjs@1.11.19":{},"debug@4.4.1":{"dependencies":{"ms":"2.1.3"}},"debug@4.4.3":{"dependencies":{"ms":"2.1.3"}},"decamelize@6.0.1":{"optional":true},"decimal.js@10.6.0":{},"decode-named-character-reference@1.0.2":{"dependencies":{"character-entities":"2.0.2"}},"decompress-response@6.0.0":{"dependencies":{"mimic-response":"3.1.0"}},"deep-eql@5.0.2":{},"deep-extend@0.6.0":{},"deepmerge-ts@7.1.5":{"optional":true},"defaults@1.0.4":{"dependencies":{"clone":"1.0.4"}},"define-lazy-prop@2.0.0":{},"degenerator@5.0.1":{"dependencies":{"ast-types":"0.13.4","escodegen":"2.1.0","esprima":"4.0.1"},"optional":true},"delaunator@5.0.1":{"dependencies":{"robust-predicates":"3.0.2"}},"delayed-stream@1.0.0":{},"dequal@2.0.3":{},"detect-indent@6.1.0":{},"detect-libc@2.1.2":{},"devlop@1.1.0":{"dependencies":{"dequal":"2.0.3"}},"diff@8.0.2":{},"dir-glob@3.0.1":{"dependencies":{"path-type":"4.0.0"}},"dom-accessibility-api@0.5.16":{},"dom-serializer@2.0.0":{"dependencies":{"domelementtype":"2.3.0","domhandler":"5.0.3","entities":"4.5.0"}},"domelementtype@2.3.0":{},"domhandler@5.0.3":{"dependencies":{"domelementtype":"2.3.0"}},"dompurify@3.3.1":{"optionalDependencies":{"@types/trusted-types":"2.0.7"}},"domutils@3.2.2":{"dependencies":{"dom-serializer":"2.0.0","domelementtype":"2.3.0","domhandler":"5.0.3"}},"dotenv-expand@11.0.7":{"dependencies":{"dotenv":"16.5.0"}},"dotenv@10.0.0":{},"dotenv@16.4.7":{},"dotenv@16.5.0":{},"dunder-proto@1.0.1":{"dependencies":{"call-bind-apply-helpers":"1.0.2","es-errors":"1.3.0","gopd":"1.2.0"}},"eastasianwidth@0.2.0":{},"edge-paths@3.0.5":{"dependencies":{"@types/which":"2.0.2","which":"2.0.2"},"optional":true},"edgedriver@5.6.1":{"dependencies":{"@wdio/logger":"8.38.0","@zip.js/zip.js":"2.8.26","decamelize":"6.0.1","edge-paths":"3.0.5","fast-xml-parser":"4.5.6","node-fetch":"3.3.2","which":"4.0.0"},"optional":true},"electron-to-chromium@1.5.211":{},"electron-to-chromium@1.5.352":{"optional":true},"emoji-regex@8.0.0":{},"emoji-regex@9.2.2":{},"encoding-sniffer@0.2.1":{"dependencies":{"iconv-lite":"0.6.3","whatwg-encoding":"3.1.1"}},"end-of-stream@1.4.5":{"dependencies":{"once":"1.4.0"}},"enhanced-resolve@5.21.0":{"dependencies":{"graceful-fs":"4.2.11","tapable":"2.3.3"}},"enquirer@2.3.6":{"dependencies":{"ansi-colors":"4.1.3"}},"enquirer@2.4.1":{"dependencies":{"ansi-colors":"4.1.3","strip-ansi":"6.0.1"}},"entities@4.5.0":{},"entities@6.0.1":{},"entities@7.0.1":{"optional":true},"error-stack-parser-es@1.0.5":{},"es-define-property@1.0.1":{},"es-errors@1.3.0":{},"es-module-lexer@1.7.0":{},"es-module-lexer@2.1.0":{},"es-object-atoms@1.1.1":{"dependencies":{"es-errors":"1.3.0"}},"es-set-tostringtag@2.1.0":{"dependencies":{"es-errors":"1.3.0","get-intrinsic":"1.3.0","has-tostringtag":"1.0.2","hasown":"2.0.2"}},"esbuild@0.25.12":{"optionalDependencies":{"@esbuild/aix-ppc64":"0.25.12","@esbuild/android-arm":"0.25.12","@
esbuild/android-arm64":"0.25.12","@esbuild/android-x64":"0.25.12","@esbuild/darwin-arm64":"0.25.12","@esbuild/darwin-x64":"0.25.12","@esbuild/freebsd-arm64":"0.25.12","@esbuild/freebsd-x64":"0.25.12","@esbuild/linux-arm":"0.25.12","@esbuild/linux-arm64":"0.25.12","@esbuild/linux-ia32":"0.25.12","@esbuild/linux-loong64":"0.25.12","@esbuild/linux-mips64el":"0.25.12","@esbuild/linux-ppc64":"0.25.12","@esbuild/linux-riscv64":"0.25.12","@esbuild/linux-s390x":"0.25.12","@esbuild/linux-x64":"0.25.12","@esbuild/netbsd-arm64":"0.25.12","@esbuild/netbsd-x64":"0.25.12","@esbuild/openbsd-arm64":"0.25.12","@esbuild/openbsd-x64":"0.25.12","@esbuild/openharmony-arm64":"0.25.12","@esbuild/sunos-x64":"0.25.12","@esbuild/win32-arm64":"0.25.12","@esbuild/win32-ia32":"0.25.12","@esbuild/win32-x64":"0.25.12"}},"esbuild@0.27.3":{"optionalDependencies":{"@esbuild/aix-ppc64":"0.27.3","@esbuild/android-arm":"0.27.3","@esbuild/android-arm64":"0.27.3","@esbuild/android-x64":"0.27.3","@esbuild/darwin-arm64":"0.27.3","@esbuild/darwin-x64":"0.27.3","@esbuild/freebsd-arm64":"0.27.3","@esbuild/freebsd-x64":"0.27.3","@esbuild/linux-arm":"0.27.3","@esbuild/linux-arm64":"0.27.3","@esbuild/linux-ia32":"0.27.3","@esbuild/linux-loong64":"0.27.3","@esbuild/linux-mips64el":"0.27.3","@esbuild/linux-ppc64":"0.27.3","@esbuild/linux-riscv64":"0.27.3","@esbuild/linux-s390x":"0.27.3","@esbuild/linux-x64":"0.27.3","@esbuild/netbsd-arm64":"0.27.3","@esbuild/netbsd-x64":"0.27.3","@esbuild/openbsd-arm64":"0.27.3","@esbuild/openbsd-x64":"0.27.3","@esbuild/openharmony-arm64":"0.27.3","@esbuild/sunos-x64":"0.27.3","@esbuild/win32-arm64":"0.27.3","@esbuild/win32-ia32":"0.27.3","@esbuild/win32-x64":"0.27.3"}},"escalade@3.2.0":{},"escape-string-regexp@1.0.5":{},"escape-string-regexp@5.0.0":{},"escodegen@2.1.0":{"dependencies":{"esprima":"4.0.1","estraverse":"5.3.0","esutils":"2.0.3"},"optionalDependencies":{"source-map":"0.6.1"},"optional":true},"eslint-scope@5.1.1":{"dependencies":{"esrecurse":"4.3.0","estraverse":"4.3.0"},"optional":true},"esprima@4.0.1":{},"esrecurse@4.3.0":{"dependencies":{"estraverse":"5.3.0"},"optional":true},"estraverse@4.3.0":{"optional":true},"estraverse@5.3.0":{"optional":true},"estree-walker@3.0.3":{"dependencies":{"@types/estree":"1.0.8"}},"esutils@2.0.3":{"optional":true},"event-target-shim@5.0.1":{"optional":true},"events-universal@1.0.1":{"dependencies":{"bare-events":"2.8.2"},"transitivePeerDependencies":["bare-abort-controller"],"optional":true},"events@3.3.0":{"optional":true},"expand-template@2.0.3":{},"expect-type@1.2.2":{},"expect-type@1.3.0":{},"exsolve@1.0.8":{},"extend@3.0.2":{},"extendable-error@0.1.7":{},"extract-zip@2.0.1":{"dependencies":{"debug":"4.4.3","get-stream":"5.2.0","yauzl":"2.10.0"},"optionalDependencies":{"@types/yauzl":"2.10.3"},"transitivePeerDependencies":["supports-color"],"optional":true},"fast-deep-equal@2.0.1":{"optional":true},"fast-deep-equal@3.1.3":{},"fast-fifo@1.3.2":{"optional":true},"fast-glob@3.3.3":{"dependencies":{"@nodelib/fs.stat":"2.0.5","@nodelib/fs.walk":"1.2.8","glob-parent":"5.1.2","merge2":"1.4.1","micromatch":"4.0.8"}},"fast-uri@3.0.3":{},"fast-uri@3.1.2":{"optional":true},"fast-xml-parser@4.5.6":{"dependencies":{"strnum":"1.1.2"},"optional":true},"fastq@1.17.1":{"dependencies":{"reusify":"1.0.4"}},"fault@2.0.1":{"dependencies":{"format":"0.2.2"}},"fd-slicer@1.1.0":{"dependencies":{"pend":"1.2.0"},"optional":true},"fdir@6.5.0(picomatch@4.0.4)":{"optionalDependencies":{"picomatch":"4.0.4"}},"fetch-blob@3.2.0":{"dependencies":{"node-domexception":"1.0.0","web-streams
-polyfill":"3.3.3"},"optional":true},"fetchdts@0.1.7":{},"fflate@0.4.8":{},"figures@3.2.0":{"dependencies":{"escape-string-regexp":"1.0.5"}},"file-uri-to-path@1.0.0":{},"fill-range@7.1.1":{"dependencies":{"to-regex-range":"5.0.1"}},"find-up@4.1.0":{"dependencies":{"locate-path":"5.0.0","path-exists":"4.0.0"}},"flat@5.0.2":{},"follow-redirects@1.15.11":{},"foreground-child@3.3.1":{"dependencies":{"cross-spawn":"7.0.6","signal-exit":"4.1.0"}},"form-data@4.0.4":{"dependencies":{"asynckit":"0.4.0","combined-stream":"1.0.8","es-set-tostringtag":"2.1.0","hasown":"2.0.2","mime-types":"2.1.35"}},"format@0.2.2":{},"formdata-polyfill@4.0.10":{"dependencies":{"fetch-blob":"3.2.0"},"optional":true},"front-matter@4.0.2":{"dependencies":{"js-yaml":"3.14.1"}},"fs-constants@1.0.0":{},"fs-extra@11.3.1":{"dependencies":{"graceful-fs":"4.2.11","jsonfile":"6.2.0","universalify":"2.0.1"}},"fs-extra@7.0.1":{"dependencies":{"graceful-fs":"4.2.11","jsonfile":"4.0.0","universalify":"0.1.2"}},"fs-extra@8.1.0":{"dependencies":{"graceful-fs":"4.2.11","jsonfile":"4.0.0","universalify":"0.1.2"}},"fs-minipass@2.1.0":{"dependencies":{"minipass":"3.3.6"}},"fsevents@2.3.2":{"optional":true},"fsevents@2.3.3":{"optional":true},"function-bind@1.1.2":{},"geckodriver@4.5.1":{"dependencies":{"@wdio/logger":"9.1.3","@zip.js/zip.js":"2.8.26","decamelize":"6.0.1","http-proxy-agent":"7.0.2","https-proxy-agent":"7.0.6","node-fetch":"3.3.2","tar-fs":"3.1.2","which":"4.0.0"},"transitivePeerDependencies":["bare-abort-controller","bare-buffer","react-native-b4a","supports-color"],"optional":true},"gensync@1.0.0-beta.2":{},"get-caller-file@2.0.5":{},"get-intrinsic@1.3.0":{"dependencies":{"call-bind-apply-helpers":"1.0.2","es-define-property":"1.0.1","es-errors":"1.3.0","es-object-atoms":"1.1.1","function-bind":"1.1.2","get-proto":"1.0.1","gopd":"1.2.0","has-symbols":"1.1.0","hasown":"2.0.2","math-intrinsics":"1.1.0"}},"get-port@7.2.0":{"optional":true},"get-proto@1.0.1":{"dependencies":{"dunder-proto":"1.0.1","es-object-atoms":"1.1.1"}},"get-stream@5.2.0":{"dependencies":{"pump":"3.0.4"},"optional":true},"get-tsconfig@4.14.0":{"dependencies":{"resolve-pkg-maps":"1.0.0"},"optional":true},"get-uri@6.0.5":{"dependencies":{"basic-ftp":"5.3.1","data-uri-to-buffer":"6.0.2","debug":"4.4.3"},"transitivePeerDependencies":["supports-color"],"optional":true},"github-from-package@0.0.0":{},"github-slugger@2.0.0":{},"glob-parent@5.1.2":{"dependencies":{"is-glob":"4.0.3"}},"glob-to-regexp@0.4.1":{"optional":true},"glob@10.4.5":{"dependencies":{"foreground-child":"3.3.1","jackspeak":"3.4.3","minimatch":"9.0.5","minipass":"7.1.2","package-json-from-dist":"1.0.1","path-scurry":"1.11.1"}},"glob@10.5.0":{"dependencies":{"foreground-child":"3.3.1","jackspeak":"3.4.3","minimatch":"9.0.9","minipass":"7.1.3","package-json-from-dist":"1.0.1","path-scurry":"1.11.1"},"optional":true},"globals@15.15.0":{},"globby@11.1.0":{"dependencies":{"array-union":"2.1.0","dir-glob":"3.0.1","fast-glob":"3.3.3","ignore":"5.3.2","merge2":"1.4.1","slash":"3.0.0"}},"gopd@1.2.0":{},"graceful-fs@4.2.11":{},"grapheme-splitter@1.0.4":{"optional":true},"graphql@16.14.0":{"optional":true},"h3@2.0.1-rc.20":{"dependencies":{"rou3":"0.8.1","srvx":"0.11.15"}},"hachure-fill@0.5.2":{},"happy-dom@18.0.1":{"dependencies":{"@types/node":"20.19.39","@types/whatwg-mimetype":"3.0.2","whatwg-mimetype":"3.0.0"},"optional":true},"has-flag@4.0.0":{},"has-symbols@1.1.0":{},"has-tostringtag@1.0.2":{"dependencies":{"has-symbols":"1.1.0"}},"hasown@2.0.2":{"dependencies":{"function-bind":"1.1.2"}},"hast-util-
embedded@3.0.0":{"dependencies":{"@types/hast":"3.0.4","hast-util-is-element":"3.0.0"}},"hast-util-from-html@2.0.3":{"dependencies":{"@types/hast":"3.0.4","devlop":"1.1.0","hast-util-from-parse5":"8.0.3","parse5":"7.3.0","vfile":"6.0.3","vfile-message":"4.0.2"}},"hast-util-from-parse5@8.0.3":{"dependencies":{"@types/hast":"3.0.4","@types/unist":"3.0.3","devlop":"1.1.0","hastscript":"9.0.1","property-information":"7.1.0","vfile":"6.0.3","vfile-location":"5.0.3","web-namespaces":"2.0.1"}},"hast-util-has-property@3.0.0":{"dependencies":{"@types/hast":"3.0.4"}},"hast-util-heading-rank@3.0.0":{"dependencies":{"@types/hast":"3.0.4"}},"hast-util-is-body-ok-link@3.0.1":{"dependencies":{"@types/hast":"3.0.4"}},"hast-util-is-element@3.0.0":{"dependencies":{"@types/hast":"3.0.4"}},"hast-util-minify-whitespace@1.0.1":{"dependencies":{"@types/hast":"3.0.4","hast-util-embedded":"3.0.0","hast-util-is-element":"3.0.0","hast-util-whitespace":"3.0.0","unist-util-is":"6.0.0"}},"hast-util-parse-selector@4.0.0":{"dependencies":{"@types/hast":"3.0.4"}},"hast-util-phrasing@3.0.1":{"dependencies":{"@types/hast":"3.0.4","hast-util-embedded":"3.0.0","hast-util-has-property":"3.0.0","hast-util-is-body-ok-link":"3.0.1","hast-util-is-element":"3.0.0"}},"hast-util-raw@9.1.0":{"dependencies":{"@types/hast":"3.0.4","@types/unist":"3.0.3","@ungap/structured-clone":"1.2.1","hast-util-from-parse5":"8.0.3","hast-util-to-parse5":"8.0.0","html-void-elements":"3.0.0","mdast-util-to-hast":"13.2.0","parse5":"7.3.0","unist-util-position":"5.0.0","unist-util-visit":"5.0.0","vfile":"6.0.3","web-namespaces":"2.0.1","zwitch":"2.0.4"}},"hast-util-sanitize@5.0.2":{"dependencies":{"@types/hast":"3.0.4","@ungap/structured-clone":"1.2.1","unist-util-position":"5.0.0"}},"hast-util-to-html@9.0.5":{"dependencies":{"@types/hast":"3.0.4","@types/unist":"3.0.3","ccount":"2.0.1","comma-separated-tokens":"2.0.3","hast-util-whitespace":"3.0.0","html-void-elements":"3.0.0","mdast-util-to-hast":"13.2.0","property-information":"7.1.0","space-separated-tokens":"2.0.2","stringify-entities":"4.0.4","zwitch":"2.0.4"}},"hast-util-to-mdast@10.1.2":{"dependencies":{"@types/hast":"3.0.4","@types/mdast":"4.0.4","@ungap/structured-clone":"1.2.1","hast-util-phrasing":"3.0.1","hast-util-to-html":"9.0.5","hast-util-to-text":"4.0.2","hast-util-whitespace":"3.0.0","mdast-util-phrasing":"4.1.0","mdast-util-to-hast":"13.2.0","mdast-util-to-string":"4.0.0","rehype-minify-whitespace":"6.0.2","trim-trailing-lines":"2.1.0","unist-util-position":"5.0.0","unist-util-visit":"5.0.0"}},"hast-util-to-parse5@8.0.0":{"dependencies":{"@types/hast":"3.0.4","comma-separated-tokens":"2.0.3","devlop":"1.1.0","property-information":"6.5.0","space-separated-tokens":"2.0.2","web-namespaces":"2.0.1","zwitch":"2.0.4"}},"hast-util-to-string@3.0.1":{"dependencies":{"@types/hast":"3.0.4"}},"hast-util-to-text@4.0.2":{"dependencies":{"@types/hast":"3.0.4","@types/unist":"3.0.3","hast-util-is-element":"3.0.0","unist-util-find-after":"5.0.0"}},"hast-util-whitespace@3.0.0":{"dependencies":{"@types/hast":"3.0.4"}},"hastscript@9.0.1":{"dependencies":{"@types/hast":"3.0.4","comma-separated-tokens":"2.0.3","hast-util-parse-selector":"4.0.0","property-information":"7.1.0","space-separated-tokens":"2.0.2"}},"headers-polyfill@4.0.3":{"optional":true},"highlight.js@11.11.1":{},"html-encoding-sniffer@4.0.0":{"dependencies":{"whatwg-encoding":"3.1.1"}},"html-escaper@2.0.2":{},"html-void-elements@3.0.0":{},"htmlfy@0.3.2":{"optional":true},"htmlparser2@10.0.0":{"dependencies":{"domelementtype":"2.3.0","domhand
ler":"5.0.3","domutils":"3.2.2","entities":"6.0.1"}},"htmlparser2@10.1.0":{"dependencies":{"domelementtype":"2.3.0","domhandler":"5.0.3","domutils":"3.2.2","entities":"7.0.1"},"optional":true},"http-proxy-agent@7.0.2":{"dependencies":{"agent-base":"7.1.3","debug":"4.4.3"},"transitivePeerDependencies":["supports-color"]},"https-proxy-agent@7.0.2":{"dependencies":{"agent-base":"7.1.3","debug":"4.4.3"},"transitivePeerDependencies":["supports-color"]},"https-proxy-agent@7.0.6":{"dependencies":{"agent-base":"7.1.3","debug":"4.4.3"},"transitivePeerDependencies":["supports-color"]},"human-id@4.1.1":{},"iconv-lite@0.6.3":{"dependencies":{"safer-buffer":"2.1.2"}},"ieee754@1.2.1":{},"ignore@5.3.2":{},"immediate@3.0.6":{"optional":true},"immutable@5.1.5":{"optional":true},"import-meta-resolve@4.2.0":{"optional":true},"inherits@2.0.4":{},"ini@1.3.8":{},"ini@4.1.3":{},"internmap@1.0.1":{},"internmap@2.0.3":{},"ip-address@10.2.0":{"optional":true},"is-binary-path@2.1.0":{"dependencies":{"binary-extensions":"2.3.0"}},"is-docker@2.2.1":{},"is-extglob@2.1.1":{},"is-fullwidth-code-point@3.0.0":{},"is-glob@4.0.3":{"dependencies":{"is-extglob":"2.1.1"}},"is-interactive@1.0.0":{},"is-node-process@1.2.0":{"optional":true},"is-number@7.0.0":{},"is-plain-obj@4.1.0":{},"is-potential-custom-element-name@1.0.1":{},"is-stream@2.0.1":{"optional":true},"is-subdir@1.2.0":{"dependencies":{"better-path-resolve":"1.0.0"}},"is-unicode-supported@0.1.0":{},"is-windows@1.0.2":{},"is-wsl@2.2.0":{"dependencies":{"is-docker":"2.2.1"}},"isarray@1.0.0":{"optional":true},"isbot@5.1.28":{},"isexe@2.0.0":{},"isexe@3.1.5":{"optional":true},"istanbul-lib-coverage@3.2.2":{},"istanbul-lib-report@3.0.1":{"dependencies":{"istanbul-lib-coverage":"3.2.2","make-dir":"4.0.0","supports-color":"7.2.0"}},"istanbul-lib-source-maps@5.0.6":{"dependencies":{"@jridgewell/trace-mapping":"0.3.31","debug":"4.4.3","istanbul-lib-coverage":"3.2.2"},"transitivePeerDependencies":["supports-color"]},"istanbul-reports@3.2.0":{"dependencies":{"html-escaper":"2.0.2","istanbul-lib-report":"3.0.1"}},"jackspeak@3.4.3":{"dependencies":{"@isaacs/cliui":"8.0.2"},"optionalDependencies":{"@pkgjs/parseargs":"0.11.0"}},"jest-diff@30.1.1":{"dependencies":{"@jest/diff-sequences":"30.0.1","@jest/get-type":"30.1.0","chalk":"4.1.2","pretty-format":"30.0.5"}},"jest-worker@27.5.1":{"dependencies":{"@types/node":"22.19.17","merge-stream":"2.0.0","supports-color":"8.1.1"},"optional":true},"jiti@2.6.1":{},"js-tokens@10.0.0":{},"js-tokens@4.0.0":{},"js-tokens@9.0.1":{},"js-yaml@3.14.1":{"dependencies":{"argparse":"1.0.10","esprima":"4.0.1"}},"js-yaml@4.1.1":{"dependencies":{"argparse":"2.0.1"}},"jsdom@26.1.0":{"dependencies":{"cssstyle":"4.3.1","data-urls":"5.0.0","decimal.js":"10.6.0","html-encoding-sniffer":"4.0.0","http-proxy-agent":"7.0.2","https-proxy-agent":"7.0.6","is-potential-custom-element-name":"1.0.1","nwsapi":"2.2.20","parse5":"7.3.0","rrweb-cssom":"0.8.0","saxes":"6.0.0","symbol-tree":"3.2.4","tough-cookie":"5.1.2","w3c-xmlserializer":"5.0.0","webidl-conversions":"7.0.0","whatwg-encoding":"3.1.1","whatwg-mimetype":"4.0.0","whatwg-url":"14.2.0","ws":"8.18.3","xml-name-validator":"5.0.0"},"transitivePeerDependencies":["bufferutil","supports-color","utf-8-validate"]},"jsdom@27.3.0(postcss@8.5.14)":{"dependencies":{"@acemir/cssom":"0.9.28","@asamuzakjp/dom-selector":"6.7.6","cssstyle":"5.3.4(postcss@8.5.14)","data-urls":"6.0.0","decimal.js":"10.6.0","html-encoding-sniffer":"4.0.0","http-proxy-agent":"7.0.2","https-proxy-agent":"7.0.6","is-potential-custom-element-name":"1.0.1
","parse5":"8.0.0","saxes":"6.0.0","symbol-tree":"3.2.4","tough-cookie":"6.0.0","w3c-xmlserializer":"5.0.0","webidl-conversions":"8.0.0","whatwg-encoding":"3.1.1","whatwg-mimetype":"4.0.0","whatwg-url":"15.1.0","ws":"8.18.3","xml-name-validator":"5.0.0"},"transitivePeerDependencies":["bufferutil","postcss","supports-color","utf-8-validate"]},"jsesc@3.1.0":{},"json-parse-even-better-errors@2.3.1":{"optional":true},"json-schema-to-ts@3.1.1":{"dependencies":{"@babel/runtime":"7.28.4","ts-algebra":"2.0.0"}},"json-schema-traverse@1.0.0":{},"json5@2.2.3":{},"jsonc-parser@3.2.0":{},"jsonfile@4.0.0":{"optionalDependencies":{"graceful-fs":"4.2.11"}},"jsonfile@6.2.0":{"dependencies":{"universalify":"2.0.1"},"optionalDependencies":{"graceful-fs":"4.2.11"}},"jszip@3.10.1":{"dependencies":{"lie":"3.3.0","pako":"1.0.11","readable-stream":"2.3.8","setimmediate":"1.0.5"},"optional":true},"katex@0.16.22":{"dependencies":{"commander":"8.3.0"}},"khroma@2.1.0":{},"kleur@4.1.5":{},"kolorist@1.8.0":{},"kysely@0.28.7":{},"langium@3.3.1":{"dependencies":{"chevrotain":"11.0.3","chevrotain-allstar":"0.3.1(chevrotain@11.0.3)","vscode-languageserver":"9.0.1","vscode-languageserver-textdocument":"1.0.12","vscode-uri":"3.0.8"}},"layout-base@1.0.2":{},"layout-base@2.0.1":{},"lazystream@1.0.1":{"dependencies":{"readable-stream":"2.3.8"},"optional":true},"lie@3.3.0":{"dependencies":{"immediate":"3.0.6"},"optional":true},"lightningcss-android-arm64@1.32.0":{"optional":true},"lightningcss-darwin-arm64@1.32.0":{"optional":true},"lightningcss-darwin-x64@1.32.0":{"optional":true},"lightningcss-freebsd-x64@1.32.0":{"optional":true},"lightningcss-linux-arm-gnueabihf@1.32.0":{"optional":true},"lightningcss-linux-arm64-gnu@1.32.0":{"optional":true},"lightningcss-linux-arm64-musl@1.32.0":{"optional":true},"lightningcss-linux-x64-gnu@1.32.0":{"optional":true},"lightningcss-linux-x64-musl@1.32.0":{"optional":true},"lightningcss-win32-arm64-msvc@1.32.0":{"optional":true},"lightningcss-win32-x64-msvc@1.32.0":{"optional":true},"lightningcss@1.32.0":{"dependencies":{"detect-libc":"2.1.2"},"optionalDependencies":{"lightningcss-android-arm64":"1.32.0","lightningcss-darwin-arm64":"1.32.0","lightningcss-darwin-x64":"1.32.0","lightningcss-freebsd-x64":"1.32.0","lightningcss-linux-arm-gnueabihf":"1.32.0","lightningcss-linux-arm64-gnu":"1.32.0","lightningcss-linux-arm64-musl":"1.32.0","lightningcss-linux-x64-gnu":"1.32.0","lightningcss-linux-x64-musl":"1.32.0","lightningcss-win32-arm64-msvc":"1.32.0","lightningcss-win32-x64-msvc":"1.32.0"}},"lines-and-columns@2.0.3":{},"loader-runner@4.3.2":{"optional":true},"local-pkg@1.1.2":{"dependencies":{"mlly":"1.8.0","pkg-types":"2.3.0","quansync":"0.2.11"}},"locate-app@2.5.0":{"dependencies":{"@promptbook/utils":"0.69.5","type-fest":"4.26.0","userhome":"1.0.1"},"optional":true},"locate-path@5.0.0":{"dependencies":{"p-locate":"4.1.0"}},"lodash-es@4.17.21":{},"lodash.clonedeep@4.5.0":{"optional":true},"lodash.startcase@4.4.0":{},"lodash.zip@4.2.0":{"optional":true},"lodash@4.18.1":{"optional":true},"log-symbols@4.1.0":{"dependencies":{"chalk":"4.1.2","is-unicode-supported":"0.1.0"}},"loglevel-plugin-prefix@0.8.4":{"optional":true},"loglevel@1.9.2":{"optional":true},"long@5.3.2":{},"longest-streak@3.1.0":{},"loupe@3.2.1":{},"lowlight@3.3.0":{"dependencies":{"@types/hast":"3.0.4","devlop":"1.1.0","highlight.js":"11.11.1"}},"lru-cache@10.4.3":{},"lru-cache@11.2.4":{},"lru-cache@5.1.1":{"dependencies":{"yallist":"3.1.1"}},"lru-cache@7.18.3":{"optional":true},"lucide-react@0.544.0(react@19.2.0)":{"dependencies"
:{"react":"19.2.0"}},"lz-string@1.5.0":{},"magic-string@0.30.18":{"dependencies":{"@jridgewell/sourcemap-codec":"1.5.5"}},"magic-string@0.30.21":{"dependencies":{"@jridgewell/sourcemap-codec":"1.5.5"}},"magicast@0.3.5":{"dependencies":{"@babel/parser":"7.28.5","@babel/types":"7.28.5","source-map-js":"1.2.1"}},"magicast@0.5.2":{"dependencies":{"@babel/parser":"7.29.3","@babel/types":"7.29.0","source-map-js":"1.2.1"}},"make-dir@4.0.0":{"dependencies":{"semver":"7.7.3"}},"markdown-table@3.0.4":{},"marked@16.4.2":{},"math-intrinsics@1.1.0":{},"mdast-util-find-and-replace@3.0.2":{"dependencies":{"@types/mdast":"4.0.4","escape-string-regexp":"5.0.0","unist-util-is":"6.0.0","unist-util-visit-parents":"6.0.1"}},"mdast-util-from-markdown@2.0.2":{"dependencies":{"@types/mdast":"4.0.4","@types/unist":"3.0.3","decode-named-character-reference":"1.0.2","devlop":"1.1.0","mdast-util-to-string":"4.0.0","micromark":"4.0.1","micromark-util-decode-numeric-character-reference":"2.0.2","micromark-util-decode-string":"2.0.1","micromark-util-normalize-identifier":"2.0.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1","unist-util-stringify-position":"4.0.0"},"transitivePeerDependencies":["supports-color"]},"mdast-util-frontmatter@2.0.1":{"dependencies":{"@types/mdast":"4.0.4","devlop":"1.1.0","escape-string-regexp":"5.0.0","mdast-util-from-markdown":"2.0.2","mdast-util-to-markdown":"2.1.2","micromark-extension-frontmatter":"2.0.0"},"transitivePeerDependencies":["supports-color"]},"mdast-util-gfm-autolink-literal@2.0.1":{"dependencies":{"@types/mdast":"4.0.4","ccount":"2.0.1","devlop":"1.1.0","mdast-util-find-and-replace":"3.0.2","micromark-util-character":"2.1.1"}},"mdast-util-gfm-footnote@2.0.0":{"dependencies":{"@types/mdast":"4.0.4","devlop":"1.1.0","mdast-util-from-markdown":"2.0.2","mdast-util-to-markdown":"2.1.2","micromark-util-normalize-identifier":"2.0.1"},"transitivePeerDependencies":["supports-color"]},"mdast-util-gfm-strikethrough@2.0.0":{"dependencies":{"@types/mdast":"4.0.4","mdast-util-from-markdown":"2.0.2","mdast-util-to-markdown":"2.1.2"},"transitivePeerDependencies":["supports-color"]},"mdast-util-gfm-table@2.0.0":{"dependencies":{"@types/mdast":"4.0.4","devlop":"1.1.0","markdown-table":"3.0.4","mdast-util-from-markdown":"2.0.2","mdast-util-to-markdown":"2.1.2"},"transitivePeerDependencies":["supports-color"]},"mdast-util-gfm-task-list-item@2.0.0":{"dependencies":{"@types/mdast":"4.0.4","devlop":"1.1.0","mdast-util-from-markdown":"2.0.2","mdast-util-to-markdown":"2.1.2"},"transitivePeerDependencies":["supports-color"]},"mdast-util-gfm@3.0.0":{"dependencies":{"mdast-util-from-markdown":"2.0.2","mdast-util-gfm-autolink-literal":"2.0.1","mdast-util-gfm-footnote":"2.0.0","mdast-util-gfm-strikethrough":"2.0.0","mdast-util-gfm-table":"2.0.0","mdast-util-gfm-task-list-item":"2.0.0","mdast-util-to-markdown":"2.1.2"},"transitivePeerDependencies":["supports-color"]},"mdast-util-phrasing@4.1.0":{"dependencies":{"@types/mdast":"4.0.4","unist-util-is":"6.0.0"}},"mdast-util-to-hast@13.2.0":{"dependencies":{"@types/hast":"3.0.4","@types/mdast":"4.0.4","@ungap/structured-clone":"1.2.1","devlop":"1.1.0","micromark-util-sanitize-uri":"2.0.1","trim-lines":"3.0.1","unist-util-position":"5.0.0","unist-util-visit":"5.0.0","vfile":"6.0.3"}},"mdast-util-to-markdown@2.1.2":{"dependencies":{"@types/mdast":"4.0.4","@types/unist":"3.0.3","longest-streak":"3.1.0","mdast-util-phrasing":"4.1.0","mdast-util-to-string":"4.0.0","micromark-util-classify-character":"2.0.1","micromark-util-decode-string":"2.0.1","un
ist-util-visit":"5.0.0","zwitch":"2.0.4"}},"mdast-util-to-string@4.0.0":{"dependencies":{"@types/mdast":"4.0.4"}},"mdn-data@2.12.2":{},"merge-stream@2.0.0":{"optional":true},"merge2@1.4.1":{},"mermaid@11.12.1":{"dependencies":{"@braintree/sanitize-url":"7.1.1","@iconify/utils":"3.0.2","@mermaid-js/parser":"0.6.3","@types/d3":"7.4.3","cytoscape":"3.30.4","cytoscape-cose-bilkent":"4.1.0(cytoscape@3.30.4)","cytoscape-fcose":"2.2.0(cytoscape@3.30.4)","d3":"7.9.0","d3-sankey":"0.12.3","dagre-d3-es":"7.0.13","dayjs":"1.11.19","dompurify":"3.3.1","katex":"0.16.22","khroma":"2.1.0","lodash-es":"4.17.21","marked":"16.4.2","roughjs":"4.6.6","stylis":"4.3.6","ts-dedent":"2.2.0","uuid":"11.1.0"},"transitivePeerDependencies":["supports-color"]},"micromark-core-commonmark@2.0.2":{"dependencies":{"decode-named-character-reference":"1.0.2","devlop":"1.1.0","micromark-factory-destination":"2.0.1","micromark-factory-label":"2.0.1","micromark-factory-space":"2.0.1","micromark-factory-title":"2.0.1","micromark-factory-whitespace":"2.0.1","micromark-util-character":"2.1.1","micromark-util-chunked":"2.0.1","micromark-util-classify-character":"2.0.1","micromark-util-html-tag-name":"2.0.1","micromark-util-normalize-identifier":"2.0.1","micromark-util-resolve-all":"2.0.1","micromark-util-subtokenize":"2.0.3","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-extension-frontmatter@2.0.0":{"dependencies":{"fault":"2.0.1","micromark-util-character":"2.1.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-extension-gfm-autolink-literal@2.1.0":{"dependencies":{"micromark-util-character":"2.1.1","micromark-util-sanitize-uri":"2.0.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-extension-gfm-footnote@2.1.0":{"dependencies":{"devlop":"1.1.0","micromark-core-commonmark":"2.0.2","micromark-factory-space":"2.0.1","micromark-util-character":"2.1.1","micromark-util-normalize-identifier":"2.0.1","micromark-util-sanitize-uri":"2.0.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-extension-gfm-strikethrough@2.1.0":{"dependencies":{"devlop":"1.1.0","micromark-util-chunked":"2.0.1","micromark-util-classify-character":"2.0.1","micromark-util-resolve-all":"2.0.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-extension-gfm-table@2.1.1":{"dependencies":{"devlop":"1.1.0","micromark-factory-space":"2.0.1","micromark-util-character":"2.1.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-extension-gfm-tagfilter@2.0.0":{"dependencies":{"micromark-util-types":"2.0.1"}},"micromark-extension-gfm-task-list-item@2.1.0":{"dependencies":{"devlop":"1.1.0","micromark-factory-space":"2.0.1","micromark-util-character":"2.1.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-extension-gfm@3.0.0":{"dependencies":{"micromark-extension-gfm-autolink-literal":"2.1.0","micromark-extension-gfm-footnote":"2.1.0","micromark-extension-gfm-strikethrough":"2.1.0","micromark-extension-gfm-table":"2.1.1","micromark-extension-gfm-tagfilter":"2.0.0","micromark-extension-gfm-task-list-item":"2.1.0","micromark-util-combine-extensions":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-factory-destination@2.0.1":{"dependencies":{"micromark-util-character":"2.1.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-factory-label@2.0.1":{"dependencies":{"devlop":"1.1.0","micromark-util-character":"2.1.1","micromark-util-symbol":"2.0.1","micromark-util-ty
pes":"2.0.1"}},"micromark-factory-space@2.0.1":{"dependencies":{"micromark-util-character":"2.1.1","micromark-util-types":"2.0.1"}},"micromark-factory-title@2.0.1":{"dependencies":{"micromark-factory-space":"2.0.1","micromark-util-character":"2.1.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-factory-whitespace@2.0.1":{"dependencies":{"micromark-factory-space":"2.0.1","micromark-util-character":"2.1.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-util-character@2.1.1":{"dependencies":{"micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-util-chunked@2.0.1":{"dependencies":{"micromark-util-symbol":"2.0.1"}},"micromark-util-classify-character@2.0.1":{"dependencies":{"micromark-util-character":"2.1.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-util-combine-extensions@2.0.1":{"dependencies":{"micromark-util-chunked":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-util-decode-numeric-character-reference@2.0.2":{"dependencies":{"micromark-util-symbol":"2.0.1"}},"micromark-util-decode-string@2.0.1":{"dependencies":{"decode-named-character-reference":"1.0.2","micromark-util-character":"2.1.1","micromark-util-decode-numeric-character-reference":"2.0.2","micromark-util-symbol":"2.0.1"}},"micromark-util-encode@2.0.1":{},"micromark-util-html-tag-name@2.0.1":{},"micromark-util-normalize-identifier@2.0.1":{"dependencies":{"micromark-util-symbol":"2.0.1"}},"micromark-util-resolve-all@2.0.1":{"dependencies":{"micromark-util-types":"2.0.1"}},"micromark-util-sanitize-uri@2.0.1":{"dependencies":{"micromark-util-character":"2.1.1","micromark-util-encode":"2.0.1","micromark-util-symbol":"2.0.1"}},"micromark-util-subtokenize@2.0.3":{"dependencies":{"devlop":"1.1.0","micromark-util-chunked":"2.0.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-util-symbol@2.0.1":{},"micromark-util-types@2.0.1":{},"micromark@4.0.1":{"dependencies":{"@types/debug":"4.1.12","debug":"4.4.3","decode-named-character-reference":"1.0.2","devlop":"1.1.0","micromark-core-commonmark":"2.0.2","micromark-factory-space":"2.0.1","micromark-util-character":"2.1.1","micromark-util-chunked":"2.0.1","micromark-util-combine-extensions":"2.0.1","micromark-util-decode-numeric-character-reference":"2.0.2","micromark-util-encode":"2.0.1","micromark-util-normalize-identifier":"2.0.1","micromark-util-resolve-all":"2.0.1","micromark-util-sanitize-uri":"2.0.1","micromark-util-subtokenize":"2.0.3","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"},"transitivePeerDependencies":["supports-color"]},"micromatch@4.0.8":{"dependencies":{"braces":"3.0.3","picomatch":"2.3.1"}},"mime-db@1.52.0":{},"mime-types@2.1.35":{"dependencies":{"mime-db":"1.52.0"}},"mimic-fn@2.1.0":{},"mimic-response@3.1.0":{},"miniflare@4.20260504.0":{"dependencies":{"@cspotcode/source-map-support":"0.8.1","sharp":"0.34.5","undici":"7.24.8","workerd":"1.20260504.1","ws":"8.18.0","youch":"4.1.0-beta.10"},"transitivePeerDependencies":["bufferutil","utf-8-validate"]},"minimatch@5.1.9":{"dependencies":{"brace-expansion":"2.1.0"},"optional":true},"minimatch@9.0.3":{"dependencies":{"brace-expansion":"2.0.2"}},"minimatch@9.0.5":{"dependencies":{"brace-expansion":"2.0.2"}},"minimatch@9.0.9":{"dependencies":{"brace-expansion":"2.1.0"},"optional":true},"minimist@1.2.8":{},"minipass@3.3.6":{"dependencies":{"yallist":"4.0.0"}},"minipass@5.0.0":{},"minipass@7.1.2":{},"minipass@7.1.3":{"optional":true},"minizlib@2.1.2":{"dependencies":{"minipas
s":"3.3.6","yallist":"4.0.0"}},"mkdirp-classic@0.5.3":{},"mkdirp@1.0.4":{},"mlly@1.8.0":{"dependencies":{"acorn":"8.16.0","pathe":"2.0.3","pkg-types":"1.3.1","ufo":"1.6.1"}},"mri@1.2.0":{},"mrmime@2.0.1":{},"ms@2.1.3":{},"msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3)":{"dependencies":{"@bundled-es-modules/cookie":"2.0.1","@bundled-es-modules/statuses":"1.0.1","@bundled-es-modules/tough-cookie":"0.1.6","@inquirer/confirm":"5.1.21(@types/node@22.15.33)","@mswjs/interceptors":"0.39.8","@open-draft/deferred-promise":"2.2.0","@open-draft/until":"2.1.0","@types/cookie":"0.6.0","@types/statuses":"2.0.6","graphql":"16.14.0","headers-polyfill":"4.0.3","is-node-process":"1.2.0","outvariant":"1.4.3","path-to-regexp":"6.3.0","picocolors":"1.1.1","strict-event-emitter":"0.5.1","type-fest":"4.41.0","yargs":"17.7.2"},"optionalDependencies":{"typescript":"5.8.3"},"transitivePeerDependencies":["@types/node"],"optional":true},"msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3)":{"dependencies":{"@bundled-es-modules/cookie":"2.0.1","@bundled-es-modules/statuses":"1.0.1","@bundled-es-modules/tough-cookie":"0.1.6","@inquirer/confirm":"5.1.21(@types/node@24.10.2)","@mswjs/interceptors":"0.39.8","@open-draft/deferred-promise":"2.2.0","@open-draft/until":"2.1.0","@types/cookie":"0.6.0","@types/statuses":"2.0.6","graphql":"16.14.0","headers-polyfill":"4.0.3","is-node-process":"1.2.0","outvariant":"1.4.3","path-to-regexp":"6.3.0","picocolors":"1.1.1","strict-event-emitter":"0.5.1","type-fest":"4.41.0","yargs":"17.7.2"},"optionalDependencies":{"typescript":"5.8.3"},"transitivePeerDependencies":["@types/node"],"optional":true},"msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3)":{"dependencies":{"@bundled-es-modules/cookie":"2.0.1","@bundled-es-modules/statuses":"1.0.1","@bundled-es-modules/tough-cookie":"0.1.6","@inquirer/confirm":"5.1.21(@types/node@24.10.2)","@mswjs/interceptors":"0.39.8","@open-draft/deferred-promise":"2.2.0","@open-draft/until":"2.1.0","@types/cookie":"0.6.0","@types/statuses":"2.0.6","graphql":"16.14.0","headers-polyfill":"4.0.3","is-node-process":"1.2.0","outvariant":"1.4.3","path-to-regexp":"6.3.0","picocolors":"1.1.1","strict-event-emitter":"0.5.1","type-fest":"4.41.0","yargs":"17.7.2"},"optionalDependencies":{"typescript":"5.9.3"},"transitivePeerDependencies":["@types/node"],"optional":true},"mute-stream@2.0.0":{"optional":true},"nanoid@3.3.11":{},"napi-build-utils@2.0.0":{},"neo-async@2.6.2":{"optional":true},"netmask@2.1.1":{"optional":true},"node-abi@3.89.0":{"dependencies":{"semver":"7.7.3"}},"node-domexception@1.0.0":{"optional":true},"node-fetch@3.3.2":{"dependencies":{"data-uri-to-buffer":"4.0.1","fetch-blob":"3.2.0","formdata-polyfill":"4.0.10"},"optional":true},"node-machine-id@1.1.12":{},"node-releases@2.0.19":{},"node-releases@2.0.38":{"optional":true},"normalize-path@3.0.0":{},"npm-run-path@4.0.1":{"dependencies":{"path-key":"3.1.1"}},"nth-check@2.1.1":{"dependencies":{"boolbase":"1.0.0"}},"nwsapi@2.2.20":{},"nx-cloud@19.1.0":{"dependencies":{"@nrwl/nx-cloud":"19.1.0","axios":"1.11.0","chalk":"4.1.2","dotenv":"10.0.0","fs-extra":"11.3.1","ini":"4.1.3","node-machine-id":"1.1.12","open":"8.4.2","tar":"6.2.1","yargs-parser":"22.0.0"},"transitivePeerDependencies":["debug"]},"nx@21.4.1":{"dependencies":{"@napi-rs/wasm-runtime":"0.2.4","@yarnpkg/lockfile":"1.1.0","@yarnpkg/parsers":"3.0.2","@zkochan/js-yaml":"0.0.7","axios":"1.11.0","chalk":"4.1.2","cli-cursor":"3.1.0","cli-spinners":"2.6.1","cliui":"8.0.1","dotenv":"16.4.7","dotenv-expand":"11.0.7","enquirer":"2.3.6","figures":"3.2
.0","flat":"5.0.2","front-matter":"4.0.2","ignore":"5.3.2","jest-diff":"30.1.1","jsonc-parser":"3.2.0","lines-and-columns":"2.0.3","minimatch":"9.0.3","node-machine-id":"1.1.12","npm-run-path":"4.0.1","open":"8.4.2","ora":"5.3.0","resolve.exports":"2.0.3","semver":"7.7.2","string-width":"4.2.3","tar-stream":"2.2.0","tmp":"0.2.5","tree-kill":"1.2.2","tsconfig-paths":"4.2.0","tslib":"2.8.1","yaml":"2.8.1","yargs":"17.7.2","yargs-parser":"21.1.1"},"optionalDependencies":{"@nx/nx-darwin-arm64":"21.4.1","@nx/nx-darwin-x64":"21.4.1","@nx/nx-freebsd-x64":"21.4.1","@nx/nx-linux-arm-gnueabihf":"21.4.1","@nx/nx-linux-arm64-gnu":"21.4.1","@nx/nx-linux-arm64-musl":"21.4.1","@nx/nx-linux-x64-gnu":"21.4.1","@nx/nx-linux-x64-musl":"21.4.1","@nx/nx-win32-arm64-msvc":"21.4.1","@nx/nx-win32-x64-msvc":"21.4.1"},"transitivePeerDependencies":["debug"]},"obug@2.1.1":{},"once@1.4.0":{"dependencies":{"wrappy":"1.0.2"}},"onetime@5.1.2":{"dependencies":{"mimic-fn":"2.1.0"}},"oniguruma-parser@0.12.1":{},"oniguruma-to-es@4.3.3":{"dependencies":{"oniguruma-parser":"0.12.1","regex":"6.0.1","regex-recursion":"6.0.2"}},"open@8.4.2":{"dependencies":{"define-lazy-prop":"2.0.0","is-docker":"2.2.1","is-wsl":"2.2.0"}},"ora@5.3.0":{"dependencies":{"bl":"4.1.0","chalk":"4.1.2","cli-cursor":"3.1.0","cli-spinners":"2.9.2","is-interactive":"1.0.0","log-symbols":"4.1.0","strip-ansi":"6.0.1","wcwidth":"1.0.1"}},"outdent@0.5.0":{},"outvariant@1.4.3":{"optional":true},"oxlint@1.26.0":{"optionalDependencies":{"@oxlint/darwin-arm64":"1.26.0","@oxlint/darwin-x64":"1.26.0","@oxlint/linux-arm64-gnu":"1.26.0","@oxlint/linux-arm64-musl":"1.26.0","@oxlint/linux-x64-gnu":"1.26.0","@oxlint/linux-x64-musl":"1.26.0","@oxlint/win32-arm64":"1.26.0","@oxlint/win32-x64":"1.26.0"}},"p-filter@2.1.0":{"dependencies":{"p-map":"2.1.0"}},"p-limit@2.3.0":{"dependencies":{"p-try":"2.2.0"}},"p-locate@4.1.0":{"dependencies":{"p-limit":"2.3.0"}},"p-map@2.1.0":{},"p-map@7.0.4":{},"p-try@2.2.0":{},"pac-proxy-agent@7.2.0":{"dependencies":{"@tootallnate/quickjs-emscripten":"0.23.0","agent-base":"7.1.4","debug":"4.4.3","get-uri":"6.0.5","http-proxy-agent":"7.0.2","https-proxy-agent":"7.0.6","pac-resolver":"7.0.1","socks-proxy-agent":"8.0.5"},"transitivePeerDependencies":["supports-color"],"optional":true},"pac-resolver@7.0.1":{"dependencies":{"degenerator":"5.0.1","netmask":"2.1.1"},"optional":true},"package-json-from-dist@1.0.1":{},"package-manager-detector@0.2.11":{"dependencies":{"quansync":"0.2.11"}},"package-manager-detector@1.5.0":{},"pako@1.0.11":{"optional":true},"parse5-htmlparser2-tree-adapter@7.1.0":{"dependencies":{"domhandler":"5.0.3","parse5":"7.3.0"}},"parse5-parser-stream@7.1.2":{"dependencies":{"parse5":"7.3.0"}},"parse5@7.3.0":{"dependencies":{"entities":"6.0.1"}},"parse5@8.0.0":{"dependencies":{"entities":"6.0.1"}},"path-data-parser@0.1.0":{},"path-exists@4.0.0":{},"path-key@3.1.1":{},"path-scurry@1.11.1":{"dependencies":{"lru-cache":"10.4.3","minipass":"7.1.2"}},"path-to-regexp@6.3.0":{},"path-type@4.0.0":{},"pathe@2.0.3":{},"pathval@2.0.1":{},"pend@1.2.0":{"optional":true},"picocolors@1.1.1":{},"picomatch@2.3.1":{},"picomatch@4.0.3":{},"picomatch@4.0.4":{},"pify@4.0.1":{},"pkg-types@1.3.1":{"dependencies":{"confbox":"0.1.8","mlly":"1.8.0","pathe":"2.0.3"}},"pkg-types@2.3.0":{"dependencies":{"confbox":"0.2.2","exsolve":"1.0.8","pathe":"2.0.3"}},"playwright-core@1.55.0":{"optional":true},"playwright@1.55.0":{"dependencies":{"playwright-core":"1.55.0"},"optionalDependencies":{"fsevents":"2.3.2"},"optional":true},"pngjs@7.0.0":{},"points-on-curve@0.2
.0":{},"points-on-path@0.2.1":{"dependencies":{"path-data-parser":"0.1.0","points-on-curve":"0.2.0"}},"postcss@8.5.14":{"dependencies":{"nanoid":"3.3.11","picocolors":"1.1.1","source-map-js":"1.2.1"}},"postcss@8.5.6":{"dependencies":{"nanoid":"3.3.11","picocolors":"1.1.1","source-map-js":"1.2.1"}},"posthog-js@1.321.2":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/api-logs":"0.208.0","@opentelemetry/exporter-logs-otlp-http":"0.208.0(@opentelemetry/api@1.9.0)","@opentelemetry/resources":"2.4.0(@opentelemetry/api@1.9.0)","@opentelemetry/sdk-logs":"0.208.0(@opentelemetry/api@1.9.0)","@posthog/core":"1.9.1","@posthog/types":"1.321.2","core-js":"3.46.0","dompurify":"3.3.1","fflate":"0.4.8","preact":"10.28.2","query-selector-shadow-dom":"1.0.1","web-vitals":"4.2.4"}},"preact@10.28.2":{},"prebuild-install@7.1.3":{"dependencies":{"detect-libc":"2.1.2","expand-template":"2.0.3","github-from-package":"0.0.0","minimist":"1.2.8","mkdirp-classic":"0.5.3","napi-build-utils":"2.0.0","node-abi":"3.89.0","pump":"3.0.3","rc":"1.2.8","simple-get":"4.0.1","tar-fs":"2.1.4","tunnel-agent":"0.6.0"}},"prettier@2.8.8":{},"prettier@3.6.2":{},"pretty-format@27.5.1":{"dependencies":{"ansi-regex":"5.0.1","ansi-styles":"5.2.0","react-is":"17.0.2"}},"pretty-format@30.0.5":{"dependencies":{"@jest/schemas":"30.0.5","ansi-styles":"5.2.0","react-is":"18.3.1"}},"process-nextick-args@2.0.1":{"optional":true},"process@0.11.10":{"optional":true},"progress@2.0.3":{"optional":true},"property-information@6.5.0":{},"property-information@7.1.0":{},"protobufjs@7.5.4":{"dependencies":{"@protobufjs/aspromise":"1.1.2","@protobufjs/base64":"1.1.2","@protobufjs/codegen":"2.0.4","@protobufjs/eventemitter":"1.1.0","@protobufjs/fetch":"1.1.0","@protobufjs/float":"1.0.2","@protobufjs/inquire":"1.1.0","@protobufjs/path":"1.1.2","@protobufjs/pool":"1.1.0","@protobufjs/utf8":"1.1.0","@types/node":"22.15.33","long":"5.3.2"}},"proxy-agent@6.5.0":{"dependencies":{"agent-base":"7.1.4","debug":"4.4.3","http-proxy-agent":"7.0.2","https-proxy-agent":"7.0.6","lru-cache":"7.18.3","pac-proxy-agent":"7.2.0","proxy-from-env":"1.1.0","socks-proxy-agent":"8.0.5"},"transitivePeerDependencies":["supports-color"],"optional":true},"proxy-from-env@1.1.0":{},"psl@1.15.0":{"dependencies":{"punycode":"2.3.1"},"optional":true},"pump@3.0.3":{"dependencies":{"end-of-stream":"1.4.5","once":"1.4.0"}},"pump@3.0.4":{"dependencies":{"end-of-stream":"1.4.5","once":"1.4.0"},"optional":true},"punycode@2.3.1":{},"quansync@0.2.11":{},"query-selector-shadow-dom@1.0.1":{},"querystringify@2.2.0":{"optional":true},"queue-microtask@1.2.3":{},"rc@1.2.8":{"dependencies":{"deep-extend":"0.6.0","ini":"1.3.8","minimist":"1.2.8","strip-json-comments":"2.0.1"}},"react-dom@19.2.0(react@19.2.0)":{"dependencies":{"react":"19.2.0","scheduler":"0.27.0"}},"react-is@17.0.2":{},"react-is@18.3.1":{},"react@19.2.0":{},"read-yaml-file@1.1.0":{"dependencies":{"graceful-fs":"4.2.11","js-yaml":"3.14.1","pify":"4.0.1","strip-bom":"3.0.0"}},"readable-stream@2.3.8":{"dependencies":{"core-util-is":"1.0.3","inherits":"2.0.4","isarray":"1.0.0","process-nextick-args":"2.0.1","safe-buffer":"5.1.2","string_decoder":"1.1.1","util-deprecate":"1.0.2"},"optional":true},"readable-stream@3.6.2":{"dependencies":{"inherits":"2.0.4","string_decoder":"1.3.0","util-deprecate":"1.0.2"}},"readable-stream@4.7.0":{"dependencies":{"abort-controller":"3.0.0","buffer":"6.0.3","events":"3.3.0","process":"0.11.10","string_decoder":"1.3.0"},"optional":true},"readdir-glob@1.1.3":{"dependencies":{"minimatch":"5.1.9"},"opti
onal":true},"readdirp@3.6.0":{"dependencies":{"picomatch":"2.3.1"}},"regex-recursion@6.0.2":{"dependencies":{"regex-utilities":"2.3.0"}},"regex-utilities@2.3.0":{},"regex@6.0.1":{"dependencies":{"regex-utilities":"2.3.0"}},"rehype-autolink-headings@7.1.0":{"dependencies":{"@types/hast":"3.0.4","@ungap/structured-clone":"1.2.1","hast-util-heading-rank":"3.0.0","hast-util-is-element":"3.0.0","unified":"11.0.5","unist-util-visit":"5.0.0"}},"rehype-highlight@7.0.2":{"dependencies":{"@types/hast":"3.0.4","hast-util-to-text":"4.0.2","lowlight":"3.3.0","unist-util-visit":"5.0.0","vfile":"6.0.3"}},"rehype-minify-whitespace@6.0.2":{"dependencies":{"@types/hast":"3.0.4","hast-util-minify-whitespace":"1.0.1"}},"rehype-parse@9.0.1":{"dependencies":{"@types/hast":"3.0.4","hast-util-from-html":"2.0.3","unified":"11.0.5"}},"rehype-raw@7.0.0":{"dependencies":{"@types/hast":"3.0.4","hast-util-raw":"9.1.0","vfile":"6.0.3"}},"rehype-remark@10.0.1":{"dependencies":{"@types/hast":"3.0.4","@types/mdast":"4.0.4","hast-util-to-mdast":"10.1.2","unified":"11.0.5","vfile":"6.0.3"}},"rehype-sanitize@6.0.0":{"dependencies":{"@types/hast":"3.0.4","hast-util-sanitize":"5.0.2"}},"rehype-slug@6.0.0":{"dependencies":{"@types/hast":"3.0.4","github-slugger":"2.0.0","hast-util-heading-rank":"3.0.0","hast-util-to-string":"3.0.1","unist-util-visit":"5.0.0"}},"rehype-stringify@10.0.1":{"dependencies":{"@types/hast":"3.0.4","hast-util-to-html":"9.0.5","unified":"11.0.5"}},"remark-frontmatter@5.0.0":{"dependencies":{"@types/mdast":"4.0.4","mdast-util-frontmatter":"2.0.1","micromark-extension-frontmatter":"2.0.0","unified":"11.0.5"},"transitivePeerDependencies":["supports-color"]},"remark-gfm@4.0.1":{"dependencies":{"@types/mdast":"4.0.4","mdast-util-gfm":"3.0.0","micromark-extension-gfm":"3.0.0","remark-parse":"11.0.0","remark-stringify":"11.0.0","unified":"11.0.5"},"transitivePeerDependencies":["supports-color"]},"remark-parse@11.0.0":{"dependencies":{"@types/mdast":"4.0.4","mdast-util-from-markdown":"2.0.2","micromark-util-types":"2.0.1","unified":"11.0.5"},"transitivePeerDependencies":["supports-color"]},"remark-rehype@11.1.2":{"dependencies":{"@types/hast":"3.0.4","@types/mdast":"4.0.4","mdast-util-to-hast":"13.2.0","unified":"11.0.5","vfile":"6.0.3"}},"remark-stringify@11.0.0":{"dependencies":{"@types/mdast":"4.0.4","mdast-util-to-markdown":"2.1.2","unified":"11.0.5"}},"require-directory@2.1.1":{},"require-from-string@2.0.2":{},"requires-port@1.0.0":{"optional":true},"resolve-from@5.0.0":{},"resolve-pkg-maps@1.0.0":{"optional":true},"resolve.exports@2.0.3":{},"resq@1.11.0":{"dependencies":{"fast-deep-equal":"2.0.1"},"optional":true},"restore-cursor@3.1.0":{"dependencies":{"onetime":"5.1.2","signal-exit":"3.0.7"}},"reusify@1.0.4":{},"rgb2hex@0.2.5":{"optional":true},"robust-predicates@3.0.2":{},"rolldown@1.0.0-rc.17":{"dependencies":{"@oxc-project/types":"0.127.0","@rolldown/pluginutils":"1.0.0-rc.17"},"optionalDependencies":{"@rolldown/binding-android-arm64":"1.0.0-rc.17","@rolldown/binding-darwin-arm64":"1.0.0-rc.17","@rolldown/binding-darwin-x64":"1.0.0-rc.17","@rolldown/binding-freebsd-x64":"1.0.0-rc.17","@rolldown/binding-linux-arm-gnueabihf":"1.0.0-rc.17","@rolldown/binding-linux-arm64-gnu":"1.0.0-rc.17","@rolldown/binding-linux-arm64-musl":"1.0.0-rc.17","@rolldown/binding-linux-ppc64-gnu":"1.0.0-rc.17","@rolldown/binding-linux-s390x-gnu":"1.0.0-rc.17","@rolldown/binding-linux-x64-gnu":"1.0.0-rc.17","@rolldown/binding-linux-x64-musl":"1.0.0-rc.17","@rolldown/binding-openharmony-arm64":"1.0.0-rc.17","@rolldown/binding-wasm
32-wasi":"1.0.0-rc.17","@rolldown/binding-win32-arm64-msvc":"1.0.0-rc.17","@rolldown/binding-win32-x64-msvc":"1.0.0-rc.17"}},"rollup@4.53.2":{"dependencies":{"@types/estree":"1.0.8"},"optionalDependencies":{"@rollup/rollup-android-arm-eabi":"4.53.2","@rollup/rollup-android-arm64":"4.53.2","@rollup/rollup-darwin-arm64":"4.53.2","@rollup/rollup-darwin-x64":"4.53.2","@rollup/rollup-freebsd-arm64":"4.53.2","@rollup/rollup-freebsd-x64":"4.53.2","@rollup/rollup-linux-arm-gnueabihf":"4.53.2","@rollup/rollup-linux-arm-musleabihf":"4.53.2","@rollup/rollup-linux-arm64-gnu":"4.53.2","@rollup/rollup-linux-arm64-musl":"4.53.2","@rollup/rollup-linux-loong64-gnu":"4.53.2","@rollup/rollup-linux-ppc64-gnu":"4.53.2","@rollup/rollup-linux-riscv64-gnu":"4.53.2","@rollup/rollup-linux-riscv64-musl":"4.53.2","@rollup/rollup-linux-s390x-gnu":"4.53.2","@rollup/rollup-linux-x64-gnu":"4.53.2","@rollup/rollup-linux-x64-musl":"4.53.2","@rollup/rollup-openharmony-arm64":"4.53.2","@rollup/rollup-win32-arm64-msvc":"4.53.2","@rollup/rollup-win32-ia32-msvc":"4.53.2","@rollup/rollup-win32-x64-gnu":"4.53.2","@rollup/rollup-win32-x64-msvc":"4.53.2","fsevents":"2.3.3"}},"rou3@0.8.1":{},"roughjs@4.6.6":{"dependencies":{"hachure-fill":"0.5.2","path-data-parser":"0.1.0","points-on-curve":"0.2.0","points-on-path":"0.2.1"}},"rrweb-cssom@0.8.0":{},"run-parallel@1.2.0":{"dependencies":{"queue-microtask":"1.2.3"}},"rw@1.3.3":{},"rxjs@7.8.2":{"dependencies":{"tslib":"2.8.1"},"optional":true},"safaridriver@0.1.2":{"optional":true},"safe-buffer@5.1.2":{"optional":true},"safe-buffer@5.2.1":{},"safer-buffer@2.1.2":{},"sass-embedded-android-arm64@1.89.2":{"optional":true},"sass-embedded-android-arm@1.89.2":{"optional":true},"sass-embedded-android-riscv64@1.89.2":{"optional":true},"sass-embedded-android-x64@1.89.2":{"optional":true},"sass-embedded-darwin-arm64@1.89.2":{"optional":true},"sass-embedded-darwin-x64@1.89.2":{"optional":true},"sass-embedded-linux-arm64@1.89.2":{"optional":true},"sass-embedded-linux-arm@1.89.2":{"optional":true},"sass-embedded-linux-musl-arm64@1.89.2":{"optional":true},"sass-embedded-linux-musl-arm@1.89.2":{"optional":true},"sass-embedded-linux-musl-riscv64@1.89.2":{"optional":true},"sass-embedded-linux-musl-x64@1.89.2":{"optional":true},"sass-embedded-linux-riscv64@1.89.2":{"optional":true},"sass-embedded-linux-x64@1.89.2":{"optional":true},"sass-embedded-win32-arm64@1.89.2":{"optional":true},"sass-embedded-win32-x64@1.89.2":{"optional":true},"sass-embedded@1.89.2":{"dependencies":{"@bufbuild/protobuf":"2.12.0","buffer-builder":"0.2.0","colorjs.io":"0.5.2","immutable":"5.1.5","rxjs":"7.8.2","supports-color":"8.1.1","sync-child-process":"1.0.2","varint":"6.0.0"},"optionalDependencies":{"sass-embedded-android-arm":"1.89.2","sass-embedded-android-arm64":"1.89.2","sass-embedded-android-riscv64":"1.89.2","sass-embedded-android-x64":"1.89.2","sass-embedded-darwin-arm64":"1.89.2","sass-embedded-darwin-x64":"1.89.2","sass-embedded-linux-arm":"1.89.2","sass-embedded-linux-arm64":"1.89.2","sass-embedded-linux-musl-arm":"1.89.2","sass-embedded-linux-musl-arm64":"1.89.2","sass-embedded-linux-musl-riscv64":"1.89.2","sass-embedded-linux-musl-x64":"1.89.2","sass-embedded-linux-riscv64":"1.89.2","sass-embedded-linux-x64":"1.89.2","sass-embedded-win32-arm64":"1.89.2","sass-embedded-win32-x64":"1.89.2"},"optional":true},"saxes@6.0.0":{"dependencies":{"xmlchars":"2.2.0"}},"scheduler@0.27.0":{},"schema-utils@4.3.3":{"dependencies":{"@types/json-schema":"7.0.15","ajv":"8.20.0","ajv-formats":"2.1.1(ajv@8.20.0)","ajv-keywords":"5.1.0(ajv
@8.20.0)"},"optional":true},"semver@6.3.1":{},"semver@7.7.2":{},"semver@7.7.3":{},"semver@7.7.4":{"optional":true},"serialize-error@11.0.3":{"dependencies":{"type-fest":"2.19.0"},"optional":true},"seroval-plugins@1.5.4(seroval@1.5.4)":{"dependencies":{"seroval":"1.5.4"}},"seroval@1.5.4":{},"setimmediate@1.0.5":{"optional":true},"sharp@0.34.5":{"dependencies":{"@img/colour":"1.1.0","detect-libc":"2.1.2","semver":"7.7.3"},"optionalDependencies":{"@img/sharp-darwin-arm64":"0.34.5","@img/sharp-darwin-x64":"0.34.5","@img/sharp-libvips-darwin-arm64":"1.2.4","@img/sharp-libvips-darwin-x64":"1.2.4","@img/sharp-libvips-linux-arm":"1.2.4","@img/sharp-libvips-linux-arm64":"1.2.4","@img/sharp-libvips-linux-ppc64":"1.2.4","@img/sharp-libvips-linux-riscv64":"1.2.4","@img/sharp-libvips-linux-s390x":"1.2.4","@img/sharp-libvips-linux-x64":"1.2.4","@img/sharp-libvips-linuxmusl-arm64":"1.2.4","@img/sharp-libvips-linuxmusl-x64":"1.2.4","@img/sharp-linux-arm":"0.34.5","@img/sharp-linux-arm64":"0.34.5","@img/sharp-linux-ppc64":"0.34.5","@img/sharp-linux-riscv64":"0.34.5","@img/sharp-linux-s390x":"0.34.5","@img/sharp-linux-x64":"0.34.5","@img/sharp-linuxmusl-arm64":"0.34.5","@img/sharp-linuxmusl-x64":"0.34.5","@img/sharp-wasm32":"0.34.5","@img/sharp-win32-arm64":"0.34.5","@img/sharp-win32-ia32":"0.34.5","@img/sharp-win32-x64":"0.34.5"}},"shebang-command@2.0.0":{"dependencies":{"shebang-regex":"3.0.0"}},"shebang-regex@3.0.0":{},"shiki@3.15.0":{"dependencies":{"@shikijs/core":"3.15.0","@shikijs/engine-javascript":"3.15.0","@shikijs/engine-oniguruma":"3.15.0","@shikijs/langs":"3.15.0","@shikijs/themes":"3.15.0","@shikijs/types":"3.15.0","@shikijs/vscode-textmate":"10.0.2","@types/hast":"3.0.4"}},"siginfo@2.0.0":{},"signal-exit@3.0.7":{},"signal-exit@4.1.0":{},"simple-concat@1.0.1":{},"simple-get@4.0.1":{"dependencies":{"decompress-response":"6.0.0","once":"1.4.0","simple-concat":"1.0.1"}},"sirv@3.0.2":{"dependencies":{"@polka/url":"1.0.0-next.29","mrmime":"2.0.1","totalist":"3.0.1"}},"slash@3.0.0":{},"smart-buffer@4.2.0":{"optional":true},"socks-proxy-agent@8.0.5":{"dependencies":{"agent-base":"7.1.4","debug":"4.4.3","socks":"2.8.8"},"transitivePeerDependencies":["supports-color"],"optional":true},"socks@2.8.8":{"dependencies":{"ip-address":"10.2.0","smart-buffer":"4.2.0"},"optional":true},"source-map-js@1.2.1":{},"source-map-support@0.5.21":{"dependencies":{"buffer-from":"1.1.2","source-map":"0.6.1"},"optional":true},"source-map@0.6.1":{"optional":true},"source-map@0.7.6":{},"space-separated-tokens@2.0.2":{},"spacetrim@0.11.59":{"optional":true},"spawndamnit@3.0.1":{"dependencies":{"cross-spawn":"7.0.6","signal-exit":"4.1.0"}},"split2@4.2.0":{"optional":true},"sprintf-js@1.0.3":{},"srvx@0.11.15":{},"stackback@0.0.2":{},"statuses@2.0.2":{"optional":true},"std-env@3.10.0":{},"std-env@3.9.0":{},"std-env@4.1.0":{},"streamx@2.25.0":{"dependencies":{"events-universal":"1.0.1","fast-fifo":"1.3.2","text-decoder":"1.2.7"},"transitivePeerDependencies":["bare-abort-controller","react-native-b4a"],"optional":true},"strict-event-emitter@0.5.1":{"optional":true},"string-width@4.2.3":{"dependencies":{"emoji-regex":"8.0.0","is-fullwidth-code-point":"3.0.0","strip-ansi":"6.0.1"}},"string-width@5.1.2":{"dependencies":{"eastasianwidth":"0.2.0","emoji-regex":"9.2.2","strip-ansi":"7.1.2"}},"string_decoder@1.1.1":{"dependencies":{"safe-buffer":"5.1.2"},"optional":true},"string_decoder@1.3.0":{"dependencies":{"safe-buffer":"5.2.1"}},"stringify-entities@4.0.4":{"dependencies":{"character-entities-html4":"2.1.0","character-entities-legacy"
:"3.0.0"}},"strip-ansi@6.0.1":{"dependencies":{"ansi-regex":"5.0.1"}},"strip-ansi@7.1.2":{"dependencies":{"ansi-regex":"6.1.0"}},"strip-ansi@7.2.0":{"dependencies":{"ansi-regex":"6.2.2"},"optional":true},"strip-bom@3.0.0":{},"strip-json-comments@2.0.1":{},"strip-literal@3.0.0":{"dependencies":{"js-tokens":"9.0.1"}},"strnum@1.1.2":{"optional":true},"stylis@4.3.6":{},"supports-color@10.2.2":{},"supports-color@7.2.0":{"dependencies":{"has-flag":"4.0.0"}},"supports-color@8.1.1":{"dependencies":{"has-flag":"4.0.0"},"optional":true},"symbol-tree@3.2.4":{},"sync-child-process@1.0.2":{"dependencies":{"sync-message-port":"1.2.0"},"optional":true},"sync-message-port@1.2.0":{"optional":true},"tailwindcss@4.2.4":{},"tapable@2.3.3":{},"tar-fs@2.1.4":{"dependencies":{"chownr":"1.1.4","mkdirp-classic":"0.5.3","pump":"3.0.3","tar-stream":"2.2.0"}},"tar-fs@3.1.2":{"dependencies":{"pump":"3.0.4","tar-stream":"3.2.0"},"optionalDependencies":{"bare-fs":"4.7.1","bare-path":"3.0.0"},"transitivePeerDependencies":["bare-abort-controller","bare-buffer","react-native-b4a"],"optional":true},"tar-stream@2.2.0":{"dependencies":{"bl":"4.1.0","end-of-stream":"1.4.5","fs-constants":"1.0.0","inherits":"2.0.4","readable-stream":"3.6.2"}},"tar-stream@3.2.0":{"dependencies":{"b4a":"1.8.1","bare-fs":"4.7.1","fast-fifo":"1.3.2","streamx":"2.25.0"},"transitivePeerDependencies":["bare-abort-controller","bare-buffer","react-native-b4a"],"optional":true},"tar@6.2.1":{"dependencies":{"chownr":"2.0.0","fs-minipass":"2.1.0","minipass":"5.0.0","minizlib":"2.1.2","mkdirp":"1.0.4","yallist":"4.0.0"}},"teex@1.0.1":{"dependencies":{"streamx":"2.25.0"},"transitivePeerDependencies":["bare-abort-controller","react-native-b4a"],"optional":true},"term-size@2.2.1":{},"terser-webpack-plugin@5.5.0(esbuild@0.27.3)(webpack@5.99.9(esbuild@0.27.3))":{"dependencies":{"@jridgewell/trace-mapping":"0.3.31","jest-worker":"27.5.1","schema-utils":"4.3.3","terser":"5.36.0","webpack":"5.99.9(esbuild@0.27.3)"},"optionalDependencies":{"esbuild":"0.27.3"},"optional":true},"terser@5.36.0":{"dependencies":{"@jridgewell/source-map":"0.3.11","acorn":"8.16.0","commander":"2.20.3","source-map-support":"0.5.21"},"optional":true},"test-exclude@7.0.1":{"dependencies":{"@istanbuljs/schema":"0.1.3","glob":"10.4.5","minimatch":"9.0.5"}},"text-decoder@1.2.7":{"dependencies":{"b4a":"1.8.1"},"transitivePeerDependencies":["react-native-b4a"],"optional":true},"tinybench@2.9.0":{},"tinyexec@0.3.2":{},"tinyexec@1.0.2":{},"tinyglobby@0.2.14":{"dependencies":{"fdir":"6.5.0(picomatch@4.0.4)","picomatch":"4.0.4"}},"tinyglobby@0.2.15":{"dependencies":{"fdir":"6.5.0(picomatch@4.0.4)","picomatch":"4.0.4"}},"tinyglobby@0.2.16":{"dependencies":{"fdir":"6.5.0(picomatch@4.0.4)","picomatch":"4.0.4"}},"tinypool@1.1.1":{},"tinyrainbow@2.0.0":{},"tinyrainbow@3.0.3":{},"tinyrainbow@3.1.0":{},"tinyspy@4.0.3":{},"tldts-core@6.1.52":{},"tldts-core@7.0.19":{},"tldts@6.1.52":{"dependencies":{"tldts-core":"6.1.52"}},"tldts@7.0.19":{"dependencies":{"tldts-core":"7.0.19"}},"tmp@0.2.5":{},"to-regex-range@5.0.1":{"dependencies":{"is-number":"7.0.0"}},"totalist@3.0.1":{},"tough-cookie@4.1.4":{"dependencies":{"psl":"1.15.0","punycode":"2.3.1","universalify":"0.2.0","url-parse":"1.5.10"},"optional":true},"tough-cookie@5.1.2":{"dependencies":{"tldts":"6.1.52"}},"tough-cookie@6.0.0":{"dependencies":{"tldts":"7.0.19"}},"tr46@5.1.1":{"dependencies":{"punycode":"2.3.1"}},"tr46@6.0.0":{"dependencies":{"punycode":"2.3.1"}},"tree-kill@1.2.2":{},"trim-lines@3.0.1":{},"trim-trailing-lines@2.1.0":{},"trough@2.2.0":{},"ts
-algebra@2.0.0":{},"ts-dedent@2.2.0":{},"tsconfig-paths@4.2.0":{"dependencies":{"json5":"2.2.3","minimist":"1.2.8","strip-bom":"3.0.0"}},"tslib@2.8.1":{},"tsx@4.20.5":{"dependencies":{"esbuild":"0.25.12","get-tsconfig":"4.14.0"},"optionalDependencies":{"fsevents":"2.3.3"},"optional":true},"tunnel-agent@0.6.0":{"dependencies":{"safe-buffer":"5.2.1"}},"type-fest@2.19.0":{"optional":true},"type-fest@4.26.0":{"optional":true},"type-fest@4.41.0":{"optional":true},"typescript@5.8.3":{},"typescript@5.9.3":{},"ufo@1.6.1":{},"undici-types@6.21.0":{},"undici-types@7.16.0":{"optional":true},"undici@7.16.0":{},"undici@7.24.8":{},"undici@7.25.0":{"optional":true},"unenv@2.0.0-rc.24":{"dependencies":{"pathe":"2.0.3"}},"unified@11.0.5":{"dependencies":{"@types/unist":"3.0.3","bail":"2.0.2","devlop":"1.1.0","extend":"3.0.2","is-plain-obj":"4.1.0","trough":"2.2.0","vfile":"6.0.3"}},"unist-util-find-after@5.0.0":{"dependencies":{"@types/unist":"3.0.3","unist-util-is":"6.0.0"}},"unist-util-is@6.0.0":{"dependencies":{"@types/unist":"3.0.3"}},"unist-util-position@5.0.0":{"dependencies":{"@types/unist":"3.0.3"}},"unist-util-stringify-position@4.0.0":{"dependencies":{"@types/unist":"3.0.3"}},"unist-util-visit-parents@6.0.1":{"dependencies":{"@types/unist":"3.0.3","unist-util-is":"6.0.0"}},"unist-util-visit@5.0.0":{"dependencies":{"@types/unist":"3.0.3","unist-util-is":"6.0.0","unist-util-visit-parents":"6.0.1"}},"universalify@0.1.2":{},"universalify@0.2.0":{"optional":true},"universalify@2.0.1":{},"unplugin@3.0.0":{"dependencies":{"@jridgewell/remapping":"2.3.5","picomatch":"4.0.3","webpack-virtual-modules":"0.6.2"}},"update-browserslist-db@1.1.3(browserslist@4.25.3)":{"dependencies":{"browserslist":"4.25.3","escalade":"3.2.0","picocolors":"1.1.1"}},"update-browserslist-db@1.2.3(browserslist@4.28.2)":{"dependencies":{"browserslist":"4.28.2","escalade":"3.2.0","picocolors":"1.1.1"},"optional":true},"url-parse@1.5.10":{"dependencies":{"querystringify":"2.2.0","requires-port":"1.0.0"},"optional":true},"urlpattern-polyfill@10.1.0":{"optional":true},"use-sync-external-store@1.6.0(react@19.2.0)":{"dependencies":{"react":"19.2.0"}},"userhome@1.0.1":{"optional":true},"util-deprecate@1.0.2":{},"uuid@11.1.0":{},"varint@6.0.0":{"optional":true},"vfile-location@5.0.3":{"dependencies":{"@types/unist":"3.0.3","vfile":"6.0.3"}},"vfile-message@4.0.2":{"dependencies":{"@types/unist":"3.0.3","unist-util-stringify-position":"4.0.0"}},"vfile@6.0.3":{"dependencies":{"@types/unist":"3.0.3","vfile-message":"4.0.2"}},"vite-node@3.2.4(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)":{"dependencies":{"cac":"6.7.14","debug":"4.4.3","es-module-lexer":"1.7.0","pathe":"2.0.3","vite":"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"},"transitivePeerDependencies":["@types/node","jiti","less","lightningcss","sass","sass-embedded","stylus","sugarss","supports-color","terser","tsx","yaml"]},"vite-plugin-static-copy@4.1.0(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))":{"dependencies":{"chokidar":"3.6.0","p-map":"7.0.4","picocolors":"1.1.1","tinyglobby":"0.2.16","vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}},"vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)":{"dependencies":{"esb
uild":"0.25.12","fdir":"6.5.0(picomatch@4.0.4)","picomatch":"4.0.4","postcss":"8.5.6","rollup":"4.53.2","tinyglobby":"0.2.16"},"optionalDependencies":{"@types/node":"24.10.2","fsevents":"2.3.3","jiti":"2.6.1","lightningcss":"1.32.0","sass-embedded":"1.89.2","terser":"5.36.0","tsx":"4.20.5","yaml":"2.8.1"}},"vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)":{"dependencies":{"lightningcss":"1.32.0","picomatch":"4.0.4","postcss":"8.5.14","rolldown":"1.0.0-rc.17","tinyglobby":"0.2.16"},"optionalDependencies":{"@types/node":"22.15.33","esbuild":"0.27.3","fsevents":"2.3.3","jiti":"2.6.1","sass-embedded":"1.89.2","terser":"5.36.0","tsx":"4.20.5","yaml":"2.8.1"}},"vitefu@1.1.1(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))":{"optionalDependencies":{"vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}},"vitest@3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@26.1.0)(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)":{"dependencies":{"@types/chai":"5.2.2","@vitest/expect":"3.2.4","@vitest/mocker":"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))","@vitest/pretty-format":"3.2.4","@vitest/runner":"3.2.4","@vitest/snapshot":"3.2.4","@vitest/spy":"3.2.4","@vitest/utils":"3.2.4","chai":"5.3.3","debug":"4.4.1","expect-type":"1.2.2","magic-string":"0.30.18","pathe":"2.0.3","picomatch":"4.0.3","std-env":"3.9.0","tinybench":"2.9.0","tinyexec":"0.3.2","tinyglobby":"0.2.14","tinypool":"1.1.1","tinyrainbow":"2.0.0","vite":"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)","vite-node":"3.2.4(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)","why-is-node-running":"2.3.0"},"optionalDependencies":{"@types/debug":"4.1.12","@types/node":"24.10.2","@vitest/browser":"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)","happy-dom":"18.0.1","jsdom":"26.1.0"},"transitivePeerDependencies":["jiti","less","lightningcss","msw","sass","sass-embedded","stylus","sugarss","supports-color","terser","tsx","yaml"]},"vitest@3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)":{"dependencies":{"@types/chai":"5.2.2","@vitest/expect":"3.2.4","@vitest/mocker":"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))","@vitest/pretty-format":"3.2.4","@vitest/runner":"3.2.4","@vitest/snapshot":"3.2.4","@vitest/spy":"3.2.4","@vitest/utils":"3.2.4","chai":"5.3.3","debug":"4.4.1","expect-type":"1.2.2","magic-string":"0.30.18","pathe":"2.0.3","picomatch":"4.0.3","std-env":"3.9.0","tinybench":"2.9.0","ti
nyexec":"0.3.2","tinyglobby":"0.2.14","tinypool":"1.1.1","tinyrainbow":"2.0.0","vite":"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)","vite-node":"3.2.4(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)","why-is-node-running":"2.3.0"},"optionalDependencies":{"@types/debug":"4.1.12","@types/node":"24.10.2","@vitest/browser":"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)","happy-dom":"18.0.1","jsdom":"27.3.0(postcss@8.5.14)"},"transitivePeerDependencies":["jiti","less","lightningcss","msw","sass","sass-embedded","stylus","sugarss","supports-color","terser","tsx","yaml"]},"vitest@4.0.18(@opentelemetry/api@1.9.0)(@types/node@24.10.2)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)":{"dependencies":{"@vitest/expect":"4.0.18","@vitest/mocker":"4.0.18(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))","@vitest/pretty-format":"4.0.18","@vitest/runner":"4.0.18","@vitest/snapshot":"4.0.18","@vitest/spy":"4.0.18","@vitest/utils":"4.0.18","es-module-lexer":"1.7.0","expect-type":"1.2.2","magic-string":"0.30.21","obug":"2.1.1","pathe":"2.0.3","picomatch":"4.0.3","std-env":"3.10.0","tinybench":"2.9.0","tinyexec":"1.0.2","tinyglobby":"0.2.15","tinyrainbow":"3.0.3","vite":"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)","why-is-node-running":"2.3.0"},"optionalDependencies":{"@opentelemetry/api":"1.9.0","@types/node":"24.10.2","happy-dom":"18.0.1","jsdom":"27.3.0(postcss@8.5.14)"},"transitivePeerDependencies":["jiti","less","lightningcss","msw","sass","sass-embedded","stylus","sugarss","terser","tsx","yaml"]},"vitest@4.1.5(@opentelemetry/api@1.9.0)(@types/node@22.15.33)(@vitest/coverage-v8@4.1.5)(happy-dom@18.0.1)(jsdom@27.3.0(postcss@8.5.14))(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))":{"dependencies":{"@vitest/expect":"4.1.5","@vitest/mocker":"4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))","@vitest/pretty-format":"4.1.5","@vitest/runner":"4.1.5","@vitest/snapshot":"4.1.5","@vitest/spy":"4.1.5","@vitest/utils":"4.1.5","es-module-lexer":"2.1.0","expect-type":"1.3.0","magic-string":"0.30.21","obug":"2.1.1","pathe":"2.0.3","picomatch":"4.0.4","std-env":"4.1.0","tinybench":"2.9.0","tinyexec":"1.0.2","tinyglobby":"0.2.16","tinyrainbow":"3.1.0","vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)","why-is-node-running":"2.3.0"},"optionalDependencies":{"@opentelemetry/api":"1.9.0","@types/node":"22.15.33","@vitest/coverage-v8":"4.1.5(@vitest/browser@4.1.5)(vitest@4.1.5)","happy-dom":"18.0.1","jsdom":"27.3.0(postcss@8.5.14)"},"transitivePeerDependencies":["msw"]},"vscode-jsonrpc@8.2.0":{},"vscode-languageserver-prot
ocol@3.17.5":{"dependencies":{"vscode-jsonrpc":"8.2.0","vscode-languageserver-types":"3.17.5"}},"vscode-languageserver-textdocument@1.0.12":{},"vscode-languageserver-types@3.17.5":{},"vscode-languageserver@9.0.1":{"dependencies":{"vscode-languageserver-protocol":"3.17.5"}},"vscode-uri@3.0.8":{},"w3c-xmlserializer@5.0.0":{"dependencies":{"xml-name-validator":"5.0.0"}},"wait-port@1.1.0":{"dependencies":{"chalk":"4.1.2","commander":"9.5.0","debug":"4.4.3"},"transitivePeerDependencies":["supports-color"],"optional":true},"watchpack@2.5.1":{"dependencies":{"glob-to-regexp":"0.4.1","graceful-fs":"4.2.11"},"optional":true},"wcwidth@1.0.1":{"dependencies":{"defaults":"1.0.4"}},"web-namespaces@2.0.1":{},"web-streams-polyfill@3.3.3":{"optional":true},"web-vitals@4.2.4":{},"web-vitals@5.1.0":{},"webdriver@9.2.0":{"dependencies":{"@types/node":"20.19.39","@types/ws":"8.18.1","@wdio/config":"9.1.3","@wdio/logger":"9.1.3","@wdio/protocols":"9.2.0","@wdio/types":"9.1.3","@wdio/utils":"9.1.3","deepmerge-ts":"7.1.5","ws":"8.20.0"},"transitivePeerDependencies":["bare-abort-controller","bare-buffer","bufferutil","react-native-b4a","supports-color","utf-8-validate"],"optional":true},"webdriverio@9.2.1":{"dependencies":{"@types/node":"20.19.39","@types/sinonjs__fake-timers":"8.1.5","@wdio/config":"9.1.3","@wdio/logger":"9.1.3","@wdio/protocols":"9.2.0","@wdio/repl":"9.0.8","@wdio/types":"9.1.3","@wdio/utils":"9.1.3","archiver":"7.0.1","aria-query":"5.3.2","cheerio":"1.2.0","css-shorthand-properties":"1.1.2","css-value":"0.0.1","grapheme-splitter":"1.0.4","htmlfy":"0.3.2","import-meta-resolve":"4.2.0","is-plain-obj":"4.1.0","jszip":"3.10.1","lodash.clonedeep":"4.5.0","lodash.zip":"4.2.0","minimatch":"9.0.9","query-selector-shadow-dom":"1.0.1","resq":"1.11.0","rgb2hex":"0.2.5","serialize-error":"11.0.3","urlpattern-polyfill":"10.1.0","webdriver":"9.2.0"},"transitivePeerDependencies":["bare-abort-controller","bare-buffer","bufferutil","react-native-b4a","supports-color","utf-8-validate"],"optional":true},"webidl-conversions@7.0.0":{},"webidl-conversions@8.0.0":{},"webpack-sources@3.4.1":{"optional":true},"webpack-virtual-modules@0.6.2":{},"webpack@5.99.9(esbuild@0.27.3)":{"dependencies":{"@types/eslint-scope":"3.7.7","@types/estree":"1.0.9","@types/json-schema":"7.0.15","@webassemblyjs/ast":"1.14.1","@webassemblyjs/wasm-edit":"1.14.1","@webassemblyjs/wasm-parser":"1.14.1","acorn":"8.16.0","browserslist":"4.28.2","chrome-trace-event":"1.0.4","enhanced-resolve":"5.21.0","es-module-lexer":"1.7.0","eslint-scope":"5.1.1","events":"3.3.0","glob-to-regexp":"0.4.1","graceful-fs":"4.2.11","json-parse-even-better-errors":"2.3.1","loader-runner":"4.3.2","mime-types":"2.1.35","neo-async":"2.6.2","schema-utils":"4.3.3","tapable":"2.3.3","terser-webpack-plugin":"5.5.0(esbuild@0.27.3)(webpack@5.99.9(esbuild@0.27.3))","watchpack":"2.5.1","webpack-sources":"3.4.1"},"transitivePeerDependencies":["@swc/core","esbuild","uglify-js"],"optional":true},"whatwg-encoding@3.1.1":{"dependencies":{"iconv-lite":"0.6.3"}},"whatwg-mimetype@3.0.0":{"optional":true},"whatwg-mimetype@4.0.0":{},"whatwg-url@14.2.0":{"dependencies":{"tr46":"5.1.1","webidl-conversions":"7.0.0"}},"whatwg-url@15.1.0":{"dependencies":{"tr46":"6.0.0","webidl-conversions":"8.0.0"}},"which@2.0.2":{"dependencies":{"isexe":"2.0.0"}},"which@4.0.0":{"dependencies":{"isexe":"3.1.5"},"optional":true},"why-is-node-running@2.3.0":{"dependencies":{"siginfo":"2.0.0","stackback":"0.0.2"}},"workerd@1.20260504.1":{"optionalDependencies":{"@cloudflare/workerd-darwin-64":"1.20260504.1","@c
loudflare/workerd-darwin-arm64":"1.20260504.1","@cloudflare/workerd-linux-64":"1.20260504.1","@cloudflare/workerd-linux-arm64":"1.20260504.1","@cloudflare/workerd-windows-64":"1.20260504.1"}},"wrangler@4.88.0":{"dependencies":{"@cloudflare/kv-asset-handler":"0.5.0","@cloudflare/unenv-preset":"2.16.1(unenv@2.0.0-rc.24)(workerd@1.20260504.1)","blake3-wasm":"2.1.5","esbuild":"0.27.3","miniflare":"4.20260504.0","path-to-regexp":"6.3.0","unenv":"2.0.0-rc.24","workerd":"1.20260504.1"},"optionalDependencies":{"fsevents":"2.3.3"},"transitivePeerDependencies":["bufferutil","utf-8-validate"]},"wrap-ansi@6.2.0":{"dependencies":{"ansi-styles":"4.3.0","string-width":"4.2.3","strip-ansi":"6.0.1"},"optional":true},"wrap-ansi@7.0.0":{"dependencies":{"ansi-styles":"4.3.0","string-width":"4.2.3","strip-ansi":"6.0.1"}},"wrap-ansi@8.1.0":{"dependencies":{"ansi-styles":"6.2.1","string-width":"5.1.2","strip-ansi":"7.1.2"}},"wrappy@1.0.2":{},"ws@8.18.0":{},"ws@8.18.3":{},"ws@8.20.0":{},"xml-name-validator@5.0.0":{},"xmlbuilder2@4.0.3":{"dependencies":{"@oozcitak/dom":"2.0.2","@oozcitak/infra":"2.0.2","@oozcitak/util":"10.0.0","js-yaml":"4.1.1"}},"xmlchars@2.2.0":{},"y18n@5.0.8":{},"yallist@3.1.1":{},"yallist@4.0.0":{},"yaml@2.8.1":{},"yargs-parser@21.1.1":{},"yargs-parser@22.0.0":{},"yargs@17.7.2":{"dependencies":{"cliui":"8.0.1","escalade":"3.2.0","get-caller-file":"2.0.5","require-directory":"2.1.1","string-width":"4.2.3","y18n":"5.0.8","yargs-parser":"21.1.1"}},"yauzl@2.10.0":{"dependencies":{"buffer-crc32":"0.2.13","fd-slicer":"1.1.0"},"optional":true},"yoctocolors-cjs@2.1.3":{"optional":true},"youch-core@0.3.3":{"dependencies":{"@poppinss/exception":"1.2.2","error-stack-parser-es":"1.0.5"}},"youch@4.1.0-beta.10":{"dependencies":{"@poppinss/colors":"4.1.5","@poppinss/dumper":"0.6.5","@speed-highlight/core":"1.2.12","cookie":"1.0.2","youch-core":"0.3.3"}},"zip-stream@6.0.1":{"dependencies":{"archiver-utils":"5.0.2","compress-commons":"6.0.2","readable-stream":"4.7.0"},"optional":true},"zod@3.25.76":{},"zwitch@2.0.4":{}}} ================================================ FILE: packages/engine/benches/json_pointer_crud/main.rs ================================================ use std::sync::Arc; use std::time::Duration; use criterion::{black_box, criterion_group, criterion_main, BatchSize, Criterion}; use lix_engine::{ storage_bench, Backend, CreateVersionOptions, Engine, MergeVersionOptions, MergeVersionOutcome, SessionContext, SwitchVersionOptions, }; use rusqlite::{params, Connection, OptionalExtension}; use serde_json::Value as JsonValue; use tempfile::TempDir; use tokio::runtime::Runtime; #[path = "../storage/rocksdb_backend.rs"] mod rocksdb_backend; #[path = "../storage/sqlite_backend.rs"] mod sqlite_backend; use rocksdb_backend::RocksDbBenchBackend; use sqlite_backend::SqliteBenchBackend; const JSON_POINTER_SCHEMA_JSON: &str = include_str!("../../../plugin-json-v2/schema/json_pointer.json"); const PNPM_LOCK_JSON: &str = include_str!("../fixtures/pnpm-lock.fixture.json"); const BASELINE_ROWS: usize = 100; const SMOKE_ROWS: usize = 1_000; const SCALE_ROWS: usize = 10_000; const CHUNK_SIZE: usize = 500; const CHANGE_ROW_DENOMINATOR: usize = 10; #[derive(Clone)] struct PointerRow { path: String, value_json: String, updated_value_json: String, } #[derive(Clone, Copy)] enum LixBackendProfile { Sqlite, RocksDb, } impl LixBackendProfile { fn name(self) -> &'static str { match self { Self::Sqlite => "lix_sqlite", Self::RocksDb => "lix_rocksdb", } } fn backend_label(self) -> &'static str { match self { Self::Sqlite 
=> "sqlite", Self::RocksDb => "rocksdb", } } } struct RawSqliteFixture { conn: Connection, _dir: TempDir, } struct LixFixture { session: SessionContext, } fn json_pointer_crud_benches(c: &mut Criterion) { let runtime = tokio::runtime::Builder::new_current_thread() .enable_all() .build() .expect("create tokio runtime for json_pointer CRUD benchmarks"); let rows = fixture_rows(); bench_raw_sqlite(c, &rows, BASELINE_ROWS, "baseline"); bench_raw_storage(c, &runtime, &rows, BASELINE_ROWS, "baseline"); bench_lix(c, &runtime, &rows, BASELINE_ROWS, "baseline"); bench_raw_sqlite(c, &rows, SMOKE_ROWS, "smoke"); bench_raw_storage(c, &runtime, &rows, SMOKE_ROWS, "smoke"); bench_lix(c, &runtime, &rows, SMOKE_ROWS, "smoke"); bench_raw_sqlite(c, &rows, SCALE_ROWS, "scale"); bench_raw_storage(c, &runtime, &rows, SCALE_ROWS, "scale"); bench_lix(c, &runtime, &rows, SCALE_ROWS, "scale"); } fn bench_raw_sqlite(c: &mut Criterion, all_rows: &[PointerRow], row_count: usize, label: &str) { let rows = all_rows[..row_count].to_vec(); let mut group = c.benchmark_group(format!("json_pointer_crud/raw_sqlite/{label}")); group.sample_size(if row_count <= SMOKE_ROWS { 20 } else { 11 }); group.warm_up_time(Duration::from_millis(250)); group.measurement_time(Duration::from_secs(1)); group.bench_function(format!("insert_all_rows/{}", row_label(row_count)), |b| { b.iter_batched( prepare_raw_sqlite_empty, |fixture| black_box(raw_sqlite_insert_all(fixture, &rows)), BatchSize::LargeInput, ) }); group.bench_function( format!("select_all_path_value/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_sqlite_seeded(&rows), |fixture| black_box(raw_sqlite_select_all(fixture, row_count)), BatchSize::LargeInput, ) }, ); group.bench_function(format!("select_one_by_pk/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_sqlite_seeded(&rows), |fixture| black_box(raw_sqlite_select_one_by_pk(fixture, pick_pk_row(&rows))), BatchSize::LargeInput, ) }); group.bench_function(format!("update_all_values/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_sqlite_seeded(&rows), |fixture| black_box(raw_sqlite_update_all(fixture, row_count)), BatchSize::LargeInput, ) }); group.bench_function(format!("update_one_by_pk/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_sqlite_seeded(&rows), |fixture| black_box(raw_sqlite_update_one_by_pk(fixture, pick_pk_row(&rows))), BatchSize::LargeInput, ) }); group.bench_function(format!("delete_all_rows/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_sqlite_seeded(&rows), |fixture| black_box(raw_sqlite_delete_all(fixture, row_count)), BatchSize::LargeInput, ) }); group.bench_function(format!("delete_one_by_pk/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_sqlite_seeded(&rows), |fixture| black_box(raw_sqlite_delete_one_by_pk(fixture, pick_pk_row(&rows))), BatchSize::LargeInput, ) }); group.finish(); } fn bench_raw_storage( c: &mut Criterion, runtime: &Runtime, all_rows: &[PointerRow], row_count: usize, label: &str, ) { let rows = all_rows[..row_count].to_vec(); let storage_rows = storage_rows(&rows); let change_rows = changed_row_count(row_count); for profile in [LixBackendProfile::Sqlite, LixBackendProfile::RocksDb] { let mut group = c.benchmark_group(format!( "json_pointer_crud/raw_storage_{}/{label}", profile.backend_label() )); group.sample_size(10); group.warm_up_time(Duration::from_millis(250)); group.measurement_time(Duration::from_secs(1)); group.bench_function( format!("write_root_all_rows/{}", row_label(row_count)), |b| 
{ b.iter_batched( || { runtime .block_on( storage_bench::prepare_json_pointer_tracked_state_write_root( &storage_rows, ), ) .expect("prepare json_pointer raw storage write root") }, |fixture| { let backend = raw_storage_backend(profile); black_box( runtime .block_on(storage_bench::tracked_state_write_root_prepared( &backend, &fixture, )) .expect("json_pointer raw storage write root"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function( format!("get_many_exact_keys/{}", row_label(row_count)), |b| { b.iter_batched( || { let backend = raw_storage_backend(profile); let fixture = runtime .block_on(storage_bench::prepare_json_pointer_tracked_state_read( &backend, &storage_rows, )) .expect("prepare json_pointer raw storage get_many"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_pointer_tracked_state_get_many_prepared( &backend, &fixture, ), ) .expect("json_pointer raw storage get_many"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function( format!("get_many_missing_keys/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_storage_read(runtime, profile, &storage_rows), |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_pointer_tracked_state_get_many_missing_prepared( &backend, &fixture, ), ) .expect("json_pointer raw storage get_many missing"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function(format!("scan_keys_only/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_storage_read(runtime, profile, &storage_rows), |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_pointer_tracked_state_scan_keys_only_prepared( &backend, &fixture, ), ) .expect("json_pointer raw storage scan keys"), ) }, BatchSize::LargeInput, ) }); group.bench_function(format!("scan_headers_only/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_storage_read(runtime, profile, &storage_rows), |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_pointer_tracked_state_scan_headers_only_prepared( &backend, &fixture, ), ) .expect("json_pointer raw storage scan headers"), ) }, BatchSize::LargeInput, ) }); group.bench_function(format!("scan_full_rows/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_storage_read(runtime, profile, &storage_rows), |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_pointer_tracked_state_scan_full_rows_prepared( &backend, &fixture, ), ) .expect("json_pointer raw storage scan"), ) }, BatchSize::LargeInput, ) }); group.bench_function(format!("prefix_scan_schema/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_storage_read(runtime, profile, &storage_rows), |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_pointer_tracked_state_prefix_scan_schema_prepared( &backend, &fixture, ), ) .expect("json_pointer raw storage prefix schema scan"), ) }, BatchSize::LargeInput, ) }); group.bench_function( format!("prefix_scan_schema_file_null/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_storage_read(runtime, profile, &storage_rows), |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_pointer_tracked_state_prefix_scan_schema_file_null_prepared( &backend, &fixture, ), ) .expect("json_pointer raw storage prefix schema file null scan"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function( format!("write_delta_10pct_updates/{}", row_label(row_count)), |b| { b.iter_batched( || { let backend = raw_storage_backend(profile); let fixture = runtime 
.block_on( storage_bench::prepare_json_pointer_tracked_state_update_rows( &backend, &storage_rows, change_rows, ), ) .expect("prepare json_pointer raw storage delta update"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_update_existing_prepared( &backend, &fixture, )) .expect("json_pointer raw storage delta update"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function( format!("write_tombstone_10pct_deletes/{}", row_label(row_count)), |b| { b.iter_batched( || { let backend = raw_storage_backend(profile); let fixture = runtime .block_on( storage_bench::prepare_json_pointer_tracked_state_tombstone_rows( &backend, &storage_rows, change_rows, ), ) .expect("prepare json_pointer raw storage tombstones"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_update_existing_prepared( &backend, &fixture, )) .expect("json_pointer raw storage tombstones"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function( format!("changed_keys_update_10pct/{}", row_label(row_count)), |b| { b.iter_batched( || { let backend = raw_storage_backend(profile); let fixture = runtime .block_on( storage_bench::prepare_json_pointer_tracked_state_diff_update_rows( &backend, &storage_rows, change_rows, ), ) .expect("prepare json_pointer raw storage changed keys"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_pointer_tracked_state_changed_keys_prepared( &backend, &fixture, ), ) .expect("json_pointer raw storage changed keys"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function( format!("changed_keys_delta_chain_10x1pct/{}", row_label(row_count)), |b| { b.iter_batched( || { let backend = raw_storage_backend(profile); let fixture = runtime .block_on( storage_bench::prepare_json_pointer_tracked_state_diff_delta_chain( &backend, &storage_rows, 10, (row_count / 100).max(1), ), ) .expect("prepare json_pointer raw storage delta-chain changed keys"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_pointer_tracked_state_changed_keys_prepared( &backend, &fixture, ), ) .expect("json_pointer raw storage delta-chain changed keys"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function( format!("materialize_delta_chain_10x1pct/{}", row_label(row_count)), |b| { b.iter_batched( || { let backend = raw_storage_backend(profile); let fixture = runtime .block_on( storage_bench::prepare_json_pointer_tracked_state_materialize_delta_chain( &backend, &storage_rows, 10, (row_count / 100).max(1), ), ) .expect("prepare json_pointer raw storage materialize delta chain"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_materialize_root_prepared( &backend, &fixture, )) .expect("json_pointer raw storage materialize delta chain"), ) }, BatchSize::LargeInput, ) }, ); group.finish(); } } fn bench_lix( c: &mut Criterion, runtime: &Runtime, all_rows: &[PointerRow], row_count: usize, label: &str, ) { let rows = all_rows[..row_count].to_vec(); let change_rows = changed_row_count(row_count); for profile in [LixBackendProfile::Sqlite, LixBackendProfile::RocksDb] { let mut group = c.benchmark_group(format!("json_pointer_crud/{}/{label}", profile.name())); group.sample_size(if row_count <= SMOKE_ROWS { 11 } else { 11 }); group.warm_up_time(Duration::from_millis(250)); group.measurement_time(Duration::from_secs(1)); group.bench_function(format!("insert_all_rows/{}", row_label(row_count)), |b| { 
b.iter_batched( || runtime.block_on(prepare_lix_empty(profile)), |fixture| black_box(runtime.block_on(lix_insert_all(fixture, &rows))), BatchSize::LargeInput, ) }); group.bench_function( format!("select_all_path_value/{}", row_label(row_count)), |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| black_box(runtime.block_on(lix_select_all(fixture, row_count))), BatchSize::LargeInput, ) }, ); group.bench_function(format!("select_one_by_pk/{}", row_label(row_count)), |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { black_box(runtime.block_on(lix_select_one_by_pk(fixture, pick_pk_row(&rows)))) }, BatchSize::LargeInput, ) }); group.bench_function(format!("update_all_values/{}", row_label(row_count)), |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| black_box(runtime.block_on(lix_update_all(fixture, row_count))), BatchSize::LargeInput, ) }); group.bench_function(format!("update_one_by_pk/{}", row_label(row_count)), |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { black_box(runtime.block_on(lix_update_one_by_pk(fixture, pick_pk_row(&rows)))) }, BatchSize::LargeInput, ) }); group.bench_function(format!("delete_all_rows/{}", row_label(row_count)), |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| black_box(runtime.block_on(lix_delete_all(fixture, row_count))), BatchSize::LargeInput, ) }); group.bench_function(format!("delete_one_by_pk/{}", row_label(row_count)), |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { black_box(runtime.block_on(lix_delete_one_by_pk(fixture, pick_pk_row(&rows)))) }, BatchSize::LargeInput, ) }); group.bench_function(format!("create_version/{}", row_label(row_count)), |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| black_box(runtime.block_on(lix_create_version(fixture))), BatchSize::LargeInput, ) }); group.bench_function( format!( "merge_version_fast_forward_10pct_updates/{}", row_label(row_count) ), |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { black_box(runtime.block_on(lix_merge_version_fast_forward( fixture, &rows, change_rows, ))) }, BatchSize::LargeInput, ) }, ); group.bench_function( format!( "merge_version_divergent_10pct_updates/{}", row_label(row_count) ), |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { black_box(runtime.block_on(lix_merge_version_divergent( fixture, &rows, change_rows, ))) }, BatchSize::LargeInput, ) }, ); group.finish(); } } fn prepare_raw_sqlite_empty() -> RawSqliteFixture { let dir = TempDir::new().expect("create raw sqlite tempdir"); let conn = Connection::open(dir.path().join("json-pointer-crud.sqlite")) .expect("open raw sqlite json_pointer CRUD db"); conn.execute_batch( " PRAGMA journal_mode = WAL; PRAGMA synchronous = NORMAL; PRAGMA temp_store = MEMORY; PRAGMA foreign_keys = ON; CREATE TABLE json_pointer ( path TEXT NOT NULL PRIMARY KEY, value TEXT NOT NULL ) WITHOUT ROWID; ", ) .expect("configure raw sqlite json_pointer CRUD db"); RawSqliteFixture { conn, _dir: dir } } fn prepare_raw_sqlite_seeded(rows: &[PointerRow]) -> RawSqliteFixture { let fixture = prepare_raw_sqlite_empty(); raw_sqlite_seed(&fixture.conn, rows); fixture } fn raw_sqlite_seed(conn: &Connection, rows: &[PointerRow]) { conn.execute_batch("BEGIN IMMEDIATE") .expect("begin raw sqlite seed"); { let mut 
statement = conn .prepare_cached( "INSERT INTO json_pointer (path, value) VALUES (?1, ?2) ON CONFLICT(path) DO UPDATE SET value = excluded.value", ) .expect("prepare raw sqlite seed insert"); for row in rows { statement .execute(params![row.path.as_str(), row.value_json.as_str()]) .expect("insert raw sqlite seed row"); } } conn.execute_batch("COMMIT") .expect("commit raw sqlite seed"); } fn raw_sqlite_insert_all(fixture: RawSqliteFixture, rows: &[PointerRow]) -> usize { raw_sqlite_seed(&fixture.conn, rows); rows.len() } fn raw_sqlite_select_all(fixture: RawSqliteFixture, expected_rows: usize) -> usize { let mut statement = fixture .conn .prepare_cached("SELECT path, value FROM json_pointer ORDER BY path") .expect("prepare raw sqlite select all"); let count = statement .query_map([], |_| Ok(())) .expect("raw sqlite select all") .count(); assert_eq!(count, expected_rows); count } fn raw_sqlite_select_one_by_pk(fixture: RawSqliteFixture, row: &PointerRow) -> usize { let mut statement = fixture .conn .prepare_cached("SELECT path, value FROM json_pointer WHERE path = ?1") .expect("prepare raw sqlite select by pk"); let found = statement .query_row(params![row.path.as_str()], |_| Ok(())) .optional() .expect("raw sqlite select by pk") .is_some(); assert!(found); usize::from(found) } fn raw_sqlite_update_all(fixture: RawSqliteFixture, expected_rows: usize) -> usize { let affected = fixture .conn .execute( "UPDATE json_pointer SET value = ?1", params![r#"{"updated":true}"#], ) .expect("raw sqlite update all"); assert_eq!(affected, expected_rows); affected } fn raw_sqlite_update_one_by_pk(fixture: RawSqliteFixture, row: &PointerRow) -> usize { let affected = fixture .conn .execute( "UPDATE json_pointer SET value = ?1 WHERE path = ?2", params![row.updated_value_json.as_str(), row.path.as_str()], ) .expect("raw sqlite update by pk"); assert_eq!(affected, 1); affected } fn raw_sqlite_delete_all(fixture: RawSqliteFixture, expected_rows: usize) -> usize { let affected = fixture .conn .execute("DELETE FROM json_pointer", []) .expect("raw sqlite delete all"); assert_eq!(affected, expected_rows); affected } fn raw_sqlite_delete_one_by_pk(fixture: RawSqliteFixture, row: &PointerRow) -> usize { let affected = fixture .conn .execute( "DELETE FROM json_pointer WHERE path = ?1", params![row.path.as_str()], ) .expect("raw sqlite delete by pk"); assert_eq!(affected, 1); affected } async fn prepare_lix_empty(profile: LixBackendProfile) -> LixFixture { let engine = match profile { LixBackendProfile::Sqlite => { let backend = SqliteBenchBackend::tempfile().expect("create sqlite json_pointer CRUD backend"); Engine::initialize(Box::new(backend.clone())) .await .expect("initialize sqlite json_pointer CRUD Lix backend"); Engine::new(Box::new(backend)) .await .expect("open sqlite json_pointer CRUD Lix engine") } LixBackendProfile::RocksDb => { let backend = RocksDbBenchBackend::new().expect("create rocksdb json_pointer CRUD backend"); Engine::initialize(Box::new(backend.clone())) .await .expect("initialize rocksdb json_pointer CRUD Lix backend"); Engine::new(Box::new(backend)) .await .expect("open rocksdb json_pointer CRUD Lix engine") } }; let setup_session = engine .open_workspace_session() .await .expect("open json_pointer CRUD Lix setup workspace session"); register_json_pointer_schema(&setup_session).await; let session = engine .open_workspace_session() .await .expect("open json_pointer CRUD Lix benchmark workspace session"); LixFixture { session } } async fn prepare_lix_seeded(profile: LixBackendProfile, rows: 
&[PointerRow]) -> LixFixture { let fixture = prepare_lix_empty(profile).await; insert_lix_rows(&fixture.session, rows).await; fixture } async fn register_json_pointer_schema(session: &SessionContext) { let sql = format!( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) VALUES (lix_json('{}'), false, false)", sql_string(JSON_POINTER_SCHEMA_JSON) ); let affected = session .execute(&sql, &[]) .await .expect("register json_pointer schema") .rows_affected(); assert_eq!(affected, 1); } async fn lix_insert_all(fixture: LixFixture, rows: &[PointerRow]) -> usize { insert_lix_rows(&fixture.session, rows).await; rows.len() } async fn insert_lix_rows(session: &SessionContext, rows: &[PointerRow]) { for chunk in rows.chunks(CHUNK_SIZE) { let mut sql = String::from("INSERT INTO json_pointer (path, value) VALUES "); for (index, row) in chunk.iter().enumerate() { if index > 0 { sql.push(','); } sql.push_str(&format!( "('{}', lix_json('{}'))", sql_string(row.path.as_str()), sql_string(row.value_json.as_str()) )); } let affected = session .execute(&sql, &[]) .await .expect("insert json_pointer rows") .rows_affected(); assert_eq!(affected as usize, chunk.len()); } } async fn lix_select_all(fixture: LixFixture, expected_rows: usize) -> usize { let result = fixture .session .execute("SELECT path, value FROM json_pointer ORDER BY path", &[]) .await .expect("select all json_pointer rows"); assert_eq!(result.len(), expected_rows); result.len() } async fn lix_select_one_by_pk(fixture: LixFixture, row: &PointerRow) -> usize { let sql = format!( "SELECT path, value FROM json_pointer WHERE path = '{}'", sql_string(row.path.as_str()) ); let result = fixture .session .execute(&sql, &[]) .await .expect("select json_pointer row by path"); assert_eq!(result.len(), 1); result.len() } async fn lix_update_all(fixture: LixFixture, expected_rows: usize) -> usize { let affected = fixture .session .execute( r#"UPDATE json_pointer SET value = lix_json('{"updated":true}')"#, &[], ) .await .expect("update all json_pointer rows") .rows_affected() as usize; assert_eq!(affected, expected_rows); affected } async fn lix_update_one_by_pk(fixture: LixFixture, row: &PointerRow) -> usize { let sql = format!( "UPDATE json_pointer SET value = lix_json('{}') WHERE path = '{}'", sql_string(row.updated_value_json.as_str()), sql_string(row.path.as_str()) ); let affected = fixture .session .execute(&sql, &[]) .await .expect("update json_pointer row by path") .rows_affected() as usize; assert_eq!(affected, 1); affected } async fn lix_delete_all(fixture: LixFixture, expected_rows: usize) -> usize { let affected = fixture .session .execute("DELETE FROM json_pointer", &[]) .await .expect("delete all json_pointer rows") .rows_affected() as usize; assert_eq!(affected, expected_rows); affected } async fn lix_delete_one_by_pk(fixture: LixFixture, row: &PointerRow) -> usize { let sql = format!( "DELETE FROM json_pointer WHERE path = '{}'", sql_string(row.path.as_str()) ); let affected = fixture .session .execute(&sql, &[]) .await .expect("delete json_pointer row by path") .rows_affected() as usize; assert_eq!(affected, 1); affected } async fn lix_create_version(fixture: LixFixture) -> String { create_lix_version(&fixture.session).await } async fn create_lix_version(session: &SessionContext) -> String { let receipt = session .create_version(CreateVersionOptions { id: Some("bench-draft".to_string()), name: "bench draft".to_string(), from_commit_id: None, }) .await .expect("create json_pointer benchmark version"); receipt.id } async 
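// Merge benchmarks: both variants create a "bench-draft" version, switch to it, rewrite
// the first ~10% of rows by primary key, then switch back to main and merge the draft.
// The fast-forward case leaves main untouched and asserts MergeVersionOutcome::FastForward;
// the divergent case also rewrites a disjoint 10% slice on main first and asserts
// MergeVersionOutcome::MergeCommitted. Both check change_stats.total against change_rows.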
fn lix_merge_version_fast_forward( fixture: LixFixture, rows: &[PointerRow], change_rows: usize, ) -> usize { let main_id = fixture .session .active_version_id() .await .expect("load active json_pointer main version id"); let draft_id = create_lix_version(&fixture.session).await; let (draft_session, _) = fixture .session .switch_version(SwitchVersionOptions { version_id: draft_id.clone(), }) .await .expect("switch to json_pointer draft version"); update_lix_rows_by_pk(&draft_session, &rows[..change_rows], "source").await; let (main_session, _) = draft_session .switch_version(SwitchVersionOptions { version_id: main_id, }) .await .expect("switch back to main version"); let receipt = main_session .merge_version(MergeVersionOptions { source_version_id: draft_id, }) .await .expect("merge fast-forward json_pointer draft"); assert_eq!(receipt.outcome, MergeVersionOutcome::FastForward); assert_eq!(receipt.change_stats.total, change_rows); receipt.change_stats.total } async fn lix_merge_version_divergent( fixture: LixFixture, rows: &[PointerRow], change_rows: usize, ) -> usize { let main_id = fixture .session .active_version_id() .await .expect("load active json_pointer main version id"); let draft_id = create_lix_version(&fixture.session).await; let (draft_session, _) = fixture .session .switch_version(SwitchVersionOptions { version_id: draft_id.clone(), }) .await .expect("switch to json_pointer draft version"); update_lix_rows_by_pk(&draft_session, &rows[..change_rows], "source").await; let (main_session, _) = draft_session .switch_version(SwitchVersionOptions { version_id: main_id, }) .await .expect("switch back to main version"); update_lix_rows_by_pk(&main_session, &rows[change_rows..change_rows * 2], "target").await; let receipt = main_session .merge_version(MergeVersionOptions { source_version_id: draft_id, }) .await .expect("merge divergent json_pointer draft"); assert_eq!(receipt.outcome, MergeVersionOutcome::MergeCommitted); assert_eq!(receipt.change_stats.total, change_rows); receipt.change_stats.total } async fn update_lix_rows_by_pk(session: &SessionContext, rows: &[PointerRow], side: &str) { for row in rows { let value = serde_json::json!({ "updated": true, "side": side, "path": row.path, }) .to_string(); let sql = format!( "UPDATE json_pointer SET value = lix_json('{}') WHERE path = '{}'", sql_string(value.as_str()), sql_string(row.path.as_str()) ); let affected = session .execute(&sql, &[]) .await .expect("update json_pointer row by path") .rows_affected(); assert_eq!(affected, 1); } } fn fixture_rows() -> Vec { let root: JsonValue = serde_json::from_str(PNPM_LOCK_JSON).expect("pnpm lock JSON fixture"); let mut rows = Vec::new(); flatten_json("", &root, &mut rows); assert!( rows.len() >= SCALE_ROWS, "pnpm lock fixture should have at least {SCALE_ROWS} pointer rows, got {}", rows.len() ); rows } fn storage_rows(rows: &[PointerRow]) -> Vec { rows.iter() .map(|row| storage_bench::JsonPointerStorageRow { path: row.path.clone(), value_json: row.value_json.clone(), updated_value_json: row.updated_value_json.clone(), }) .collect() } fn pick_pk_row(rows: &[PointerRow]) -> &PointerRow { &rows[rows.len() / 2] } fn raw_storage_backend(profile: LixBackendProfile) -> Arc { match profile { LixBackendProfile::Sqlite => { Arc::new(SqliteBenchBackend::tempfile().expect("create sqlite raw storage backend")) } LixBackendProfile::RocksDb => { Arc::new(RocksDbBenchBackend::new().expect("create rocksdb raw storage backend")) } } } fn prepare_raw_storage_read( runtime: &Runtime, profile: 
LixBackendProfile, rows: &[storage_bench::JsonPointerStorageRow], ) -> ( Arc, storage_bench::JsonPointerTrackedStateReadFixture, ) { let backend = raw_storage_backend(profile); let fixture = runtime .block_on(storage_bench::prepare_json_pointer_tracked_state_read( &backend, rows, )) .expect("prepare json_pointer raw storage read"); (backend, fixture) } fn flatten_json(path: &str, value: &JsonValue, rows: &mut Vec) { rows.push(PointerRow { path: path.to_string(), value_json: value.to_string(), updated_value_json: updated_value_for(path), }); match value { JsonValue::Array(items) => { for (index, item) in items.iter().enumerate() { let child_path = format!("{path}/{}", index); flatten_json(&child_path, item, rows); } } JsonValue::Object(map) => { for (key, child) in map { let child_path = format!("{path}/{}", escape_pointer_token(key)); flatten_json(&child_path, child, rows); } } JsonValue::Null | JsonValue::Bool(_) | JsonValue::Number(_) | JsonValue::String(_) => {} } } fn updated_value_for(path: &str) -> String { serde_json::json!({ "updated": true, "path": path, }) .to_string() } fn escape_pointer_token(token: &str) -> String { token.replace('~', "~0").replace('/', "~1") } fn sql_string(value: &str) -> String { value.replace('\'', "''") } fn row_label(rows: usize) -> String { if rows >= 1_000 { format!("{}k", rows / 1_000) } else { rows.to_string() } } fn changed_row_count(rows: usize) -> usize { (rows / CHANGE_ROW_DENOMINATOR).max(1) } criterion_group!(benches, json_pointer_crud_benches); criterion_main!(benches); ================================================ FILE: packages/engine/benches/json_pointer_physical/main.rs ================================================ use std::sync::Arc; use std::time::Duration; use criterion::{black_box, criterion_group, criterion_main, BatchSize, Criterion}; use lix_engine::{storage_bench, Backend}; use rusqlite::{params, Connection, OptionalExtension}; use serde_json::Value as JsonValue; use tempfile::TempDir; use tokio::runtime::Runtime; #[path = "../storage/rocksdb_backend.rs"] mod rocksdb_backend; #[path = "../storage/sqlite_backend.rs"] mod sqlite_backend; use rocksdb_backend::RocksDbBenchBackend; use sqlite_backend::SqliteBenchBackend; const PNPM_LOCK_JSON: &str = include_str!("../fixtures/pnpm-lock.fixture.json"); const BASELINE_ROWS: usize = 100; const SMOKE_ROWS: usize = 1_000; const SCALE_ROWS: usize = 10_000; const CHANGE_ROW_DENOMINATOR: usize = 10; #[derive(Clone)] struct PointerRow { path: String, value_json: String, updated_value_json: String, } struct RawSqliteFixture { conn: Connection, _dir: TempDir, } #[derive(Clone, Copy)] enum BackendProfile { Sqlite, RocksDb, } impl BackendProfile { fn label(self) -> &'static str { match self { Self::Sqlite => "sqlite", Self::RocksDb => "rocksdb", } } } fn json_pointer_physical_benches(c: &mut Criterion) { let runtime = tokio::runtime::Builder::new_current_thread() .enable_all() .build() .expect("create tokio runtime for json_pointer physical benchmarks"); let rows = fixture_rows(); bench_raw_sqlite(c, &rows, BASELINE_ROWS, "baseline"); bench_physical(c, &runtime, &rows, BASELINE_ROWS, "baseline"); bench_raw_sqlite(c, &rows, SMOKE_ROWS, "smoke"); bench_physical(c, &runtime, &rows, SMOKE_ROWS, "smoke"); bench_raw_sqlite(c, &rows, SCALE_ROWS, "scale"); bench_physical(c, &runtime, &rows, SCALE_ROWS, "scale"); } fn bench_raw_sqlite(c: &mut Criterion, all_rows: &[PointerRow], row_count: usize, label: &str) { let rows = all_rows[..row_count].to_vec(); let change_rows = changed_row_count(row_count); let 
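// Raw-SQLite baseline for the physical-layout suite: the same flat (path, value)
// WITHOUT ROWID table stands in for the tracked-state operations benchmarked below.
// scan_headers_only falls back to the keys-only scan and both prefix_scan_* cases fall
// back to the full-row scan, presumably because the flat table has no header or
// schema-prefix columns to filter on.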
mut group = c.benchmark_group(format!("json_pointer_physical/raw_sqlite/{label}")); group.sample_size(10); group.warm_up_time(Duration::from_millis(250)); group.measurement_time(Duration::from_secs(1)); group.bench_function( format!("write_root_all_rows/{}", row_label(row_count)), |b| { b.iter_batched( prepare_raw_sqlite_empty, |fixture| black_box(raw_sqlite_insert_all(fixture, &rows)), BatchSize::LargeInput, ) }, ); group.bench_function( format!("get_many_exact_keys/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_sqlite_seeded(&rows), |fixture| black_box(raw_sqlite_get_many_exact(fixture, &rows)), BatchSize::LargeInput, ) }, ); group.bench_function( format!("get_many_missing_keys/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_sqlite_seeded(&rows), |fixture| black_box(raw_sqlite_get_many_missing(fixture, row_count)), BatchSize::LargeInput, ) }, ); group.bench_function( format!("exists_many_exact_keys/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_sqlite_seeded(&rows), |fixture| black_box(raw_sqlite_exists_many(fixture, &rows)), BatchSize::LargeInput, ) }, ); group.bench_function(format!("scan_keys_only/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_sqlite_seeded(&rows), |fixture| black_box(raw_sqlite_scan_keys_only(fixture, row_count)), BatchSize::LargeInput, ) }); group.bench_function(format!("scan_headers_only/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_sqlite_seeded(&rows), |fixture| black_box(raw_sqlite_scan_keys_only(fixture, row_count)), BatchSize::LargeInput, ) }); group.bench_function(format!("scan_full_rows/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_sqlite_seeded(&rows), |fixture| black_box(raw_sqlite_scan_full_rows(fixture, row_count)), BatchSize::LargeInput, ) }); group.bench_function( format!("prefix_scan_schema/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_sqlite_seeded(&rows), |fixture| black_box(raw_sqlite_scan_full_rows(fixture, row_count)), BatchSize::LargeInput, ) }, ); group.bench_function( format!("prefix_scan_schema_file_null/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_sqlite_seeded(&rows), |fixture| black_box(raw_sqlite_scan_full_rows(fixture, row_count)), BatchSize::LargeInput, ) }, ); group.bench_function( format!("write_delta_10pct_updates/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_sqlite_seeded(&rows), |fixture| black_box(raw_sqlite_update_first_rows(fixture, &rows, change_rows)), BatchSize::LargeInput, ) }, ); group.bench_function( format!("write_tombstone_10pct_deletes/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_raw_sqlite_seeded(&rows), |fixture| black_box(raw_sqlite_delete_first_rows(fixture, &rows, change_rows)), BatchSize::LargeInput, ) }, ); group.finish(); } fn bench_physical( c: &mut Criterion, runtime: &Runtime, all_rows: &[PointerRow], row_count: usize, label: &str, ) { let rows = all_rows[..row_count].to_vec(); let storage_rows = storage_rows(&rows); let change_rows = changed_row_count(row_count); for profile in [BackendProfile::Sqlite, BackendProfile::RocksDb] { let mut group = c.benchmark_group(format!("json_pointer_physical/{}/{label}", profile.label())); group.sample_size(10); group.warm_up_time(Duration::from_millis(250)); group.measurement_time(Duration::from_secs(1)); group.bench_function( format!("write_root_all_rows/{}", row_label(row_count)), |b| { b.iter_batched( || { runtime .block_on( storage_bench::prepare_json_pointer_tracked_state_write_root( 
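// Physical tracked-state benchmarks: for each backend profile (SQLite tempfile and
// RocksDB) the storage primitives are exercised through storage_bench fixtures. For
// write_root the setup closure only builds the prepared write from the rows, while the
// measured closure creates a fresh backend and replays it; the read benchmarks below
// instead prepare both the backend and a seeded read fixture up front via
// prepare_physical_read, so only the read itself is timed.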
&storage_rows, ), ) .expect("prepare json_pointer physical write root") }, |fixture| { let backend = physical_backend(profile); black_box( runtime .block_on(storage_bench::tracked_state_write_root_prepared( &backend, &fixture, )) .expect("json_pointer physical write root"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function( format!("get_many_exact_keys/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_physical_read(runtime, profile, &storage_rows), |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_pointer_tracked_state_get_many_prepared( &backend, &fixture, ), ) .expect("json_pointer physical get_many"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function( format!("get_many_missing_keys/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_physical_read(runtime, profile, &storage_rows), |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_pointer_tracked_state_get_many_missing_prepared( &backend, &fixture, ), ) .expect("json_pointer physical get_many missing"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function(format!("scan_keys_only/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_physical_read(runtime, profile, &storage_rows), |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_pointer_tracked_state_scan_keys_only_prepared( &backend, &fixture, ), ) .expect("json_pointer physical scan keys"), ) }, BatchSize::LargeInput, ) }); group.bench_function( format!("scan_headers_only/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_physical_read(runtime, profile, &storage_rows), |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_pointer_tracked_state_scan_headers_only_prepared( &backend, &fixture, ), ) .expect("json_pointer physical scan headers"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function(format!("scan_full_rows/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_physical_read(runtime, profile, &storage_rows), |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_pointer_tracked_state_scan_full_rows_prepared( &backend, &fixture, ), ) .expect("json_pointer physical scan full rows"), ) }, BatchSize::LargeInput, ) }); group.bench_function(format!("prefix_scan_schema/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_physical_read(runtime, profile, &storage_rows), |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_pointer_tracked_state_prefix_scan_schema_prepared( &backend, &fixture, ), ) .expect("json_pointer physical prefix schema scan"), ) }, BatchSize::LargeInput, ) }); group.bench_function( format!("prefix_scan_schema_file_null/{}", row_label(row_count)), |b| { b.iter_batched( || prepare_physical_read(runtime, profile, &storage_rows), |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_pointer_tracked_state_prefix_scan_schema_file_null_prepared( &backend, &fixture, ), ) .expect("json_pointer physical prefix schema file null scan"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function( format!("write_delta_10pct_updates/{}", row_label(row_count)), |b| { b.iter_batched( || { let backend = physical_backend(profile); let fixture = runtime .block_on( storage_bench::prepare_json_pointer_tracked_state_update_rows( &backend, &storage_rows, change_rows, ), ) .expect("prepare json_pointer physical delta update"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_update_existing_prepared( &backend, &fixture, )) 
.expect("json_pointer physical delta update"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function( format!("write_tombstone_10pct_deletes/{}", row_label(row_count)), |b| { b.iter_batched( || { let backend = physical_backend(profile); let fixture = runtime .block_on( storage_bench::prepare_json_pointer_tracked_state_tombstone_rows( &backend, &storage_rows, change_rows, ), ) .expect("prepare json_pointer physical tombstones"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_update_existing_prepared( &backend, &fixture, )) .expect("json_pointer physical tombstones"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function( format!("changed_keys_update_10pct/{}", row_label(row_count)), |b| { b.iter_batched( || { let backend = physical_backend(profile); let fixture = runtime .block_on( storage_bench::prepare_json_pointer_tracked_state_diff_update_rows( &backend, &storage_rows, change_rows, ), ) .expect("prepare json_pointer physical changed keys"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_pointer_tracked_state_changed_keys_prepared( &backend, &fixture, ), ) .expect("json_pointer physical changed keys"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function( format!("changed_keys_delta_chain_10x1pct/{}", row_label(row_count)), |b| { b.iter_batched( || { let backend = physical_backend(profile); let fixture = runtime .block_on( storage_bench::prepare_json_pointer_tracked_state_diff_delta_chain( &backend, &storage_rows, 10, (row_count / 100).max(1), ), ) .expect("prepare json_pointer physical delta-chain changed keys"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_pointer_tracked_state_changed_keys_prepared( &backend, &fixture, ), ) .expect("json_pointer physical delta-chain changed keys"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function( format!("materialize_delta_chain_10x1pct/{}", row_label(row_count)), |b| { b.iter_batched( || { let backend = physical_backend(profile); let fixture = runtime .block_on( storage_bench::prepare_json_pointer_tracked_state_materialize_delta_chain( &backend, &storage_rows, 10, (row_count / 100).max(1), ), ) .expect("prepare json_pointer physical materialize delta chain"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_materialize_root_prepared( &backend, &fixture, )) .expect("json_pointer physical materialize delta chain"), ) }, BatchSize::LargeInput, ) }, ); group.finish(); } } fn fixture_rows() -> Vec { let root: JsonValue = serde_json::from_str(PNPM_LOCK_JSON).expect("pnpm lock JSON fixture"); let mut rows = Vec::new(); flatten_json("", &root, &mut rows); assert!( rows.len() >= SCALE_ROWS, "pnpm lock fixture should have at least {SCALE_ROWS} pointer rows, got {}", rows.len() ); rows } fn prepare_raw_sqlite_empty() -> RawSqliteFixture { let dir = TempDir::new().expect("create raw sqlite tempdir"); let conn = Connection::open(dir.path().join("json-pointer-physical.sqlite")) .expect("open raw sqlite json_pointer physical db"); conn.execute_batch( " PRAGMA journal_mode = WAL; PRAGMA synchronous = NORMAL; PRAGMA temp_store = MEMORY; PRAGMA foreign_keys = ON; CREATE TABLE json_pointer ( path TEXT NOT NULL PRIMARY KEY, value TEXT NOT NULL ) WITHOUT ROWID; ", ) .expect("configure raw sqlite json_pointer physical db"); RawSqliteFixture { conn, _dir: dir } } fn prepare_raw_sqlite_seeded(rows: &[PointerRow]) -> RawSqliteFixture { let fixture = 
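// The raw baseline database is configured with WAL journaling, synchronous = NORMAL,
// an in-memory temp store, and foreign keys on, and stores pointers in a WITHOUT ROWID
// table keyed by path. This likely reflects a lightly tuned embedded-SQLite setup rather
// than stock defaults, so the comparison against the tracked-state backends is not
// skewed by an untuned baseline.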
prepare_raw_sqlite_empty(); raw_sqlite_seed(&fixture.conn, rows); fixture } fn raw_sqlite_seed(conn: &Connection, rows: &[PointerRow]) { conn.execute_batch("BEGIN IMMEDIATE") .expect("begin raw sqlite seed"); { let mut statement = conn .prepare_cached( "INSERT INTO json_pointer (path, value) VALUES (?1, ?2) ON CONFLICT(path) DO UPDATE SET value = excluded.value", ) .expect("prepare raw sqlite seed insert"); for row in rows { statement .execute(params![row.path.as_str(), row.value_json.as_str()]) .expect("insert raw sqlite seed row"); } } conn.execute_batch("COMMIT") .expect("commit raw sqlite seed"); } fn raw_sqlite_insert_all(fixture: RawSqliteFixture, rows: &[PointerRow]) -> usize { raw_sqlite_seed(&fixture.conn, rows); rows.len() } fn raw_sqlite_get_many_exact(fixture: RawSqliteFixture, rows: &[PointerRow]) -> usize { let mut statement = fixture .conn .prepare_cached("SELECT value FROM json_pointer WHERE path = ?1") .expect("prepare raw sqlite exact get"); let mut found = 0; for row in rows { if statement .query_row(params![row.path.as_str()], |_| Ok(())) .optional() .expect("raw sqlite exact get") .is_some() { found += 1; } } assert_eq!(found, rows.len()); found } fn raw_sqlite_get_many_missing(fixture: RawSqliteFixture, row_count: usize) -> usize { let mut statement = fixture .conn .prepare_cached("SELECT value FROM json_pointer WHERE path = ?1") .expect("prepare raw sqlite missing get"); let mut found = 0; for index in 0..row_count { let missing_path = format!("/__missing/{index}"); if statement .query_row(params![missing_path.as_str()], |_| Ok(())) .optional() .expect("raw sqlite missing get") .is_some() { found += 1; } } assert_eq!(found, 0); found } fn raw_sqlite_exists_many(fixture: RawSqliteFixture, rows: &[PointerRow]) -> usize { let mut statement = fixture .conn .prepare_cached("SELECT 1 FROM json_pointer WHERE path = ?1") .expect("prepare raw sqlite exists"); let mut found = 0; for row in rows { if statement .query_row(params![row.path.as_str()], |_| Ok(())) .optional() .expect("raw sqlite exists") .is_some() { found += 1; } } assert_eq!(found, rows.len()); found } fn raw_sqlite_scan_keys_only(fixture: RawSqliteFixture, expected_rows: usize) -> usize { let mut statement = fixture .conn .prepare_cached("SELECT path FROM json_pointer ORDER BY path") .expect("prepare raw sqlite keys scan"); let count = statement .query_map([], |_| Ok(())) .expect("raw sqlite keys scan") .count(); assert_eq!(count, expected_rows); count } fn raw_sqlite_scan_full_rows(fixture: RawSqliteFixture, expected_rows: usize) -> usize { let mut statement = fixture .conn .prepare_cached("SELECT path, value FROM json_pointer ORDER BY path") .expect("prepare raw sqlite full scan"); let count = statement .query_map([], |_| Ok(())) .expect("raw sqlite full scan") .count(); assert_eq!(count, expected_rows); count } fn raw_sqlite_update_first_rows( fixture: RawSqliteFixture, rows: &[PointerRow], change_rows: usize, ) -> usize { fixture .conn .execute_batch("BEGIN IMMEDIATE") .expect("begin raw sqlite update"); let mut affected = 0; { let mut statement = fixture .conn .prepare_cached("UPDATE json_pointer SET value = ?1 WHERE path = ?2") .expect("prepare raw sqlite update"); for row in &rows[..change_rows] { affected += statement .execute(params![row.updated_value_json.as_str(), row.path.as_str()]) .expect("raw sqlite update"); } } fixture .conn .execute_batch("COMMIT") .expect("commit raw sqlite update"); assert_eq!(affected, change_rows); affected } fn raw_sqlite_delete_first_rows( fixture: RawSqliteFixture, rows: 
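// Delta baselines: write_delta_10pct_updates and write_tombstone_10pct_deletes touch only
// the first `change_rows` entries (changed_row_count = row_count / 10, minimum 1). Each
// runs one IMMEDIATE transaction with a cached UPDATE or DELETE statement and asserts
// that exactly `change_rows` rows were affected.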
&[PointerRow], change_rows: usize, ) -> usize { fixture .conn .execute_batch("BEGIN IMMEDIATE") .expect("begin raw sqlite delete"); let mut affected = 0; { let mut statement = fixture .conn .prepare_cached("DELETE FROM json_pointer WHERE path = ?1") .expect("prepare raw sqlite delete"); for row in &rows[..change_rows] { affected += statement .execute(params![row.path.as_str()]) .expect("raw sqlite delete"); } } fixture .conn .execute_batch("COMMIT") .expect("commit raw sqlite delete"); assert_eq!(affected, change_rows); affected } fn storage_rows(rows: &[PointerRow]) -> Vec { rows.iter() .map(|row| storage_bench::JsonPointerStorageRow { path: row.path.clone(), value_json: row.value_json.clone(), updated_value_json: row.updated_value_json.clone(), }) .collect() } fn physical_backend(profile: BackendProfile) -> Arc { match profile { BackendProfile::Sqlite => { Arc::new(SqliteBenchBackend::tempfile().expect("create sqlite physical backend")) } BackendProfile::RocksDb => { Arc::new(RocksDbBenchBackend::new().expect("create rocksdb physical backend")) } } } fn prepare_physical_read( runtime: &Runtime, profile: BackendProfile, rows: &[storage_bench::JsonPointerStorageRow], ) -> ( Arc, storage_bench::JsonPointerTrackedStateReadFixture, ) { let backend = physical_backend(profile); let fixture = runtime .block_on(storage_bench::prepare_json_pointer_tracked_state_read( &backend, rows, )) .expect("prepare json_pointer physical read"); (backend, fixture) } fn flatten_json(path: &str, value: &JsonValue, rows: &mut Vec) { rows.push(PointerRow { path: path.to_string(), value_json: value.to_string(), updated_value_json: updated_value_for(path), }); match value { JsonValue::Array(items) => { for (index, item) in items.iter().enumerate() { let child_path = format!("{path}/{}", index); flatten_json(&child_path, item, rows); } } JsonValue::Object(map) => { for (key, child) in map { let child_path = format!("{path}/{}", escape_pointer_token(key)); flatten_json(&child_path, child, rows); } } JsonValue::Null | JsonValue::Bool(_) | JsonValue::Number(_) | JsonValue::String(_) => {} } } fn updated_value_for(path: &str) -> String { serde_json::json!({ "updated": true, "path": path, }) .to_string() } fn escape_pointer_token(token: &str) -> String { token.replace('~', "~0").replace('/', "~1") } fn row_label(rows: usize) -> String { if rows >= 1_000 { format!("{}k", rows / 1_000) } else { rows.to_string() } } fn changed_row_count(rows: usize) -> usize { (rows / CHANGE_ROW_DENOMINATOR).max(1) } criterion_group!(benches, json_pointer_physical_benches); criterion_main!(benches); ================================================ FILE: packages/engine/benches/optimization9_sql2/json_pointer.schema.json ================================================ { "x-lix-key": "json_pointer", "x-lix-primary-key": [ "/path" ], "type": "object", "properties": { "path": { "type": "string", "description": "RFC 6901 JSON Pointer path (empty string for root)." 
}, "value": { "anyOf": [ { "type": "object" }, { "type": "array" }, { "type": "string" }, { "type": "number" }, { "type": "boolean" }, { "type": "null" } ] } }, "required": [ "path", "value" ], "additionalProperties": false } ================================================ FILE: packages/engine/benches/optimization9_sql2/main.rs ================================================ use std::time::Duration; use criterion::{black_box, criterion_group, criterion_main, BatchSize, Criterion}; use lix_engine::{optimization9_sql2_bench, Engine, SessionContext, Value}; use serde_json::Value as JsonValue; use tokio::runtime::Runtime; #[path = "../storage/rocksdb_backend.rs"] mod rocksdb_backend; #[path = "../storage/sqlite_backend.rs"] mod sqlite_backend; use rocksdb_backend::RocksDbBenchBackend; use sqlite_backend::SqliteBenchBackend; const JSON_POINTER_SCHEMA_JSON: &str = include_str!("json_pointer.schema.json"); const PNPM_LOCK_JSON: &str = include_str!("pnpm-lock.fixture.json"); const ROW_COUNT: usize = 1_000; const INSERT_ROWS: usize = 500; const CHUNK_SIZE: usize = 500; #[derive(Clone)] struct PointerRow { path: String, value_json: String, updated_value_json: String, } #[derive(Clone, Copy)] enum LixBackendProfile { Sqlite, RocksDb, } impl LixBackendProfile { fn name(self) -> &'static str { match self { Self::Sqlite => "lix_sqlite", Self::RocksDb => "lix_rocksdb", } } } struct LixFixture { session: SessionContext, } fn optimization9_sql2_benches(c: &mut Criterion) { let runtime = tokio::runtime::Builder::new_current_thread() .enable_all() .build() .expect("create tokio runtime for optimization9 sql2 benchmarks"); let rows = fixture_rows(); for profile in [LixBackendProfile::Sqlite, LixBackendProfile::RocksDb] { bench_smoke_crud(c, &runtime, profile, &rows); bench_planning_only(c, &runtime, profile, &rows); bench_execute_preplanned(c, &runtime, profile, &rows); bench_e2e_literal(c, &runtime, profile, &rows); bench_e2e_parameterized(c, &runtime, profile, &rows); } } fn bench_smoke_crud( c: &mut Criterion, runtime: &Runtime, profile: LixBackendProfile, all_rows: &[PointerRow], ) { let rows = all_rows[..ROW_COUNT].to_vec(); let mut group = c.benchmark_group(format!("optimization9_sql2/smoke_crud/{}", profile.name())); configure_group(&mut group); group.bench_function("insert_all_rows/1k", |b| { b.iter_batched( || runtime.block_on(prepare_lix_empty(profile)), |fixture| { insert_lix_rows_blocking(runtime, &fixture.session, &rows); black_box(rows.len()) }, BatchSize::LargeInput, ) }); group.bench_function("select_all_path_value/1k", |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { let result = runtime .block_on( fixture .session .execute("SELECT path, value FROM json_pointer ORDER BY path", &[]), ) .expect("smoke select all"); assert_eq!(result.len(), ROW_COUNT); black_box(result.len()) }, BatchSize::LargeInput, ) }); group.bench_function("select_one_by_pk/1k", |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { let sql = select_one_literal_sql(pick_pk_row(&rows)); let result = runtime .block_on(fixture.session.execute(&sql, &[])) .expect("smoke select one"); assert_eq!(result.len(), 1); black_box(result.len()) }, BatchSize::LargeInput, ) }); group.bench_function("update_all_values/1k", |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { let affected = runtime .block_on(fixture.session.execute( r#"UPDATE json_pointer SET value = lix_json('{"updated":true}')"#, &[], )) .expect("smoke 
update all") .rows_affected(); assert_eq!(affected as usize, ROW_COUNT); black_box(affected) }, BatchSize::LargeInput, ) }); group.bench_function("update_one_by_pk/1k", |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { let sql = update_one_literal_sql(pick_pk_row(&rows)); let affected = runtime .block_on(fixture.session.execute(&sql, &[])) .expect("smoke update one") .rows_affected(); assert_eq!(affected, 1); black_box(affected) }, BatchSize::LargeInput, ) }); group.bench_function("delete_all_rows/1k", |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { let affected = runtime .block_on(fixture.session.execute("DELETE FROM json_pointer", &[])) .expect("smoke delete all") .rows_affected(); assert_eq!(affected as usize, ROW_COUNT); black_box(affected) }, BatchSize::LargeInput, ) }); group.bench_function("delete_one_by_pk/1k", |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { let sql = delete_one_literal_sql(pick_pk_row(&rows)); let affected = runtime .block_on(fixture.session.execute(&sql, &[])) .expect("smoke delete one") .rows_affected(); assert_eq!(affected, 1); black_box(affected) }, BatchSize::LargeInput, ) }); group.finish(); } fn bench_planning_only( c: &mut Criterion, runtime: &Runtime, profile: LixBackendProfile, all_rows: &[PointerRow], ) { let rows = all_rows[..ROW_COUNT].to_vec(); let mut group = c.benchmark_group(format!( "optimization9_sql2/planning_only/{}", profile.name() )); configure_group(&mut group); group.bench_function("select_all_path_value/1k", |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { black_box(runtime.block_on(optimization9_sql2_bench::plan_read_only( &fixture.session, "SELECT path, value FROM json_pointer ORDER BY path", ))) .expect("plan select all") }, BatchSize::LargeInput, ) }); group.bench_function("select_one_by_pk/1k", |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { let sql = select_one_literal_sql(pick_pk_row(&rows)); black_box(runtime.block_on(optimization9_sql2_bench::plan_read_only( &fixture.session, &sql, ))) .expect("plan select one") }, BatchSize::LargeInput, ) }); group.bench_function("insert_500_values/1k", |b| { b.iter_batched( || runtime.block_on(prepare_lix_empty(profile)), |fixture| { let sql = insert_literal_sql(&rows[..INSERT_ROWS]); black_box(runtime.block_on(optimization9_sql2_bench::plan_write_only( &fixture.session, &sql, ))) .expect("plan insert") }, BatchSize::LargeInput, ) }); group.bench_function("update_all_values/1k", |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { black_box(runtime.block_on(optimization9_sql2_bench::plan_write_only( &fixture.session, r#"UPDATE json_pointer SET value = lix_json('{"updated":true}')"#, ))) .expect("plan update all") }, BatchSize::LargeInput, ) }); group.bench_function("delete_all_rows/1k", |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { black_box(runtime.block_on(optimization9_sql2_bench::plan_write_only( &fixture.session, "DELETE FROM json_pointer", ))) .expect("plan delete all") }, BatchSize::LargeInput, ) }); group.finish(); } fn bench_execute_preplanned( c: &mut Criterion, runtime: &Runtime, profile: LixBackendProfile, all_rows: &[PointerRow], ) { let rows = all_rows[..ROW_COUNT].to_vec(); let mut group = c.benchmark_group(format!( "optimization9_sql2/execute_preplanned/{}", profile.name() )); 
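// execute_preplanned isolates execution cost from planning cost: the setup closure seeds
// a fixture and builds a read plan once via optimization9_sql2_bench::prepare_read_plan,
// and the measured closure only calls execute_read_plan (with positional parameters for
// the by-pk case). The planning_only group above measures the opposite half, planning
// without execution, so both can be compared against the end-to-end groups below.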
configure_group(&mut group); group.bench_function("select_all_path_value/1k", |b| { b.iter_batched( || { let fixture = runtime.block_on(prepare_lix_seeded(profile, &rows)); runtime .block_on(optimization9_sql2_bench::prepare_read_plan( &fixture.session, "SELECT path, value FROM json_pointer ORDER BY path", )) .expect("prepare select all plan") }, |plan| { let result = runtime .block_on(optimization9_sql2_bench::execute_read_plan(plan, &[])) .expect("execute select all plan"); assert_eq!(result.rows.len(), ROW_COUNT); black_box(result.rows.len()) }, BatchSize::LargeInput, ) }); group.bench_function("select_one_by_pk/1k", |b| { b.iter_batched( || { let fixture = runtime.block_on(prepare_lix_seeded(profile, &rows)); let sql = select_one_parameterized_sql(); runtime .block_on(optimization9_sql2_bench::prepare_read_plan( &fixture.session, sql, )) .expect("prepare select one plan") }, |plan| { let params = vec![Value::Text(pick_pk_row(&rows).path.clone())]; let result = runtime .block_on(optimization9_sql2_bench::execute_read_plan(plan, ¶ms)) .expect("execute select one plan"); assert_eq!(result.rows.len(), 1); black_box(result.rows.len()) }, BatchSize::LargeInput, ) }); group.finish(); } fn bench_e2e_literal( c: &mut Criterion, runtime: &Runtime, profile: LixBackendProfile, all_rows: &[PointerRow], ) { let rows = all_rows[..ROW_COUNT].to_vec(); let mut group = c.benchmark_group(format!("optimization9_sql2/e2e_literal/{}", profile.name())); configure_group(&mut group); group.bench_function("select_one_by_pk/1k", |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { let sql = select_one_literal_sql(pick_pk_row(&rows)); let result = runtime .block_on(fixture.session.execute(&sql, &[])) .expect("literal select one"); assert_eq!(result.len(), 1); black_box(result.len()) }, BatchSize::LargeInput, ) }); group.bench_function("update_one_by_pk/1k", |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { let sql = update_one_literal_sql(pick_pk_row(&rows)); let affected = runtime .block_on(fixture.session.execute(&sql, &[])) .expect("literal update one") .rows_affected(); assert_eq!(affected, 1); black_box(affected) }, BatchSize::LargeInput, ) }); group.bench_function("delete_one_by_pk/1k", |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { let sql = delete_one_literal_sql(pick_pk_row(&rows)); let affected = runtime .block_on(fixture.session.execute(&sql, &[])) .expect("literal delete one") .rows_affected(); assert_eq!(affected, 1); black_box(affected) }, BatchSize::LargeInput, ) }); group.finish(); } fn bench_e2e_parameterized( c: &mut Criterion, runtime: &Runtime, profile: LixBackendProfile, all_rows: &[PointerRow], ) { let rows = all_rows[..ROW_COUNT].to_vec(); let mut group = c.benchmark_group(format!( "optimization9_sql2/e2e_parameterized/{}", profile.name() )); configure_group(&mut group); group.bench_function("select_one_by_pk/1k", |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { let row = pick_pk_row(&rows); let result = runtime .block_on(fixture.session.execute( select_one_parameterized_sql(), &[Value::Text(row.path.clone())], )) .expect("parameterized select one"); assert_eq!(result.len(), 1); black_box(result.len()) }, BatchSize::LargeInput, ) }); group.bench_function("update_one_by_pk/1k", |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { let row = pick_pk_row(&rows); let affected = runtime 
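// e2e_parameterized issues placeholder SQL ($1, $2) with Value::Text arguments, whereas
// e2e_literal above splices escaped literals through the *_literal_sql helpers;
// contrasting the two groups presumably surfaces the cost of re-planning literal
// statements versus binding parameters to a reusable statement shape.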
.block_on(fixture.session.execute( "UPDATE json_pointer SET value = lix_json($1) WHERE path = $2", &[ Value::Text(row.updated_value_json.clone()), Value::Text(row.path.clone()), ], )) .expect("parameterized update one") .rows_affected(); assert_eq!(affected, 1); black_box(affected) }, BatchSize::LargeInput, ) }); group.bench_function("delete_one_by_pk/1k", |b| { b.iter_batched( || runtime.block_on(prepare_lix_seeded(profile, &rows)), |fixture| { let row = pick_pk_row(&rows); let affected = runtime .block_on(fixture.session.execute( "DELETE FROM json_pointer WHERE path = $1", &[Value::Text(row.path.clone())], )) .expect("parameterized delete one") .rows_affected(); assert_eq!(affected, 1); black_box(affected) }, BatchSize::LargeInput, ) }); group.finish(); } fn configure_group(group: &mut criterion::BenchmarkGroup<'_, criterion::measurement::WallTime>) { group.sample_size(11); group.warm_up_time(Duration::from_millis(250)); group.measurement_time(Duration::from_secs(1)); } async fn prepare_lix_empty(profile: LixBackendProfile) -> LixFixture { let engine = match profile { LixBackendProfile::Sqlite => { let backend = SqliteBenchBackend::tempfile().expect("create sqlite optimization9 backend"); Engine::initialize(Box::new(backend.clone())) .await .expect("initialize sqlite optimization9 backend"); Engine::new(Box::new(backend)) .await .expect("open sqlite optimization9 engine") } LixBackendProfile::RocksDb => { let backend = RocksDbBenchBackend::new().expect("create rocksdb optimization9 backend"); Engine::initialize(Box::new(backend.clone())) .await .expect("initialize rocksdb optimization9 backend"); Engine::new(Box::new(backend)) .await .expect("open rocksdb optimization9 engine") } }; let setup_session = engine .open_workspace_session() .await .expect("open optimization9 setup workspace session"); register_json_pointer_schema(&setup_session).await; let session = engine .open_workspace_session() .await .expect("open optimization9 benchmark workspace session"); LixFixture { session } } async fn prepare_lix_seeded(profile: LixBackendProfile, rows: &[PointerRow]) -> LixFixture { let fixture = prepare_lix_empty(profile).await; insert_lix_rows(&fixture.session, rows).await; fixture } async fn register_json_pointer_schema(session: &SessionContext) { let sql = format!( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) VALUES (lix_json('{}'), false, false)", sql_string(JSON_POINTER_SCHEMA_JSON) ); let affected = session .execute(&sql, &[]) .await .expect("register json_pointer schema") .rows_affected(); assert_eq!(affected, 1); } async fn insert_lix_rows(session: &SessionContext, rows: &[PointerRow]) { for chunk in rows.chunks(CHUNK_SIZE) { let sql = insert_literal_sql(chunk); let affected = session .execute(&sql, &[]) .await .expect("insert json_pointer rows") .rows_affected(); assert_eq!(affected as usize, chunk.len()); } } fn insert_lix_rows_blocking(runtime: &Runtime, session: &SessionContext, rows: &[PointerRow]) { runtime.block_on(insert_lix_rows(session, rows)); } fn fixture_rows() -> Vec { let root: JsonValue = serde_json::from_str(PNPM_LOCK_JSON).expect("pnpm lock JSON fixture"); let mut rows = Vec::new(); flatten_json("", &root, &mut rows); assert!( rows.len() >= ROW_COUNT, "pnpm lock fixture should have at least {ROW_COUNT} pointer rows, got {}", rows.len() ); rows } fn flatten_json(path: &str, value: &JsonValue, rows: &mut Vec) { rows.push(PointerRow { path: path.to_string(), value_json: value.to_string(), updated_value_json: updated_value_for(path), }); match 
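// flatten_json walks the fixture depth-first and emits one PointerRow per JSON node,
// keyed by its RFC 6901 pointer (the empty string for the root, matching the schema's
// description). Array elements recurse by index; object keys are escaped with
// escape_pointer_token (`~` -> `~0`, `/` -> `~1`). updated_value_json carries a small
// per-path replacement object used by the update benchmarks, and sql_string doubles
// single quotes whenever these values are spliced into literal SQL.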
value { JsonValue::Array(items) => { for (index, item) in items.iter().enumerate() { let child_path = format!("{path}/{}", index); flatten_json(&child_path, item, rows); } } JsonValue::Object(map) => { for (key, child) in map { let child_path = format!("{path}/{}", escape_pointer_token(key)); flatten_json(&child_path, child, rows); } } JsonValue::Null | JsonValue::Bool(_) | JsonValue::Number(_) | JsonValue::String(_) => {} } } fn insert_literal_sql(rows: &[PointerRow]) -> String { let mut sql = String::from("INSERT INTO json_pointer (path, value) VALUES "); for (index, row) in rows.iter().enumerate() { if index > 0 { sql.push(','); } sql.push_str(&format!( "('{}', lix_json('{}'))", sql_string(row.path.as_str()), sql_string(row.value_json.as_str()) )); } sql } fn select_one_literal_sql(row: &PointerRow) -> String { format!( "SELECT path, value FROM json_pointer WHERE path = '{}'", sql_string(row.path.as_str()) ) } fn select_one_parameterized_sql() -> &'static str { "SELECT path, value FROM json_pointer WHERE path = $1" } fn update_one_literal_sql(row: &PointerRow) -> String { format!( "UPDATE json_pointer SET value = lix_json('{}') WHERE path = '{}'", sql_string(row.updated_value_json.as_str()), sql_string(row.path.as_str()) ) } fn delete_one_literal_sql(row: &PointerRow) -> String { format!( "DELETE FROM json_pointer WHERE path = '{}'", sql_string(row.path.as_str()) ) } fn pick_pk_row(rows: &[PointerRow]) -> &PointerRow { &rows[rows.len() / 2] } fn updated_value_for(path: &str) -> String { serde_json::json!({ "updated": true, "path": path, }) .to_string() } fn escape_pointer_token(token: &str) -> String { token.replace('~', "~0").replace('/', "~1") } fn sql_string(value: &str) -> String { value.replace('\'', "''") } criterion_group!(benches, optimization9_sql2_benches); criterion_main!(benches); ================================================ FILE: packages/engine/benches/optimization9_sql2/pnpm-lock.fixture.json ================================================ 
{"lockfileVersion":"9.0","settings":{"autoInstallPeers":true,"excludeLinksFromLockfile":false},"importers":{".":{"devDependencies":{"@changesets/cli":{"specifier":"^2.29.7","version":"2.29.7(@types/node@24.10.2)"},"@vitest/coverage-v8":{"specifier":"^3.1.1","version":"3.2.4(@vitest/browser@3.2.4)(vitest@3.2.4)"},"nx":{"specifier":"^21.0.0","version":"21.4.1"},"nx-cloud":{"specifier":"^19.1.0","version":"19.1.0"},"vitest":{"specifier":"^3.1.1","version":"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}}},"packages/js-kysely":{"dependencies":{"json-schema-to-ts":{"specifier":"^3.1.1","version":"3.1.1"},"kysely":{"specifier":"^0.28.7","version":"0.28.7"}},"devDependencies":{"@lix-js/sdk":{"specifier":"workspace:*","version":"link:../js-sdk"},"typescript":{"specifier":"^5.5.4","version":"5.9.3"},"vitest":{"specifier":"^4.0.18","version":"4.0.18(@opentelemetry/api@1.9.0)(@types/node@24.10.2)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}}},"packages/js-sdk":{"devDependencies":{"better-sqlite3":{"specifier":"^12.9.0","version":"12.9.0"},"typescript":{"specifier":"^5.5.4","version":"5.9.3"},"vitest":{"specifier":"^4.0.18","version":"4.0.18(@opentelemetry/api@1.9.0)(@types/node@24.10.2)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}}},"packages/react-utils":{"devDependencies":{"@lix-js/kysely":{"specifier":"workspace:*","version":"link:../js-kysely"},"@lix-js/sdk":{"specifier":"workspace:*","version":"link:../js-sdk"},"@testing-library/react":{"specifier":"^16.3.0","version":"16.3.0(@testing-library/dom@10.4.1)(@types/react-dom@19.2.3(@types/react@19.2.7))(@types/react@19.2.7)(react-dom@19.2.0(react@19.2.0))(react@19.2.0)"},"@types/react":{"specifier":"^19.1.8","version":"19.2.7"},"@vitest/coverage-v8":{"specifier":"^3.2.4","version":"3.2.4(@vitest/browser@3.2.4)(vitest@3.2.4)"},"https-proxy-agent":{"specifier":"7.0.2","version":"7.0.2"},"jsdom":{"specifier":"^26.1.0","version":"26.1.0"},"oxlint":{"specifier":"^1.14.0","version":"1.26.0"},"prettier":{"specifier":"^3.3.3","version":"3.6.2"},"react":{"specifier":"19.2.0","version":"19.2.0"},"react-dom":{"specifier":"19.2.0","version":"19.2.0(react@19.2.0)"},"typescript":{"specifier":"^5.5.4","version":"5.8.3"},"vitest":{"specifier":"^3.2.4","version":"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@26.1.0)(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}}},"packages/website":{"dependencies":{"@cloudflare/vite-plugin":{"specifier":"^1.36.0","version":"1.36.0(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(workerd@1.20260504.1)(wrangler@4.88.0)"},"@lix-js/plugin-json":{"specifier":"1.0.1","version":"1.0.1(tslib@2.8.1)"},"@lix-js/sdk":{"specifier":"workspace:*","version":"link:../js-sdk"},"@opral/markdown-wc":{"specifier":"0.9.0","version":"0.9.0"},"@tailwindcss/vite":{"specifier":"^4.2.4","version":"4.2.4(vite@8.0.10(@types/node@22.
15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))"},"@tanstack/react-router":{"specifier":"^1.169.2","version":"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)"},"@tanstack/react-start":{"specifier":"^1.167.64","version":"1.167.64(react-dom@19.2.0(react@19.2.0))(react@19.2.0)(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))"},"@tanstack/router-plugin":{"specifier":"^1.167.34","version":"1.167.34(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))"},"lucide-react":{"specifier":"^0.544.0","version":"0.544.0(react@19.2.0)"},"posthog-js":{"specifier":"^1.321.2","version":"1.321.2"},"react":{"specifier":"^19.2.0","version":"19.2.0"},"react-dom":{"specifier":"^19.2.0","version":"19.2.0(react@19.2.0)"},"shiki":{"specifier":"^3.2.2","version":"3.15.0"},"tailwindcss":{"specifier":"^4.2.4","version":"4.2.4"}},"devDependencies":{"@testing-library/dom":{"specifier":"^10.4.0","version":"10.4.1"},"@testing-library/react":{"specifier":"^16.2.0","version":"16.3.0(@testing-library/dom@10.4.1)(@types/react-dom@19.2.3(@types/react@19.2.7))(@types/react@19.2.7)(react-dom@19.2.0(react@19.2.0))(react@19.2.0)"},"@types/node":{"specifier":"^22.10.2","version":"22.15.33"},"@types/react":{"specifier":"^19.2.0","version":"19.2.7"},"@types/react-dom":{"specifier":"^19.2.0","version":"19.2.3(@types/react@19.2.7)"},"@vitejs/plugin-react":{"specifier":"^6.0.1","version":"6.0.1(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))"},"@vitest/browser":{"specifier":"^4.1.5","version":"4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@4.1.5)"},"@vitest/coverage-v8":{"specifier":"^4.1.5","version":"4.1.5(@vitest/browser@4.1.5)(vitest@4.1.5)"},"jsdom":{"specifier":"^27.0.0","version":"27.3.0(postcss@8.5.14)"},"prettier":{"specifier":"^3.6.0","version":"3.6.2"},"typescript":{"specifier":"^5.7.2","version":"5.8.3"},"vite":{"specifier":"^8.0.10","version":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"},"vite-plugin-static-copy":{"specifier":"^4.1.0","version":"4.1.0(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))"},"vitest":{"specifier":"^4.1.5","version":"4.1.5(@opentelemetry/api@1.9.0)(@types/node@22.15.33)(@vitest/coverage-v8@4.1.5)(happy-dom@18.0.1)(jsdom@27.3.0(postcss@8.5.14))(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))"},"web-vitals":{"specifier":"^5.1.0","version":"5.1.0"},"wrangler":{"specifier":"^4.88.0","version":"4.88.0"}}}},"packages":{"@acemir/cssom@0.9.28":{"resolution":{"integrity":"sha512-LuS6IVEivI75vKN8S04qRD+YySP0RmU/cV8UNukhQZvprxF+76Z43TNo/a08eCodaGhT1Us8etqS1ZRY9/Or0A=="}},"@ampproject/remapping@2.3.0":{"resolution":{"integrity":"sha512-30iZtAPgz+LTIYoeivqYo853f02jBYSd5uGnGpkFV0M3xOt9aN73erkgYAmZU43x4VfqcnLxW9Kpg3R5LC4YYw=="},"engines":{"node":">=6.0.0"}},"@antfu/install-pkg@1.1.0":{"resolut
ion":{"integrity":"sha512-MGQsmw10ZyI+EJo45CdSER4zEb+p31LpDAFp2Z3gkSd1yqVZGi0Ebx++YTEMonJy4oChEMLsxZ64j8FH6sSqtQ=="}},"@antfu/utils@9.3.0":{"resolution":{"integrity":"sha512-9hFT4RauhcUzqOE4f1+frMKLZrgNog5b06I7VmZQV1BkvwvqrbC8EBZf3L1eEL2AKb6rNKjER0sEvJiSP1FXEA=="}},"@asamuzakjp/css-color@3.1.4":{"resolution":{"integrity":"sha512-SeuBV4rnjpFNjI8HSgKUwteuFdkHwkboq31HWzznuqgySQir+jSTczoWVVL4jvOjKjuH80fMDG0Fvg1Sb+OJsA=="}},"@asamuzakjp/css-color@4.1.0":{"resolution":{"integrity":"sha512-9xiBAtLn4aNsa4mDnpovJvBn72tNEIACyvlqaNJ+ADemR+yeMJWnBudOi2qGDviJa7SwcDOU/TRh5dnET7qk0w=="}},"@asamuzakjp/dom-selector@6.7.6":{"resolution":{"integrity":"sha512-hBaJER6A9MpdG3WgdlOolHmbOYvSk46y7IQN/1+iqiCuUu6iWdQrs9DGKF8ocqsEqWujWf/V7b7vaDgiUmIvUg=="}},"@asamuzakjp/nwsapi@2.3.9":{"resolution":{"integrity":"sha512-n8GuYSrI9bF7FFZ/SjhwevlHc8xaVlb/7HmHelnc/PZXBD2ZR49NnN9sMMuDdEGPeeRQ5d0hqlSlEpgCX3Wl0Q=="}},"@babel/code-frame@7.27.1":{"resolution":{"integrity":"sha512-cjQ7ZlQ0Mv3b47hABuTevyTuYN4i+loJKGeV9flcCgIK37cCXRh+L1bd3iBHlynerhQ7BhCkn2BPbQUL+rGqFg=="},"engines":{"node":">=6.9.0"}},"@babel/compat-data@7.28.0":{"resolution":{"integrity":"sha512-60X7qkglvrap8mn1lh2ebxXdZYtUcpd7gsmy9kLaBJ4i/WdY8PqTSdxyA8qraikqKQK5C1KRBKXqznrVapyNaw=="},"engines":{"node":">=6.9.0"}},"@babel/core@7.28.5":{"resolution":{"integrity":"sha512-e7jT4DxYvIDLk1ZHmU/m/mB19rex9sv0c2ftBtjSBv+kVM/902eh0fINUzD7UwLLNR+jU585GxUJ8/EBfAM5fw=="},"engines":{"node":">=6.9.0"}},"@babel/generator@7.28.5":{"resolution":{"integrity":"sha512-3EwLFhZ38J4VyIP6WNtt2kUdW9dokXA9Cr4IVIFHuCpZ3H8/YFOl5JjZHisrn1fATPBmKKqXzDFvh9fUwHz6CQ=="},"engines":{"node":">=6.9.0"}},"@babel/helper-compilation-targets@7.27.2":{"resolution":{"integrity":"sha512-2+1thGUUWWjLTYTHZWK1n8Yga0ijBz1XAhUXcKy81rd5g6yh7hGqMp45v7cadSbEHc9G3OTv45SyneRN3ps4DQ=="},"engines":{"node":">=6.9.0"}},"@babel/helper-globals@7.28.0":{"resolution":{"integrity":"sha512-+W6cISkXFa1jXsDEdYA8HeevQT/FULhxzR99pxphltZcVaugps53THCeiWA8SguxxpSp3gKPiuYfSWopkLQ4hw=="},"engines":{"node":">=6.9.0"}},"@babel/helper-module-imports@7.27.1":{"resolution":{"integrity":"sha512-0gSFWUPNXNopqtIPQvlD5WgXYI5GY2kP2cCvoT8kczjbfcfuIljTbcWrulD1CIPIX2gt1wghbDy08yE1p+/r3w=="},"engines":{"node":">=6.9.0"}},"@babel/helper-module-transforms@7.28.3":{"resolution":{"integrity":"sha512-gytXUbs8k2sXS9PnQptz5o0QnpLL51SwASIORY6XaBKF88nsOT0Zw9szLqlSGQDP/4TljBAD5y98p2U1fqkdsw=="},"engines":{"node":">=6.9.0"},"peerDependencies":{"@babel/core":"^7.0.0"}},"@babel/helper-plugin-utils@7.27.1":{"resolution":{"integrity":"sha512-1gn1Up5YXka3YYAHGKpbideQ5Yjf1tDa9qYcgysz+cNCXukyLl6DjPXhD3VRwSb8c0J9tA4b2+rHEZtc6R0tlw=="},"engines":{"node":">=6.9.0"}},"@babel/helper-string-parser@7.27.1":{"resolution":{"integrity":"sha512-qMlSxKbpRlAridDExk92nSobyDdpPijUq2DW6oDnUqd0iOGxmQjyqhMIihI9+zv4LPyZdRje2cavWPbCbWm3eA=="},"engines":{"node":">=6.9.0"}},"@babel/helper-validator-identifier@7.28.5":{"resolution":{"integrity":"sha512-qSs4ifwzKJSV39ucNjsvc6WVHs6b7S03sOh2OcHF9UHfVPqWWALUsNUVzhSBiItjRZoLHx7nIarVjqKVusUZ1Q=="},"engines":{"node":">=6.9.0"}},"@babel/helper-validator-option@7.27.1":{"resolution":{"integrity":"sha512-YvjJow9FxbhFFKDSuFnVCe2WxXk1zWc22fFePVNEaWJEu8IrZVlda6N0uHwzZrUM1il7NC9Mlp4MaJYbYd9JSg=="},"engines":{"node":">=6.9.0"}},"@babel/helpers@7.28.4":{"resolution":{"integrity":"sha512-HFN59MmQXGHVyYadKLVumYsA9dBFun/ldYxipEjzA4196jpLZd8UjEEBLkbEkvfYreDqJhZxYAWFPtrfhNpj4w=="},"engines":{"node":">=6.9.0"}},"@babel/parser@7.28.5":{"resolution":{"integrity":"sha512-KKBU1VGYR7ORr3At5HAtUQ+TV3SzRCXmA/8OdDZiLDBIZxVyzXuztPjfLd3BV1PRAQGCMWWSHYhL0F8d5uHBDQ=="},"eng
ines":{"node":">=6.0.0"},"hasBin":true},"@babel/parser@7.29.3":{"resolution":{"integrity":"sha512-b3ctpQwp+PROvU/cttc4OYl4MzfJUWy6FZg+PMXfzmt/+39iHVF0sDfqay8TQM3JA2EUOyKcFZt75jWriQijsA=="},"engines":{"node":">=6.0.0"},"hasBin":true},"@babel/plugin-syntax-jsx@7.27.1":{"resolution":{"integrity":"sha512-y8YTNIeKoyhGd9O0Jiyzyyqk8gdjnumGTQPsz0xOZOQ2RmkVJeZ1vmmfIvFEKqucBG6axJGBZDE/7iI5suUI/w=="},"engines":{"node":">=6.9.0"},"peerDependencies":{"@babel/core":"^7.0.0-0"}},"@babel/plugin-syntax-typescript@7.27.1":{"resolution":{"integrity":"sha512-xfYCBMxveHrRMnAWl1ZlPXOZjzkN82THFvLhQhFXFt81Z5HnN+EtUkZhv/zcKpmT3fzmWZB0ywiBrbC3vogbwQ=="},"engines":{"node":">=6.9.0"},"peerDependencies":{"@babel/core":"^7.0.0-0"}},"@babel/runtime@7.28.4":{"resolution":{"integrity":"sha512-Q/N6JNWvIvPnLDvjlE1OUBLPQHH6l3CltCEsHIujp45zQUSSh8K+gHnaEX45yAT1nyngnINhvWtzN+Nb9D8RAQ=="},"engines":{"node":">=6.9.0"}},"@babel/template@7.27.2":{"resolution":{"integrity":"sha512-LPDZ85aEJyYSd18/DkjNh4/y1ntkE5KwUHWTiqgRxruuZL2F1yuHligVHLvcHY2vMHXttKFpJn6LwfI7cw7ODw=="},"engines":{"node":">=6.9.0"}},"@babel/traverse@7.28.5":{"resolution":{"integrity":"sha512-TCCj4t55U90khlYkVV/0TfkJkAkUg3jZFA3Neb7unZT8CPok7iiRfaX0F+WnqWqt7OxhOn0uBKXCw4lbL8W0aQ=="},"engines":{"node":">=6.9.0"}},"@babel/types@7.28.5":{"resolution":{"integrity":"sha512-qQ5m48eI/MFLQ5PxQj4PFaprjyCTLI37ElWMmNs0K8Lk3dVeOdNpB3ks8jc7yM5CDmVC73eMVk/trk3fgmrUpA=="},"engines":{"node":">=6.9.0"}},"@babel/types@7.29.0":{"resolution":{"integrity":"sha512-LwdZHpScM4Qz8Xw2iKSzS+cfglZzJGvofQICy7W7v4caru4EaAmyUuO6BGrbyQ2mYV11W0U8j5mBhd14dd3B0A=="},"engines":{"node":">=6.9.0"}},"@bcoe/v8-coverage@1.0.2":{"resolution":{"integrity":"sha512-6zABk/ECA/QYSCQ1NGiVwwbQerUCZ+TQbp64Q3AgmfNvurHH0j8TtXa1qbShXA6qqkpAj4V5W8pP6mLe1mcMqA=="},"engines":{"node":">=18"}},"@blazediff/core@1.9.1":{"resolution":{"integrity":"sha512-ehg3jIkYKulZh+8om/O25vkvSsXXwC+skXmyA87FFx6A/45eqOkZsBltMw/TVteb0mloiGT8oGRTcjRAz66zaA=="}},"@braintree/sanitize-url@7.1.1":{"resolution":{"integrity":"sha512-i1L7noDNxtFyL5DmZafWy1wRVhGehQmzZaz1HiN5e7iylJMSZR7ekOV7NsIqa5qBldlLrsKv4HbgFUVlQrz8Mw=="}},"@bufbuild/protobuf@2.12.0":{"resolution":{"integrity":"sha512-B/XlCaFIP8LOwzo+bz5uFzATYokcwCKQcghqnlfwSmM5eX/qTkvDBnDPs+gXtX/RyjxJ4DRikECcPJbyALA8FA=="}},"@bundled-es-modules/cookie@2.0.1":{"resolution":{"integrity":"sha512-8o+5fRPLNbjbdGRRmJj3h6Hh1AQJf2dk3qQ/5ZFb+PXkRNiSoMGGUKlsgLfrxneb72axVJyIYji64E2+nNfYyw=="}},"@bundled-es-modules/statuses@1.0.1":{"resolution":{"integrity":"sha512-yn7BklA5acgcBr+7w064fGV+SGIFySjCKpqjcWgBAIfrAkY+4GQTJJHQMeT3V/sgz23VTEVV8TtOmkvJAhFVfg=="}},"@bundled-es-modules/tough-cookie@0.1.6":{"resolution":{"integrity":"sha512-dvMHbL464C0zI+Yqxbz6kZ5TOEp7GLW+pry/RWndAR8MJQAXZ2rPmIs8tziTZjeIyhSNZgZbCePtfSbdWqStJw=="}},"@changesets/apply-release-plan@7.0.13":{"resolution":{"integrity":"sha512-BIW7bofD2yAWoE8H4V40FikC+1nNFEKBisMECccS16W1rt6qqhNTBDmIw5HaqmMgtLNz9e7oiALiEUuKrQ4oHg=="}},"@changesets/assemble-release-plan@6.0.9":{"resolution":{"integrity":"sha512-tPgeeqCHIwNo8sypKlS3gOPmsS3wP0zHt67JDuL20P4QcXiw/O4Hl7oXiuLnP9yg+rXLQ2sScdV1Kkzde61iSQ=="}},"@changesets/changelog-git@0.2.1":{"resolution":{"integrity":"sha512-x/xEleCFLH28c3bQeQIyeZf8lFXyDFVn1SgcBiR2Tw/r4IAWlk1fzxCEZ6NxQAjF2Nwtczoen3OA2qR+UawQ8Q=="}},"@changesets/cli@2.29.7":{"resolution":{"integrity":"sha512-R7RqWoaksyyKXbKXBTbT4REdy22yH81mcFK6sWtqSanxUCbUi9Uf+6aqxZtDQouIqPdem2W56CdxXgsxdq7FLQ=="},"hasBin":true},"@changesets/config@3.1.1":{"resolution":{"integrity":"sha512-bd+3Ap2TKXxljCggI0mKPfzCQKeV/TU4yO2h2C6vAihIo8tzseAn2e7klSuiyYYXvgu53zMN1OeYMIQkaQ
oWnA=="}},"@changesets/errors@0.2.0":{"resolution":{"integrity":"sha512-6BLOQUscTpZeGljvyQXlWOItQyU71kCdGz7Pi8H8zdw6BI0g3m43iL4xKUVPWtG+qrrL9DTjpdn8eYuCQSRpow=="}},"@changesets/get-dependents-graph@2.1.3":{"resolution":{"integrity":"sha512-gphr+v0mv2I3Oxt19VdWRRUxq3sseyUpX9DaHpTUmLj92Y10AGy+XOtV+kbM6L/fDcpx7/ISDFK6T8A/P3lOdQ=="}},"@changesets/get-release-plan@4.0.13":{"resolution":{"integrity":"sha512-DWG1pus72FcNeXkM12tx+xtExyH/c9I1z+2aXlObH3i9YA7+WZEVaiHzHl03thpvAgWTRaH64MpfHxozfF7Dvg=="}},"@changesets/get-version-range-type@0.4.0":{"resolution":{"integrity":"sha512-hwawtob9DryoGTpixy1D3ZXbGgJu1Rhr+ySH2PvTLHvkZuQ7sRT4oQwMh0hbqZH1weAooedEjRsbrWcGLCeyVQ=="}},"@changesets/git@3.0.4":{"resolution":{"integrity":"sha512-BXANzRFkX+XcC1q/d27NKvlJ1yf7PSAgi8JG6dt8EfbHFHi4neau7mufcSca5zRhwOL8j9s6EqsxmT+s+/E6Sw=="}},"@changesets/logger@0.1.1":{"resolution":{"integrity":"sha512-OQtR36ZlnuTxKqoW4Sv6x5YIhOmClRd5pWsjZsddYxpWs517R0HkyiefQPIytCVh4ZcC5x9XaG8KTdd5iRQUfg=="}},"@changesets/parse@0.4.1":{"resolution":{"integrity":"sha512-iwksMs5Bf/wUItfcg+OXrEpravm5rEd9Bf4oyIPL4kVTmJQ7PNDSd6MDYkpSJR1pn7tz/k8Zf2DhTCqX08Ou+Q=="}},"@changesets/pre@2.0.2":{"resolution":{"integrity":"sha512-HaL/gEyFVvkf9KFg6484wR9s0qjAXlZ8qWPDkTyKF6+zqjBe/I2mygg3MbpZ++hdi0ToqNUF8cjj7fBy0dg8Ug=="}},"@changesets/read@0.6.5":{"resolution":{"integrity":"sha512-UPzNGhsSjHD3Veb0xO/MwvasGe8eMyNrR/sT9gR8Q3DhOQZirgKhhXv/8hVsI0QpPjR004Z9iFxoJU6in3uGMg=="}},"@changesets/should-skip-package@0.1.2":{"resolution":{"integrity":"sha512-qAK/WrqWLNCP22UDdBTMPH5f41elVDlsNyat180A33dWxuUDyNpg6fPi/FyTZwRriVjg0L8gnjJn2F9XAoF0qw=="}},"@changesets/types@4.1.0":{"resolution":{"integrity":"sha512-LDQvVDv5Kb50ny2s25Fhm3d9QSZimsoUGBsUioj6MC3qbMUCuC8GPIvk/M6IvXx3lYhAs0lwWUQLb+VIEUCECw=="}},"@changesets/types@6.1.0":{"resolution":{"integrity":"sha512-rKQcJ+o1nKNgeoYRHKOS07tAMNd3YSN0uHaJOZYjBAgxfV7TUE7JE+z4BzZdQwb5hKaYbayKN5KrYV7ODb2rAA=="}},"@changesets/write@0.4.0":{"resolution":{"integrity":"sha512-CdTLvIOPiCNuH71pyDu3rA+Q0n65cmAbXnwWH84rKGiFumFzkmHNT8KHTMEchcxN+Kl8I54xGUhJ7l3E7X396Q=="}},"@chevrotain/cst-dts-gen@11.0.3":{"resolution":{"integrity":"sha512-BvIKpRLeS/8UbfxXxgC33xOumsacaeCKAjAeLyOn7Pcp95HiRbrpl14S+9vaZLolnbssPIUuiUd8IvgkRyt6NQ=="}},"@chevrotain/gast@11.0.3":{"resolution":{"integrity":"sha512-+qNfcoNk70PyS/uxmj3li5NiECO+2YKZZQMbmjTqRI3Qchu8Hig/Q9vgkHpI3alNjr7M+a2St5pw5w5F6NL5/Q=="}},"@chevrotain/regexp-to-ast@11.0.3":{"resolution":{"integrity":"sha512-1fMHaBZxLFvWI067AVbGJav1eRY7N8DDvYCTwGBiE/ytKBgP8azTdgyrKyWZ9Mfh09eHWb5PgTSO8wi7U824RA=="}},"@chevrotain/types@11.0.3":{"resolution":{"integrity":"sha512-gsiM3G8b58kZC2HaWR50gu6Y1440cHiJ+i3JUvcp/35JchYejb2+5MVeJK0iKThYpAa/P2PYFV4hoi44HD+aHQ=="}},"@chevrotain/utils@11.0.3":{"resolution":{"integrity":"sha512-YslZMgtJUyuMbZ+aKvfF3x1f5liK4mWNxghFRv7jqRR9C3R3fAOGTTKvxXDa2Y1s9zSbcpuO0cAxDYsc9SrXoQ=="}},"@cloudflare/kv-asset-handler@0.5.0":{"resolution":{"integrity":"sha512-jxQYkj8dSIzc0cD6cMMNdOc1UVjqSqu8BZdor5s8cGjW2I8BjODt/kWPVdY+u9zj3ms75Q5qaZgnxUad83+eAg=="},"engines":{"node":">=22.0.0"}},"@cloudflare/unenv-preset@2.16.1":{"resolution":{"integrity":"sha512-ECxObrMfyTl5bhQf/lZCXwo5G6xX9IAUo+nDMKK4SZ8m4Jvvxp52vilxyySSWh2YTZz8+HQ07qGH/2rEom1vDw=="},"peerDependencies":{"unenv":"2.0.0-rc.24","workerd":">1.20260305.0 <2.0.0-0"},"peerDependenciesMeta":{"workerd":{"optional":true}}},"@cloudflare/vite-plugin@1.36.0":{"resolution":{"integrity":"sha512-Rkfa3wAbJ1lqCquWX453x4YlngO+OjNmCQvjb4D5JyMW7KprX6fEJE1NQ06giJDonEz0306EASELF93pRADibA=="},"peerDependencies":{"vite":"^6.1.0 || ^7.0.0 || 
^8.0.0","wrangler":"^4.88.0"}},"@cloudflare/workerd-darwin-64@1.20260504.1":{"resolution":{"integrity":"sha512-IOMjYoftNRXabFt+QzY2Bo2mR2TNl8xsGvE0HnQ+K0S2c61VOUGUkr9gpJjnwrJ65yA9Qed4xfg0RRqXHO+nfA=="},"engines":{"node":">=16"},"cpu":["x64"],"os":["darwin"]},"@cloudflare/workerd-darwin-arm64@1.20260504.1":{"resolution":{"integrity":"sha512-7iMXxIU0N5KklZpQm2kuwTm0XtrpHXNqhejJyGquky8gSTnm31zBdutjMekH8VRr6ckbvZIl6lvqXzXdfOEojg=="},"engines":{"node":">=16"},"cpu":["arm64"],"os":["darwin"]},"@cloudflare/workerd-linux-64@1.20260504.1":{"resolution":{"integrity":"sha512-YLB0EH5FQV++oWlalFgPF3p2Bp3dn/D6RWNMw0ukEC8gKnNX6o61A+dlFUl8hRD35ja1zKRxGFUojs4U2+MoJA=="},"engines":{"node":">=16"},"cpu":["x64"],"os":["linux"]},"@cloudflare/workerd-linux-arm64@1.20260504.1":{"resolution":{"integrity":"sha512-FAh/82jDXDArfn9xDih6f/IJfF2SHXBb4nFeQAyHyvXrn18zM6Q3yl2Vj0U7LybbNbmu7TNGghwaM2NoSQS+0A=="},"engines":{"node":">=16"},"cpu":["arm64"],"os":["linux"]},"@cloudflare/workerd-windows-64@1.20260504.1":{"resolution":{"integrity":"sha512-QUg/B3dfrK/KHHHhiJzdkLkTg5mG7lA3t8iplbBoUa3XKCLOHOOXhbU4WSYlLqg8YnsQ6XLZ1HVA99fmZhJh7A=="},"engines":{"node":">=16"},"cpu":["x64"],"os":["win32"]},"@cspotcode/source-map-support@0.8.1":{"resolution":{"integrity":"sha512-IchNf6dN4tHoMFIn/7OE8LWZ19Y6q/67Bmf6vnGREv8RSbBVb9LPJxEcnwrcwX6ixSvaiGoomAUvu4YSxXrVgw=="},"engines":{"node":">=12"}},"@csstools/color-helpers@5.1.0":{"resolution":{"integrity":"sha512-S11EXWJyy0Mz5SYvRmY8nJYTFFd1LCNV+7cXyAgQtOOuzb4EsgfqDufL+9esx72/eLhsRdGZwaldu/h+E4t4BA=="},"engines":{"node":">=18"}},"@csstools/css-calc@2.1.4":{"resolution":{"integrity":"sha512-3N8oaj+0juUw/1H3YwmDDJXCgTB1gKU6Hc/bB502u9zR0q2vd786XJH9QfrKIEgFlZmhZiq6epXl4rHqhzsIgQ=="},"engines":{"node":">=18"},"peerDependencies":{"@csstools/css-parser-algorithms":"^3.0.5","@csstools/css-tokenizer":"^3.0.4"}},"@csstools/css-color-parser@3.1.0":{"resolution":{"integrity":"sha512-nbtKwh3a6xNVIp/VRuXV64yTKnb1IjTAEEh3irzS+HkKjAOYLTGNb9pmVNntZ8iVBHcWDA2Dof0QtPgFI1BaTA=="},"engines":{"node":">=18"},"peerDependencies":{"@csstools/css-parser-algorithms":"^3.0.5","@csstools/css-tokenizer":"^3.0.4"}},"@csstools/css-parser-algorithms@3.0.5":{"resolution":{"integrity":"sha512-DaDeUkXZKjdGhgYaHNJTV9pV7Y9B3b644jCLs9Upc3VeNGg6LWARAT6O+Q+/COo+2gg/bM5rhpMAtf70WqfBdQ=="},"engines":{"node":">=18"},"peerDependencies":{"@csstools/css-tokenizer":"^3.0.4"}},"@csstools/css-syntax-patches-for-csstree@1.0.14":{"resolution":{"integrity":"sha512-zSlIxa20WvMojjpCSy8WrNpcZ61RqfTfX3XTaOeVlGJrt/8HF3YbzgFZa01yTbT4GWQLwfTcC3EB8i3XnB647Q=="},"engines":{"node":">=18"},"peerDependencies":{"postcss":"^8.4"}},"@csstools/css-tokenizer@3.0.4":{"resolution":{"integrity":"sha512-Vd/9EVDiu6PPJt9yAh6roZP6El1xHrdvIVGjyBsHR0RYwNHgL7FJPyIIW4fANJNG6FtyZfvlRPpFI4ZM/lubvw=="},"engines":{"node":">=18"}},"@emnapi/core@1.10.0":{"resolution":{"integrity":"sha512-yq6OkJ4p82CAfPl0u9mQebQHKPJkY7WrIuk205cTYnYe+k2Z8YBh11FrbRG/H6ihirqcacOgl2BIO8oyMQLeXw=="}},"@emnapi/core@1.4.5":{"resolution":{"integrity":"sha512-XsLw1dEOpkSX/WucdqUhPWP7hDxSvZiY+fsUC14h+FtQ2Ifni4znbBt8punRX+Uj2JG/uDb8nEHVKvrVlvdZ5Q=="}},"@emnapi/runtime@1.10.0":{"resolution":{"integrity":"sha512-ewvYlk86xUoGI0zQRNq/mC+16R1QeDlKQy21Ki3oSYXNgLb45GV1P6A0M+/s6nyCuNDqe5VpaY84BzXGwVbwFA=="}},"@emnapi/runtime@1.4.5":{"resolution":{"integrity":"sha512-++LApOtY0pEEz1zrd9vy1/zXVaVJJ/EbAF3u0fXIzPJEDtnITsBGbbK0EkM72amhl/R5b+5xx0Y/QhcVOpuulg=="}},"@emnapi/wasi-threads@1.0.4":{"resolution":{"integrity":"sha512-PJR+bOmMOPH8AtcTGAyYNiuJ3/Fcoj2XN/gBEWzDIKh254XO+mM9XoXHk5GNEhodxeMznbg7BlRojVbKN+gC6g=="}},"@emn
api/wasi-threads@1.2.1":{"resolution":{"integrity":"sha512-uTII7OYF+/Mes/MrcIOYp5yOtSMLBWSIoLPpcgwipoiKbli6k322tcoFsxoIIxPDqW01SQGAgko4EzZi2BNv2w=="}},"@esbuild/aix-ppc64@0.25.12":{"resolution":{"integrity":"sha512-Hhmwd6CInZ3dwpuGTF8fJG6yoWmsToE+vYgD4nytZVxcu1ulHpUQRAB1UJ8+N1Am3Mz4+xOByoQoSZf4D+CpkA=="},"engines":{"node":">=18"},"cpu":["ppc64"],"os":["aix"]},"@esbuild/aix-ppc64@0.27.3":{"resolution":{"integrity":"sha512-9fJMTNFTWZMh5qwrBItuziu834eOCUcEqymSH7pY+zoMVEZg3gcPuBNxH1EvfVYe9h0x/Ptw8KBzv7qxb7l8dg=="},"engines":{"node":">=18"},"cpu":["ppc64"],"os":["aix"]},"@esbuild/android-arm64@0.25.12":{"resolution":{"integrity":"sha512-6AAmLG7zwD1Z159jCKPvAxZd4y/VTO0VkprYy+3N2FtJ8+BQWFXU+OxARIwA46c5tdD9SsKGZ/1ocqBS/gAKHg=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["android"]},"@esbuild/android-arm64@0.27.3":{"resolution":{"integrity":"sha512-YdghPYUmj/FX2SYKJ0OZxf+iaKgMsKHVPF1MAq/P8WirnSpCStzKJFjOjzsW0QQ7oIAiccHdcqjbHmJxRb/dmg=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["android"]},"@esbuild/android-arm@0.25.12":{"resolution":{"integrity":"sha512-VJ+sKvNA/GE7Ccacc9Cha7bpS8nyzVv0jdVgwNDaR4gDMC/2TTRc33Ip8qrNYUcpkOHUT5OZ0bUcNNVZQ9RLlg=="},"engines":{"node":">=18"},"cpu":["arm"],"os":["android"]},"@esbuild/android-arm@0.27.3":{"resolution":{"integrity":"sha512-i5D1hPY7GIQmXlXhs2w8AWHhenb00+GxjxRncS2ZM7YNVGNfaMxgzSGuO8o8SJzRc/oZwU2bcScvVERk03QhzA=="},"engines":{"node":">=18"},"cpu":["arm"],"os":["android"]},"@esbuild/android-x64@0.25.12":{"resolution":{"integrity":"sha512-5jbb+2hhDHx5phYR2By8GTWEzn6I9UqR11Kwf22iKbNpYrsmRB18aX/9ivc5cabcUiAT/wM+YIZ6SG9QO6a8kg=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["android"]},"@esbuild/android-x64@0.27.3":{"resolution":{"integrity":"sha512-IN/0BNTkHtk8lkOM8JWAYFg4ORxBkZQf9zXiEOfERX/CzxW3Vg1ewAhU7QSWQpVIzTW+b8Xy+lGzdYXV6UZObQ=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["android"]},"@esbuild/darwin-arm64@0.25.12":{"resolution":{"integrity":"sha512-N3zl+lxHCifgIlcMUP5016ESkeQjLj/959RxxNYIthIg+CQHInujFuXeWbWMgnTo4cp5XVHqFPmpyu9J65C1Yg=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["darwin"]},"@esbuild/darwin-arm64@0.27.3":{"resolution":{"integrity":"sha512-Re491k7ByTVRy0t3EKWajdLIr0gz2kKKfzafkth4Q8A5n1xTHrkqZgLLjFEHVD+AXdUGgQMq+Godfq45mGpCKg=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["darwin"]},"@esbuild/darwin-x64@0.25.12":{"resolution":{"integrity":"sha512-HQ9ka4Kx21qHXwtlTUVbKJOAnmG1ipXhdWTmNXiPzPfWKpXqASVcWdnf2bnL73wgjNrFXAa3yYvBSd9pzfEIpA=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["darwin"]},"@esbuild/darwin-x64@0.27.3":{"resolution":{"integrity":"sha512-vHk/hA7/1AckjGzRqi6wbo+jaShzRowYip6rt6q7VYEDX4LEy1pZfDpdxCBnGtl+A5zq8iXDcyuxwtv3hNtHFg=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["darwin"]},"@esbuild/freebsd-arm64@0.25.12":{"resolution":{"integrity":"sha512-gA0Bx759+7Jve03K1S0vkOu5Lg/85dou3EseOGUes8flVOGxbhDDh/iZaoek11Y8mtyKPGF3vP8XhnkDEAmzeg=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["freebsd"]},"@esbuild/freebsd-arm64@0.27.3":{"resolution":{"integrity":"sha512-ipTYM2fjt3kQAYOvo6vcxJx3nBYAzPjgTCk7QEgZG8AUO3ydUhvelmhrbOheMnGOlaSFUoHXB6un+A7q4ygY9w=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["freebsd"]},"@esbuild/freebsd-x64@0.25.12":{"resolution":{"integrity":"sha512-TGbO26Yw2xsHzxtbVFGEXBFH0FRAP7gtcPE7P5yP7wGy7cXK2oO7RyOhL5NLiqTlBh47XhmIUXuGciXEqYFfBQ=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["freebsd"]},"@esbuild/freebsd-x64@0.27.3":{"resolution":{"integrity":"sha512-dDk0X87T7mI6U3K9VjWtHOXqwAMJBNN2r7bejDsc+j03SEjtD9HrOl8gVFByeM0aJksoUuUVU9TBaZa2rgj0oA=="},"engines":{"node":">=1
8"},"cpu":["x64"],"os":["freebsd"]},"@esbuild/linux-arm64@0.25.12":{"resolution":{"integrity":"sha512-8bwX7a8FghIgrupcxb4aUmYDLp8pX06rGh5HqDT7bB+8Rdells6mHvrFHHW2JAOPZUbnjUpKTLg6ECyzvas2AQ=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["linux"]},"@esbuild/linux-arm64@0.27.3":{"resolution":{"integrity":"sha512-sZOuFz/xWnZ4KH3YfFrKCf1WyPZHakVzTiqji3WDc0BCl2kBwiJLCXpzLzUBLgmp4veFZdvN5ChW4Eq/8Fc2Fg=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["linux"]},"@esbuild/linux-arm@0.25.12":{"resolution":{"integrity":"sha512-lPDGyC1JPDou8kGcywY0YILzWlhhnRjdof3UlcoqYmS9El818LLfJJc3PXXgZHrHCAKs/Z2SeZtDJr5MrkxtOw=="},"engines":{"node":">=18"},"cpu":["arm"],"os":["linux"]},"@esbuild/linux-arm@0.27.3":{"resolution":{"integrity":"sha512-s6nPv2QkSupJwLYyfS+gwdirm0ukyTFNl3KTgZEAiJDd+iHZcbTPPcWCcRYH+WlNbwChgH2QkE9NSlNrMT8Gfw=="},"engines":{"node":">=18"},"cpu":["arm"],"os":["linux"]},"@esbuild/linux-ia32@0.25.12":{"resolution":{"integrity":"sha512-0y9KrdVnbMM2/vG8KfU0byhUN+EFCny9+8g202gYqSSVMonbsCfLjUO+rCci7pM0WBEtz+oK/PIwHkzxkyharA=="},"engines":{"node":">=18"},"cpu":["ia32"],"os":["linux"]},"@esbuild/linux-ia32@0.27.3":{"resolution":{"integrity":"sha512-yGlQYjdxtLdh0a3jHjuwOrxQjOZYD/C9PfdbgJJF3TIZWnm/tMd/RcNiLngiu4iwcBAOezdnSLAwQDPqTmtTYg=="},"engines":{"node":">=18"},"cpu":["ia32"],"os":["linux"]},"@esbuild/linux-loong64@0.25.12":{"resolution":{"integrity":"sha512-h///Lr5a9rib/v1GGqXVGzjL4TMvVTv+s1DPoxQdz7l/AYv6LDSxdIwzxkrPW438oUXiDtwM10o9PmwS/6Z0Ng=="},"engines":{"node":">=18"},"cpu":["loong64"],"os":["linux"]},"@esbuild/linux-loong64@0.27.3":{"resolution":{"integrity":"sha512-WO60Sn8ly3gtzhyjATDgieJNet/KqsDlX5nRC5Y3oTFcS1l0KWba+SEa9Ja1GfDqSF1z6hif/SkpQJbL63cgOA=="},"engines":{"node":">=18"},"cpu":["loong64"],"os":["linux"]},"@esbuild/linux-mips64el@0.25.12":{"resolution":{"integrity":"sha512-iyRrM1Pzy9GFMDLsXn1iHUm18nhKnNMWscjmp4+hpafcZjrr2WbT//d20xaGljXDBYHqRcl8HnxbX6uaA/eGVw=="},"engines":{"node":">=18"},"cpu":["mips64el"],"os":["linux"]},"@esbuild/linux-mips64el@0.27.3":{"resolution":{"integrity":"sha512-APsymYA6sGcZ4pD6k+UxbDjOFSvPWyZhjaiPyl/f79xKxwTnrn5QUnXR5prvetuaSMsb4jgeHewIDCIWljrSxw=="},"engines":{"node":">=18"},"cpu":["mips64el"],"os":["linux"]},"@esbuild/linux-ppc64@0.25.12":{"resolution":{"integrity":"sha512-9meM/lRXxMi5PSUqEXRCtVjEZBGwB7P/D4yT8UG/mwIdze2aV4Vo6U5gD3+RsoHXKkHCfSxZKzmDssVlRj1QQA=="},"engines":{"node":">=18"},"cpu":["ppc64"],"os":["linux"]},"@esbuild/linux-ppc64@0.27.3":{"resolution":{"integrity":"sha512-eizBnTeBefojtDb9nSh4vvVQ3V9Qf9Df01PfawPcRzJH4gFSgrObw+LveUyDoKU3kxi5+9RJTCWlj4FjYXVPEA=="},"engines":{"node":">=18"},"cpu":["ppc64"],"os":["linux"]},"@esbuild/linux-riscv64@0.25.12":{"resolution":{"integrity":"sha512-Zr7KR4hgKUpWAwb1f3o5ygT04MzqVrGEGXGLnj15YQDJErYu/BGg+wmFlIDOdJp0PmB0lLvxFIOXZgFRrdjR0w=="},"engines":{"node":">=18"},"cpu":["riscv64"],"os":["linux"]},"@esbuild/linux-riscv64@0.27.3":{"resolution":{"integrity":"sha512-3Emwh0r5wmfm3ssTWRQSyVhbOHvqegUDRd0WhmXKX2mkHJe1SFCMJhagUleMq+Uci34wLSipf8Lagt4LlpRFWQ=="},"engines":{"node":">=18"},"cpu":["riscv64"],"os":["linux"]},"@esbuild/linux-s390x@0.25.12":{"resolution":{"integrity":"sha512-MsKncOcgTNvdtiISc/jZs/Zf8d0cl/t3gYWX8J9ubBnVOwlk65UIEEvgBORTiljloIWnBzLs4qhzPkJcitIzIg=="},"engines":{"node":">=18"},"cpu":["s390x"],"os":["linux"]},"@esbuild/linux-s390x@0.27.3":{"resolution":{"integrity":"sha512-pBHUx9LzXWBc7MFIEEL0yD/ZVtNgLytvx60gES28GcWMqil8ElCYR4kvbV2BDqsHOvVDRrOxGySBM9Fcv744hw=="},"engines":{"node":">=18"},"cpu":["s390x"],"os":["linux"]},"@esbuild/linux-x64@0.25.12":{"resolution":{"integrity":"sha512-uqZMTLr
/zR/ed4jIGnwSLkaHmPjOjJvnm6TVVitAa08SLS9Z0VM8wIRx7gWbJB5/J54YuIMInDquWyYvQLZkgw=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["linux"]},"@esbuild/linux-x64@0.27.3":{"resolution":{"integrity":"sha512-Czi8yzXUWIQYAtL/2y6vogER8pvcsOsk5cpwL4Gk5nJqH5UZiVByIY8Eorm5R13gq+DQKYg0+JyQoytLQas4dA=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["linux"]},"@esbuild/netbsd-arm64@0.25.12":{"resolution":{"integrity":"sha512-xXwcTq4GhRM7J9A8Gv5boanHhRa/Q9KLVmcyXHCTaM4wKfIpWkdXiMog/KsnxzJ0A1+nD+zoecuzqPmCRyBGjg=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["netbsd"]},"@esbuild/netbsd-arm64@0.27.3":{"resolution":{"integrity":"sha512-sDpk0RgmTCR/5HguIZa9n9u+HVKf40fbEUt+iTzSnCaGvY9kFP0YKBWZtJaraonFnqef5SlJ8/TiPAxzyS+UoA=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["netbsd"]},"@esbuild/netbsd-x64@0.25.12":{"resolution":{"integrity":"sha512-Ld5pTlzPy3YwGec4OuHh1aCVCRvOXdH8DgRjfDy/oumVovmuSzWfnSJg+VtakB9Cm0gxNO9BzWkj6mtO1FMXkQ=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["netbsd"]},"@esbuild/netbsd-x64@0.27.3":{"resolution":{"integrity":"sha512-P14lFKJl/DdaE00LItAukUdZO5iqNH7+PjoBm+fLQjtxfcfFE20Xf5CrLsmZdq5LFFZzb5JMZ9grUwvtVYzjiA=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["netbsd"]},"@esbuild/openbsd-arm64@0.25.12":{"resolution":{"integrity":"sha512-fF96T6KsBo/pkQI950FARU9apGNTSlZGsv1jZBAlcLL1MLjLNIWPBkj5NlSz8aAzYKg+eNqknrUJ24QBybeR5A=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["openbsd"]},"@esbuild/openbsd-arm64@0.27.3":{"resolution":{"integrity":"sha512-AIcMP77AvirGbRl/UZFTq5hjXK+2wC7qFRGoHSDrZ5v5b8DK/GYpXW3CPRL53NkvDqb9D+alBiC/dV0Fb7eJcw=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["openbsd"]},"@esbuild/openbsd-x64@0.25.12":{"resolution":{"integrity":"sha512-MZyXUkZHjQxUvzK7rN8DJ3SRmrVrke8ZyRusHlP+kuwqTcfWLyqMOE3sScPPyeIXN/mDJIfGXvcMqCgYKekoQw=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["openbsd"]},"@esbuild/openbsd-x64@0.27.3":{"resolution":{"integrity":"sha512-DnW2sRrBzA+YnE70LKqnM3P+z8vehfJWHXECbwBmH/CU51z6FiqTQTHFenPlHmo3a8UgpLyH3PT+87OViOh1AQ=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["openbsd"]},"@esbuild/openharmony-arm64@0.25.12":{"resolution":{"integrity":"sha512-rm0YWsqUSRrjncSXGA7Zv78Nbnw4XL6/dzr20cyrQf7ZmRcsovpcRBdhD43Nuk3y7XIoW2OxMVvwuRvk9XdASg=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["openharmony"]},"@esbuild/openharmony-arm64@0.27.3":{"resolution":{"integrity":"sha512-NinAEgr/etERPTsZJ7aEZQvvg/A6IsZG/LgZy+81wON2huV7SrK3e63dU0XhyZP4RKGyTm7aOgmQk0bGp0fy2g=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["openharmony"]},"@esbuild/sunos-x64@0.25.12":{"resolution":{"integrity":"sha512-3wGSCDyuTHQUzt0nV7bocDy72r2lI33QL3gkDNGkod22EsYl04sMf0qLb8luNKTOmgF/eDEDP5BFNwoBKH441w=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["sunos"]},"@esbuild/sunos-x64@0.27.3":{"resolution":{"integrity":"sha512-PanZ+nEz+eWoBJ8/f8HKxTTD172SKwdXebZ0ndd953gt1HRBbhMsaNqjTyYLGLPdoWHy4zLU7bDVJztF5f3BHA=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["sunos"]},"@esbuild/win32-arm64@0.25.12":{"resolution":{"integrity":"sha512-rMmLrur64A7+DKlnSuwqUdRKyd3UE7oPJZmnljqEptesKM8wx9J8gx5u0+9Pq0fQQW8vqeKebwNXdfOyP+8Bsg=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["win32"]},"@esbuild/win32-arm64@0.27.3":{"resolution":{"integrity":"sha512-B2t59lWWYrbRDw/tjiWOuzSsFh1Y/E95ofKz7rIVYSQkUYBjfSgf6oeYPNWHToFRr2zx52JKApIcAS/D5TUBnA=="},"engines":{"node":">=18"},"cpu":["arm64"],"os":["win32"]},"@esbuild/win32-ia32@0.25.12":{"resolution":{"integrity":"sha512-HkqnmmBoCbCwxUKKNPBixiWDGCpQGVsrQfJoVGYLPT41XWF8lHuE5N6WhVia2n4o5QK5M4tYr21827fNhi4byQ=="},"engines":{"node":">=1
8"},"cpu":["ia32"],"os":["win32"]},"@esbuild/win32-ia32@0.27.3":{"resolution":{"integrity":"sha512-QLKSFeXNS8+tHW7tZpMtjlNb7HKau0QDpwm49u0vUp9y1WOF+PEzkU84y9GqYaAVW8aH8f3GcBck26jh54cX4Q=="},"engines":{"node":">=18"},"cpu":["ia32"],"os":["win32"]},"@esbuild/win32-x64@0.25.12":{"resolution":{"integrity":"sha512-alJC0uCZpTFrSL0CCDjcgleBXPnCrEAhTBILpeAp7M/OFgoqtAetfBzX0xM00MUsVVPpVjlPuMbREqnZCXaTnA=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["win32"]},"@esbuild/win32-x64@0.27.3":{"resolution":{"integrity":"sha512-4uJGhsxuptu3OcpVAzli+/gWusVGwZZHTlS63hh++ehExkVT8SgiEf7/uC/PclrPPkLhZqGgCTjd0VWLo6xMqA=="},"engines":{"node":">=18"},"cpu":["x64"],"os":["win32"]},"@iconify/types@2.0.0":{"resolution":{"integrity":"sha512-+wluvCrRhXrhyOmRDJ3q8mux9JkKy5SJ/v8ol2tu4FVjyYvtEzkc/3pK15ET6RKg4b4w4BmTk1+gsCUhf21Ykg=="}},"@iconify/utils@3.0.2":{"resolution":{"integrity":"sha512-EfJS0rLfVuRuJRn4psJHtK2A9TqVnkxPpHY6lYHiB9+8eSuudsxbwMiavocG45ujOo6FJ+CIRlRnlOGinzkaGQ=="}},"@img/colour@1.1.0":{"resolution":{"integrity":"sha512-Td76q7j57o/tLVdgS746cYARfSyxk8iEfRxewL9h4OMzYhbW4TAcppl0mT4eyqXddh6L/jwoM75mo7ixa/pCeQ=="},"engines":{"node":">=18"}},"@img/sharp-darwin-arm64@0.34.5":{"resolution":{"integrity":"sha512-imtQ3WMJXbMY4fxb/Ndp6HBTNVtWCUI0WdobyheGf5+ad6xX8VIDO8u2xE4qc/fr08CKG/7dDseFtn6M6g/r3w=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["arm64"],"os":["darwin"]},"@img/sharp-darwin-x64@0.34.5":{"resolution":{"integrity":"sha512-YNEFAF/4KQ/PeW0N+r+aVVsoIY0/qxxikF2SWdp+NRkmMB7y9LBZAVqQ4yhGCm/H3H270OSykqmQMKLBhBJDEw=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["x64"],"os":["darwin"]},"@img/sharp-libvips-darwin-arm64@1.2.4":{"resolution":{"integrity":"sha512-zqjjo7RatFfFoP0MkQ51jfuFZBnVE2pRiaydKJ1G/rHZvnsrHAOcQALIi9sA5co5xenQdTugCvtb1cuf78Vf4g=="},"cpu":["arm64"],"os":["darwin"]},"@img/sharp-libvips-darwin-x64@1.2.4":{"resolution":{"integrity":"sha512-1IOd5xfVhlGwX+zXv2N93k0yMONvUlANylbJw1eTah8K/Jtpi15KC+WSiaX/nBmbm2HxRM1gZ0nSdjSsrZbGKg=="},"cpu":["x64"],"os":["darwin"]},"@img/sharp-libvips-linux-arm64@1.2.4":{"resolution":{"integrity":"sha512-excjX8DfsIcJ10x1Kzr4RcWe1edC9PquDRRPx3YVCvQv+U5p7Yin2s32ftzikXojb1PIFc/9Mt28/y+iRklkrw=="},"cpu":["arm64"],"os":["linux"]},"@img/sharp-libvips-linux-arm@1.2.4":{"resolution":{"integrity":"sha512-bFI7xcKFELdiNCVov8e44Ia4u2byA+l3XtsAj+Q8tfCwO6BQ8iDojYdvoPMqsKDkuoOo+X6HZA0s0q11ANMQ8A=="},"cpu":["arm"],"os":["linux"]},"@img/sharp-libvips-linux-ppc64@1.2.4":{"resolution":{"integrity":"sha512-FMuvGijLDYG6lW+b/UvyilUWu5Ayu+3r2d1S8notiGCIyYU/76eig1UfMmkZ7vwgOrzKzlQbFSuQfgm7GYUPpA=="},"cpu":["ppc64"],"os":["linux"]},"@img/sharp-libvips-linux-riscv64@1.2.4":{"resolution":{"integrity":"sha512-oVDbcR4zUC0ce82teubSm+x6ETixtKZBh/qbREIOcI3cULzDyb18Sr/Wcyx7NRQeQzOiHTNbZFF1UwPS2scyGA=="},"cpu":["riscv64"],"os":["linux"]},"@img/sharp-libvips-linux-s390x@1.2.4":{"resolution":{"integrity":"sha512-qmp9VrzgPgMoGZyPvrQHqk02uyjA0/QrTO26Tqk6l4ZV0MPWIW6LTkqOIov+J1yEu7MbFQaDpwdwJKhbJvuRxQ=="},"cpu":["s390x"],"os":["linux"]},"@img/sharp-libvips-linux-x64@1.2.4":{"resolution":{"integrity":"sha512-tJxiiLsmHc9Ax1bz3oaOYBURTXGIRDODBqhveVHonrHJ9/+k89qbLl0bcJns+e4t4rvaNBxaEZsFtSfAdquPrw=="},"cpu":["x64"],"os":["linux"]},"@img/sharp-libvips-linuxmusl-arm64@1.2.4":{"resolution":{"integrity":"sha512-FVQHuwx1IIuNow9QAbYUzJ+En8KcVm9Lk5+uGUQJHaZmMECZmOlix9HnH7n1TRkXMS0pGxIJokIVB9SuqZGGXw=="},"cpu":["arm64"],"os":["linux"]},"@img/sharp-libvips-linuxmusl-x64@1.2.4":{"resolution":{"integrity":"sha512-+LpyBk7L44ZIXwz/VYfglaX/okxezESc6UxDSoyo2Ks6Jxc4Y7sGjpgU9s4PMgqgjj1gZCylTieNamq
A1MF7Dg=="},"cpu":["x64"],"os":["linux"]},"@img/sharp-linux-arm64@0.34.5":{"resolution":{"integrity":"sha512-bKQzaJRY/bkPOXyKx5EVup7qkaojECG6NLYswgktOZjaXecSAeCWiZwwiFf3/Y+O1HrauiE3FVsGxFg8c24rZg=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["arm64"],"os":["linux"]},"@img/sharp-linux-arm@0.34.5":{"resolution":{"integrity":"sha512-9dLqsvwtg1uuXBGZKsxem9595+ujv0sJ6Vi8wcTANSFpwV/GONat5eCkzQo/1O6zRIkh0m/8+5BjrRr7jDUSZw=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["arm"],"os":["linux"]},"@img/sharp-linux-ppc64@0.34.5":{"resolution":{"integrity":"sha512-7zznwNaqW6YtsfrGGDA6BRkISKAAE1Jo0QdpNYXNMHu2+0dTrPflTLNkpc8l7MUP5M16ZJcUvysVWWrMefZquA=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["ppc64"],"os":["linux"]},"@img/sharp-linux-riscv64@0.34.5":{"resolution":{"integrity":"sha512-51gJuLPTKa7piYPaVs8GmByo7/U7/7TZOq+cnXJIHZKavIRHAP77e3N2HEl3dgiqdD/w0yUfiJnII77PuDDFdw=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["riscv64"],"os":["linux"]},"@img/sharp-linux-s390x@0.34.5":{"resolution":{"integrity":"sha512-nQtCk0PdKfho3eC5MrbQoigJ2gd1CgddUMkabUj+rBevs8tZ2cULOx46E7oyX+04WGfABgIwmMC0VqieTiR4jg=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["s390x"],"os":["linux"]},"@img/sharp-linux-x64@0.34.5":{"resolution":{"integrity":"sha512-MEzd8HPKxVxVenwAa+JRPwEC7QFjoPWuS5NZnBt6B3pu7EG2Ge0id1oLHZpPJdn3OQK+BQDiw9zStiHBTJQQQQ=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["x64"],"os":["linux"]},"@img/sharp-linuxmusl-arm64@0.34.5":{"resolution":{"integrity":"sha512-fprJR6GtRsMt6Kyfq44IsChVZeGN97gTD331weR1ex1c1rypDEABN6Tm2xa1wE6lYb5DdEnk03NZPqA7Id21yg=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["arm64"],"os":["linux"]},"@img/sharp-linuxmusl-x64@0.34.5":{"resolution":{"integrity":"sha512-Jg8wNT1MUzIvhBFxViqrEhWDGzqymo3sV7z7ZsaWbZNDLXRJZoRGrjulp60YYtV4wfY8VIKcWidjojlLcWrd8Q=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["x64"],"os":["linux"]},"@img/sharp-wasm32@0.34.5":{"resolution":{"integrity":"sha512-OdWTEiVkY2PHwqkbBI8frFxQQFekHaSSkUIJkwzclWZe64O1X4UlUjqqqLaPbUpMOQk6FBu/HtlGXNblIs0huw=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["wasm32"]},"@img/sharp-win32-arm64@0.34.5":{"resolution":{"integrity":"sha512-WQ3AgWCWYSb2yt+IG8mnC6Jdk9Whs7O0gxphblsLvdhSpSTtmu69ZG1Gkb6NuvxsNACwiPV6cNSZNzt0KPsw7g=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["arm64"],"os":["win32"]},"@img/sharp-win32-ia32@0.34.5":{"resolution":{"integrity":"sha512-FV9m/7NmeCmSHDD5j4+4pNI8Cp3aW+JvLoXcTUo0IqyjSfAZJ8dIUmijx1qaJsIiU+Hosw6xM5KijAWRJCSgNg=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"},"cpu":["ia32"],"os":["win32"]},"@img/sharp-win32-x64@0.34.5":{"resolution":{"integrity":"sha512-+29YMsqY2/9eFEiW93eqWnuLcWcufowXewwSNIT6UwZdUUCrM3oFjMWH/Z6/TMmb4hlFenmfAVbpWeup2jryCw=="},"engines":{"node":"^18.17.0 || ^20.3.0 || 
>=21.0.0"},"cpu":["x64"],"os":["win32"]},"@inquirer/ansi@1.0.2":{"resolution":{"integrity":"sha512-S8qNSZiYzFd0wAcyG5AXCvUHC5Sr7xpZ9wZ2py9XR88jUz8wooStVx5M6dRzczbBWjic9NP7+rY0Xi7qqK/aMQ=="},"engines":{"node":">=18"}},"@inquirer/confirm@5.1.21":{"resolution":{"integrity":"sha512-KR8edRkIsUayMXV+o3Gv+q4jlhENF9nMYUZs9PA2HzrXeHI8M5uDag70U7RJn9yyiMZSbtF5/UexBtAVtZGSbQ=="},"engines":{"node":">=18"},"peerDependencies":{"@types/node":">=18"},"peerDependenciesMeta":{"@types/node":{"optional":true}}},"@inquirer/core@10.3.2":{"resolution":{"integrity":"sha512-43RTuEbfP8MbKzedNqBrlhhNKVwoK//vUFNW3Q3vZ88BLcrs4kYpGg+B2mm5p2K/HfygoCxuKwJJiv8PbGmE0A=="},"engines":{"node":">=18"},"peerDependencies":{"@types/node":">=18"},"peerDependenciesMeta":{"@types/node":{"optional":true}}},"@inquirer/external-editor@1.0.1":{"resolution":{"integrity":"sha512-Oau4yL24d2B5IL4ma4UpbQigkVhzPDXLoqy1ggK4gnHg/stmkffJE4oOXHXF3uz0UEpywG68KcyXsyYpA1Re/Q=="},"engines":{"node":">=18"},"peerDependencies":{"@types/node":">=18"},"peerDependenciesMeta":{"@types/node":{"optional":true}}},"@inquirer/figures@1.0.15":{"resolution":{"integrity":"sha512-t2IEY+unGHOzAaVM5Xx6DEWKeXlDDcNPeDyUpsRc6CUhBfU3VQOEl+Vssh7VNp1dR8MdUJBWhuObjXCsVpjN5g=="},"engines":{"node":">=18"}},"@inquirer/type@3.0.10":{"resolution":{"integrity":"sha512-BvziSRxfz5Ov8ch0z/n3oijRSEcEsHnhggm4xFZe93DHcUCTlutlq9Ox4SVENAfcRD22UQq7T/atg9Wr3k09eA=="},"engines":{"node":">=18"},"peerDependencies":{"@types/node":">=18"},"peerDependenciesMeta":{"@types/node":{"optional":true}}},"@isaacs/cliui@8.0.2":{"resolution":{"integrity":"sha512-O8jcjabXaleOG9DQ0+ARXWZBTfnP4WNAqzuiJK7ll44AmxGKv/J2M4TPjxjY3znBCfvBXFzucm1twdyFybFqEA=="},"engines":{"node":">=12"}},"@istanbuljs/schema@0.1.3":{"resolution":{"integrity":"sha512-ZXRY4jNvVgSVQ8DL3LTcakaAtXwTVUxE81hslsyD2AtoXW/wVob10HkOJ1X/pAlcI7D+2YoZKg5do8G/w6RYgA=="},"engines":{"node":">=8"}},"@jest/diff-sequences@30.0.1":{"resolution":{"integrity":"sha512-n5H8QLDJ47QqbCNn5SuFjCRDrOLEZ0h8vAHCK5RL9Ls7Xa8AQLa/YxAc9UjFqoEDM48muwtBGjtMY5cr0PLDCw=="},"engines":{"node":"^18.14.0 || ^20.0.0 || ^22.0.0 || >=24.0.0"}},"@jest/get-type@30.1.0":{"resolution":{"integrity":"sha512-eMbZE2hUnx1WV0pmURZY9XoXPkUYjpc55mb0CrhtdWLtzMQPFvu/rZkTLZFTsdaVQa+Tr4eWAteqcUzoawq/uA=="},"engines":{"node":"^18.14.0 || ^20.0.0 || ^22.0.0 || >=24.0.0"}},"@jest/schemas@30.0.5":{"resolution":{"integrity":"sha512-DmdYgtezMkh3cpU8/1uyXakv3tJRcmcXxBOcO0tbaozPwpmh4YMsnWrQm9ZmZMfa5ocbxzbFk6O4bDPEc/iAnA=="},"engines":{"node":"^18.14.0 || ^20.0.0 || ^22.0.0 || 
>=24.0.0"}},"@jridgewell/gen-mapping@0.3.13":{"resolution":{"integrity":"sha512-2kkt/7niJ6MgEPxF0bYdQ6etZaA+fQvDcLKckhy1yIQOzaoKjBBjSj63/aLVjYE3qhRt5dvM+uUyfCg6UKCBbA=="}},"@jridgewell/remapping@2.3.5":{"resolution":{"integrity":"sha512-LI9u/+laYG4Ds1TDKSJW2YPrIlcVYOwi2fUC6xB43lueCjgxV4lffOCZCtYFiH6TNOX+tQKXx97T4IKHbhyHEQ=="}},"@jridgewell/resolve-uri@3.1.2":{"resolution":{"integrity":"sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw=="},"engines":{"node":">=6.0.0"}},"@jridgewell/source-map@0.3.11":{"resolution":{"integrity":"sha512-ZMp1V8ZFcPG5dIWnQLr3NSI1MiCU7UETdS/A0G8V/XWHvJv3ZsFqutJn1Y5RPmAPX6F3BiE397OqveU/9NCuIA=="}},"@jridgewell/sourcemap-codec@1.5.5":{"resolution":{"integrity":"sha512-cYQ9310grqxueWbl+WuIUIaiUaDcj7WOq5fVhEljNVgRfOUhY9fy2zTvfoqWsnebh8Sl70VScFbICvJnLKB0Og=="}},"@jridgewell/trace-mapping@0.3.30":{"resolution":{"integrity":"sha512-GQ7Nw5G2lTu/BtHTKfXhKHok2WGetd4XYcVKGx00SjAk8GMwgJM3zr6zORiPGuOE+/vkc90KtTosSSvaCjKb2Q=="}},"@jridgewell/trace-mapping@0.3.31":{"resolution":{"integrity":"sha512-zzNR+SdQSDJzc8joaeP8QQoCQr8NuYx2dIIytl1QeBEZHJ9uW6hebsrYgbz8hJwUQao3TWCMtmfV8Nu1twOLAw=="}},"@jridgewell/trace-mapping@0.3.9":{"resolution":{"integrity":"sha512-3Belt6tdc8bPgAtbcmdtNJlirVoTmEb5e2gC94PnkwEW9jI6CAHUeoG85tjWP5WquqfavoMtMwiG4P926ZKKuQ=="}},"@jsonjoy.com/buffers@17.63.0":{"resolution":{"integrity":"sha512-IZB5WQRVNPEbuqouOQxZHl59AL6/ff+gmM20+xAx4SRX6DjZnQAxs03pQ2J6g5ssN+pzmShrBuGeksjlcZ3HCw=="},"engines":{"node":">=10.0"},"peerDependencies":{"tslib":"2"}},"@jsonjoy.com/codegen@17.63.0":{"resolution":{"integrity":"sha512-vQ18JiRQ8YfZQwzwCQs88rR5eGuy6AFfu+anz9RTvHQs9L4AE8dGA/mLzu6teh6CiSQTo2TNOQbqRh4Vy+7LEQ=="},"engines":{"node":">=10.0"},"peerDependencies":{"tslib":"2"}},"@jsonjoy.com/json-pointer@17.63.0":{"resolution":{"integrity":"sha512-wAW7rQsGW2zWtE+77cXU8lXsoXYCKa9eHptK3a2CCoNTm5YpPA3dev6LuEyaTDYKdF4DTjtwREv2PpjJidHE5w=="},"engines":{"node":">=10.0"},"peerDependencies":{"tslib":"2"}},"@jsonjoy.com/util@17.63.0":{"resolution":{"integrity":"sha512-AhpTIOFvuixKwem4d+ey4In78KJLCrDIUyp0IQ8xgpbs0IjNPTTfT3nXXbYMgJGxjegmqa9otl9nqbCvxOaiXw=="},"engines":{"node":">=10.0"},"peerDependencies":{"tslib":"2"}},"@lix-js/plugin-json@1.0.1":{"resolution":{"integrity":"sha512-pCqzG08D8jLtVy8RnITPZIy92XNlRAJWLrlRrzh3ttwS/PWM/iXiOPPuzvb23MoFhYxerzJ8uDGXhEXfVagY2w=="}},"@lix-js/sdk@0.5.1":{"resolution":{"integrity":"sha512-FiDGp6BznOLdzNOCUC5OvTJ6KfdKGk8wd5edD1dhU46quS4vi4EkHjS/N+12PSpCfl/p3wBWSQD6vzvZcIHTFg=="},"engines":{"node":">=22"}},"@lix-js/server-protocol-schema@0.1.1":{"resolution":{"integrity":"sha512-jBeALB6prAbtr5q4vTuxnRZZv1M2rKe8iNqRQhFJ4Tv7150unEa0vKyz0hs8Gl3fUGsWaNJBh3J8++fpbrpRBQ=="}},"@manypkg/find-root@1.1.0":{"resolution":{"integrity":"sha512-mki5uBvhHzO8kYYix/WRy2WX8S3B5wdVSc9D6KcU5lQNglP2yt58/VfLuAK49glRXChosY8ap2oJ1qgma3GUVA=="}},"@manypkg/get-packages@1.1.3":{"resolution":{"integrity":"sha512-fo+QhuU3qE/2TQMQmbVMqaQ6EWbMhi4ABWP+O4AM1NqPBuy0OrApV5LO6BrrgnhtAHS2NH6RrVk9OL181tTi8A=="}},"@marcbachmann/cel-js@2.5.2":{"resolution":{"integrity":"sha512-QnvFBFQ+2T8gX4H4pmcgIfs3gXwfhRjv7hYoRRDLwKeXxgPEZ+zvExe1pGtPs8xPWHu4ng0CmllNpVHWi4kB9A=="},"engines":{"node":">=20.19.0"}},"@mermaid-js/parser@0.6.3":{"resolution":{"integrity":"sha512-lnjOhe7zyHjc+If7yT4zoedx2vo4sHaTmtkl1+or8BRTnCtDmcTpAjpzDSfCZrshM5bCoz0GyidzadJAH1xobA=="}},"@mswjs/interceptors@0.39.8":{"resolution":{"integrity":"sha512-2+BzZbjRO7Ct61k8fMNHEtoKjeWI9pIlHFTqBwZ5icHpqszIgEZbjb1MW5Z0+bITTCTl3gk4PDBxs9tA/csXvA=="},"engines":{"node":">=18"}},"@napi-rs/wasm-runtime@0.
2.4":{"resolution":{"integrity":"sha512-9zESzOO5aDByvhIAsOy9TbpZ0Ur2AJbUI7UT73kcUTS2mxAMHOBaa1st/jAymNoCtvrit99kkzT1FZuXVcgfIQ=="}},"@napi-rs/wasm-runtime@1.1.4":{"resolution":{"integrity":"sha512-3NQNNgA1YSlJb/kMH1ildASP9HW7/7kYnRI2szWJaofaS1hWmbGI4H+d3+22aGzXXN9IJ+n+GiFVcGipJP18ow=="},"peerDependencies":{"@emnapi/core":"^1.7.1","@emnapi/runtime":"^1.7.1"}},"@nodelib/fs.scandir@2.1.5":{"resolution":{"integrity":"sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g=="},"engines":{"node":">= 8"}},"@nodelib/fs.stat@2.0.5":{"resolution":{"integrity":"sha512-RkhPPp2zrqDAQA/2jNhnztcPAlv64XdhIp7a7454A5ovI7Bukxgt7MX7udwAu3zg1DcpPU0rz3VV1SeaqvY4+A=="},"engines":{"node":">= 8"}},"@nodelib/fs.walk@1.2.8":{"resolution":{"integrity":"sha512-oGB+UxlgWcgQkgwo8GcEGwemoTFt3FIO9ababBmaGwXIoBKZ+GTy0pP185beGg7Llih/NSHSV2XAs1lnznocSg=="},"engines":{"node":">= 8"}},"@nrwl/nx-cloud@19.1.0":{"resolution":{"integrity":"sha512-krngXVPfX0Zf6+zJDtcI59/Pt3JfcMPMZ9C/+/x6rvz4WGgyv1s0MI4crEUM0Lx5ZpS4QI0WNDCFVQSfGEBXUg=="}},"@nx/nx-darwin-arm64@21.4.1":{"resolution":{"integrity":"sha512-9BbkQnxGEDNX2ESbW4Zdrq1i09y6HOOgTuGbMJuy4e8F8rU/motMUqOpwmFgLHkLgPNZiOC2VXht3or/kQcpOg=="},"cpu":["arm64"],"os":["darwin"]},"@nx/nx-darwin-x64@21.4.1":{"resolution":{"integrity":"sha512-dnkmap1kc6aLV8CW1ihjsieZyaDDjlIB5QA2reTCLNSdTV446K6Fh0naLdaoG4ZkF27zJA/qBOuAaLzRHFJp3g=="},"cpu":["x64"],"os":["darwin"]},"@nx/nx-freebsd-x64@21.4.1":{"resolution":{"integrity":"sha512-RpxDBGOPeDqJjpbV7F3lO/w1aIKfLyG/BM0OpJfTgFVpUIl50kMj5M1m4W9A8kvYkfOD9pDbUaWszom7d57yjg=="},"cpu":["x64"],"os":["freebsd"]},"@nx/nx-linux-arm-gnueabihf@21.4.1":{"resolution":{"integrity":"sha512-2OyBoag2738XWmWK3ZLBuhaYb7XmzT3f8HzomggLDJoDhwDekjgRoNbTxogAAj6dlXSeuPjO81BSlIfXQcth3w=="},"cpu":["arm"],"os":["linux"]},"@nx/nx-linux-arm64-gnu@21.4.1":{"resolution":{"integrity":"sha512-2pg7/zjBDioUWJ3OY8Ixqy64eokKT5sh4iq1bk22bxOCf676aGrAu6khIxy4LBnPIdO0ZOK7KCJ7xOFP4phZqA=="},"cpu":["arm64"],"os":["linux"]},"@nx/nx-linux-arm64-musl@21.4.1":{"resolution":{"integrity":"sha512-whNxh12au/inQtkZju1ZfXSqDS0hCh/anzVCXfLYWFstdwv61XiRmFCSHeN0gRDthlncXFdgKoT1bGG5aMYLtA=="},"cpu":["arm64"],"os":["linux"]},"@nx/nx-linux-x64-gnu@21.4.1":{"resolution":{"integrity":"sha512-UHw57rzLio0AUDXV3l+xcxT3LjuXil7SHj+H8aYmXTpXktctQU2eYGOs5ATqJ1avVQRSejJugHF0i8oLErC28A=="},"cpu":["x64"],"os":["linux"]},"@nx/nx-linux-x64-musl@21.4.1":{"resolution":{"integrity":"sha512-qqE2Gy/DwOLIyePjM7GLHp/nDLZJnxHmqTeCiTQCp/BdbmqjRkSUz5oL+Uua0SNXaTu5hjAfvjXAhSTgBwVO6g=="},"cpu":["x64"],"os":["linux"]},"@nx/nx-win32-arm64-msvc@21.4.1":{"resolution":{"integrity":"sha512-NtEzMiRrSm2DdL4ntoDdjeze8DBrfZvLtx3Dq6+XmOhwnigR6umfWfZ6jbluZpuSQcxzQNVifqirdaQKYaYwDQ=="},"cpu":["arm64"],"os":["win32"]},"@nx/nx-win32-x64-msvc@21.4.1":{"resolution":{"integrity":"sha512-gpG+Y4G/mxGrfkUls6IZEuuBxRaKLMSEoVFLMb9JyyaLEDusn+HJ1m90XsOedjNLBHGMFigsd/KCCsXfFn4njg=="},"cpu":["x64"],"os":["win32"]},"@oozcitak/dom@2.0.2":{"resolution":{"integrity":"sha512-GjpKhkSYC3Mj4+lfwEyI1dqnsKTgwGy48ytZEhm4A/xnH/8z9M3ZVXKr/YGQi3uCLs1AEBS+x5T2JPiueEDW8w=="},"engines":{"node":">=20.0"}},"@oozcitak/infra@2.0.2":{"resolution":{"integrity":"sha512-2g+E7hoE2dgCz/APPOEK5s3rMhJvNxSMBrP+U+j1OWsIbtSpWxxlUjq1lU8RIsFJNYv7NMlnVsCuHcUzJW+8vA=="},"engines":{"node":">=20.0"}},"@oozcitak/url@3.0.0":{"resolution":{"integrity":"sha512-ZKfET8Ak1wsLAiLWNfFkZc/BraDccuTJKR6svTYc7sVjbR+Iu0vtXdiDMY4o6jaFl5TW2TlS7jbLl4VovtAJWQ=="},"engines":{"node":">=20.0"}},"@oozcitak/util@10.0.0":{"resolution":{"integrity":"sha512-hAX0pT/73190NLqBPPWSdBVGtbY6VOhW
YK3qqHqtXQ1gK7kS2yz4+ivsN07hpJ6I3aeMtKP6J6npsEKOAzuTLA=="},"engines":{"node":">=20.0"}},"@open-draft/deferred-promise@2.2.0":{"resolution":{"integrity":"sha512-CecwLWx3rhxVQF6V4bAgPS5t+So2sTbPgAzafKkVizyi7tlwpcFpdFqq+wqF2OwNBmqFuu6tOyouTuxgpMfzmA=="}},"@open-draft/logger@0.3.0":{"resolution":{"integrity":"sha512-X2g45fzhxH238HKO4xbSr7+wBS8Fvw6ixhTDuvLd5mqh6bJJCFAPwU9mPDxbcrRtfxv4u5IHCEH77BmxvXmmxQ=="}},"@open-draft/until@2.1.0":{"resolution":{"integrity":"sha512-U69T3ItWHvLwGg5eJ0n3I62nWuE6ilHlmz7zM0npLBRvPRd7e6NYmg54vvRtP5mZG7kZqZCFVdsTWo7BPtBujg=="}},"@opentelemetry/api-logs@0.208.0":{"resolution":{"integrity":"sha512-CjruKY9V6NMssL/T1kAFgzosF1v9o6oeN+aX5JB/C/xPNtmgIJqcXHG7fA82Ou1zCpWGl4lROQUKwUNE1pMCyg=="},"engines":{"node":">=8.0.0"}},"@opentelemetry/api@1.9.0":{"resolution":{"integrity":"sha512-3giAOQvZiH5F9bMlMiv8+GSPMeqg0dbaeo58/0SlA9sxSqZhnUtxzX9/2FzyhS9sWQf5S0GJE0AKBrFqjpeYcg=="},"engines":{"node":">=8.0.0"}},"@opentelemetry/core@2.2.0":{"resolution":{"integrity":"sha512-FuabnnUm8LflnieVxs6eP7Z383hgQU4W1e3KJS6aOG3RxWxcHyBxH8fDMHNgu/gFx/M2jvTOW/4/PHhLz6bjWw=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":">=1.0.0 <1.10.0"}},"@opentelemetry/core@2.4.0":{"resolution":{"integrity":"sha512-KtcyFHssTn5ZgDu6SXmUznS80OFs/wN7y6MyFRRcKU6TOw8hNcGxKvt8hsdaLJfhzUszNSjURetq5Qpkad14Gw=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":">=1.0.0 <1.10.0"}},"@opentelemetry/exporter-logs-otlp-http@0.208.0":{"resolution":{"integrity":"sha512-jOv40Bs9jy9bZVLo/i8FwUiuCvbjWDI+ZW13wimJm4LjnlwJxGgB+N/VWOZUTpM+ah/awXeQqKdNlpLf2EjvYg=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":"^1.3.0"}},"@opentelemetry/otlp-exporter-base@0.208.0":{"resolution":{"integrity":"sha512-gMd39gIfVb2OgxldxUtOwGJYSH8P1kVFFlJLuut32L6KgUC4gl1dMhn+YC2mGn0bDOiQYSk/uHOdSjuKp58vvA=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":"^1.3.0"}},"@opentelemetry/otlp-transformer@0.208.0":{"resolution":{"integrity":"sha512-DCFPY8C6lAQHUNkzcNT9R+qYExvsk6C5Bto2pbNxgicpcSWbe2WHShLxkOxIdNcBiYPdVHv/e7vH7K6TI+C+fQ=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":"^1.3.0"}},"@opentelemetry/resources@2.2.0":{"resolution":{"integrity":"sha512-1pNQf/JazQTMA0BiO5NINUzH0cbLbbl7mntLa4aJNmCCXSj0q03T5ZXXL0zw4G55TjdL9Tz32cznGClf+8zr5A=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":">=1.3.0 <1.10.0"}},"@opentelemetry/resources@2.4.0":{"resolution":{"integrity":"sha512-RWvGLj2lMDZd7M/5tjkI/2VHMpXebLgPKvBUd9LRasEWR2xAynDwEYZuLvY9P2NGG73HF07jbbgWX2C9oavcQg=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":">=1.3.0 <1.10.0"}},"@opentelemetry/sdk-logs@0.208.0":{"resolution":{"integrity":"sha512-QlAyL1jRpOeaqx7/leG1vJMp84g0xKP6gJmfELBpnI4O/9xPX+Hu5m1POk9Kl+veNkyth5t19hRlN6tNY1sjbA=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":">=1.4.0 <1.10.0"}},"@opentelemetry/sdk-metrics@2.2.0":{"resolution":{"integrity":"sha512-G5KYP6+VJMZzpGipQw7Giif48h6SGQ2PFKEYCybeXJsOCB4fp8azqMAAzE5lnnHK3ZVwYQrgmFbsUJO/zOnwGw=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":">=1.9.0 <1.10.0"}},"@opentelemetry/sdk-trace-base@2.2.0":{"resolution":{"integrity":"sha512-xWQgL0Bmctsalg6PaXExmzdedSp3gyKV8mQBwK/j9VGdCDu2fmXIb2gAehBKbkXCpJ4HPkgv3QfoJWRT4dHWbw=="},"engines":{"node":"^18.19.0 || >=20.6.0"},"peerDependencies":{"@opentelemetry/api":">=1.3.0 
<1.10.0"}},"@opentelemetry/semantic-conventions@1.38.0":{"resolution":{"integrity":"sha512-kocjix+/sSggfJhwXqClZ3i9Y/MI0fp7b+g7kCRm6psy2dsf8uApTRclwG18h8Avm7C9+fnt+O36PspJ/OzoWg=="},"engines":{"node":">=14"}},"@opral/markdown-wc@0.9.0":{"resolution":{"integrity":"sha512-m5I3WklqED3mTcUOR3J9CRFIttMYsCmSCZnZYXNdL0Oj0EtSVWXPetPhKsHTEK+MrWPaqfsiKIFq6+l7dKgtNg=="},"peerDependencies":{"@tiptap/core":"^3.0.0"},"peerDependenciesMeta":{"@tiptap/core":{"optional":true}}},"@opral/zettel-ast@0.1.0":{"resolution":{"integrity":"sha512-pZDiecYrpSxw7miv4ZSufCRB9sqFMXRa0Rf+LQcoEEh0VOBI6beOmvB+iXmWJ7vxMQINuS7yfsvm5ZyrTm/W5A=="},"engines":{"node":">=20"}},"@oxc-project/types@0.127.0":{"resolution":{"integrity":"sha512-aIYXQBo4lCbO4z0R3FHeucQHpF46l2LbMdxRvqvuRuW2OxdnSkcng5B8+K12spgLDj93rtN3+J2Vac/TIO+ciQ=="}},"@oxlint/darwin-arm64@1.26.0":{"resolution":{"integrity":"sha512-kTmm1opqyn7iZopWHO3Ml4D/44pA5eknZBepgxCnTaPrW8XgCEUI85Q5AvOOvoNve8NziTYb8ax+CyuGJIgn/Q=="},"cpu":["arm64"],"os":["darwin"]},"@oxlint/darwin-x64@1.26.0":{"resolution":{"integrity":"sha512-/hMfZ9j7ZzVPRmMm02PHNc6MIMk0QYv5VowZJRIp40YLqLPvFfGNGZBj8e1fDVgZMFEGWDQK3yrt1uBKxXAK4Q=="},"cpu":["x64"],"os":["darwin"]},"@oxlint/linux-arm64-gnu@1.26.0":{"resolution":{"integrity":"sha512-iv4wdrwdCa8bhJxOpKlvfxqTs0LgW5tKBUMvH9B13zREHm1xT9JRZ8cQbbKiyC6LNdggwu5S6TSvODgAu7/DlA=="},"cpu":["arm64"],"os":["linux"]},"@oxlint/linux-arm64-musl@1.26.0":{"resolution":{"integrity":"sha512-a3gTbnN1JzedxqYeGTkg38BAs/r3Krd2DPNs/MF7nnHthT3RzkPUk47isMePLuNc4e/Weljn7m2m/Onx22tiNg=="},"cpu":["arm64"],"os":["linux"]},"@oxlint/linux-x64-gnu@1.26.0":{"resolution":{"integrity":"sha512-cCAyqyuKpFImjlgiBuuwSF+aDBW2h19/aCmHMTMSp6KXwhoQK7/Xx7/EhZKP5wiQJzVUYq5fXr0D8WmpLGsjRg=="},"cpu":["x64"],"os":["linux"]},"@oxlint/linux-x64-musl@1.26.0":{"resolution":{"integrity":"sha512-8VOJ4vQo0G1tNdaghxrWKjKZGg73tv+FoMDrtNYuUesqBHZN68FkYCsgPwEsacLhCmtoZrkF3ePDWDuWEpDyAg=="},"cpu":["x64"],"os":["linux"]},"@oxlint/win32-arm64@1.26.0":{"resolution":{"integrity":"sha512-N8KUtzP6gfEHKvaIBZCS9g8wRfqV5v55a/B8iJjIEhtMehcEM+UX+aYRsQ4dy5oBCrK3FEp4Yy/jHgb0moLm3Q=="},"cpu":["arm64"],"os":["win32"]},"@oxlint/win32-x64@1.26.0":{"resolution":{"integrity":"sha512-7tCyG0laduNQ45vzB9blVEGq/6DOvh7AFmiUAana8mTp0zIKQQmwJ21RqhazH0Rk7O6lL7JYzKcu+zaJHGpRLA=="},"cpu":["x64"],"os":["win32"]},"@pkgjs/parseargs@0.11.0":{"resolution":{"integrity":"sha512-+1VkjdD0QBLPodGrJUeqarH8VAIvQODIbwh9XpP5Syisf7YoQgsJKPNFoqqLQlu+VQ/tVSshMR6loPMn8U+dPg=="},"engines":{"node":">=14"}},"@polka/url@1.0.0-next.29":{"resolution":{"integrity":"sha512-wwQAWhWSuHaag8c4q/KN/vCoeOJYshAIvMQwD4GpSb3OiZklFfvAgmj0VCBBImRpuF/aFgIRzllXlVX93Jevww=="}},"@poppinss/colors@4.1.5":{"resolution":{"integrity":"sha512-FvdDqtcRCtz6hThExcFOgW0cWX+xwSMWcRuQe5ZEb2m7cVQOAVZOIMt+/v9RxGiD9/OY16qJBXK4CVKWAPalBw=="}},"@poppinss/dumper@0.6.5":{"resolution":{"integrity":"sha512-NBdYIb90J7LfOI32dOewKI1r7wnkiH6m920puQ3qHUeZkxNkQiFnXVWoE6YtFSv6QOiPPf7ys6i+HWWecDz7sw=="}},"@poppinss/exception@1.2.2":{"resolution":{"integrity":"sha512-m7bpKCD4QMlFCjA/nKTs23fuvoVFoA83brRKmObCUNmi/9tVu8Ve3w4YQAnJu4q3Tjf5fr685HYIC/IA2zHRSg=="}},"@posthog/core@1.9.1":{"resolution":{"integrity":"sha512-kRb1ch2dhQjsAapZmu6V66551IF2LnCbc1rnrQqnR7ArooVyJN9KOPXre16AJ3ObJz2eTfuP7x25BMyS2Y5Exw=="}},"@posthog/types@1.321.2":{"resolution":{"integrity":"sha512-nsMeHlVNlTB68JyV3/0+5FDreiTpUCStDH8ZUH/Hfsbw1howyf9a7DyURTwwhXdnyO0DksEFUIX+4IKCJs/H9g=="}},"@promptbook/utils@0.69.5":{"resolution":{"integrity":"sha512-xm5Ti/Hp3o4xHrsK9Yy3MS6KbDxYbq485hDsFvxqaNA7equHLPdo8H8faTitTeb14QCDfLW4iwCxdVYu5sn6YQ=="}},"@pro
tobufjs/aspromise@1.1.2":{"resolution":{"integrity":"sha512-j+gKExEuLmKwvz3OgROXtrJ2UG2x8Ch2YZUxahh+s1F2HZ+wAceUNLkvy6zKCPVRkU++ZWQrdxsUeQXmcg4uoQ=="}},"@protobufjs/base64@1.1.2":{"resolution":{"integrity":"sha512-AZkcAA5vnN/v4PDqKyMR5lx7hZttPDgClv83E//FMNhR2TMcLUhfRUBHCmSl0oi9zMgDDqRUJkSxO3wm85+XLg=="}},"@protobufjs/codegen@2.0.4":{"resolution":{"integrity":"sha512-YyFaikqM5sH0ziFZCN3xDC7zeGaB/d0IUb9CATugHWbd1FRFwWwt4ld4OYMPWu5a3Xe01mGAULCdqhMlPl29Jg=="}},"@protobufjs/eventemitter@1.1.0":{"resolution":{"integrity":"sha512-j9ednRT81vYJ9OfVuXG6ERSTdEL1xVsNgqpkxMsbIabzSo3goCjDIveeGv5d03om39ML71RdmrGNjG5SReBP/Q=="}},"@protobufjs/fetch@1.1.0":{"resolution":{"integrity":"sha512-lljVXpqXebpsijW71PZaCYeIcE5on1w5DlQy5WH6GLbFryLUrBD4932W/E2BSpfRJWseIL4v/KPgBFxDOIdKpQ=="}},"@protobufjs/float@1.0.2":{"resolution":{"integrity":"sha512-Ddb+kVXlXst9d+R9PfTIxh1EdNkgoRe5tOX6t01f1lYWOvJnSPDBlG241QLzcyPdoNTsblLUdujGSE4RzrTZGQ=="}},"@protobufjs/inquire@1.1.0":{"resolution":{"integrity":"sha512-kdSefcPdruJiFMVSbn801t4vFK7KB/5gd2fYvrxhuJYg8ILrmn9SKSX2tZdV6V+ksulWqS7aXjBcRXl3wHoD9Q=="}},"@protobufjs/path@1.1.2":{"resolution":{"integrity":"sha512-6JOcJ5Tm08dOHAbdR3GrvP+yUUfkjG5ePsHYczMFLq3ZmMkAD98cDgcT2iA1lJ9NVwFd4tH/iSSoe44YWkltEA=="}},"@protobufjs/pool@1.1.0":{"resolution":{"integrity":"sha512-0kELaGSIDBKvcgS4zkjz1PeddatrjYcmMWOlAuAPwAeccUrPHdUqo/J6LiymHHEiJT5NrF1UVwxY14f+fy4WQw=="}},"@protobufjs/utf8@1.1.0":{"resolution":{"integrity":"sha512-Vvn3zZrhQZkkBE8LSuW3em98c0FwgO4nxzv6OdSxPKJIEKY2bGbHn+mhGIPerzI4twdxaP8/0+06HBpwf345Lw=="}},"@puppeteer/browsers@2.13.1":{"resolution":{"integrity":"sha512-zmS4RTK9fbrc++WlAJhxYbfz3IjDeOmkK/CwwbLmk7ydfS9e2CiEeRJHEPvjDVElO/bwXbidwGA37Bsm6LzCnQ=="},"engines":{"node":">=18"},"hasBin":true},"@rolldown/binding-android-arm64@1.0.0-rc.17":{"resolution":{"integrity":"sha512-s70pVGhw4zqGeFnXWvAzJDlvxhlRollagdCCKRgOsgUOH3N1l0LIxf83AtGzmb5SiVM4Hjl5HyarMRfdfj3DaQ=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["arm64"],"os":["android"]},"@rolldown/binding-darwin-arm64@1.0.0-rc.17":{"resolution":{"integrity":"sha512-4ksWc9n0mhlZpZ9PMZgTGjeOPRu8MB1Z3Tz0Mo02eWfWCHMW1zN82Qz/pL/rC+yQa+8ZnutMF0JjJe7PjwasYw=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["arm64"],"os":["darwin"]},"@rolldown/binding-darwin-x64@1.0.0-rc.17":{"resolution":{"integrity":"sha512-SUSDOI6WwUVNcWxd02QEBjLdY1VPHvlEkw6T/8nYG322iYWCTxRb1vzk4E+mWWYehTp7ERibq54LSJGjmouOsw=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["x64"],"os":["darwin"]},"@rolldown/binding-freebsd-x64@1.0.0-rc.17":{"resolution":{"integrity":"sha512-hwnz3nw9dbJ05EDO/PvcjaaewqqDy7Y1rn1UO81l8iIK1GjenME75dl16ajbvSSMfv66WXSRCYKIqfgq2KCfxw=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["x64"],"os":["freebsd"]},"@rolldown/binding-linux-arm-gnueabihf@1.0.0-rc.17":{"resolution":{"integrity":"sha512-IS+W7epTcwANmFSQFrS1SivEXHtl1JtuQA9wlxrZTcNi6mx+FDOYrakGevvvTwgj2JvWiK8B29/qD9BELZPyXQ=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["arm"],"os":["linux"]},"@rolldown/binding-linux-arm64-gnu@1.0.0-rc.17":{"resolution":{"integrity":"sha512-e6usGaHKW5BMNZOymS1UcEYGowQMWcgZ71Z17Sl/h2+ZziNJ1a9n3Zvcz6LdRyIW5572wBCTH/Z+bKuZouGk9Q=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["arm64"],"os":["linux"]},"@rolldown/binding-linux-arm64-musl@1.0.0-rc.17":{"resolution":{"integrity":"sha512-b/CgbwAJpmrRLp02RPfhbudf5tZnN9nsPWK82znefso832etkem8H7FSZwxrOI9djcdTP7U6YfNhbRnh7djErg=="},"engines":{"node":"^20.19.0 || 
>=22.12.0"},"cpu":["arm64"],"os":["linux"]},"@rolldown/binding-linux-ppc64-gnu@1.0.0-rc.17":{"resolution":{"integrity":"sha512-4EII1iNGRUN5WwGbF/kOh/EIkoDN9HsupgLQoXfY+D1oyJm7/F4t5PYU5n8SWZgG0FEwakyM8pGgwcBYruGTlA=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["ppc64"],"os":["linux"]},"@rolldown/binding-linux-s390x-gnu@1.0.0-rc.17":{"resolution":{"integrity":"sha512-AH8oq3XqQo4IibpVXvPeLDI5pzkpYn0WiZAfT05kFzoJ6tQNzwRdDYQ45M8I/gslbodRZwW8uxLhbSBbkv96rA=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["s390x"],"os":["linux"]},"@rolldown/binding-linux-x64-gnu@1.0.0-rc.17":{"resolution":{"integrity":"sha512-cLnjV3xfo7KslbU41Z7z8BH/E1y5mzUYzAqih1d1MDaIGZRCMqTijqLv76/P7fyHuvUcfGsIpqCdddbxLLK9rA=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["x64"],"os":["linux"]},"@rolldown/binding-linux-x64-musl@1.0.0-rc.17":{"resolution":{"integrity":"sha512-0phclDw1spsL7dUB37sIARuis2tAgomCJXAHZlpt8PXZ4Ba0dRP1e+66lsRqrfhISeN9bEGNjQs+T/Fbd7oYGw=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["x64"],"os":["linux"]},"@rolldown/binding-openharmony-arm64@1.0.0-rc.17":{"resolution":{"integrity":"sha512-0ag/hEgXOwgw4t8QyQvUCxvEg+V0KBcA6YuOx9g0r02MprutRF5dyljgm3EmR02O292UX7UeS6HzWHAl6KgyhA=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["arm64"],"os":["openharmony"]},"@rolldown/binding-wasm32-wasi@1.0.0-rc.17":{"resolution":{"integrity":"sha512-LEXei6vo0E5wTGwpkJ4KoT3OZJRnglwldt5ziLzOlc6qqb55z4tWNq2A+PFqCJuvWWdP53CVhG1Z9NtToDPJrA=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["wasm32"]},"@rolldown/binding-win32-arm64-msvc@1.0.0-rc.17":{"resolution":{"integrity":"sha512-gUmyzBl3SPMa6hrqFUth9sVfcLBlYsbMzBx5PlexMroZStgzGqlZ26pYG89rBb45Mnia+oil6YAIFeEWGWhoZA=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"cpu":["arm64"],"os":["win32"]},"@rolldown/binding-win32-x64-msvc@1.0.0-rc.17":{"resolution":{"integrity":"sha512-3hkiolcUAvPB9FLb3UZdfjVVNWherN1f/skkGWJP/fgSQhYUZpSIRr0/I8ZK9TkF3F7kxvJAk0+IcKvPHk9qQg=="},"engines":{"node":"^20.19.0 || 
>=22.12.0"},"cpu":["x64"],"os":["win32"]},"@rolldown/pluginutils@1.0.0-beta.40":{"resolution":{"integrity":"sha512-s3GeJKSQOwBlzdUrj4ISjJj5SfSh+aqn0wjOar4Bx95iV1ETI7F6S/5hLcfAxZ9kXDcyrAkxPlqmd1ZITttf+w=="}},"@rolldown/pluginutils@1.0.0-rc.17":{"resolution":{"integrity":"sha512-n8iosDOt6Ig1UhJ2AYqoIhHWh/isz0xpicHTzpKBeotdVsTEcxsSA/i3EVM7gQAj0rU27OLAxCjzlj15IWY7bg=="}},"@rolldown/pluginutils@1.0.0-rc.7":{"resolution":{"integrity":"sha512-qujRfC8sFVInYSPPMLQByRh7zhwkGFS4+tyMQ83srV1qrxL4g8E2tyxVVyxd0+8QeBM1mIk9KbWxkegRr76XzA=="}},"@rollup/rollup-android-arm-eabi@4.53.2":{"resolution":{"integrity":"sha512-yDPzwsgiFO26RJA4nZo8I+xqzh7sJTZIWQOxn+/XOdPE31lAvLIYCKqjV+lNH/vxE2L2iH3plKxDCRK6i+CwhA=="},"cpu":["arm"],"os":["android"]},"@rollup/rollup-android-arm64@4.53.2":{"resolution":{"integrity":"sha512-k8FontTxIE7b0/OGKeSN5B6j25EuppBcWM33Z19JoVT7UTXFSo3D9CdU39wGTeb29NO3XxpMNauh09B+Ibw+9g=="},"cpu":["arm64"],"os":["android"]},"@rollup/rollup-darwin-arm64@4.53.2":{"resolution":{"integrity":"sha512-A6s4gJpomNBtJ2yioj8bflM2oogDwzUiMl2yNJ2v9E7++sHrSrsQ29fOfn5DM/iCzpWcebNYEdXpaK4tr2RhfQ=="},"cpu":["arm64"],"os":["darwin"]},"@rollup/rollup-darwin-x64@4.53.2":{"resolution":{"integrity":"sha512-e6XqVmXlHrBlG56obu9gDRPW3O3hLxpwHpLsBJvuI8qqnsrtSZ9ERoWUXtPOkY8c78WghyPHZdmPhHLWNdAGEw=="},"cpu":["x64"],"os":["darwin"]},"@rollup/rollup-freebsd-arm64@4.53.2":{"resolution":{"integrity":"sha512-v0E9lJW8VsrwPux5Qe5CwmH/CF/2mQs6xU1MF3nmUxmZUCHazCjLgYvToOk+YuuUqLQBio1qkkREhxhc656ViA=="},"cpu":["arm64"],"os":["freebsd"]},"@rollup/rollup-freebsd-x64@4.53.2":{"resolution":{"integrity":"sha512-ClAmAPx3ZCHtp6ysl4XEhWU69GUB1D+s7G9YjHGhIGCSrsg00nEGRRZHmINYxkdoJehde8VIsDC5t9C0gb6yqA=="},"cpu":["x64"],"os":["freebsd"]},"@rollup/rollup-linux-arm-gnueabihf@4.53.2":{"resolution":{"integrity":"sha512-EPlb95nUsz6Dd9Qy13fI5kUPXNSljaG9FiJ4YUGU1O/Q77i5DYFW5KR8g1OzTcdZUqQQ1KdDqsTohdFVwCwjqg=="},"cpu":["arm"],"os":["linux"]},"@rollup/rollup-linux-arm-musleabihf@4.53.2":{"resolution":{"integrity":"sha512-BOmnVW+khAUX+YZvNfa0tGTEMVVEerOxN0pDk2E6N6DsEIa2Ctj48FOMfNDdrwinocKaC7YXUZ1pHlKpnkja/Q=="},"cpu":["arm"],"os":["linux"]},"@rollup/rollup-linux-arm64-gnu@4.53.2":{"resolution":{"integrity":"sha512-Xt2byDZ+6OVNuREgBXr4+CZDJtrVso5woFtpKdGPhpTPHcNG7D8YXeQzpNbFRxzTVqJf7kvPMCub/pcGUWgBjA=="},"cpu":["arm64"],"os":["linux"]},"@rollup/rollup-linux-arm64-musl@4.53.2":{"resolution":{"integrity":"sha512-+LdZSldy/I9N8+klim/Y1HsKbJ3BbInHav5qE9Iy77dtHC/pibw1SR/fXlWyAk0ThnpRKoODwnAuSjqxFRDHUQ=="},"cpu":["arm64"],"os":["linux"]},"@rollup/rollup-linux-loong64-gnu@4.53.2":{"resolution":{"integrity":"sha512-8ms8sjmyc1jWJS6WdNSA23rEfdjWB30LH8Wqj0Cqvv7qSHnvw6kgMMXRdop6hkmGPlyYBdRPkjJnj3KCUHV/uQ=="},"cpu":["loong64"],"os":["linux"]},"@rollup/rollup-linux-ppc64-gnu@4.53.2":{"resolution":{"integrity":"sha512-3HRQLUQbpBDMmzoxPJYd3W6vrVHOo2cVW8RUo87Xz0JPJcBLBr5kZ1pGcQAhdZgX9VV7NbGNipah1omKKe23/g=="},"cpu":["ppc64"],"os":["linux"]},"@rollup/rollup-linux-riscv64-gnu@4.53.2":{"resolution":{"integrity":"sha512-fMjKi+ojnmIvhk34gZP94vjogXNNUKMEYs+EDaB/5TG/wUkoeua7p7VCHnE6T2Tx+iaghAqQX8teQzcvrYpaQA=="},"cpu":["riscv64"],"os":["linux"]},"@rollup/rollup-linux-riscv64-musl@4.53.2":{"resolution":{"integrity":"sha512-XuGFGU+VwUUV5kLvoAdi0Wz5Xbh2SrjIxCtZj6Wq8MDp4bflb/+ThZsVxokM7n0pcbkEr2h5/pzqzDYI7cCgLQ=="},"cpu":["riscv64"],"os":["linux"]},"@rollup/rollup-linux-s390x-gnu@4.53.2":{"resolution":{"integrity":"sha512-w6yjZF0P+NGzWR3AXWX9zc0DNEGdtvykB03uhonSHMRa+oWA6novflo2WaJr6JZakG2ucsyb+rvhrKac6NIy+w=="},"cpu":["s390x"],"os":["linux"]},"@rollup/rollup-linux-x64-gnu@4.53.2":{"resol
ution":{"integrity":"sha512-yo8d6tdfdeBArzC7T/PnHd7OypfI9cbuZzPnzLJIyKYFhAQ8SvlkKtKBMbXDxe1h03Rcr7u++nFS7tqXz87Gtw=="},"cpu":["x64"],"os":["linux"]},"@rollup/rollup-linux-x64-musl@4.53.2":{"resolution":{"integrity":"sha512-ah59c1YkCxKExPP8O9PwOvs+XRLKwh/mV+3YdKqQ5AMQ0r4M4ZDuOrpWkUaqO7fzAHdINzV9tEVu8vNw48z0lA=="},"cpu":["x64"],"os":["linux"]},"@rollup/rollup-openharmony-arm64@4.53.2":{"resolution":{"integrity":"sha512-4VEd19Wmhr+Zy7hbUsFZ6YXEiP48hE//KPLCSVNY5RMGX2/7HZ+QkN55a3atM1C/BZCGIgqN+xrVgtdak2S9+A=="},"cpu":["arm64"],"os":["openharmony"]},"@rollup/rollup-win32-arm64-msvc@4.53.2":{"resolution":{"integrity":"sha512-IlbHFYc/pQCgew/d5fslcy1KEaYVCJ44G8pajugd8VoOEI8ODhtb/j8XMhLpwHCMB3yk2J07ctup10gpw2nyMA=="},"cpu":["arm64"],"os":["win32"]},"@rollup/rollup-win32-ia32-msvc@4.53.2":{"resolution":{"integrity":"sha512-lNlPEGgdUfSzdCWU176ku/dQRnA7W+Gp8d+cWv73jYrb8uT7HTVVxq62DUYxjbaByuf1Yk0RIIAbDzp+CnOTFg=="},"cpu":["ia32"],"os":["win32"]},"@rollup/rollup-win32-x64-gnu@4.53.2":{"resolution":{"integrity":"sha512-S6YojNVrHybQis2lYov1sd+uj7K0Q05NxHcGktuMMdIQ2VixGwAfbJ23NnlvvVV1bdpR2m5MsNBViHJKcA4ADw=="},"cpu":["x64"],"os":["win32"]},"@rollup/rollup-win32-x64-msvc@4.53.2":{"resolution":{"integrity":"sha512-k+/Rkcyx//P6fetPoLMb8pBeqJBNGx81uuf7iljX9++yNBVRDQgD04L+SVXmXmh5ZP4/WOp4mWF0kmi06PW2tA=="},"cpu":["x64"],"os":["win32"]},"@shikijs/core@3.15.0":{"resolution":{"integrity":"sha512-8TOG6yG557q+fMsSVa8nkEDOZNTSxjbbR8l6lF2gyr6Np+jrPlslqDxQkN6rMXCECQ3isNPZAGszAfYoJOPGlg=="}},"@shikijs/engine-javascript@3.15.0":{"resolution":{"integrity":"sha512-ZedbOFpopibdLmvTz2sJPJgns8Xvyabe2QbmqMTz07kt1pTzfEvKZc5IqPVO/XFiEbbNyaOpjPBkkr1vlwS+qg=="}},"@shikijs/engine-oniguruma@3.15.0":{"resolution":{"integrity":"sha512-HnqFsV11skAHvOArMZdLBZZApRSYS4LSztk2K3016Y9VCyZISnlYUYsL2hzlS7tPqKHvNqmI5JSUJZprXloMvA=="}},"@shikijs/langs@3.15.0":{"resolution":{"integrity":"sha512-WpRvEFvkVvO65uKYW4Rzxs+IG0gToyM8SARQMtGGsH4GDMNZrr60qdggXrFOsdfOVssG/QQGEl3FnJ3EZ+8w8A=="}},"@shikijs/themes@3.15.0":{"resolution":{"integrity":"sha512-8ow2zWb1IDvCKjYb0KiLNrK4offFdkfNVPXb1OZykpLCzRU6j+efkY+Y7VQjNlNFXonSw+4AOdGYtmqykDbRiQ=="}},"@shikijs/types@3.15.0":{"resolution":{"integrity":"sha512-BnP+y/EQnhihgHy4oIAN+6FFtmfTekwOLsQbRw9hOKwqgNy8Bdsjq8B05oAt/ZgvIWWFrshV71ytOrlPfYjIJw=="}},"@shikijs/vscode-textmate@10.0.2":{"resolution":{"integrity":"sha512-83yeghZ2xxin3Nj8z1NMd/NCuca+gsYXswywDy5bHvwlWL8tpTQmzGeUuHd9FC3E/SBEMvzJRwWEOz5gGes9Qg=="}},"@sinclair/typebox@0.34.40":{"resolution":{"integrity":"sha512-gwBNIP8ZAYev/ORDWW0QvxdwPXwxBtLsdsJgSc7eDIRt8ubP+rxUBzPsrwnu16fgEF8Bx4lh/+mvQvJzcTM6Kw=="}},"@sindresorhus/is@7.1.1":{"resolution":{"integrity":"sha512-rO92VvpgMc3kfiTjGT52LEtJ8Yc5kCWhZjLQ3LwlA4pSgPpQO7bVpYXParOD8Jwf+cVQECJo3yP/4I8aZtUQTQ=="},"engines":{"node":">=18"}},"@speed-highlight/core@1.2.12":{"resolution":{"integrity":"sha512-uilwrK0Ygyri5dToHYdZSjcvpS2ZwX0w5aSt3GCEN9hrjxWCoeV4Z2DTXuxjwbntaLQIEEAlCeNQss5SoHvAEA=="}},"@sqlite.org/sqlite-wasm@3.50.4-build1":{"resolution":{"integrity":"sha512-Qig2Wso7gPkU1PtXwFzndh+CTRzrIFxVGqv6eCetjU7YqxlHItj+GvQYwYTppCRgAPawtRN/4AJcEgB9xDHGug=="},"hasBin":true},"@standard-schema/spec@1.0.0":{"resolution":{"integrity":"sha512-m2bOd0f2RT9k8QJx1JN85cZYyH1RqFBdlwtkSlf4tBDYLCiiZnv1fIIwacK6cqwXavOydf0NPToMQgpKq+dVlA=="}},"@standard-schema/spec@1.1.0":{"resolution":{"integrity":"sha512-l2aFy5jALhniG5HgqrD6jXLi/rUWrKvqN/qJx6yoJsgKhblVd+iqqU4RCXavm/jPityDo5TCvKMnpjKnOriy0w=="}},"@tailwindcss/node@4.2.4":{"resolution":{"integrity":"sha512-Ai7+yQPxz3ddrDQzFfBKdHEVBg0w3Zl83jnjuwxnZOsnH9pGn93QHQtpU0p/8rYWxvbFZHneni6p1BSLK4D
kGA=="}},"@tailwindcss/oxide-android-arm64@4.2.4":{"resolution":{"integrity":"sha512-e7MOr1SAn9U8KlZzPi1ZXGZHeC5anY36qjNwmZv9pOJ8E4Q6jmD1vyEHkQFmNOIN7twGPEMXRHmitN4zCMN03g=="},"engines":{"node":">= 20"},"cpu":["arm64"],"os":["android"]},"@tailwindcss/oxide-darwin-arm64@4.2.4":{"resolution":{"integrity":"sha512-tSC/Kbqpz/5/o/C2sG7QvOxAKqyd10bq+ypZNf+9Fi2TvbVbv1zNpcEptcsU7DPROaSbVgUXmrzKhurFvo5eDg=="},"engines":{"node":">= 20"},"cpu":["arm64"],"os":["darwin"]},"@tailwindcss/oxide-darwin-x64@4.2.4":{"resolution":{"integrity":"sha512-yPyUXn3yO/ufR6+Kzv0t4fCg2qNr90jxXc5QqBpjlPNd0NqyDXcmQb/6weunH/MEDXW5dhyEi+agTDiqa3WsGg=="},"engines":{"node":">= 20"},"cpu":["x64"],"os":["darwin"]},"@tailwindcss/oxide-freebsd-x64@4.2.4":{"resolution":{"integrity":"sha512-BoMIB4vMQtZsXdGLVc2z+P9DbETkiopogfWZKbWwM8b/1Vinbs4YcUwo+kM/KeLkX3Ygrf4/PsRndKaYhS8Eiw=="},"engines":{"node":">= 20"},"cpu":["x64"],"os":["freebsd"]},"@tailwindcss/oxide-linux-arm-gnueabihf@4.2.4":{"resolution":{"integrity":"sha512-7pIHBLTHYRAlS7V22JNuTh33yLH4VElwKtB3bwchK/UaKUPpQ0lPQiOWcbm4V3WP2I6fNIJ23vABIvoy2izdwA=="},"engines":{"node":">= 20"},"cpu":["arm"],"os":["linux"]},"@tailwindcss/oxide-linux-arm64-gnu@4.2.4":{"resolution":{"integrity":"sha512-+E4wxJ0ZGOzSH325reXTWB48l42i93kQqMvDyz5gqfRzRZ7faNhnmvlV4EPGJU3QJM/3Ab5jhJ5pCRUsKn6OQw=="},"engines":{"node":">= 20"},"cpu":["arm64"],"os":["linux"]},"@tailwindcss/oxide-linux-arm64-musl@4.2.4":{"resolution":{"integrity":"sha512-bBADEGAbo4ASnppIziaQJelekCxdMaxisrk+fB7Thit72IBnALp9K6ffA2G4ruj90G9XRS2VQ6q2bCKbfFV82g=="},"engines":{"node":">= 20"},"cpu":["arm64"],"os":["linux"]},"@tailwindcss/oxide-linux-x64-gnu@4.2.4":{"resolution":{"integrity":"sha512-7Mx25E4WTfnht0TVRTyC00j3i0M+EeFe7wguMDTlX4mRxafznw0CA8WJkFjWYH5BlgELd1kSjuU2JiPnNZbJDA=="},"engines":{"node":">= 20"},"cpu":["x64"],"os":["linux"]},"@tailwindcss/oxide-linux-x64-musl@4.2.4":{"resolution":{"integrity":"sha512-2wwJRF7nyhOR0hhHoChc04xngV3iS+akccHTGtz965FwF0up4b2lOdo6kI1EbDaEXKgvcrFBYcYQQ/rrnWFVfA=="},"engines":{"node":">= 20"},"cpu":["x64"],"os":["linux"]},"@tailwindcss/oxide-wasm32-wasi@4.2.4":{"resolution":{"integrity":"sha512-FQsqApeor8Fo6gUEklzmaa9994orJZZDBAlQpK2Mq+DslRKFJeD6AjHpBQ0kZFQohVr8o85PPh8eOy86VlSCmw=="},"engines":{"node":">=14.0.0"},"cpu":["wasm32"],"bundledDependencies":["@napi-rs/wasm-runtime","@emnapi/core","@emnapi/runtime","@tybys/wasm-util","@emnapi/wasi-threads","tslib"]},"@tailwindcss/oxide-win32-arm64-msvc@4.2.4":{"resolution":{"integrity":"sha512-L9BXqxC4ToVgwMFqj3pmZRqyHEztulpUJzCxUtLjobMCzTPsGt1Fa9enKbOpY2iIyVtaHNeNvAK8ERP/64sqGQ=="},"engines":{"node":">= 20"},"cpu":["arm64"],"os":["win32"]},"@tailwindcss/oxide-win32-x64-msvc@4.2.4":{"resolution":{"integrity":"sha512-ESlKG0EpVJQwRjXDDa9rLvhEAh0mhP1sF7sap9dNZT0yyl9SAG6T7gdP09EH0vIv0UNTlo6jPWyujD6559fZvw=="},"engines":{"node":">= 20"},"cpu":["x64"],"os":["win32"]},"@tailwindcss/oxide@4.2.4":{"resolution":{"integrity":"sha512-9El/iI069DKDSXwTvB9J4BwdO5JhRrOweGaK25taBAvBXyXqJAX+Jqdvs8r8gKpsI/1m0LeJLyQYTf/WLrBT1Q=="},"engines":{"node":">= 20"}},"@tailwindcss/vite@4.2.4":{"resolution":{"integrity":"sha512-pCvohwOCspk3ZFn6eJzrrX3g4n2JY73H6MmYC87XfGPyTty4YsCjYTMArRZm/zOI8dIt3+EcrLHAFPe5A4bgtw=="},"peerDependencies":{"vite":"^5.2.0 || ^6 || ^7 || 
^8"}},"@tanstack/history@1.161.6":{"resolution":{"integrity":"sha512-NaOGLRrddszbQj9upGat6HG/4TKvXLvu+osAIgfxPYA+eIvYKv8GKDJOrY2D3/U9MRnKfMWD7bU4jeD4xmqyIg=="},"engines":{"node":">=20.19"}},"@tanstack/react-router@1.169.2":{"resolution":{"integrity":"sha512-OJM7Kguc7ERnweaNRWsyWgIKcl3z23rD1B4jaxjzd9RGdnzpt2HfrWa9rggbT0Hfzhfo4D2ZmsfoTme035tniQ=="},"engines":{"node":">=20.19"},"peerDependencies":{"react":">=18.0.0 || >=19.0.0","react-dom":">=18.0.0 || >=19.0.0"}},"@tanstack/react-start-client@1.166.48":{"resolution":{"integrity":"sha512-6fqwCwe6v+Nvtdf6vg6gxs/0gCXyZEHF18EslNeG/kca2wnXYFuXRhqGJjJaEgMk3WF4IE9mUgFuBSAOY3P7nQ=="},"engines":{"node":">=22.12.0"},"peerDependencies":{"react":">=18.0.0 || >=19.0.0","react-dom":">=18.0.0 || >=19.0.0"}},"@tanstack/react-start-rsc@0.0.43":{"resolution":{"integrity":"sha512-2RCa8Caw/HKrHi9pxmUvsiUrBtjddeBiP93e7OYQOCL3rHxoMD9CSscwT9/ziCaqnIOuBFbKWgvRTahR4jSfsw=="},"engines":{"node":">=22.12.0"},"peerDependencies":{"@rspack/core":">=2.0.0-0","@vitejs/plugin-rsc":">=0.5.20","react":">=18.0.0 || >=19.0.0","react-dom":">=18.0.0 || >=19.0.0","react-server-dom-rspack":">=0.0.2"},"peerDependenciesMeta":{"@rspack/core":{"optional":true},"@vitejs/plugin-rsc":{"optional":true},"react-server-dom-rspack":{"optional":true}}},"@tanstack/react-start-server@1.166.52":{"resolution":{"integrity":"sha512-46Gx+byIndYywUtyna5h3qatHipJkPFqo/miexfuYPgeVAI6ypQzsw7wxF194H6VAP43m2q+fdLPBXStufoOGw=="},"engines":{"node":">=22.12.0"},"peerDependencies":{"react":">=18.0.0 || >=19.0.0","react-dom":">=18.0.0 || >=19.0.0"}},"@tanstack/react-start@1.167.64":{"resolution":{"integrity":"sha512-gxtesUkHIZmKR/OEFAx6ifedIs7UM1cG5B/TJhcs6c/BrJpjeQIrkF9/GmWRpslaWCpo3tXA2IOxNSH49KFhoA=="},"engines":{"node":">=22.12.0"},"peerDependencies":{"@rsbuild/core":"^2.0.0","@vitejs/plugin-rsc":"*","react":">=18.0.0 || >=19.0.0","react-dom":">=18.0.0 || >=19.0.0","vite":">=7.0.0"},"peerDependenciesMeta":{"@rsbuild/core":{"optional":true},"@vitejs/plugin-rsc":{"optional":true},"vite":{"optional":true}}},"@tanstack/react-store@0.9.3":{"resolution":{"integrity":"sha512-y2iHd/N9OkoQbFJLUX1T9vbc2O9tjH0pQRgTcx1/Nz4IlwLvkgpuglXUx+mXt0g5ZDFrEeDnONPqkbfxXJKwRg=="},"peerDependencies":{"react":"^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0","react-dom":"^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"}},"@tanstack/router-core@1.169.2":{"resolution":{"integrity":"sha512-5sm0DJF1A7Mz+9gy4Gz/lLovNailK3yot4vYvz9MkBUPw26uLnhQiR8hSCYxucjE0wD6Mdlc5l+Z0/XTlZ7xHw=="},"engines":{"node":">=20.19"}},"@tanstack/router-generator@1.166.41":{"resolution":{"integrity":"sha512-XpnkVvk9AlCtw5vggJsnSx3MdKGk8Asopwy9wUFAqFAHqlrRJzV9PoZ5kGkNEJMOYYcMTriJLN4D+kyXRUJpDQ=="},"engines":{"node":">=20.19"}},"@tanstack/router-plugin@1.167.34":{"resolution":{"integrity":"sha512-hU0Cuw79Yo6FGPBB0mW9Ik8bnTzmnUKtbgbvmIzeFdK3wKBPS4+xN7kcxVaBqXfP6xR3PFkIf2SSoYsiuLjVtg=="},"engines":{"node":">=20.19"},"peerDependencies":{"@rsbuild/core":">=1.0.2 || ^2.0.0","@tanstack/react-router":"^1.169.2","vite":">=5.0.0 || >=6.0.0 || >=7.0.0 || >=8.0.0","vite-plugin-solid":"^2.11.10 || 
^3.0.0-0","webpack":">=5.92.0"},"peerDependenciesMeta":{"@rsbuild/core":{"optional":true},"@tanstack/react-router":{"optional":true},"vite":{"optional":true},"vite-plugin-solid":{"optional":true},"webpack":{"optional":true}}},"@tanstack/router-utils@1.161.8":{"resolution":{"integrity":"sha512-xyiLWEKjfBAVhauDSSjXxyf7s8elU6SM+V050sbkofvGmIIvkwPFtDsX7Gvwh14kBd6iCwAT+RiPvXTxAptY0Q=="},"engines":{"node":">=20.19"}},"@tanstack/start-client-core@1.168.2":{"resolution":{"integrity":"sha512-/bckv9k/yxY4VmSY2V2MeX7NBsS5uqGvdSPs5WIvW3Uv35DXPrdiumKXTNJeZRNRMtxrM+YfxQPjXLx3C7ykvg=="},"engines":{"node":">=22.12.0"}},"@tanstack/start-fn-stubs@1.161.6":{"resolution":{"integrity":"sha512-Y6QSlGiLga8cHfvxGGaonXIlt2bIUTVdH6AMjmpMp7+ANNCp+N96GQbjjhLye3JkaxDfP68x5iZA8NK4imgRig=="},"engines":{"node":">=22.12.0"}},"@tanstack/start-plugin-core@1.169.19":{"resolution":{"integrity":"sha512-z3/Tkytb6eRQKDnFU31QLimwrcVyDi9uHMtUQKmJkxQg+Bz85di+MxMrbnvd8XXP9OHcFlWK8HpG/HpVncZq4Q=="},"engines":{"node":">=22.12.0"},"peerDependencies":{"@rsbuild/core":"^2.0.0","vite":">=7.0.0"},"peerDependenciesMeta":{"@rsbuild/core":{"optional":true},"vite":{"optional":true}}},"@tanstack/start-server-core@1.167.30":{"resolution":{"integrity":"sha512-GC0PXzYYSEwfAOC2NxGXFUyYvfbSjVoqnIrzJsyInKd8xQxGEQaVdrebbyx9TV5cj7A5e7EJcWAsf3G3wRDQBw=="},"engines":{"node":">=22.12.0"}},"@tanstack/start-storage-context@1.166.35":{"resolution":{"integrity":"sha512-ZKDkKiorJrKwfEHjatEwRHG7EP3raJPhh6CSl4CFmHW0naIvwaW5gQcxcT8IlHtoGDLYDAjBEcSr3MZyXgqmOA=="},"engines":{"node":">=22.12.0"}},"@tanstack/store@0.9.3":{"resolution":{"integrity":"sha512-8reSzl/qGWGGVKhBoxXPMWzATSbZLZFWhwBAFO9NAyp0TxzfBP0mIrGb8CP8KrQTmvzXlR/vFPPUrHTLBGyFyw=="}},"@tanstack/virtual-file-routes@1.161.7":{"resolution":{"integrity":"sha512-olW33+Cn+bsCsZKPwEGhlkqS6w3M2slFv11JIobdnCFKMLG97oAI2kWKdx5/zsywTL8flpnoIgaZZPlQTFYhdQ=="},"engines":{"node":">=20.19"},"hasBin":true},"@testing-library/dom@10.4.1":{"resolution":{"integrity":"sha512-o4PXJQidqJl82ckFaXUeoAW+XysPLauYI43Abki5hABd853iMhitooc6znOnczgbTYmEP6U6/y1ZyKAIsvMKGg=="},"engines":{"node":">=18"}},"@testing-library/react@16.3.0":{"resolution":{"integrity":"sha512-kFSyxiEDwv1WLl2fgsq6pPBbw5aWKrsY2/noi1Id0TK0UParSF62oFQFGHXIyaG4pp2tEub/Zlel+fjjZILDsw=="},"engines":{"node":">=18"},"peerDependencies":{"@testing-library/dom":"^10.0.0","@types/react":"^18.0.0 || ^19.0.0","@types/react-dom":"^18.0.0 || ^19.0.0","react":"^18.0.0 || ^19.0.0","react-dom":"^18.0.0 || 
^19.0.0"},"peerDependenciesMeta":{"@types/react":{"optional":true},"@types/react-dom":{"optional":true}}},"@testing-library/user-event@14.6.1":{"resolution":{"integrity":"sha512-vq7fv0rnt+QTXgPxr5Hjc210p6YKq2kmdziLgnsZGgLJ9e6VAShx1pACLuRjd/AS/sr7phAR58OIIpf0LlmQNw=="},"engines":{"node":">=12","npm":">=6"},"peerDependencies":{"@testing-library/dom":">=7.21.4"}},"@tootallnate/quickjs-emscripten@0.23.0":{"resolution":{"integrity":"sha512-C5Mc6rdnsaJDjO3UpGW/CQTHtCKaYlScZTly4JIu97Jxo/odCiH0ITnDXSJPTOrEKk/ycSZ0AOgTmkDtkOsvIA=="}},"@tybys/wasm-util@0.10.2":{"resolution":{"integrity":"sha512-RoBvJ2X0wuKlWFIjrwffGw1IqZHKQqzIchKaadZZfnNpsAYp2mM0h36JtPCjNDAHGgYez/15uMBpfGwchhiMgg=="}},"@tybys/wasm-util@0.9.0":{"resolution":{"integrity":"sha512-6+7nlbMVX/PVDCwaIQ8nTOPveOcFLSt8GcXdx8hD0bt39uWxYT88uXzqTd4fTvqta7oeUJqudepapKNt2DYJFw=="}},"@types/aria-query@5.0.4":{"resolution":{"integrity":"sha512-rfT93uj5s0PRL7EzccGMs3brplhcrghnDoV26NqKhCAS1hVo+WdNsPvE/yb6ilfr5hi2MEk6d5EWJTKdxg8jVw=="}},"@types/chai@5.2.2":{"resolution":{"integrity":"sha512-8kB30R7Hwqf40JPiKhVzodJs2Qc1ZJ5zuT3uzw5Hq/dhNCl3G3l83jfpdI1e20BP348+fV7VIL/+FxaXkqBmWg=="}},"@types/chai@5.2.3":{"resolution":{"integrity":"sha512-Mw558oeA9fFbv65/y4mHtXDs9bPnFMZAL/jxdPFUpOHHIXX91mcgEHbS5Lahr+pwZFR8A7GQleRWeI6cGFC2UA=="}},"@types/cookie@0.6.0":{"resolution":{"integrity":"sha512-4Kh9a6B2bQciAhf7FSuMRRkUWecJgJu9nPnx3yzpsfXX/c50REIqpHY4C82bXP90qrLtXtkDxTZosYO3UpOwlA=="}},"@types/d3-array@3.2.1":{"resolution":{"integrity":"sha512-Y2Jn2idRrLzUfAKV2LyRImR+y4oa2AntrgID95SHJxuMUrkNXmanDSed71sRNZysveJVt1hLLemQZIady0FpEg=="}},"@types/d3-axis@3.0.6":{"resolution":{"integrity":"sha512-pYeijfZuBd87T0hGn0FO1vQ/cgLk6E1ALJjfkC0oJ8cbwkZl3TpgS8bVBLZN+2jjGgg38epgxb2zmoGtSfvgMw=="}},"@types/d3-brush@3.0.6":{"resolution":{"integrity":"sha512-nH60IZNNxEcrh6L1ZSMNA28rj27ut/2ZmI3r96Zd+1jrZD++zD3LsMIjWlvg4AYrHn/Pqz4CF3veCxGjtbqt7A=="}},"@types/d3-chord@3.0.6":{"resolution":{"integrity":"sha512-LFYWWd8nwfwEmTZG9PfQxd17HbNPksHBiJHaKuY1XeqscXacsS2tyoo6OdRsjf+NQYeB6XrNL3a25E3gH69lcg=="}},"@types/d3-color@3.1.3":{"resolution":{"integrity":"sha512-iO90scth9WAbmgv7ogoq57O9YpKmFBbmoEoCHDB2xMBY0+/KVrqAaCDyCE16dUspeOvIxFFRI+0sEtqDqy2b4A=="}},"@types/d3-contour@3.0.6":{"resolution":{"integrity":"sha512-BjzLgXGnCWjUSYGfH1cpdo41/hgdWETu4YxpezoztawmqsvCeep+8QGfiY6YbDvfgHz/DkjeIkkZVJavB4a3rg=="}},"@types/d3-delaunay@6.0.4":{"resolution":{"integrity":"sha512-ZMaSKu4THYCU6sV64Lhg6qjf1orxBthaC161plr5KuPHo3CNm8DTHiLw/5Eq2b6TsNP0W0iJrUOFscY6Q450Hw=="}},"@types/d3-dispatch@3.0.6":{"resolution":{"integrity":"sha512-4fvZhzMeeuBJYZXRXrRIQnvUYfyXwYmLsdiN7XXmVNQKKw1cM8a5WdID0g1hVFZDqT9ZqZEY5pD44p24VS7iZQ=="}},"@types/d3-drag@3.0.7":{"resolution":{"integrity":"sha512-HE3jVKlzU9AaMazNufooRJ5ZpWmLIoc90A37WU2JMmeq28w1FQqCZswHZ3xR+SuxYftzHq6WU6KJHvqxKzTxxQ=="}},"@types/d3-dsv@3.0.7":{"resolution":{"integrity":"sha512-n6QBF9/+XASqcKK6waudgL0pf/S5XHPPI8APyMLLUHd8NqouBGLsU8MgtO7NINGtPBtk9Kko/W4ea0oAspwh9g=="}},"@types/d3-ease@3.0.2":{"resolution":{"integrity":"sha512-NcV1JjO5oDzoK26oMzbILE6HW7uVXOHLQvHshBUW4UMdZGfiY6v5BeQwh9a9tCzv+CeefZQHJt5SRgK154RtiA=="}},"@types/d3-fetch@3.0.7":{"resolution":{"integrity":"sha512-fTAfNmxSb9SOWNB9IoG5c8Hg6R+AzUHDRlsXsDZsNp6sxAEOP0tkP3gKkNSO/qmHPoBFTxNrjDprVHDQDvo5aA=="}},"@types/d3-force@3.0.10":{"resolution":{"integrity":"sha512-ZYeSaCF3p73RdOKcjj+swRlZfnYpK1EbaDiYICEEp5Q6sUiqFaFQ9qgoshp5CzIyyb/yD09kD9o2zEltCexlgw=="}},"@types/d3-format@3.0.4":{"resolution":{"integrity":"sha512-fALi2aI6shfg7vM5KiR1wNJnZ7r6UuggVqtDA+xiEdPZQwy/trcQaHnwShLuLdta2rTymCNpxYTiMZX/e09F4g=="}},"@ty
pes/d3-geo@3.1.0":{"resolution":{"integrity":"sha512-856sckF0oP/diXtS4jNsiQw/UuK5fQG8l/a9VVLeSouf1/PPbBE1i1W852zVwKwYCBkFJJB7nCFTbk6UMEXBOQ=="}},"@types/d3-hierarchy@3.1.7":{"resolution":{"integrity":"sha512-tJFtNoYBtRtkNysX1Xq4sxtjK8YgoWUNpIiUee0/jHGRwqvzYxkq0hGVbbOGSz+JgFxxRu4K8nb3YpG3CMARtg=="}},"@types/d3-interpolate@3.0.4":{"resolution":{"integrity":"sha512-mgLPETlrpVV1YRJIglr4Ez47g7Yxjl1lj7YKsiMCb27VJH9W8NVM6Bb9d8kkpG/uAQS5AmbA48q2IAolKKo1MA=="}},"@types/d3-path@3.1.0":{"resolution":{"integrity":"sha512-P2dlU/q51fkOc/Gfl3Ul9kicV7l+ra934qBFXCFhrZMOL6du1TM0pm1ThYvENukyOn5h9v+yMJ9Fn5JK4QozrQ=="}},"@types/d3-polygon@3.0.2":{"resolution":{"integrity":"sha512-ZuWOtMaHCkN9xoeEMr1ubW2nGWsp4nIql+OPQRstu4ypeZ+zk3YKqQT0CXVe/PYqrKpZAi+J9mTs05TKwjXSRA=="}},"@types/d3-quadtree@3.0.6":{"resolution":{"integrity":"sha512-oUzyO1/Zm6rsxKRHA1vH0NEDG58HrT5icx/azi9MF1TWdtttWl0UIUsjEQBBh+SIkrpd21ZjEv7ptxWys1ncsg=="}},"@types/d3-random@3.0.3":{"resolution":{"integrity":"sha512-Imagg1vJ3y76Y2ea0871wpabqp613+8/r0mCLEBfdtqC7xMSfj9idOnmBYyMoULfHePJyxMAw3nWhJxzc+LFwQ=="}},"@types/d3-scale-chromatic@3.1.0":{"resolution":{"integrity":"sha512-iWMJgwkK7yTRmWqRB5plb1kadXyQ5Sj8V/zYlFGMUBbIPKQScw+Dku9cAAMgJG+z5GYDoMjWGLVOvjghDEFnKQ=="}},"@types/d3-scale@4.0.8":{"resolution":{"integrity":"sha512-gkK1VVTr5iNiYJ7vWDI+yUFFlszhNMtVeneJ6lUTKPjprsvLLI9/tgEGiXJOnlINJA8FyA88gfnQsHbybVZrYQ=="}},"@types/d3-selection@3.0.11":{"resolution":{"integrity":"sha512-bhAXu23DJWsrI45xafYpkQ4NtcKMwWnAC/vKrd2l+nxMFuvOT3XMYTIj2opv8vq8AO5Yh7Qac/nSeP/3zjTK0w=="}},"@types/d3-shape@3.1.7":{"resolution":{"integrity":"sha512-VLvUQ33C+3J+8p+Daf+nYSOsjB4GXp19/S/aGo60m9h1v6XaxjiT82lKVWJCfzhtuZ3yD7i/TPeC/fuKLLOSmg=="}},"@types/d3-time-format@4.0.3":{"resolution":{"integrity":"sha512-5xg9rC+wWL8kdDj153qZcsJ0FWiFt0J5RB6LYUNZjwSnesfblqrI/bJ1wBdJ8OQfncgbJG5+2F+qfqnqyzYxyg=="}},"@types/d3-time@3.0.4":{"resolution":{"integrity":"sha512-yuzZug1nkAAaBlBBikKZTgzCeA+k1uy4ZFwWANOfKw5z5LRhV0gNA7gNkKm7HoK+HRN0wX3EkxGk0fpbWhmB7g=="}},"@types/d3-timer@3.0.2":{"resolution":{"integrity":"sha512-Ps3T8E8dZDam6fUyNiMkekK3XUsaUEik+idO9/YjPtfj2qruF8tFBXS7XhtE4iIXBLxhmLjP3SXpLhVf21I9Lw=="}},"@types/d3-transition@3.0.9":{"resolution":{"integrity":"sha512-uZS5shfxzO3rGlu0cC3bjmMFKsXv+SmZZcgp0KD22ts4uGXp5EVYGzu/0YdwZeKmddhcAccYtREJKkPfXkZuCg=="}},"@types/d3-zoom@3.0.8":{"resolution":{"integrity":"sha512-iqMC4/YlFCSlO8+2Ii1GGGliCAY4XdeG748w5vQUbevlbDu0zSjH/+jojorQVBK/se0j6DUFNPBGSqD3YWYnDw=="}},"@types/d3@7.4.3":{"resolution":{"integrity":"sha512-lZXZ9ckh5R8uiFVt8ogUNf+pIrK4EsWrx2Np75WvF/eTpJ0FMHNhjXk8CKEx/+gpHbNQyJWehbFaTvqmHWB3ww=="}},"@types/debug@4.1.12":{"resolution":{"integrity":"sha512-vIChWdVG3LG1SMxEvI/AK+FWJthlrqlTu7fbrlywTkkaONwk/UAGaULXRlf8vkzFBLVm0zkMdCquhL5aOjhXPQ=="}},"@types/deep-eql@4.0.2":{"resolution":{"integrity":"sha512-c9h9dVVMigMPc4bwTvC5dxqtqJZwQPePsWjPlpSOnojbor6pGqdk541lfA7AqFQr5pB1BRdq0juY9db81BwyFw=="}},"@types/eslint-scope@3.7.7":{"resolution":{"integrity":"sha512-MzMFlSLBqNF2gcHWO0G1vP/YQyfvrxZ0bF+u7mzUdZ1/xK4A4sru+nraZz5i3iEIk1l1uyicaDVTB4QbbEkAYg=="}},"@types/eslint@9.6.1":{"resolution":{"integrity":"sha512-FXx2pKgId/WyYo2jXw63kk7/+TY7u7AziEJxJAnSFzHlqTAS3Ync6SvgYAN/k4/PQpnnVuzoMuVnByKK2qp0ag=="}},"@types/estree@1.0.8":{"resolution":{"integrity":"sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w=="}},"@types/estree@1.0.9":{"resolution":{"integrity":"sha512-GhdPgy1el4/ImP05X05Uw4cw2/M93BCUmnEvWZNStlCzEKME4Fkk+YpoA5OiHNQmoS7Cafb8Xa3Pya8m1Qrzeg=="}},"@types/geojson@7946.0.15":{"resolution":{"integrity":"sha5
12-9oSxFzDCT2Rj6DfcHF8G++jxBKS7mBqXl5xrRW+Kbvjry6Uduya2iiwqHPhVXpasAVMBYKkEPGgKhd3+/HZ6xA=="}},"@types/hast@3.0.4":{"resolution":{"integrity":"sha512-WPs+bbQw5aCj+x6laNGWLH3wviHtoCv/P3+otBhbOhJgG8qtpdAMlTCxLtsTWA7LH1Oh/bFCHsBn0TPS5m30EQ=="}},"@types/json-schema@7.0.15":{"resolution":{"integrity":"sha512-5+fP8P8MFNC+AyZCDxrB2pkZFPGzqQWUzpSeuuVLvm8VMcorNYavBqoFcxK8bQz4Qsbn4oUEEem4wDLfcysGHA=="}},"@types/mdast@4.0.4":{"resolution":{"integrity":"sha512-kGaNbPh1k7AFzgpud/gMdvIm5xuECykRR+JnWKQno9TAXVa6WIVCGTPvYGekIDL4uwCZQSYbUxNBSb1aUo79oA=="}},"@types/ms@2.1.0":{"resolution":{"integrity":"sha512-GsCCIZDE/p3i96vtEqx+7dBUGXrc7zeSK3wwPHIaRThS+9OhWIXRqzs4d6k1SVU8g91DrNRWxWUGhp5KXQb2VA=="}},"@types/node@12.20.55":{"resolution":{"integrity":"sha512-J8xLz7q2OFulZ2cyGTLE1TbbZcjpno7FaN6zdJNrgAdrJ+DZzh/uFR6YrTb4C+nXakvud8Q4+rbhoIWlYQbUFQ=="}},"@types/node@20.19.39":{"resolution":{"integrity":"sha512-orrrD74MBUyK8jOAD/r0+lfa1I2MO6I+vAkmAWzMYbCcgrN4lCrmK52gRFQq/JRxfYPfonkr4b0jcY7Olqdqbw=="}},"@types/node@22.15.33":{"resolution":{"integrity":"sha512-wzoocdnnpSxZ+6CjW4ADCK1jVmd1S/J3ArNWfn8FDDQtRm8dkDg7TA+mvek2wNrfCgwuZxqEOiB9B1XCJ6+dbw=="}},"@types/node@22.19.17":{"resolution":{"integrity":"sha512-wGdMcf+vPYM6jikpS/qhg6WiqSV/OhG+jeeHT/KlVqxYfD40iYJf9/AE1uQxVWFvU7MipKRkRv8NSHiCGgPr8Q=="}},"@types/node@24.10.2":{"resolution":{"integrity":"sha512-WOhQTZ4G8xZ1tjJTvKOpyEVSGgOTvJAfDK3FNFgELyaTpzhdgHVHeqW8V+UJvzF5BT+/B54T/1S2K6gd9c7bbA=="}},"@types/react-dom@19.2.3":{"resolution":{"integrity":"sha512-jp2L/eY6fn+KgVVQAOqYItbF0VY/YApe5Mz2F0aykSO8gx31bYCZyvSeYxCHKvzHG5eZjc+zyaS5BrBWya2+kQ=="},"peerDependencies":{"@types/react":"^19.2.0"}},"@types/react@19.2.7":{"resolution":{"integrity":"sha512-MWtvHrGZLFttgeEj28VXHxpmwYbor/ATPYbBfSFZEIRK0ecCFLl2Qo55z52Hss+UV9CRN7trSeq1zbgx7YDWWg=="}},"@types/sinonjs__fake-timers@8.1.5":{"resolution":{"integrity":"sha512-mQkU2jY8jJEF7YHjHvsQO8+3ughTL1mcnn96igfhONmR+fUPSKIkefQYpSe8bsly2Ep7oQbn/6VG5/9/0qcArQ=="}},"@types/statuses@2.0.6":{"resolution":{"integrity":"sha512-xMAgYwceFhRA2zY+XbEA7mxYbA093wdiW8Vu6gZPGWy9cmOyU9XesH1tNcEWsKFd5Vzrqx5T3D38PWx1FIIXkA=="}},"@types/tough-cookie@4.0.5":{"resolution":{"integrity":"sha512-/Ad8+nIOV7Rl++6f1BdKxFSMgmoqEoYbHRpPcx3JEfv8VRsQe9Z4mCXeJBzxs7mbHY/XOZZuXlRNfhpVPbs6ZA=="}},"@types/trusted-types@2.0.7":{"resolution":{"integrity":"sha512-ScaPdn1dQczgbl0QFTeTOmVHFULt394XJgOQNoyVhZ6r2vLnMLJfBPd53SB52T/3G36VI1/g2MZaX0cwDuXsfw=="}},"@types/unist@3.0.3":{"resolution":{"integrity":"sha512-ko/gIFJRv177XgZsZcBwnqJN5x/Gien8qNOn0D5bQU/zAzVf9Zt3BlcUiLqhV9y4ARk0GbT3tnUiPNgnTXzc/Q=="}},"@types/whatwg-mimetype@3.0.2":{"resolution":{"integrity":"sha512-c2AKvDT8ToxLIOUlN51gTiHXflsfIFisS4pO7pDPoKouJCESkhZnEy623gwP9laCy5lnLDAw1vAzu2vM2YLOrA=="}},"@types/which@2.0.2":{"resolution":{"integrity":"sha512-113D3mDkZDjo+EeUEHCFy0qniNc1ZpecGiAU7WSo7YDoSzolZIQKpYFHrPpjkB2nuyahcKfrmLXeQlh7gqJYdw=="}},"@types/ws@8.18.1":{"resolution":{"integrity":"sha512-ThVF6DCVhA8kUGy+aazFQ4kXQ7E1Ty7A3ypFOe0IcJV8O/M511G99AW24irKrW56Wt44yG9+ij8FaqoBGkuBXg=="}},"@types/yauzl@2.10.3":{"resolution":{"integrity":"sha512-oJoftv0LSuaDZE3Le4DbKX+KS9G36NzOeSap90UIK0yMA/NhKJhqlSGtNDORNRaIbQfzjXDrQa0ytJ6mNRGz/Q=="}},"@ungap/structured-clone@1.2.1":{"resolution":{"integrity":"sha512-fEzPV3hSkSMltkw152tJKNARhOupqbH96MZWyRjNaYZOMIzbrTeQDG+MTc6Mr2pgzFQzFxAfmhGDNP5QK++2ZA=="},"deprecated":"Potential CWE-502 - Update to 1.3.1 or 
higher"},"@vitejs/plugin-react@6.0.1":{"resolution":{"integrity":"sha512-l9X/E3cDb+xY3SWzlG1MOGt2usfEHGMNIaegaUGFsLkb3RCn/k8/TOXBcab+OndDI4TBtktT8/9BwwW8Vi9KUQ=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"peerDependencies":{"@rolldown/plugin-babel":"^0.1.7 || ^0.2.0","babel-plugin-react-compiler":"^1.0.0","vite":"^8.0.0"},"peerDependenciesMeta":{"@rolldown/plugin-babel":{"optional":true},"babel-plugin-react-compiler":{"optional":true}}},"@vitest/browser@3.2.4":{"resolution":{"integrity":"sha512-tJxiPrWmzH8a+w9nLKlQMzAKX/7VjFs50MWgcAj7p9XQ7AQ9/35fByFYptgPELyLw+0aixTnC4pUWV+APcZ/kw=="},"peerDependencies":{"playwright":"*","safaridriver":"*","vitest":"3.2.4","webdriverio":"^7.0.0 || ^8.0.0 || ^9.0.0"},"peerDependenciesMeta":{"playwright":{"optional":true},"safaridriver":{"optional":true},"webdriverio":{"optional":true}}},"@vitest/browser@4.1.5":{"resolution":{"integrity":"sha512-iCDGI8c4yg+xmjUg2VsygdAUSIIB4x5Rht/P68OXy1hPELKXHDkzh87lkuTcdYmemRChDkEpB426MmDjzC0ziA=="},"peerDependencies":{"vitest":"4.1.5"}},"@vitest/coverage-v8@3.2.4":{"resolution":{"integrity":"sha512-EyF9SXU6kS5Ku/U82E259WSnvg6c8KTjppUncuNdm5QHpe17mwREHnjDzozC8x9MZ0xfBUFSaLkRv4TMA75ALQ=="},"peerDependencies":{"@vitest/browser":"3.2.4","vitest":"3.2.4"},"peerDependenciesMeta":{"@vitest/browser":{"optional":true}}},"@vitest/coverage-v8@4.1.5":{"resolution":{"integrity":"sha512-38C0/Ddb7HcRG0Z4/DUem8x57d2p9jYgp18mkaYswEOQBGsI1CG4f/hjm0ZCeaJfWhSZ4k7jgs29V1Zom7Ki9A=="},"peerDependencies":{"@vitest/browser":"4.1.5","vitest":"4.1.5"},"peerDependenciesMeta":{"@vitest/browser":{"optional":true}}},"@vitest/expect@3.2.4":{"resolution":{"integrity":"sha512-Io0yyORnB6sikFlt8QW5K7slY4OjqNX9jmJQ02QDda8lyM6B5oNgVWoSoKPac8/kgnCUzuHQKrSLtu/uOqqrig=="}},"@vitest/expect@4.0.18":{"resolution":{"integrity":"sha512-8sCWUyckXXYvx4opfzVY03EOiYVxyNrHS5QxX3DAIi5dpJAAkyJezHCP77VMX4HKA2LDT/Jpfo8i2r5BE3GnQQ=="}},"@vitest/expect@4.1.5":{"resolution":{"integrity":"sha512-PWBaRY5JoKuRnHlUHfpV/KohFylaDZTupcXN1H9vYryNLOnitSw60Mw9IAE2r67NbwwzBw/Cc/8q9BK3kIX8Kw=="}},"@vitest/mocker@3.2.4":{"resolution":{"integrity":"sha512-46ryTE9RZO/rfDd7pEqFl7etuyzekzEhUbTW3BvmeO/BcCMEgq59BKhek3dXDWgAj4oMK6OZi+vRr1wPW6qjEQ=="},"peerDependencies":{"msw":"^2.4.9","vite":"^5.0.0 || ^6.0.0 || ^7.0.0-0"},"peerDependenciesMeta":{"msw":{"optional":true},"vite":{"optional":true}}},"@vitest/mocker@4.0.18":{"resolution":{"integrity":"sha512-HhVd0MDnzzsgevnOWCBj5Otnzobjy5wLBe4EdeeFGv8luMsGcYqDuFRMcttKWZA5vVO8RFjexVovXvAM4JoJDQ=="},"peerDependencies":{"msw":"^2.4.9","vite":"^6.0.0 || ^7.0.0-0"},"peerDependenciesMeta":{"msw":{"optional":true},"vite":{"optional":true}}},"@vitest/mocker@4.1.5":{"resolution":{"integrity":"sha512-/x2EmFC4mT4NNzqvC3fmesuV97w5FC903KPmey4gsnJiMQ3Be1IlDKVaDaG8iqaLFHqJ2FVEkxZk5VmeLjIItw=="},"peerDependencies":{"msw":"^2.4.9","vite":"^6.0.0 || ^7.0.0 || 
^8.0.0"},"peerDependenciesMeta":{"msw":{"optional":true},"vite":{"optional":true}}},"@vitest/pretty-format@3.2.4":{"resolution":{"integrity":"sha512-IVNZik8IVRJRTr9fxlitMKeJeXFFFN0JaB9PHPGQ8NKQbGpfjlTx9zO4RefN8gp7eqjNy8nyK3NZmBzOPeIxtA=="}},"@vitest/pretty-format@4.0.18":{"resolution":{"integrity":"sha512-P24GK3GulZWC5tz87ux0m8OADrQIUVDPIjjj65vBXYG17ZeU3qD7r+MNZ1RNv4l8CGU2vtTRqixrOi9fYk/yKw=="}},"@vitest/pretty-format@4.1.5":{"resolution":{"integrity":"sha512-7I3q6l5qr03dVfMX2wCo9FxwSJbPdwKjy2uu/YPpU3wfHvIL4QHwVRp57OfGrDFeUJ8/8QdfBKIV12FTtLn00g=="}},"@vitest/runner@3.2.4":{"resolution":{"integrity":"sha512-oukfKT9Mk41LreEW09vt45f8wx7DordoWUZMYdY/cyAk7w5TWkTRCNZYF7sX7n2wB7jyGAl74OxgwhPgKaqDMQ=="}},"@vitest/runner@4.0.18":{"resolution":{"integrity":"sha512-rpk9y12PGa22Jg6g5M3UVVnTS7+zycIGk9ZNGN+m6tZHKQb7jrP7/77WfZy13Y/EUDd52NDsLRQhYKtv7XfPQw=="}},"@vitest/runner@4.1.5":{"resolution":{"integrity":"sha512-2D+o7Pr82IEO46YPpoA/YU0neeyr6FTerQb5Ro7BUnBuv6NQtT/kmVnczngiMEBhzgqz2UZYl5gArejsyERDSQ=="}},"@vitest/snapshot@3.2.4":{"resolution":{"integrity":"sha512-dEYtS7qQP2CjU27QBC5oUOxLE/v5eLkGqPE0ZKEIDGMs4vKWe7IjgLOeauHsR0D5YuuycGRO5oSRXnwnmA78fQ=="}},"@vitest/snapshot@4.0.18":{"resolution":{"integrity":"sha512-PCiV0rcl7jKQjbgYqjtakly6T1uwv/5BQ9SwBLekVg/EaYeQFPiXcgrC2Y7vDMA8dM1SUEAEV82kgSQIlXNMvA=="}},"@vitest/snapshot@4.1.5":{"resolution":{"integrity":"sha512-zypXEt4KH/XgKGPUz4eC2AvErYx0My5hfL8oDb1HzGFpEk1P62bxSohdyOmvz+d9UJwanI68MKwr2EquOaOgMQ=="}},"@vitest/spy@3.2.4":{"resolution":{"integrity":"sha512-vAfasCOe6AIK70iP5UD11Ac4siNUNJ9i/9PZ3NKx07sG6sUxeag1LWdNrMWeKKYBLlzuK+Gn65Yd5nyL6ds+nw=="}},"@vitest/spy@4.0.18":{"resolution":{"integrity":"sha512-cbQt3PTSD7P2OARdVW3qWER5EGq7PHlvE+QfzSC0lbwO+xnt7+XH06ZzFjFRgzUX//JmpxrCu92VdwvEPlWSNw=="}},"@vitest/spy@4.1.5":{"resolution":{"integrity":"sha512-2lNOsh6+R2Idnf1TCZqSwYlKN2E/iDlD8sgU59kYVl+OMDmvldO1VDk39smRfpUNwYpNRVn3w4YfuC7KfbBnkQ=="}},"@vitest/utils@3.2.4":{"resolution":{"integrity":"sha512-fB2V0JFrQSMsCo9HiSq3Ezpdv4iYaXRG1Sx8edX3MwxfyNn83mKiGzOcH+Fkxt4MHxr3y42fQi1oeAInqgX2QA=="}},"@vitest/utils@4.0.18":{"resolution":{"integrity":"sha512-msMRKLMVLWygpK3u2Hybgi4MNjcYJvwTb0Ru09+fOyCXIgT5raYP041DRRdiJiI3k/2U6SEbAETB3YtBrUkCFA=="}},"@vitest/utils@4.1.5":{"resolution":{"integrity":"sha512-76wdkrmfXfqGjueGgnb45ITPyUi1ycZ4IHgC2bhPDUfWHklY/q3MdLOAB+TF1e6xfl8NxNY0ZYaPCFNWSsw3Ug=="}},"@wdio/config@9.1.3":{"resolution":{"integrity":"sha512-fozjb5Jl26QqQoZ2lJc8uZwzK2iKKmIfNIdNvx5JmQt78ybShiPuWWgu/EcHYDvAiZwH76K59R1Gp4lNmmEDew=="},"engines":{"node":">=18.20.0"}},"@wdio/logger@8.38.0":{"resolution":{"integrity":"sha512-kcHL86RmNbcQP+Gq/vQUGlArfU6IIcbbnNp32rRIraitomZow+iEoc519rdQmSVusDozMS5DZthkgDdxK+vz6Q=="},"engines":{"node":"^16.13 || 
>=18"}},"@wdio/logger@9.1.3":{"resolution":{"integrity":"sha512-cumRMK/gE1uedBUw3WmWXOQ7HtB6DR8EyKQioUz2P0IJtRRpglMBdZV7Svr3b++WWawOuzZHMfbTkJQmaVt8Gw=="},"engines":{"node":">=18.20.0"}},"@wdio/protocols@9.2.0":{"resolution":{"integrity":"sha512-lSdKCwLtqMxSIW+cl8au21GlNkvmLNGgyuGYdV/lFdWflmMYH1zusruM6Km6Kpv2VUlWySjjGknYhe7XVTOeMw=="}},"@wdio/repl@9.0.8":{"resolution":{"integrity":"sha512-3iubjl4JX5zD21aFxZwQghqC3lgu+mSs8c3NaiYYNCC+IT5cI/8QuKlgh9s59bu+N3gG988jqMJeCYlKuUv/iw=="},"engines":{"node":">=18.20.0"}},"@wdio/types@9.1.3":{"resolution":{"integrity":"sha512-oQrzLQBqn/+HXSJJo01NEfeKhzwuDdic7L8PDNxv5ySKezvmLDYVboQfoSDRtpAdfAZCcxuU9L4Jw7iTf6WV3g=="},"engines":{"node":">=18.20.0"}},"@wdio/utils@9.1.3":{"resolution":{"integrity":"sha512-dYeOzq9MTh8jYRZhzo/DYyn+cKrhw7h0/5hgyXkbyk/wHwF/uLjhATPmfaCr9+MARSEdiF7wwU8iRy/V0jfsLg=="},"engines":{"node":">=18.20.0"}},"@webassemblyjs/ast@1.14.1":{"resolution":{"integrity":"sha512-nuBEDgQfm1ccRp/8bCQrx1frohyufl4JlbMMZ4P1wpeOfDhF6FQkxZJ1b/e+PLwr6X1Nhw6OLme5usuBWYBvuQ=="}},"@webassemblyjs/floating-point-hex-parser@1.13.2":{"resolution":{"integrity":"sha512-6oXyTOzbKxGH4steLbLNOu71Oj+C8Lg34n6CqRvqfS2O71BxY6ByfMDRhBytzknj9yGUPVJ1qIKhRlAwO1AovA=="}},"@webassemblyjs/helper-api-error@1.13.2":{"resolution":{"integrity":"sha512-U56GMYxy4ZQCbDZd6JuvvNV/WFildOjsaWD3Tzzvmw/mas3cXzRJPMjP83JqEsgSbyrmaGjBfDtV7KDXV9UzFQ=="}},"@webassemblyjs/helper-buffer@1.14.1":{"resolution":{"integrity":"sha512-jyH7wtcHiKssDtFPRB+iQdxlDf96m0E39yb0k5uJVhFGleZFoNw1c4aeIcVUPPbXUVJ94wwnMOAqUHyzoEPVMA=="}},"@webassemblyjs/helper-numbers@1.13.2":{"resolution":{"integrity":"sha512-FE8aCmS5Q6eQYcV3gI35O4J789wlQA+7JrqTTpJqn5emA4U2hvwJmvFRC0HODS+3Ye6WioDklgd6scJ3+PLnEA=="}},"@webassemblyjs/helper-wasm-bytecode@1.13.2":{"resolution":{"integrity":"sha512-3QbLKy93F0EAIXLh0ogEVR6rOubA9AoZ+WRYhNbFyuB70j3dRdwH9g+qXhLAO0kiYGlg3TxDV+I4rQTr/YNXkA=="}},"@webassemblyjs/helper-wasm-section@1.14.1":{"resolution":{"integrity":"sha512-ds5mXEqTJ6oxRoqjhWDU83OgzAYjwsCV8Lo/N+oRsNDmx/ZDpqalmrtgOMkHwxsG0iI//3BwWAErYRHtgn0dZw=="}},"@webassemblyjs/ieee754@1.13.2":{"resolution":{"integrity":"sha512-4LtOzh58S/5lX4ITKxnAK2USuNEvpdVV9AlgGQb8rJDHaLeHciwG4zlGr0j/SNWlr7x3vO1lDEsuePvtcDNCkw=="}},"@webassemblyjs/leb128@1.13.2":{"resolution":{"integrity":"sha512-Lde1oNoIdzVzdkNEAWZ1dZ5orIbff80YPdHx20mrHwHrVNNTjNr8E3xz9BdpcGqRQbAEa+fkrCb+fRFTl/6sQw=="}},"@webassemblyjs/utf8@1.13.2":{"resolution":{"integrity":"sha512-3NQWGjKTASY1xV5m7Hr0iPeXD9+RDobLll3T9d2AO+g3my8xy5peVyjSag4I50mR1bBSN/Ct12lo+R9tJk0NZQ=="}},"@webassemblyjs/wasm-edit@1.14.1":{"resolution":{"integrity":"sha512-RNJUIQH/J8iA/1NzlE4N7KtyZNHi3w7at7hDjvRNm5rcUXa00z1vRz3glZoULfJ5mpvYhLybmVcwcjGrC1pRrQ=="}},"@webassemblyjs/wasm-gen@1.14.1":{"resolution":{"integrity":"sha512-AmomSIjP8ZbfGQhumkNvgC33AY7qtMCXnN6bL2u2Js4gVCg8fp735aEiMSBbDR7UQIj90n4wKAFUSEd0QN2Ukg=="}},"@webassemblyjs/wasm-opt@1.14.1":{"resolution":{"integrity":"sha512-PTcKLUNvBqnY2U6E5bdOQcSM+oVP/PmrDY9NzowJjislEjwP/C4an2303MCVS2Mg9d3AJpIGdUFIQQWbPds0Sw=="}},"@webassemblyjs/wasm-parser@1.14.1":{"resolution":{"integrity":"sha512-JLBl+KZ0R5qB7mCnud/yyX08jWFw5MsoalJ1pQ4EdFlgj9VdXKGuENGsiCIjegI1W7p91rUlcB/LB5yRJKNTcQ=="}},"@webassemblyjs/wast-printer@1.14.1":{"resolution":{"integrity":"sha512-kPSSXE6De1XOR820C90RIo2ogvZG+c3KiHzqUoO/F34Y2shGzesfqv7o57xrxovZJH/MetF5UjroJ/R/3isoiw=="}},"@xtuc/ieee754@1.2.0":{"resolution":{"integrity":"sha512-DX8nKgqcGwsc0eJSqYt5lwP4DH5FlHnmuWWBRy7X0NcaGR0ZtuyeESgMwTYVEtxmsNGY+qit4QYT/MIYTOTPeA=="}},"@xtuc/long@4.2.2":{"resolution":{"integrity":"sha512-NuHqBY1PB/D8xU6s/thBgOAiAP
7HOYDQ32+BFZILJ8ivkUkAHQnWfn6WhL79Owj1qmUnoN/YPhktdIoucipkAQ=="}},"@yarnpkg/lockfile@1.1.0":{"resolution":{"integrity":"sha512-GpSwvyXOcOOlV70vbnzjj4fW5xW/FdUF6nQEt1ENy7m4ZCczi1+/buVUPAqmGfqznsORNFzUMjctTIp8a9tuCQ=="}},"@yarnpkg/parsers@3.0.2":{"resolution":{"integrity":"sha512-/HcYgtUSiJiot/XWGLOlGxPYUG65+/31V8oqk17vZLW1xlCoR4PampyePljOxY2n8/3jz9+tIFzICsyGujJZoA=="},"engines":{"node":">=18.12.0"}},"@zip.js/zip.js@2.8.26":{"resolution":{"integrity":"sha512-RQ4h9F6DOiHxpdocUDrOl6xBM+yOtz+LkUol47AVWcfebGBDpZ7w7Xvz9PS24JgXvLGiXXzSAfdCdVy1tPlaFA=="},"engines":{"bun":">=0.7.0","deno":">=1.0.0","node":">=18.0.0"}},"@zkochan/js-yaml@0.0.7":{"resolution":{"integrity":"sha512-nrUSn7hzt7J6JWgWGz78ZYI8wj+gdIJdk0Ynjpp8l+trkn58Uqsf6RYrYkEK+3X18EX+TNdtJI0WxAtc+L84SQ=="},"hasBin":true},"abort-controller@3.0.0":{"resolution":{"integrity":"sha512-h8lQ8tacZYnR3vNQTgibj+tODHI5/+l06Au2Pcriv/Gmet0eaj4TwWH41sO9wnHDiQsEj19q0drzdWdeAHtweg=="},"engines":{"node":">=6.5"}},"acorn@8.16.0":{"resolution":{"integrity":"sha512-UVJyE9MttOsBQIDKw1skb9nAwQuR5wuGD3+82K6JgJlm/Y+KI92oNsMNGZCYdDsVtRHSak0pcV5Dno5+4jh9sw=="},"engines":{"node":">=0.4.0"},"hasBin":true},"agent-base@7.1.3":{"resolution":{"integrity":"sha512-jRR5wdylq8CkOe6hei19GGZnxM6rBGwFl3Bg0YItGDimvjGtAvdZk4Pu6Cl4u4Igsws4a1fd1Vq3ezrhn4KmFw=="},"engines":{"node":">= 14"}},"agent-base@7.1.4":{"resolution":{"integrity":"sha512-MnA+YT8fwfJPgBx3m60MNqakm30XOkyIoH1y6huTQvC0PwZG7ki8NacLBcrPbNoo8vEZy7Jpuk7+jMO+CUovTQ=="},"engines":{"node":">= 14"}},"ajv-formats@2.1.1":{"resolution":{"integrity":"sha512-Wx0Kx52hxE7C18hkMEggYlEifqWZtYaRgouJor+WMdPnQyEK13vgEWyVNup7SoeeoLMsr4kf5h6dOW11I15MUA=="},"peerDependencies":{"ajv":"^8.0.0"},"peerDependenciesMeta":{"ajv":{"optional":true}}},"ajv-keywords@5.1.0":{"resolution":{"integrity":"sha512-YCS/JNFAUyr5vAuhk1DWm1CBxRHW9LbJ2ozWeemrIqpbsqKjHVxYPyi5GC0rjZIT5JxJ3virVTS8wk4i/Z+krw=="},"peerDependencies":{"ajv":"^8.8.2"}},"ajv@8.17.1":{"resolution":{"integrity":"sha512-B/gBuNg5SiMTrPkC+A2+cW0RszwxYmn6VYxB/inlBStS5nx6xHIt/ehKRhIMhqusl7a8LjQoZnjCs5vhwxOQ1g=="}},"ajv@8.20.0":{"resolution":{"integrity":"sha512-Thbli+OlOj+iMPYFBVBfJ3OmCAnaSyNn4M1vz9T6Gka5Jt9ba/HIR56joy65tY6kx/FCF5VXNB819Y7/GUrBGA=="}},"ansi-colors@4.1.3":{"resolution":{"integrity":"sha512-/6w/C21Pm1A7aZitlI5Ni/2J6FFQN8i1Cvz3kHABAAbw93v/NlvKdVOqz7CCWz/3iv/JplRSEEZ83XION15ovw=="},"engines":{"node":">=6"}},"ansi-regex@5.0.1":{"resolution":{"integrity":"sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="},"engines":{"node":">=8"}},"ansi-regex@6.1.0":{"resolution":{"integrity":"sha512-7HSX4QQb4CspciLpVFwyRe79O3xsIZDDLER21kERQ71oaPodF8jL725AgJMFAYbooIqolJoRLuM81SpeUkpkvA=="},"engines":{"node":">=12"}},"ansi-regex@6.2.2":{"resolution":{"integrity":"sha512-Bq3SmSpyFHaWjPk8If9yc6svM8c56dB5BAtW4Qbw5jHTwwXXcTLoRMkpDJp6VL0XzlWaCHTXrkFURMYmD0sLqg=="},"engines":{"node":">=12"}},"ansi-styles@4.3.0":{"resolution":{"integrity":"sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg=="},"engines":{"node":">=8"}},"ansi-styles@5.2.0":{"resolution":{"integrity":"sha512-Cxwpt2SfTzTtXcfOlzGEee8O+c+MmUgGrNiBcXnuWxuFJHe6a5Hz7qwhwe5OgaSYI0IJvkLqWX1ASG+cJOkEiA=="},"engines":{"node":">=10"}},"ansi-styles@6.2.1":{"resolution":{"integrity":"sha512-bN798gFfQX+viw3R7yrGWRqnrN2oRkEkUjjl4JNn4E8GxxbjtG3FbrEIIY3l8/hrwUwIeCZvi4QuOTP4MErVug=="},"engines":{"node":">=12"}},"ansis@4.1.0":{"resolution":{"integrity":"sha512-BGcItUBWSMRgOCe+SVZJ+S7yTRG0eGt9cXAHev72yuGcY23hnLA7Bky5L/xLyPINoSN95geovfBkqoTlNZYa7w=="},"engines":{"node":">=1
4"}},"anymatch@3.1.3":{"resolution":{"integrity":"sha512-KMReFUr0B4t+D+OBkjR3KYqvocp2XaSzO55UcB6mgQMd3KbcE+mWTyvVV7D/zsdEbNnV6acZUutkiHQXvTr1Rw=="},"engines":{"node":">= 8"}},"archiver-utils@5.0.2":{"resolution":{"integrity":"sha512-wuLJMmIBQYCsGZgYLTy5FIB2pF6Lfb6cXMSF8Qywwk3t20zWnAi7zLcQFdKQmIB8wyZpY5ER38x08GbwtR2cLA=="},"engines":{"node":">= 14"}},"archiver@7.0.1":{"resolution":{"integrity":"sha512-ZcbTaIqJOfCc03QwD468Unz/5Ir8ATtvAHsK+FdXbDIbGfihqh9mrvdcYunQzqn4HrvWWaFyaxJhGZagaJJpPQ=="},"engines":{"node":">= 14"}},"argparse@1.0.10":{"resolution":{"integrity":"sha512-o5Roy6tNG4SL/FOkCAN6RzjiakZS25RLYFrcMttJqbdd8BWrnA+fGz57iN5Pb06pvBGvl5gQ0B48dJlslXvoTg=="}},"argparse@2.0.1":{"resolution":{"integrity":"sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q=="}},"aria-query@5.3.0":{"resolution":{"integrity":"sha512-b0P0sZPKtyu8HkeRAfCq0IfURZK+SuwMjY1UXGBU27wpAiTwQAIlq56IbIO+ytk/JjS1fMR14ee5WBBfKi5J6A=="}},"aria-query@5.3.2":{"resolution":{"integrity":"sha512-COROpnaoap1E2F000S62r6A60uHZnmlvomhfyT2DlTcrY1OrBKn2UhH7qn5wTC9zMvD0AY7csdPSNwKP+7WiQw=="},"engines":{"node":">= 0.4"}},"array-union@2.1.0":{"resolution":{"integrity":"sha512-HGyxoOTYUyCM6stUe6EJgnd4EoewAI7zMdfqO+kGjnlZmBDz/cR5pf8r/cR4Wq60sL/p0IkcjUEEPwS3GFrIyw=="},"engines":{"node":">=8"}},"assertion-error@2.0.1":{"resolution":{"integrity":"sha512-Izi8RQcffqCeNVgFigKli1ssklIbpHnCYc6AknXGYoB6grJqyeby7jv12JUQgmTAnIDnbck1uxksT4dzN3PWBA=="},"engines":{"node":">=12"}},"ast-types@0.13.4":{"resolution":{"integrity":"sha512-x1FCFnFifvYDDzTaLII71vG5uvDwgtmDTEVWAxrgeiR8VjMONcCXJx7E+USjDtHlwFmt9MysbqgF9b9Vjr6w+w=="},"engines":{"node":">=4"}},"ast-v8-to-istanbul@0.3.4":{"resolution":{"integrity":"sha512-cxrAnZNLBnQwBPByK4CeDaw5sWZtMilJE/Q3iDA0aamgaIVNDF9T6K2/8DfYDZEejZ2jNnDrG9m8MY72HFd0KA=="}},"ast-v8-to-istanbul@1.0.0":{"resolution":{"integrity":"sha512-1fSfIwuDICFA4LKkCzRPO7F0hzFf0B7+Xqrl27ynQaa+Rh0e1Es0v6kWHPott3lU10AyAr7oKHa65OppjLn3Rg=="}},"async@3.2.6":{"resolution":{"integrity":"sha512-htCUDlxyyCLMgaM3xXg0C0LW2xqfuQ6p05pCEIsXuyQ+a1koYKTuBMzRNwmybfLgvJDMd0r1LTn4+E0Ti6C2AA=="}},"asynckit@0.4.0":{"resolution":{"integrity":"sha512-Oei9OH4tRh0YqU3GxhX79dM/mwVgvbZJaSNaRk+bshkj0S5cfHcgYakreBjrHwatXKbz+IoIdYLxrKim2MjW0Q=="}},"axios@1.11.0":{"resolution":{"integrity":"sha512-1Lx3WLFQWm3ooKDYZD1eXmoGO9fxYQjrycfHFC8P0sCfQVXyROp0p9PFWBehewBOdCwHc+f/b8I0fMto5eSfwA=="}},"b4a@1.8.1":{"resolution":{"integrity":"sha512-aiqre1Nr0B/6DgE2N5vwTc+2/oQZ4Wh1t4NznYY4E00y8LCt6NqdRv81so00oo27D8MVKTpUa/MwUUtBLXCoDw=="},"peerDependencies":{"react-native-b4a":"*"},"peerDependenciesMeta":{"react-native-b4a":{"optional":true}}},"babel-dead-code-elimination@1.0.12":{"resolution":{"integrity":"sha512-GERT7L2TiYcYDtYk1IpD+ASAYXjKbLTDPhBtYj7X1NuRMDTMtAx9kyBenub1Ev41lo91OHCKdmP+egTDmfQ7Ig=="}},"bail@2.0.2":{"resolution":{"integrity":"sha512-0xO6mYd7JB2YesxDKplafRpsiOzPt9V02ddPCLbY1xYGPOX24NTyN50qnUxgCPcSoYMhKpAuBTjQoRZCAkUDRw=="}},"balanced-match@1.0.2":{"resolution":{"integrity":"sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw=="}},"bare-events@2.8.2":{"resolution":{"integrity":"sha512-riJjyv1/mHLIPX4RwiK+oW9/4c3TEUeORHKefKAKnZ5kyslbN+HXowtbaVEqt4IMUB7OXlfixcs6gsFeo/jhiQ=="},"peerDependencies":{"bare-abort-controller":"*"},"peerDependenciesMeta":{"bare-abort-controller":{"optional":true}}},"bare-fs@4.7.1":{"resolution":{"integrity":"sha512-WDRsyVN52eAx/lBamKD6uyw8H4228h/x0sGGGegOamM2cd7Pag88GfMQalobXI+HaEUxpCkbKQUDOQqt9wawRw=="},"engines":{"bare":">=1.16.0"},"peerDependencies":{"bare-buffer":
"*"},"peerDependenciesMeta":{"bare-buffer":{"optional":true}}},"bare-os@3.9.1":{"resolution":{"integrity":"sha512-6M5XjcnsygQNPMCMPXSK379xrJFiZ/AEMNBmFEmQW8d/789VQATvriyi5r0HYTL9TkQ26rn3kgdTG3aisbrXkQ=="},"engines":{"bare":">=1.14.0"}},"bare-path@3.0.0":{"resolution":{"integrity":"sha512-tyfW2cQcB5NN8Saijrhqn0Zh7AnFNsnczRcuWODH0eYAXBsJ5gVxAUuNr7tsHSC6IZ77cA0SitzT+s47kot8Mw=="}},"bare-stream@2.13.1":{"resolution":{"integrity":"sha512-Vp0cnjYyrEC4whYTymQ+YZi6pBpfiICZO3cfRG8sy67ZNWe951urv1x4eW1BKNngw3U+3fPYb5JQvHbCtxH7Ow=="},"peerDependencies":{"bare-abort-controller":"*","bare-buffer":"*","bare-events":"*"},"peerDependenciesMeta":{"bare-abort-controller":{"optional":true},"bare-buffer":{"optional":true},"bare-events":{"optional":true}}},"bare-url@2.4.3":{"resolution":{"integrity":"sha512-Kccpc7ACfXaxfeInfqKcZtW4pT5YBn1mesc4sCsun6sRwtbJ4h+sNOaksUpYEJUKfN65YWC6Bw2OJEFiKxq8nQ=="}},"base64-js@1.5.1":{"resolution":{"integrity":"sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA=="}},"baseline-browser-mapping@2.10.27":{"resolution":{"integrity":"sha512-zEs/ufmZoUd7WftKpKyXaT6RFxpQ5Qm9xytKRHvJfxFV9DFJkZph9RvJ1LcOUi0Z1ZVijMte65JbILeV+8QQEA=="},"engines":{"node":">=6.0.0"},"hasBin":true},"basic-ftp@5.3.1":{"resolution":{"integrity":"sha512-bopVNp6ugyA150DDuZfPFdt1KZ5a94ZDiwX4hMgZDzF+GttD80lEy8kj98kbyhLXnPvhtIo93mdnLIjpCAeeOw=="},"engines":{"node":">=10.0.0"}},"better-path-resolve@1.0.0":{"resolution":{"integrity":"sha512-pbnl5XzGBdrFU/wT4jqmJVPn2B6UHPBOhzMQkY/SPUPB6QtUXtmBHBIwCbXJol93mOpGMnQyP/+BB19q04xj7g=="},"engines":{"node":">=4"}},"better-sqlite3@12.9.0":{"resolution":{"integrity":"sha512-wqUv4Gm3toFpHDQmaKD4QhZm3g1DjUBI0yzS4UBl6lElUmXFYdTQmmEDpAFa5o8FiFiymURypEnfVHzILKaxqQ=="},"engines":{"node":"20.x || 22.x || 23.x || 24.x || 25.x"}},"bidi-js@1.0.3":{"resolution":{"integrity":"sha512-RKshQI1R3YQ+n9YJz2QQ147P66ELpa1FQEg20Dk8oW9t2KgLbpDLLp9aGZ7y8WHSshDknG0bknqGw5/tyCs5tw=="}},"binary-extensions@2.3.0":{"resolution":{"integrity":"sha512-Ceh+7ox5qe7LJuLHoY0feh3pHuUDHAcRUeyL2VYghZwfpkNIy/+8Ocg0a3UuSoYzavmylwuLWQOf3hl0jjMMIw=="},"engines":{"node":">=8"}},"bindings@1.5.0":{"resolution":{"integrity":"sha512-p2q/t/mhvuOj/UeLlV6566GD/guowlr0hHxClI0W9m7MWYkL1F0hLo+0Aexs9HSPCtR1SXQ0TD3MMKrXZajbiQ=="}},"bl@4.1.0":{"resolution":{"integrity":"sha512-1W07cM9gS6DcLperZfFSj+bWLtaPGSOHWhPiGzXmvVJbRLdG82sH/Kn8EtW1VqWVA54AKf2h5k5BbnIbwF3h6w=="}},"blake3-wasm@2.1.5":{"resolution":{"integrity":"sha512-F1+K8EbfOZE49dtoPtmxUQrpXaBIl3ICvasLh+nJta0xkz+9kF/7uet9fLnwKqhDrmj6g+6K3Tw9yQPUg2ka5g=="}},"boolbase@1.0.0":{"resolution":{"integrity":"sha512-JZOSA7Mo9sNGB8+UjSgzdLtokWAky1zbztM3WRLCbZ70/3cTANmQmOdR7y2g+J0e2WXywy1yS468tY+IruqEww=="}},"brace-expansion@2.0.2":{"resolution":{"integrity":"sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="}},"brace-expansion@2.1.0":{"resolution":{"integrity":"sha512-TN1kCZAgdgweJhWWpgKYrQaMNHcDULHkWwQIspdtjV4Y5aurRdZpjAqn6yX3FPqTA9ngHCc4hJxMAMgGfve85w=="}},"braces@3.0.3":{"resolution":{"integrity":"sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA=="},"engines":{"node":">=8"}},"browserslist@4.25.3":{"resolution":{"integrity":"sha512-cDGv1kkDI4/0e5yON9yM5G/0A5u8sf5TnmdX5C9qHzI9PPu++sQ9zjm1k9NiOrf3riY4OkK0zSGqfvJyJsgCBQ=="},"engines":{"node":"^6 || ^7 || ^8 || ^9 || ^10 || ^11 || ^12 || 
>=13.7"},"hasBin":true},"browserslist@4.28.2":{"resolution":{"integrity":"sha512-48xSriZYYg+8qXna9kwqjIVzuQxi+KYWp2+5nCYnYKPTr0LvD89Jqk2Or5ogxz0NUMfIjhh2lIUX/LyX9B4oIg=="},"engines":{"node":"^6 || ^7 || ^8 || ^9 || ^10 || ^11 || ^12 || >=13.7"},"hasBin":true},"buffer-builder@0.2.0":{"resolution":{"integrity":"sha512-7VPMEPuYznPSoR21NE1zvd2Xna6c/CloiZCfcMXR1Jny6PjX0N4Nsa38zcBFo/FMK+BlA+FLKbJCQ0i2yxp+Xg=="}},"buffer-crc32@0.2.13":{"resolution":{"integrity":"sha512-VO9Ht/+p3SN7SKWqcrgEzjGbRSJYTx+Q1pTQC0wrWqHx0vpJraQ6GtHx8tvcg1rlK1byhU5gccxgOgj7B0TDkQ=="}},"buffer-crc32@1.0.0":{"resolution":{"integrity":"sha512-Db1SbgBS/fg/392AblrMJk97KggmvYhr4pB5ZIMTWtaivCPMWLkmb7m21cJvpvgK+J3nsU2CmmixNBZx4vFj/w=="},"engines":{"node":">=8.0.0"}},"buffer-from@1.1.2":{"resolution":{"integrity":"sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ=="}},"buffer@5.7.1":{"resolution":{"integrity":"sha512-EHcyIPBQ4BSGlvjB16k5KgAJ27CIsHY/2JBmCRReo48y9rQ3MaUzWX3KVlBa4U7MyX02HdVj0K7C3WaB3ju7FQ=="}},"buffer@6.0.3":{"resolution":{"integrity":"sha512-FTiCpNxtwiZZHEZbcbTIcZjERVICn9yq/pDFkTl95/AxzD1naBctN7YO68riM/gLSDY7sdrMby8hofADYuuqOA=="}},"cac@6.7.14":{"resolution":{"integrity":"sha512-b6Ilus+c3RrdDk+JhLKUAQfzzgLEPy6wcXqS7f/xe1EETvsDP6GORG7SFuOs6cID5YkqchW/LXZbX5bc8j7ZcQ=="},"engines":{"node":">=8"}},"call-bind-apply-helpers@1.0.2":{"resolution":{"integrity":"sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ=="},"engines":{"node":">= 0.4"}},"caniuse-lite@1.0.30001737":{"resolution":{"integrity":"sha512-BiloLiXtQNrY5UyF0+1nSJLXUENuhka2pzy2Fx5pGxqavdrxSCW4U6Pn/PoG3Efspi2frRbHpBV2XsrPE6EDlw=="}},"caniuse-lite@1.0.30001792":{"resolution":{"integrity":"sha512-hVLMUZFgR4JJ6ACt1uEESvQN1/dBVqPAKY0hgrV70eN3391K6juAfTjKZLKvOMsx8PxA7gsY1/tLMMTcfFLLpw=="}},"ccount@2.0.1":{"resolution":{"integrity":"sha512-eyrF0jiFpY+3drT6383f1qhkbGsLSifNAjA61IUjZjmLCWjItY6LB9ft9YhoDgwfmclB2zhu51Lc7+95b8NRAg=="}},"chai@5.3.3":{"resolution":{"integrity":"sha512-4zNhdJD/iOjSH0A05ea+Ke6MU5mmpQcbQsSOkgdaUMJ9zTlDTD/GYlwohmIE2u0gaxHYiVHEn1Fw9mZ/ktJWgw=="},"engines":{"node":">=18"}},"chai@6.2.2":{"resolution":{"integrity":"sha512-NUPRluOfOiTKBKvWPtSD4PhFvWCqOi0BGStNWs57X9js7XGTprSmFoz5F0tWhR4WPjNeR9jXqdC7/UpSJTnlRg=="},"engines":{"node":">=18"}},"chalk@4.1.2":{"resolution":{"integrity":"sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA=="},"engines":{"node":">=10"}},"chalk@5.6.2":{"resolution":{"integrity":"sha512-7NzBL0rN6fMUW+f7A6Io4h40qQlG+xGmtMxfbnH/K7TAtt8JQWVQK+6g0UXKMeVJoyV5EkkNsErQ8pVD3bLHbA=="},"engines":{"node":"^12.17.0 || ^14.13 || >=16.0.0"}},"character-entities-html4@2.1.0":{"resolution":{"integrity":"sha512-1v7fgQRj6hnSwFpq1Eu0ynr/CDEw0rXo2B61qXrLNdHZmPKgb7fqS1a2JwF0rISo9q77jDI8VMEHoApn8qDoZA=="}},"character-entities-legacy@3.0.0":{"resolution":{"integrity":"sha512-RpPp0asT/6ufRm//AJVwpViZbGM/MkjQFxJccQRHmISF/22NBtsHqAWmL+/pmkPWoIUJdWyeVleTl1wydHATVQ=="}},"character-entities@2.0.2":{"resolution":{"integrity":"sha512-shx7oQ0Awen/BRIdkjkvz54PnEEI/EjwXDSIZp86/KKdbafHh1Df/RYGBhn4hbe2+uKC9FnT5UCEdyPz3ai9hQ=="}},"chardet@2.1.0":{"resolution":{"integrity":"sha512-bNFETTG/pM5ryzQ9Ad0lJOTa6HWD/YsScAR3EnCPZRPlQh77JocYktSHOUHelyhm8IARL+o4c4F1bP5KVOjiRA=="}},"check-error@2.1.1":{"resolution":{"integrity":"sha512-OAlb+T7V4Op9OwdkjmguYRqncdlx5JiofwOAUkmTF+jNdHwzTaTs4sRAGpzLF3oOz5xAyDGrPgeIDFQmDOTiJw=="},"engines":{"node":">= 
16"}},"cheerio-select@2.1.0":{"resolution":{"integrity":"sha512-9v9kG0LvzrlcungtnJtpGNxY+fzECQKhK4EGJX2vByejiMX84MFNQw4UxPJl3bFbTMw+Dfs37XaIkCwTZfLh4g=="}},"cheerio@1.1.2":{"resolution":{"integrity":"sha512-IkxPpb5rS/d1IiLbHMgfPuS0FgiWTtFIm/Nj+2woXDLTZ7fOT2eqzgYbdMlLweqlHbsZjxEChoVK+7iph7jyQg=="},"engines":{"node":">=20.18.1"}},"cheerio@1.2.0":{"resolution":{"integrity":"sha512-WDrybc/gKFpTYQutKIK6UvfcuxijIZfMfXaYm8NMsPQxSYvf+13fXUJ4rztGGbJcBQ/GF55gvrZ0Bc0bj/mqvg=="},"engines":{"node":">=20.18.1"}},"chevrotain-allstar@0.3.1":{"resolution":{"integrity":"sha512-b7g+y9A0v4mxCW1qUhf3BSVPg+/NvGErk/dOkrDaHA0nQIQGAtrOjlX//9OQtRlSCy+x9rfB5N8yC71lH1nvMw=="},"peerDependencies":{"chevrotain":"^11.0.0"}},"chevrotain@11.0.3":{"resolution":{"integrity":"sha512-ci2iJH6LeIkvP9eJW6gpueU8cnZhv85ELY8w8WiFtNjMHA5ad6pQLaJo9mEly/9qUyCpvqX8/POVUTf18/HFdw=="}},"chokidar@3.6.0":{"resolution":{"integrity":"sha512-7VT13fmjotKpGipCW9JEQAusEPE+Ei8nl6/g4FBAmIm0GOOLMua9NDDo/DWp0ZAxCr3cPq5ZpBqmPAQgDda2Pw=="},"engines":{"node":">= 8.10.0"}},"chownr@1.1.4":{"resolution":{"integrity":"sha512-jJ0bqzaylmJtVnNgzTeSOs8DPavpbYgEr/b0YL8/2GO3xJEhInFmhKMUnEJQjZumK7KXGFhUy89PrsJWlakBVg=="}},"chownr@2.0.0":{"resolution":{"integrity":"sha512-bIomtDF5KGpdogkLd9VspvFzk9KfpyyGlS8YFVZl7TGPBHL5snIOnxeshwVgPteQ9b4Eydl+pVbIyE1DcvCWgQ=="},"engines":{"node":">=10"}},"chrome-trace-event@1.0.4":{"resolution":{"integrity":"sha512-rNjApaLzuwaOTjCiT8lSDdGN1APCiqkChLMJxJPWLunPAt5fy8xgU9/jNOchV84wfIxrA0lRQB7oCT8jrn/wrQ=="},"engines":{"node":">=6.0"}},"ci-info@3.9.0":{"resolution":{"integrity":"sha512-NIxF55hv4nSqQswkAeiOi1r83xy8JldOFDTWiug55KBu9Jnblncd2U6ViHmYgHf01TPZS77NJBhBMKdWj9HQMQ=="},"engines":{"node":">=8"}},"cli-cursor@3.1.0":{"resolution":{"integrity":"sha512-I/zHAwsKf9FqGoXM4WWRACob9+SNukZTd94DWF57E4toouRulbCxcUh6RKUEOQlYTHJnzkPMySvPNaaSLNfLZw=="},"engines":{"node":">=8"}},"cli-spinners@2.6.1":{"resolution":{"integrity":"sha512-x/5fWmGMnbKQAaNwN+UZlV79qBLM9JFnJuJ03gIi5whrob0xV0ofNVHy9DhwGdsMJQc2OKv0oGmLzvaqvAVv+g=="},"engines":{"node":">=6"}},"cli-spinners@2.9.2":{"resolution":{"integrity":"sha512-ywqV+5MmyL4E7ybXgKys4DugZbX0FC6LnwrhjuykIjnK9k8OQacQ7axGKnjDXWNhns0xot3bZI5h55H8yo9cJg=="},"engines":{"node":">=6"}},"cli-width@4.1.0":{"resolution":{"integrity":"sha512-ouuZd4/dm2Sw5Gmqy6bGyNNNe1qt9RpmxveLSO7KcgsTnU7RXfsw+/bukWGo1abgBiMAic068rclZsO4IWmmxQ=="},"engines":{"node":">= 12"}},"cliui@8.0.1":{"resolution":{"integrity":"sha512-BSeNnyus75C4//NQ9gQt1/csTXyo/8Sb+afLAkzAptFuMsod9HFokGNudZpi/oQV73hnVK+sR+5PVRMd+Dr7YQ=="},"engines":{"node":">=12"}},"clone@1.0.4":{"resolution":{"integrity":"sha512-JQHZ2QMW6l3aH/j6xCqQThY/9OH4D/9ls34cgkUBiEeocRTU04tHfKPBsUK1PqZCUQM7GiA0IIXJSuXHI64Kbg=="},"engines":{"node":">=0.8"}},"color-convert@2.0.1":{"resolution":{"integrity":"sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ=="},"engines":{"node":">=7.0.0"}},"color-name@1.1.4":{"resolution":{"integrity":"sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA=="}},"colorjs.io@0.5.2":{"resolution":{"integrity":"sha512-twmVoizEW7ylZSN32OgKdXRmo1qg+wT5/6C3xu5b9QsWzSFAhHLn2xd8ro0diCsKfCj1RdaTP/nrcW+vAoQPIw=="}},"combined-stream@1.0.8":{"resolution":{"integrity":"sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg=="},"engines":{"node":">= 
0.8"}},"comma-separated-tokens@2.0.3":{"resolution":{"integrity":"sha512-Fu4hJdvzeylCfQPp9SGWidpzrMs7tTrlu6Vb8XGaRGck8QSNZJJp538Wrb60Lax4fPwR64ViY468OIUTbRlGZg=="}},"commander@2.20.3":{"resolution":{"integrity":"sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ=="}},"commander@7.2.0":{"resolution":{"integrity":"sha512-QrWXB+ZQSVPmIWIhtEO9H+gwHaMGYiF5ChvoJ+K9ZGHG/sVsa6yiesAD1GC/x46sET00Xlwo1u49RVVVzvcSkw=="},"engines":{"node":">= 10"}},"commander@8.3.0":{"resolution":{"integrity":"sha512-OkTL9umf+He2DZkUq8f8J9of7yL6RJKI24dVITBmNfZBmri9zYZQrKkuXiKhyfPSu8tUhnVBB1iKXevvnlR4Ww=="},"engines":{"node":">= 12"}},"commander@9.5.0":{"resolution":{"integrity":"sha512-KRs7WVDKg86PWiuAqhDrAQnTXZKraVcCc6vFdL14qrZ/DcWwuRo7VoiYXalXO7S5GKpqYiVEwCbgFDfxNHKJBQ=="},"engines":{"node":"^12.20.0 || >=14"}},"compress-commons@6.0.2":{"resolution":{"integrity":"sha512-6FqVXeETqWPoGcfzrXb37E50NP0LXT8kAMu5ooZayhWWdgEY4lBEEcbQNXtkuKQsGduxiIcI4gOTsxTmuq/bSg=="},"engines":{"node":">= 14"}},"confbox@0.1.8":{"resolution":{"integrity":"sha512-RMtmw0iFkeR4YV+fUOSucriAQNb9g8zFR52MWCtl+cCZOFRNL6zeB395vPzFhEjjn4fMxXudmELnl/KF/WrK6w=="}},"confbox@0.2.2":{"resolution":{"integrity":"sha512-1NB+BKqhtNipMsov4xI/NnhCKp9XG9NamYp5PVm9klAT0fsrNPjaFICsCFhNhwZJKNh7zB/3q8qXz0E9oaMNtQ=="}},"convert-source-map@2.0.0":{"resolution":{"integrity":"sha512-Kvp459HrV2FEJ1CAsi1Ku+MY3kasH19TFykTz2xWmMeq6bk2NU3XXvfJ+Q61m0xktWwt+1HSYf3JZsTms3aRJg=="}},"cookie-es@3.1.1":{"resolution":{"integrity":"sha512-UaXxwISYJPTr9hwQxMFYZ7kNhSXboMXP+Z3TRX6f1/NyaGPfuNUZOWP1pUEb75B2HjfklIYLVRfWiFZJyC6Npg=="}},"cookie@0.7.2":{"resolution":{"integrity":"sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w=="},"engines":{"node":">= 0.6"}},"cookie@1.0.2":{"resolution":{"integrity":"sha512-9Kr/j4O16ISv8zBBhJoi4bXOYNTkFLOqSL3UDB0njXxCXNezjeyVrJyGOWtgfs/q2km1gwBcfH8q1yEGoMYunA=="},"engines":{"node":">=18"}},"core-js@3.46.0":{"resolution":{"integrity":"sha512-vDMm9B0xnqqZ8uSBpZ8sNtRtOdmfShrvT6h2TuQGLs0Is+cR0DYbj/KWP6ALVNbWPpqA/qPLoOuppJN07humpA=="}},"core-util-is@1.0.3":{"resolution":{"integrity":"sha512-ZQBvi1DcpJ4GDqanjucZ2Hj3wEO5pZDS89BWbkcrvdxksJorwUDDZamX9ldFkp9aw2lmBDLgkObEA4DWNJ9FYQ=="}},"cose-base@1.0.3":{"resolution":{"integrity":"sha512-s9whTXInMSgAp/NVXVNuVxVKzGH2qck3aQlVHxDCdAEPgtMKwc4Wq6/QKhgdEdgbLSi9rBTAcPoRa6JpiG4ksg=="}},"cose-base@2.2.0":{"resolution":{"integrity":"sha512-AzlgcsCbUMymkADOJtQm3wO9S3ltPfYOFD5033keQn9NJzIbtnZj+UdBJe7DYml/8TdbtHJW3j58SOnKhWY/5g=="}},"crc-32@1.2.2":{"resolution":{"integrity":"sha512-ROmzCKrTnOwybPcJApAA6WBWij23HVfGVNKqqrZpuyZOHqK2CwHSvpGuyt/UNNvaIjEd8X5IFGp4Mh+Ie1IHJQ=="},"engines":{"node":">=0.8"},"hasBin":true},"crc32-stream@6.0.0":{"resolution":{"integrity":"sha512-piICUB6ei4IlTv1+653yq5+KoqfBYmj9bw6LqXoOneTMDXk5nM1qt12mFW1caG3LlJXEKW1Bp0WggEmIfQB34g=="},"engines":{"node":">= 14"}},"cross-spawn@7.0.6":{"resolution":{"integrity":"sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA=="},"engines":{"node":">= 8"}},"css-select@5.1.0":{"resolution":{"integrity":"sha512-nwoRF1rvRRnnCqqY7updORDsuqKzqYJ28+oSMaJMMgOauh3fvwHqMS7EZpIPqK8GL+g9mKxF1vP/ZjSeNjEVHg=="}},"css-shorthand-properties@1.1.2":{"resolution":{"integrity":"sha512-C2AugXIpRGQTxaCW0N7n5jD/p5irUmCrwl03TrnMFBHDbdq44CFWR2zO7rK9xPN4Eo3pUxC4vQzQgbIpzrD1PQ=="}},"css-tree@3.1.0":{"resolution":{"integrity":"sha512-0eW44TGN5SQXU1mWSkKwFstI/22X2bG1nYzZTYMAWjylYURhse752YgbE4Cx46AC+bAvI+/dYTPRk1LqSUnu6w=="},"engines":{"node":"^10 || ^12.20.0 || ^14.13.0 || 
>=15.0.0"}},"css-value@0.0.1":{"resolution":{"integrity":"sha512-FUV3xaJ63buRLgHrLQVlVgQnQdR4yqdLGaDu7g8CQcWjInDfM9plBTPI9FRfpahju1UBSaMckeb2/46ApS/V1Q=="}},"css-what@6.1.0":{"resolution":{"integrity":"sha512-HTUrgRJ7r4dsZKU6GjmpfRK1O76h97Z8MfS1G0FozR+oF2kG6Vfe8JE6zwrkbxigziPHinCJ+gCPjA9EaBDtRw=="},"engines":{"node":">= 6"}},"cssstyle@4.3.1":{"resolution":{"integrity":"sha512-ZgW+Jgdd7i52AaLYCriF8Mxqft0gD/R9i9wi6RWBhs1pqdPEzPjym7rvRKi397WmQFf3SlyUsszhw+VVCbx79Q=="},"engines":{"node":">=18"}},"cssstyle@5.3.4":{"resolution":{"integrity":"sha512-KyOS/kJMEq5O9GdPnaf82noigg5X5DYn0kZPJTaAsCUaBizp6Xa1y9D4Qoqf/JazEXWuruErHgVXwjN5391ZJw=="},"engines":{"node":">=20"}},"csstype@3.2.3":{"resolution":{"integrity":"sha512-z1HGKcYy2xA8AGQfwrn0PAy+PB7X/GSj3UVJW9qKyn43xWa+gl5nXmU4qqLMRzWVLFC8KusUX8T/0kCiOYpAIQ=="}},"cytoscape-cose-bilkent@4.1.0":{"resolution":{"integrity":"sha512-wgQlVIUJF13Quxiv5e1gstZ08rnZj2XaLHGoFMYXz7SkNfCDOOteKBE6SYRfA9WxxI/iBc3ajfDoc6hb/MRAHQ=="},"peerDependencies":{"cytoscape":"^3.2.0"}},"cytoscape-fcose@2.2.0":{"resolution":{"integrity":"sha512-ki1/VuRIHFCzxWNrsshHYPs6L7TvLu3DL+TyIGEsRcvVERmxokbf5Gdk7mFxZnTdiGtnA4cfSmjZJMviqSuZrQ=="},"peerDependencies":{"cytoscape":"^3.2.0"}},"cytoscape@3.30.4":{"resolution":{"integrity":"sha512-OxtlZwQl1WbwMmLiyPSEBuzeTIQnwZhJYYWFzZ2PhEHVFwpeaqNIkUzSiso00D98qk60l8Gwon2RP304d3BJ1A=="},"engines":{"node":">=0.10"}},"d3-array@2.12.1":{"resolution":{"integrity":"sha512-B0ErZK/66mHtEsR1TkPEEkwdy+WDesimkM5gpZr5Dsg54BiTA5RXtYW5qTLIAcekaS9xfZrzBLF/OAkB3Qn1YQ=="}},"d3-array@3.2.4":{"resolution":{"integrity":"sha512-tdQAmyA18i4J7wprpYq8ClcxZy3SC31QMeByyCFyRt7BVHdREQZ5lpzoe5mFEYZUWe+oq8HBvk9JjpibyEV4Jg=="},"engines":{"node":">=12"}},"d3-axis@3.0.0":{"resolution":{"integrity":"sha512-IH5tgjV4jE/GhHkRV0HiVYPDtvfjHQlQfJHs0usq7M30XcSBvOotpmH1IgkcXsO/5gEQZD43B//fc7SRT5S+xw=="},"engines":{"node":">=12"}},"d3-brush@3.0.0":{"resolution":{"integrity":"sha512-ALnjWlVYkXsVIGlOsuWH1+3udkYFI48Ljihfnh8FZPF2QS9o+PzGLBslO0PjzVoHLZ2KCVgAM8NVkXPJB2aNnQ=="},"engines":{"node":">=12"}},"d3-chord@3.0.1":{"resolution":{"integrity":"sha512-VE5S6TNa+j8msksl7HwjxMHDM2yNK3XCkusIlpX5kwauBfXuyLAtNg9jCp/iHH61tgI4sb6R/EIMWCqEIdjT/g=="},"engines":{"node":">=12"}},"d3-color@3.1.0":{"resolution":{"integrity":"sha512-zg/chbXyeBtMQ1LbD/WSoW2DpC3I0mpmPdW+ynRTj/x2DAWYrIY7qeZIHidozwV24m4iavr15lNwIwLxRmOxhA=="},"engines":{"node":">=12"}},"d3-contour@4.0.2":{"resolution":{"integrity":"sha512-4EzFTRIikzs47RGmdxbeUvLWtGedDUNkTcmzoeyg4sP/dvCexO47AaQL7VKy/gul85TOxw+IBgA8US2xwbToNA=="},"engines":{"node":">=12"}},"d3-delaunay@6.0.4":{"resolution":{"integrity":"sha512-mdjtIZ1XLAM8bm/hx3WwjfHt6Sggek7qH043O8KEjDXN40xi3vx/6pYSVTwLjEgiXQTbvaouWKynLBiUZ6SK6A=="},"engines":{"node":">=12"}},"d3-dispatch@3.0.1":{"resolution":{"integrity":"sha512-rzUyPU/S7rwUflMyLc1ETDeBj0NRuHKKAcvukozwhshr6g6c5d8zh4c2gQjY2bZ0dXeGLWc1PF174P2tVvKhfg=="},"engines":{"node":">=12"}},"d3-drag@3.0.0":{"resolution":{"integrity":"sha512-pWbUJLdETVA8lQNJecMxoXfH6x+mO2UQo8rSmZ+QqxcbyA3hfeprFgIT//HW2nlHChWeIIMwS2Fq+gEARkhTkg=="},"engines":{"node":">=12"}},"d3-dsv@3.0.1":{"resolution":{"integrity":"sha512-UG6OvdI5afDIFP9w4G0mNq50dSOsXHJaRE8arAS5o9ApWnIElp8GZw1Dun8vP8OyHOZ/QJUKUJwxiiCCnUwm+Q=="},"engines":{"node":">=12"},"hasBin":true},"d3-ease@3.0.1":{"resolution":{"integrity":"sha512-wR/XK3D3XcLIZwpbvQwQ5fK+8Ykds1ip7A2Txe0yxncXSdq1L9skcG7blcedkOX+ZcgxGAmLX1FrRGbADwzi0w=="},"engines":{"node":">=12"}},"d3-fetch@3.0.1":{"resolution":{"integrity":"sha512-kpkQIM20n3oLVBKGg6oHrUchHM3xODkTzjMoj7aWQFq5QEM+R6E4WkzT5+tojDY7yjez8KgCBRoj4aEr99Fdqw=="},"en
gines":{"node":">=12"}},"d3-force@3.0.0":{"resolution":{"integrity":"sha512-zxV/SsA+U4yte8051P4ECydjD/S+qeYtnaIyAs9tgHCqfguma/aAQDjo85A9Z6EKhBirHRJHXIgJUlffT4wdLg=="},"engines":{"node":">=12"}},"d3-format@3.1.0":{"resolution":{"integrity":"sha512-YyUI6AEuY/Wpt8KWLgZHsIU86atmikuoOmCfommt0LYHiQSPjvX2AcFc38PX0CBpr2RCyZhjex+NS/LPOv6YqA=="},"engines":{"node":">=12"}},"d3-geo@3.1.1":{"resolution":{"integrity":"sha512-637ln3gXKXOwhalDzinUgY83KzNWZRKbYubaG+fGVuc/dxO64RRljtCTnf5ecMyE1RIdtqpkVcq0IbtU2S8j2Q=="},"engines":{"node":">=12"}},"d3-hierarchy@3.1.2":{"resolution":{"integrity":"sha512-FX/9frcub54beBdugHjDCdikxThEqjnR93Qt7PvQTOHxyiNCAlvMrHhclk3cD5VeAaq9fxmfRp+CnWw9rEMBuA=="},"engines":{"node":">=12"}},"d3-interpolate@3.0.1":{"resolution":{"integrity":"sha512-3bYs1rOD33uo8aqJfKP3JWPAibgw8Zm2+L9vBKEHJ2Rg+viTR7o5Mmv5mZcieN+FRYaAOWX5SJATX6k1PWz72g=="},"engines":{"node":">=12"}},"d3-path@1.0.9":{"resolution":{"integrity":"sha512-VLaYcn81dtHVTjEHd8B+pbe9yHWpXKZUC87PzoFmsFrJqgFwDe/qxfp5MlfsfM1V5E/iVt0MmEbWQ7FVIXh/bg=="}},"d3-path@3.1.0":{"resolution":{"integrity":"sha512-p3KP5HCf/bvjBSSKuXid6Zqijx7wIfNW+J/maPs+iwR35at5JCbLUT0LzF1cnjbCHWhqzQTIN2Jpe8pRebIEFQ=="},"engines":{"node":">=12"}},"d3-polygon@3.0.1":{"resolution":{"integrity":"sha512-3vbA7vXYwfe1SYhED++fPUQlWSYTTGmFmQiany/gdbiWgU/iEyQzyymwL9SkJjFFuCS4902BSzewVGsHHmHtXg=="},"engines":{"node":">=12"}},"d3-quadtree@3.0.1":{"resolution":{"integrity":"sha512-04xDrxQTDTCFwP5H6hRhsRcb9xxv2RzkcsygFzmkSIOJy3PeRJP7sNk3VRIbKXcog561P9oU0/rVH6vDROAgUw=="},"engines":{"node":">=12"}},"d3-random@3.0.1":{"resolution":{"integrity":"sha512-FXMe9GfxTxqd5D6jFsQ+DJ8BJS4E/fT5mqqdjovykEB2oFbTMDVdg1MGFxfQW+FBOGoB++k8swBrgwSHT1cUXQ=="},"engines":{"node":">=12"}},"d3-sankey@0.12.3":{"resolution":{"integrity":"sha512-nQhsBRmM19Ax5xEIPLMY9ZmJ/cDvd1BG3UVvt5h3WRxKg5zGRbvnteTyWAbzeSvlh3tW7ZEmq4VwR5mB3tutmQ=="}},"d3-scale-chromatic@3.1.0":{"resolution":{"integrity":"sha512-A3s5PWiZ9YCXFye1o246KoscMWqf8BsD9eRiJ3He7C9OBaxKhAd5TFCdEx/7VbKtxxTsu//1mMJFrEt572cEyQ=="},"engines":{"node":">=12"}},"d3-scale@4.0.2":{"resolution":{"integrity":"sha512-GZW464g1SH7ag3Y7hXjf8RoUuAFIqklOAq3MRl4OaWabTFJY9PN/E1YklhXLh+OQ3fM9yS2nOkCoS+WLZ6kvxQ=="},"engines":{"node":">=12"}},"d3-selection@3.0.0":{"resolution":{"integrity":"sha512-fmTRWbNMmsmWq6xJV8D19U/gw/bwrHfNXxrIN+HfZgnzqTHp9jOmKMhsTUjXOJnZOdZY9Q28y4yebKzqDKlxlQ=="},"engines":{"node":">=12"}},"d3-shape@1.3.7":{"resolution":{"integrity":"sha512-EUkvKjqPFUAZyOlhY5gzCxCeI0Aep04LwIRpsZ/mLFelJiUfnK56jo5JMDSE7yyP2kLSb6LtF+S5chMk7uqPqw=="}},"d3-shape@3.2.0":{"resolution":{"integrity":"sha512-SaLBuwGm3MOViRq2ABk3eLoxwZELpH6zhl3FbAoJ7Vm1gofKx6El1Ib5z23NUEhF9AsGl7y+dzLe5Cw2AArGTA=="},"engines":{"node":">=12"}},"d3-time-format@4.1.0":{"resolution":{"integrity":"sha512-dJxPBlzC7NugB2PDLwo9Q8JiTR3M3e4/XANkreKSUxF8vvXKqm1Yfq4Q5dl8budlunRVlUUaDUgFt7eA8D6NLg=="},"engines":{"node":">=12"}},"d3-time@3.1.0":{"resolution":{"integrity":"sha512-VqKjzBLejbSMT4IgbmVgDjpkYrNWUYJnbCGo874u7MMKIWsILRX+OpX/gTk8MqjpT1A/c6HY2dCA77ZN0lkQ2Q=="},"engines":{"node":">=12"}},"d3-timer@3.0.1":{"resolution":{"integrity":"sha512-ndfJ/JxxMd3nw31uyKoY2naivF+r29V+Lc0svZxe1JvvIRmi8hUsrMvdOwgS1o6uBHmiz91geQ0ylPP0aj1VUA=="},"engines":{"node":">=12"}},"d3-transition@3.0.1":{"resolution":{"integrity":"sha512-ApKvfjsSR6tg06xrL434C0WydLr7JewBB3V+/39RMHsaXTOG0zmt/OAXeng5M5LBm0ojmxJrpomQVZ1aPvBL4w=="},"engines":{"node":">=12"},"peerDependencies":{"d3-selection":"2 - 
3"}},"d3-zoom@3.0.0":{"resolution":{"integrity":"sha512-b8AmV3kfQaqWAuacbPuNbL6vahnOJflOhexLzMMNLga62+/nh0JzvJ0aO/5a5MVgUFGS7Hu1P9P03o3fJkDCyw=="},"engines":{"node":">=12"}},"d3@7.9.0":{"resolution":{"integrity":"sha512-e1U46jVP+w7Iut8Jt8ri1YsPOvFpg46k+K8TpCb0P+zjCkjkPnV7WzfDJzMHy1LnA+wj5pLT1wjO901gLXeEhA=="},"engines":{"node":">=12"}},"dagre-d3-es@7.0.13":{"resolution":{"integrity":"sha512-efEhnxpSuwpYOKRm/L5KbqoZmNNukHa/Flty4Wp62JRvgH2ojwVgPgdYyr4twpieZnyRDdIH7PY2mopX26+j2Q=="}},"data-uri-to-buffer@4.0.1":{"resolution":{"integrity":"sha512-0R9ikRb668HB7QDxT1vkpuUBtqc53YyAwMwGeUFKRojY/NWKvdZ+9UYtRfGmhqNbRkTSVpMbmyhXipFFv2cb/A=="},"engines":{"node":">= 12"}},"data-uri-to-buffer@6.0.2":{"resolution":{"integrity":"sha512-7hvf7/GW8e86rW0ptuwS3OcBGDjIi6SZva7hCyWC0yYry2cOPmLIjXAUHI6DK2HsnwJd9ifmt57i8eV2n4YNpw=="},"engines":{"node":">= 14"}},"data-urls@5.0.0":{"resolution":{"integrity":"sha512-ZYP5VBHshaDAiVZxjbRVcFJpc+4xGgT0bK3vzy1HLN8jTO975HEbuYzZJcHoQEY5K1a0z8YayJkyVETa08eNTg=="},"engines":{"node":">=18"}},"data-urls@6.0.0":{"resolution":{"integrity":"sha512-BnBS08aLUM+DKamupXs3w2tJJoqU+AkaE/+6vQxi/G/DPmIZFJJp9Dkb1kM03AZx8ADehDUZgsNxju3mPXZYIA=="},"engines":{"node":">=20"}},"dayjs@1.11.19":{"resolution":{"integrity":"sha512-t5EcLVS6QPBNqM2z8fakk/NKel+Xzshgt8FFKAn+qwlD1pzZWxh0nVCrvFK7ZDb6XucZeF9z8C7CBWTRIVApAw=="}},"debug@4.4.1":{"resolution":{"integrity":"sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ=="},"engines":{"node":">=6.0"},"peerDependencies":{"supports-color":"*"},"peerDependenciesMeta":{"supports-color":{"optional":true}}},"debug@4.4.3":{"resolution":{"integrity":"sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA=="},"engines":{"node":">=6.0"},"peerDependencies":{"supports-color":"*"},"peerDependenciesMeta":{"supports-color":{"optional":true}}},"decamelize@6.0.1":{"resolution":{"integrity":"sha512-G7Cqgaelq68XHJNGlZ7lrNQyhZGsFqpwtGFexqUv4IQdjKoSYF7ipZ9UuTJZUSQXFj/XaoBLuEVIVqr8EJngEQ=="},"engines":{"node":"^12.20.0 || ^14.13.1 || 
>=16.0.0"}},"decimal.js@10.6.0":{"resolution":{"integrity":"sha512-YpgQiITW3JXGntzdUmyUR1V812Hn8T1YVXhCu+wO3OpS4eU9l4YdD3qjyiKdV6mvV29zapkMeD390UVEf2lkUg=="}},"decode-named-character-reference@1.0.2":{"resolution":{"integrity":"sha512-O8x12RzrUF8xyVcY0KJowWsmaJxQbmy0/EtnNtHRpsOcT7dFk5W598coHqBVpmWo1oQQfsCqfCmkZN5DJrZVdg=="}},"decompress-response@6.0.0":{"resolution":{"integrity":"sha512-aW35yZM6Bb/4oJlZncMH2LCoZtJXTRxES17vE3hoRiowU2kWHaJKFkSBDnDR+cm9J+9QhXmREyIfv0pji9ejCQ=="},"engines":{"node":">=10"}},"deep-eql@5.0.2":{"resolution":{"integrity":"sha512-h5k/5U50IJJFpzfL6nO9jaaumfjO/f2NjK/oYB2Djzm4p9L+3T9qWpZqZ2hAbLPuuYq9wrU08WQyBTL5GbPk5Q=="},"engines":{"node":">=6"}},"deep-extend@0.6.0":{"resolution":{"integrity":"sha512-LOHxIOaPYdHlJRtCQfDIVZtfw/ufM8+rVj649RIHzcm/vGwQRXFt6OPqIFWsm2XEMrNIEtWR64sY1LEKD2vAOA=="},"engines":{"node":">=4.0.0"}},"deepmerge-ts@7.1.5":{"resolution":{"integrity":"sha512-HOJkrhaYsweh+W+e74Yn7YStZOilkoPb6fycpwNLKzSPtruFs48nYis0zy5yJz1+ktUhHxoRDJ27RQAWLIJVJw=="},"engines":{"node":">=16.0.0"}},"defaults@1.0.4":{"resolution":{"integrity":"sha512-eFuaLoy/Rxalv2kr+lqMlUnrDWV+3j4pljOIJgLIhI058IQfWJ7vXhyEIHu+HtC738klGALYxOKDO0bQP3tg8A=="}},"define-lazy-prop@2.0.0":{"resolution":{"integrity":"sha512-Ds09qNh8yw3khSjiJjiUInaGX9xlqZDY7JVryGxdxV7NPeuqQfplOpQ66yJFZut3jLa5zOwkXw1g9EI2uKh4Og=="},"engines":{"node":">=8"}},"degenerator@5.0.1":{"resolution":{"integrity":"sha512-TllpMR/t0M5sqCXfj85i4XaAzxmS5tVA16dqvdkMwGmzI+dXLXnw3J+3Vdv7VKw+ThlTMboK6i9rnZ6Nntj5CQ=="},"engines":{"node":">= 14"}},"delaunator@5.0.1":{"resolution":{"integrity":"sha512-8nvh+XBe96aCESrGOqMp/84b13H9cdKbG5P2ejQCh4d4sK9RL4371qou9drQjMhvnPmhWl5hnmqbEE0fXr9Xnw=="}},"delayed-stream@1.0.0":{"resolution":{"integrity":"sha512-ZySD7Nf91aLB0RxL4KGrKHBXl7Eds1DAmEdcoVawXnLD7SDhpNgtuII2aAkg7a7QS41jxPSZ17p4VdGnMHk3MQ=="},"engines":{"node":">=0.4.0"}},"dequal@2.0.3":{"resolution":{"integrity":"sha512-0je+qPKHEMohvfRTCEo3CrPG6cAzAYgmzKyxRiYSSDkS6eGJdyVJm7WaYA5ECaAD9wLB2T4EEeymA5aFVcYXCA=="},"engines":{"node":">=6"}},"detect-indent@6.1.0":{"resolution":{"integrity":"sha512-reYkTUJAZb9gUuZ2RvVCNhVHdg62RHnJ7WJl8ftMi4diZ6NWlciOzQN88pUhSELEwflJht4oQDv0F0BMlwaYtA=="},"engines":{"node":">=8"}},"detect-libc@2.1.2":{"resolution":{"integrity":"sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ=="},"engines":{"node":">=8"}},"devlop@1.1.0":{"resolution":{"integrity":"sha512-RWmIqhcFf1lRYBvNmr7qTNuyCt/7/ns2jbpp1+PalgE/rDQcBT0fioSMUpJ93irlUhC5hrg4cYqe6U+0ImW0rA=="}},"diff@8.0.2":{"resolution":{"integrity":"sha512-sSuxWU5j5SR9QQji/o2qMvqRNYRDOcBTgsJ/DeCf4iSN4gW+gNMXM7wFIP+fdXZxoNiAnHUTGjCr+TSWXdRDKg=="},"engines":{"node":">=0.3.1"}},"dir-glob@3.0.1":{"resolution":{"integrity":"sha512-WkrWp9GR4KXfKGYzOLmTuGVi1UWFfws377n9cc55/tb6DuqyF6pcQ5AbiHEshaDpY9v6oaSr2XCDidGmMwdzIA=="},"engines":{"node":">=8"}},"dom-accessibility-api@0.5.16":{"resolution":{"integrity":"sha512-X7BJ2yElsnOJ30pZF4uIIDfBEVgF4XEBxL9Bxhy6dnrm5hkzqmsWHGTiHqRiITNhMyFLyAiWndIJP7Z1NTteDg=="}},"dom-serializer@2.0.0":{"resolution":{"integrity":"sha512-wIkAryiqt/nV5EQKqQpo3SToSOV9J0DnbJqwK7Wv/Trc92zIAYZ4FlMu+JPFW1DfGFt81ZTCGgDEabffXeLyJg=="}},"domelementtype@2.3.0":{"resolution":{"integrity":"sha512-OLETBj6w0OsagBwdXnPdN0cnMfF9opN69co+7ZrbfPGrdpPVNBUj02spi6B1N7wChLQiPn4CSH/zJvXw56gmHw=="}},"domhandler@5.0.3":{"resolution":{"integrity":"sha512-cgwlv/1iFQiFnU96XXgROh8xTeetsnJiDsTc7TYCLFd9+/WNkIqPTxiM/8pSd8VIrhXGTf1Ny1q1hquVqDJB5w=="},"engines":{"node":">= 
4"}},"dompurify@3.3.1":{"resolution":{"integrity":"sha512-qkdCKzLNtrgPFP1Vo+98FRzJnBRGe4ffyCea9IwHB1fyxPOeNTHpLKYGd4Uk9xvNoH0ZoOjwZxNptyMwqrId1Q=="}},"domutils@3.2.2":{"resolution":{"integrity":"sha512-6kZKyUajlDuqlHKVX1w7gyslj9MPIXzIFiz/rGu35uC1wMi+kMhQwGhl4lt9unC9Vb9INnY9Z3/ZA3+FhASLaw=="}},"dotenv-expand@11.0.7":{"resolution":{"integrity":"sha512-zIHwmZPRshsCdpMDyVsqGmgyP0yT8GAgXUnkdAoJisxvf33k7yO6OuoKmcTGuXPWSsm8Oh88nZicRLA9Y0rUeA=="},"engines":{"node":">=12"}},"dotenv@10.0.0":{"resolution":{"integrity":"sha512-rlBi9d8jpv9Sf1klPjNfFAuWDjKLwTIJJ/VxtoTwIR6hnZxcEOQCZg2oIL3MWBYw5GpUDKOEnND7LXTbIpQ03Q=="},"engines":{"node":">=10"}},"dotenv@16.4.7":{"resolution":{"integrity":"sha512-47qPchRCykZC03FhkYAhrvwU4xDBFIj1QPqaarj6mdM/hgUzfPHcpkHJOn3mJAufFeeAxAzeGsr5X0M4k6fLZQ=="},"engines":{"node":">=12"}},"dotenv@16.5.0":{"resolution":{"integrity":"sha512-m/C+AwOAr9/W1UOIZUo232ejMNnJAJtYQjUbHoNTBNTJSvqzzDh7vnrei3o3r3m9blf6ZoDkvcw0VmozNRFJxg=="},"engines":{"node":">=12"}},"dunder-proto@1.0.1":{"resolution":{"integrity":"sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A=="},"engines":{"node":">= 0.4"}},"eastasianwidth@0.2.0":{"resolution":{"integrity":"sha512-I88TYZWc9XiYHRQ4/3c5rjjfgkjhLyW2luGIheGERbNQ6OY7yTybanSpDXZa8y7VUP9YmDcYa+eyq4ca7iLqWA=="}},"edge-paths@3.0.5":{"resolution":{"integrity":"sha512-sB7vSrDnFa4ezWQk9nZ/n0FdpdUuC6R1EOrlU3DL+bovcNFK28rqu2emmAUjujYEJTWIgQGqgVVWUZXMnc8iWg=="},"engines":{"node":">=14.0.0"}},"edgedriver@5.6.1":{"resolution":{"integrity":"sha512-3Ve9cd5ziLByUdigw6zovVeWJjVs8QHVmqOB0sJ0WNeVPcwf4p18GnxMmVvlFmYRloUwf5suNuorea4QzwBIOA=="},"hasBin":true},"electron-to-chromium@1.5.211":{"resolution":{"integrity":"sha512-IGBvimJkotaLzFnwIVgW9/UD/AOJ2tByUmeOrtqBfACSbAw5b1G0XpvdaieKyc7ULmbwXVx+4e4Be8pOPBrYkw=="}},"electron-to-chromium@1.5.352":{"resolution":{"integrity":"sha512-9wHk8x6dyuimoe18EdiDPWKExNdxYqo4fn4FwOVVper6RxT3cmpBwBkWWfSOCYJjQdIco/nPhJhNLmn4Ufg1Yg=="}},"emoji-regex@8.0.0":{"resolution":{"integrity":"sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="}},"emoji-regex@9.2.2":{"resolution":{"integrity":"sha512-L18DaJsXSUk2+42pv8mLs5jJT2hqFkFE4j21wOmgbUqsZ2hL72NsUU785g9RXgo3s0ZNgVl42TiHp3ZtOv/Vyg=="}},"encoding-sniffer@0.2.1":{"resolution":{"integrity":"sha512-5gvq20T6vfpekVtqrYQsSCFZ1wEg5+wW0/QaZMWkFr6BqD3NfKs0rLCx4rrVlSWJeZb5NBJgVLswK/w2MWU+Gw=="}},"end-of-stream@1.4.5":{"resolution":{"integrity":"sha512-ooEGc6HP26xXq/N+GCGOT0JKCLDGrq2bQUZrQ7gyrJiZANJ/8YDTxTpQBXGMn+WbIQXNVpyWymm7KYVICQnyOg=="}},"enhanced-resolve@5.21.0":{"resolution":{"integrity":"sha512-otxSQPw4lkOZWkHpB3zaEQs6gWYEsmX4xQF68ElXC/TWvGxGMSGOvoNbaLXm6/cS/fSfHtsEdw90y20PCd+sCA=="},"engines":{"node":">=10.13.0"}},"enquirer@2.3.6":{"resolution":{"integrity":"sha512-yjNnPr315/FjS4zIsUxYguYUPP2e1NK4d7E7ZOLiyYCcbFBiTMyID+2wvm2w6+pZ/odMA7cRkjhsPbltwBOrLg=="},"engines":{"node":">=8.6"}},"enquirer@2.4.1":{"resolution":{"integrity":"sha512-rRqJg/6gd538VHvR3PSrdRBb/1Vy2YfzHqzvbhGIQpDRKIa4FgV/54b5Q1xYSxOOwKvjXweS26E0Q+nAMwp2pQ=="},"engines":{"node":">=8.6"}},"entities@4.5.0":{"resolution":{"integrity":"sha512-V0hjH4dGPh9Ao5p0MoRY6BVqtwCjhz6vI5LT8AJ55H+4g9/4vbHx1I54fS0XuclLhDHArPQCiMjDxjaL8fPxhw=="},"engines":{"node":">=0.12"}},"entities@6.0.1":{"resolution":{"integrity":"sha512-aN97NXWF6AWBTahfVOIrB/NShkzi5H7F9r1s9mD3cDj4Ko5f2qhhVoYMibXF7GlLveb/D2ioWay8lxI97Ven3g=="},"engines":{"node":">=0.12"}},"entities@7.0.1":{"resolution":{"integrity":"sha512-TWrgLOFUQTH994YUyl1yT4uyavY5nNB5muff+RtWaqNVCAK408b5ZnnbNAUEWLTCpum9w6arT70i1XdQ4Ue
OPA=="},"engines":{"node":">=0.12"}},"error-stack-parser-es@1.0.5":{"resolution":{"integrity":"sha512-5qucVt2XcuGMcEGgWI7i+yZpmpByQ8J1lHhcL7PwqCwu9FPP3VUXzT4ltHe5i2z9dePwEHcDVOAfSnHsOlCXRA=="}},"es-define-property@1.0.1":{"resolution":{"integrity":"sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g=="},"engines":{"node":">= 0.4"}},"es-errors@1.3.0":{"resolution":{"integrity":"sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw=="},"engines":{"node":">= 0.4"}},"es-module-lexer@1.7.0":{"resolution":{"integrity":"sha512-jEQoCwk8hyb2AZziIOLhDqpm5+2ww5uIE6lkO/6jcOCusfk6LhMHpXXfBLXTZ7Ydyt0j4VoUQv6uGNYbdW+kBA=="}},"es-module-lexer@2.1.0":{"resolution":{"integrity":"sha512-n27zTYMjYu1aj4MjCWzSP7G9r75utsaoc8m61weK+W8JMBGGQybd43GstCXZ3WNmSFtGT9wi59qQTW6mhTR5LQ=="}},"es-object-atoms@1.1.1":{"resolution":{"integrity":"sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA=="},"engines":{"node":">= 0.4"}},"es-set-tostringtag@2.1.0":{"resolution":{"integrity":"sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA=="},"engines":{"node":">= 0.4"}},"esbuild@0.25.12":{"resolution":{"integrity":"sha512-bbPBYYrtZbkt6Os6FiTLCTFxvq4tt3JKall1vRwshA3fdVztsLAatFaZobhkBC8/BrPetoa0oksYoKXoG4ryJg=="},"engines":{"node":">=18"},"hasBin":true},"esbuild@0.27.3":{"resolution":{"integrity":"sha512-8VwMnyGCONIs6cWue2IdpHxHnAjzxnw2Zr7MkVxB2vjmQ2ivqGFb4LEG3SMnv0Gb2F/G/2yA8zUaiL1gywDCCg=="},"engines":{"node":">=18"},"hasBin":true},"escalade@3.2.0":{"resolution":{"integrity":"sha512-WUj2qlxaQtO4g6Pq5c29GTcWGDyd8itL8zTlipgECz3JesAiiOKotd8JU6otB3PACgG6xkJUyVhboMS+bje/jA=="},"engines":{"node":">=6"}},"escape-string-regexp@1.0.5":{"resolution":{"integrity":"sha512-vbRorB5FUQWvla16U8R/qgaFIya2qGzwDrNmCZuYKrbdSUMG6I1ZCGQRefkRVhuOkIGVne7BQ35DSfo1qvJqFg=="},"engines":{"node":">=0.8.0"}},"escape-string-regexp@5.0.0":{"resolution":{"integrity":"sha512-/veY75JbMK4j1yjvuUxuVsiS/hr/4iHs9FTT6cgTexxdE0Ly/glccBAkloH/DofkjRbZU3bnoj38mOmhkZ0lHw=="},"engines":{"node":">=12"}},"escodegen@2.1.0":{"resolution":{"integrity":"sha512-2NlIDTwUWJN0mRPQOdtQBzbUHvdGY2P1VXSyU83Q3xKxM7WHX2Ql8dKq782Q9TgQUNOLEzEYu9bzLNj1q88I5w=="},"engines":{"node":">=6.0"},"hasBin":true},"eslint-scope@5.1.1":{"resolution":{"integrity":"sha512-2NxwbF/hZ0KpepYN0cNbo+FN6XoK7GaHlQhgx/hIZl6Va0bF45RQOOwhLIy8lQDbuCiadSLCBnH2CFYquit5bw=="},"engines":{"node":">=8.0.0"}},"esprima@4.0.1":{"resolution":{"integrity":"sha512-eGuFFw7Upda+g4p+QHvnW0RyTX/SVeJBDM/gCtMARO0cLuT2HcEKnTPvhjV6aGeqrCB/sbNop0Kszm0jsaWU4A=="},"engines":{"node":">=4"},"hasBin":true},"esrecurse@4.3.0":{"resolution":{"integrity":"sha512-KmfKL3b6G+RXvP8N1vr3Tq1kL/oCFgn2NYXEtqP8/L3pKapUA4G8cFVaoF3SU323CD4XypR/ffioHmkti6/Tag=="},"engines":{"node":">=4.0"}},"estraverse@4.3.0":{"resolution":{"integrity":"sha512-39nnKffWz8xN1BU/2c79n9nB9HDzo0niYUqx6xyqUnyoAnQyyWpOTdZEeiCch8BBu515t4wp9ZmgVfVhn9EBpw=="},"engines":{"node":">=4.0"}},"estraverse@5.3.0":{"resolution":{"integrity":"sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA=="},"engines":{"node":">=4.0"}},"estree-walker@3.0.3":{"resolution":{"integrity":"sha512-7RUKfXgSMMkzt6ZuXmqapOurLGPPfgj6l9uRZ7lRGolvk0y2yocc35LdcxKC5PQZdn2DMqioAQ2NoWcrTKmm6g=="}},"esutils@2.0.3":{"resolution":{"integrity":"sha512-kVscqXk4OCp68SZ0dkgEKVi6/8ij300KBWTJq32P/dYeWTSwK41WyTxalN1eRmA5Z9UU/LX9D7FWSmV9SAYx6g=="},"engines":{"node":">=0.10.0"}},"event-target-shim@5.0.1":{"resolution":{"integrity":
"sha512-i/2XbnSz/uxRCU6+NdVJgKWDTM427+MqYbkQzD321DuCQJUqOuJKIA0IM2+W2xtYHdKOmZ4dR6fExsd4SXL+WQ=="},"engines":{"node":">=6"}},"events-universal@1.0.1":{"resolution":{"integrity":"sha512-LUd5euvbMLpwOF8m6ivPCbhQeSiYVNb8Vs0fQ8QjXo0JTkEHpz8pxdQf0gStltaPpw0Cca8b39KxvK9cfKRiAw=="}},"events@3.3.0":{"resolution":{"integrity":"sha512-mQw+2fkQbALzQ7V0MY0IqdnXNOeTtP4r0lN9z7AAawCXgqea7bDii20AYrIBrFd/Hx0M2Ocz6S111CaFkUcb0Q=="},"engines":{"node":">=0.8.x"}},"expand-template@2.0.3":{"resolution":{"integrity":"sha512-XYfuKMvj4O35f/pOXLObndIRvyQ+/+6AhODh+OKWj9S9498pHHn/IMszH+gt0fBCRWMNfk1ZSp5x3AifmnI2vg=="},"engines":{"node":">=6"}},"expect-type@1.2.2":{"resolution":{"integrity":"sha512-JhFGDVJ7tmDJItKhYgJCGLOWjuK9vPxiXoUFLwLDc99NlmklilbiQJwoctZtt13+xMw91MCk/REan6MWHqDjyA=="},"engines":{"node":">=12.0.0"}},"expect-type@1.3.0":{"resolution":{"integrity":"sha512-knvyeauYhqjOYvQ66MznSMs83wmHrCycNEN6Ao+2AeYEfxUIkuiVxdEa1qlGEPK+We3n0THiDciYSsCcgW/DoA=="},"engines":{"node":">=12.0.0"}},"exsolve@1.0.8":{"resolution":{"integrity":"sha512-LmDxfWXwcTArk8fUEnOfSZpHOJ6zOMUJKOtFLFqJLoKJetuQG874Uc7/Kki7zFLzYybmZhp1M7+98pfMqeX8yA=="}},"extend@3.0.2":{"resolution":{"integrity":"sha512-fjquC59cD7CyW6urNXK0FBufkZcoiGG80wTuPujX590cB5Ttln20E2UB4S/WARVqhXffZl2LNgS+gQdPIIim/g=="}},"extendable-error@0.1.7":{"resolution":{"integrity":"sha512-UOiS2in6/Q0FK0R0q6UY9vYpQ21mr/Qn1KOnte7vsACuNJf514WvCCUHSRCPcgjPT2bAhNIJdlE6bVap1GKmeg=="}},"extract-zip@2.0.1":{"resolution":{"integrity":"sha512-GDhU9ntwuKyGXdZBUgTIe+vXnWj0fppUEtMDL0+idd5Sta8TGpHssn/eusA9mrPr9qNDym6SxAYZjNvCn/9RBg=="},"engines":{"node":">= 10.17.0"},"hasBin":true},"fast-deep-equal@2.0.1":{"resolution":{"integrity":"sha512-bCK/2Z4zLidyB4ReuIsvALH6w31YfAQDmXMqMx6FyfHqvBxtjC0eRumeSu4Bs3XtXwpyIywtSTrVT99BxY1f9w=="}},"fast-deep-equal@3.1.3":{"resolution":{"integrity":"sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q=="}},"fast-fifo@1.3.2":{"resolution":{"integrity":"sha512-/d9sfos4yxzpwkDkuN7k2SqFKtYNmCTzgfEpz82x34IM9/zc8KGxQoXg1liNC/izpRM/MBdt44Nmx41ZWqk+FQ=="}},"fast-glob@3.3.3":{"resolution":{"integrity":"sha512-7MptL8U0cqcFdzIzwOTHoilX9x5BrNqye7Z/LuC7kCMRio1EMSyqRK3BEAUD7sXRq4iT4AzTVuZdhgQ2TCvYLg=="},"engines":{"node":">=8.6.0"}},"fast-uri@3.0.3":{"resolution":{"integrity":"sha512-aLrHthzCjH5He4Z2H9YZ+v6Ujb9ocRuW6ZzkJQOrTxleEijANq4v1TsaPaVG1PZcuurEzrLcWRyYBYXD5cEiaw=="}},"fast-uri@3.1.2":{"resolution":{"integrity":"sha512-rVjf7ArG3LTk+FS6Yw81V1DLuZl1bRbNrev6Tmd/9RaroeeRRJhAt7jg/6YFxbvAQXUCavSoZhPPj6oOx+5KjQ=="}},"fast-xml-parser@4.5.6":{"resolution":{"integrity":"sha512-Yd4vkROfJf8AuJrDIVMVmYfULKmIJszVsMv7Vo71aocsKgFxpdlpSHXSaInvyYfgw2PRuObQSW2GFpVMUjxu9A=="},"hasBin":true},"fastq@1.17.1":{"resolution":{"integrity":"sha512-sRVD3lWVIXWg6By68ZN7vho9a1pQcN/WBFaAAsDDFzlJjvoGx0P8z7V1t72grFJfJhu3YPZBuu25f7Kaw2jN1w=="}},"fault@2.0.1":{"resolution":{"integrity":"sha512-WtySTkS4OKev5JtpHXnib4Gxiurzh5NCGvWrFaZ34m6JehfTUhKZvn9njTfw48t6JumVQOmrKqpmGcdwxnhqBQ=="}},"fd-slicer@1.1.0":{"resolution":{"integrity":"sha512-cE1qsB/VwyQozZ+q1dGxR8LBYNZeofhEdUNGSMbQD3Gw2lAzX9Zb3uIU6Ebc/Fmyjo9AWWfnn0AUCHqtevs/8g=="}},"fdir@6.5.0":{"resolution":{"integrity":"sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg=="},"engines":{"node":">=12.0.0"},"peerDependencies":{"picomatch":"^3 || ^4"},"peerDependenciesMeta":{"picomatch":{"optional":true}}},"fetch-blob@3.2.0":{"resolution":{"integrity":"sha512-7yAQpD2UMJzLi1Dqv7qFYnPbaPx7ZfFK6PiIxQ4PfkGPyNyl2Ugx+a/umUonmKqjhM4DnfbMvdX6otXq83soQQ=="},"engines":{"node":"^12.20 || 
>= 14.13"}},"fetchdts@0.1.7":{"resolution":{"integrity":"sha512-YoZjBdafyLIop9lSxXVI33oLD5kN31q4Td+CasofLLYeLXRFeOsuOw0Uo+XNRi9PZlbfdlN2GmRtm4tCEQ9/KA=="}},"fflate@0.4.8":{"resolution":{"integrity":"sha512-FJqqoDBR00Mdj9ppamLa/Y7vxm+PRmNWA67N846RvsoYVMKB4q3y/de5PA7gUmRMYK/8CMz2GDZQmCRN1wBcWA=="}},"figures@3.2.0":{"resolution":{"integrity":"sha512-yaduQFRKLXYOGgEn6AZau90j3ggSOyiqXU0F9JZfeXYhNa+Jk4X+s45A2zg5jns87GAFa34BBm2kXw4XpNcbdg=="},"engines":{"node":">=8"}},"file-uri-to-path@1.0.0":{"resolution":{"integrity":"sha512-0Zt+s3L7Vf1biwWZ29aARiVYLx7iMGnEUl9x33fbB/j3jR81u/O2LbqK+Bm1CDSNDKVtJ/YjwY7TUd5SkeLQLw=="}},"fill-range@7.1.1":{"resolution":{"integrity":"sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg=="},"engines":{"node":">=8"}},"find-up@4.1.0":{"resolution":{"integrity":"sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw=="},"engines":{"node":">=8"}},"flat@5.0.2":{"resolution":{"integrity":"sha512-b6suED+5/3rTpUBdG1gupIl8MPFCAMA0QXwmljLhvCUKcUvdE4gWky9zpuGCcXHOsz4J9wPGNWq6OKpmIzz3hQ=="},"hasBin":true},"follow-redirects@1.15.11":{"resolution":{"integrity":"sha512-deG2P0JfjrTxl50XGCDyfI97ZGVCxIpfKYmfyrQ54n5FO/0gfIES8C/Psl6kWVDolizcaaxZJnTS0QSMxvnsBQ=="},"engines":{"node":">=4.0"},"peerDependencies":{"debug":"*"},"peerDependenciesMeta":{"debug":{"optional":true}}},"foreground-child@3.3.1":{"resolution":{"integrity":"sha512-gIXjKqtFuWEgzFRJA9WCQeSJLZDjgJUOMCMzxtvFq/37KojM1BFGufqsCy0r4qSQmYLsZYMeyRqzIWOMup03sw=="},"engines":{"node":">=14"}},"form-data@4.0.4":{"resolution":{"integrity":"sha512-KrGhL9Q4zjj0kiUt5OO4Mr/A/jlI2jDYs5eHBpYHPcBEVSiipAvn2Ko2HnPe20rmcuuvMHNdZFp+4IlGTMF0Ow=="},"engines":{"node":">= 6"}},"format@0.2.2":{"resolution":{"integrity":"sha512-wzsgA6WOq+09wrU1tsJ09udeR/YZRaeArL9e1wPbFg3GG2yDnC2ldKpxs4xunpFF9DgqCqOIra3bc1HWrJ37Ww=="},"engines":{"node":">=0.4.x"}},"formdata-polyfill@4.0.10":{"resolution":{"integrity":"sha512-buewHzMvYL29jdeQTVILecSaZKnt/RJWjoZCF5OW60Z67/GmSLBkOFM7qh1PI3zFNtJbaZL5eQu1vLfazOwj4g=="},"engines":{"node":">=12.20.0"}},"front-matter@4.0.2":{"resolution":{"integrity":"sha512-I8ZuJ/qG92NWX8i5x1Y8qyj3vizhXS31OxjKDu3LKP+7/qBgfIKValiZIEwoVoJKUHlhWtYrktkxV1XsX+pPlg=="}},"fs-constants@1.0.0":{"resolution":{"integrity":"sha512-y6OAwoSIf7FyjMIv94u+b5rdheZEjzR63GTyZJm5qh4Bi+2YgwLCcI/fPFZkL5PSixOt6ZNKm+w+Hfp/Bciwow=="}},"fs-extra@11.3.1":{"resolution":{"integrity":"sha512-eXvGGwZ5CL17ZSwHWd3bbgk7UUpF6IFHtP57NYYakPvHOs8GDgDe5KJI36jIJzDkJ6eJjuzRA8eBQb6SkKue0g=="},"engines":{"node":">=14.14"}},"fs-extra@7.0.1":{"resolution":{"integrity":"sha512-YJDaCJZEnBmcbw13fvdAM9AwNOJwOzrE4pqMqBq5nFiEqXUqHwlK4B+3pUw6JNvfSPtX05xFHtYy/1ni01eGCw=="},"engines":{"node":">=6 <7 || >=8"}},"fs-extra@8.1.0":{"resolution":{"integrity":"sha512-yhlQgA6mnOJUKOsRUFsgJdQCvkKhcz8tlZG5HBQfReYZy46OwLcY+Zia0mtdHsOo9y/hP+CxMN0TU9QxoOtG4g=="},"engines":{"node":">=6 <7 || >=8"}},"fs-minipass@2.1.0":{"resolution":{"integrity":"sha512-V/JgOLFCS+R6Vcq0slCuaeWEdNC3ouDlJMNIsacH2VtALiu9mV4LPrHc5cDl8k5aw6J8jwgWWpiTo5RYhmIzvg=="},"engines":{"node":">= 8"}},"fsevents@2.3.2":{"resolution":{"integrity":"sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA=="},"engines":{"node":"^8.16.0 || ^10.6.0 || >=11.0.0"},"os":["darwin"]},"fsevents@2.3.3":{"resolution":{"integrity":"sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw=="},"engines":{"node":"^8.16.0 || ^10.6.0 || 
>=11.0.0"},"os":["darwin"]},"function-bind@1.1.2":{"resolution":{"integrity":"sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA=="}},"geckodriver@4.5.1":{"resolution":{"integrity":"sha512-lGCRqPMuzbRNDWJOQcUqhNqPvNsIFu6yzXF8J/6K3WCYFd2r5ckbeF7h1cxsnjA7YLSEiWzERCt6/gjZ3tW0ug=="},"engines":{"node":"^16.13 || >=18 || >=20"},"hasBin":true},"gensync@1.0.0-beta.2":{"resolution":{"integrity":"sha512-3hN7NaskYvMDLQY55gnW3NQ+mesEAepTqlg+VEbj7zzqEMBVNhzcGYYeqFo/TlYz6eQiFcp1HcsCZO+nGgS8zg=="},"engines":{"node":">=6.9.0"}},"get-caller-file@2.0.5":{"resolution":{"integrity":"sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg=="},"engines":{"node":"6.* || 8.* || >= 10.*"}},"get-intrinsic@1.3.0":{"resolution":{"integrity":"sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ=="},"engines":{"node":">= 0.4"}},"get-port@7.2.0":{"resolution":{"integrity":"sha512-afP4W205ONCuMoPBqcR6PSXnzX35KTcJygfJfcp+QY+uwm3p20p1YczWXhlICIzGMCxYBQcySEcOgsJcrkyobg=="},"engines":{"node":">=16"}},"get-proto@1.0.1":{"resolution":{"integrity":"sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g=="},"engines":{"node":">= 0.4"}},"get-stream@5.2.0":{"resolution":{"integrity":"sha512-nBF+F1rAZVCu/p7rjzgA+Yb4lfYXrpl7a6VmJrU8wF9I1CKvP/QwPNZHnOlwbTkY6dvtFIzFMSyQXbLoTQPRpA=="},"engines":{"node":">=8"}},"get-tsconfig@4.14.0":{"resolution":{"integrity":"sha512-yTb+8DXzDREzgvYmh6s9vHsSVCHeC0G3PI5bEXNBHtmshPnO+S5O7qgLEOn0I5QvMy6kpZN8K1NKGyilLb93wA=="}},"get-uri@6.0.5":{"resolution":{"integrity":"sha512-b1O07XYq8eRuVzBNgJLstU6FYc1tS6wnMtF1I1D9lE8LxZSOGZ7LhxN54yPP6mGw5f2CkXY2BQUL9Fx41qvcIg=="},"engines":{"node":">= 14"}},"github-from-package@0.0.0":{"resolution":{"integrity":"sha512-SyHy3T1v2NUXn29OsWdxmK6RwHD+vkj3v8en8AOBZ1wBQ/hCAQ5bAQTD02kW4W9tUp/3Qh6J8r9EvntiyCmOOw=="}},"github-slugger@2.0.0":{"resolution":{"integrity":"sha512-IaOQ9puYtjrkq7Y0Ygl9KDZnrf/aiUJYUpVf89y8kyaxbRG7Y1SrX/jaumrv81vc61+kiMempujsM3Yw7w5qcw=="}},"glob-parent@5.1.2":{"resolution":{"integrity":"sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow=="},"engines":{"node":">= 6"}},"glob-to-regexp@0.4.1":{"resolution":{"integrity":"sha512-lkX1HJXwyMcprw/5YUZc2s7DrpAiHB21/V+E1rHUrVNokkvB6bqMzT0VfV6/86ZNabt1k14YOIaT7nDvOX3Iiw=="}},"glob@10.4.5":{"resolution":{"integrity":"sha512-7Bv8RF0k6xjo7d4A/PxYLbUCfb6c+Vpd2/mB2yRDlew7Jb5hEXiCD9ibfO7wpk8i4sevK6DFny9h7EYbM3/sHg=="},"deprecated":"Old versions of glob are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me","hasBin":true},"glob@10.5.0":{"resolution":{"integrity":"sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg=="},"deprecated":"Old versions of glob are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. 
Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me","hasBin":true},"globals@15.15.0":{"resolution":{"integrity":"sha512-7ACyT3wmyp3I61S4fG682L0VA2RGD9otkqGJIwNUMF1SWUombIIk+af1unuDYgMm082aHYwD+mzJvv9Iu8dsgg=="},"engines":{"node":">=18"}},"globby@11.1.0":{"resolution":{"integrity":"sha512-jhIXaOzy1sb8IyocaruWSn1TjmnBVs8Ayhcy83rmxNJ8q2uWKCAj3CnJY+KpGSXCueAPc0i05kVvVKtP1t9S3g=="},"engines":{"node":">=10"}},"gopd@1.2.0":{"resolution":{"integrity":"sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg=="},"engines":{"node":">= 0.4"}},"graceful-fs@4.2.11":{"resolution":{"integrity":"sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ=="}},"grapheme-splitter@1.0.4":{"resolution":{"integrity":"sha512-bzh50DW9kTPM00T8y4o8vQg89Di9oLJVLW/KaOGIXJWP/iqCN6WKYkbNOF04vFLJhwcpYUh9ydh/+5vpOqV4YQ=="}},"graphql@16.14.0":{"resolution":{"integrity":"sha512-BBvQ/406p+4CZbTpCbVPSxfzrZrbnuWSP1ELYgyS6B+hNeKzgrdB4JczCa5VZUBQrDa9hUngm0KnexY6pJRN5Q=="},"engines":{"node":"^12.22.0 || ^14.16.0 || ^16.0.0 || >=17.0.0"}},"h3@2.0.1-rc.20":{"resolution":{"integrity":"sha512-28ljodXuUp0fZovdiSRq4G9OgrxCztrJe5VdYzXAB7ueRvI7pIUqLU14Xi3XqdYJ/khXjfpUOOD2EQa6CmBgsg=="},"engines":{"node":">=20.11.1"},"hasBin":true,"peerDependencies":{"crossws":"^0.4.1"},"peerDependenciesMeta":{"crossws":{"optional":true}}},"hachure-fill@0.5.2":{"resolution":{"integrity":"sha512-3GKBOn+m2LX9iq+JC1064cSFprJY4jL1jCXTcpnfER5HYE2l/4EfWSGzkPa/ZDBmYI0ZOEj5VHV/eKnPGkHuOg=="}},"happy-dom@18.0.1":{"resolution":{"integrity":"sha512-qn+rKOW7KWpVTtgIUi6RVmTBZJSe2k0Db0vh1f7CWrWclkkc7/Q+FrOfkZIb2eiErLyqu5AXEzE7XthO9JVxRA=="},"engines":{"node":">=20.0.0"}},"has-flag@4.0.0":{"resolution":{"integrity":"sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ=="},"engines":{"node":">=8"}},"has-symbols@1.1.0":{"resolution":{"integrity":"sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ=="},"engines":{"node":">= 0.4"}},"has-tostringtag@1.0.2":{"resolution":{"integrity":"sha512-NqADB8VjPFLM2V0VvHUewwwsw0ZWBaIdgo+ieHtK3hasLz4qeCRjYcqfB6AQrBggRKppKF8L52/VqdVsO47Dlw=="},"engines":{"node":">= 0.4"}},"hasown@2.0.2":{"resolution":{"integrity":"sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ=="},"engines":{"node":">= 
0.4"}},"hast-util-embedded@3.0.0":{"resolution":{"integrity":"sha512-naH8sld4Pe2ep03qqULEtvYr7EjrLK2QHY8KJR6RJkTUjPGObe1vnx585uzem2hGra+s1q08DZZpfgDVYRbaXA=="}},"hast-util-from-html@2.0.3":{"resolution":{"integrity":"sha512-CUSRHXyKjzHov8yKsQjGOElXy/3EKpyX56ELnkHH34vDVw1N1XSQ1ZcAvTyAPtGqLTuKP/uxM+aLkSPqF/EtMw=="}},"hast-util-from-parse5@8.0.3":{"resolution":{"integrity":"sha512-3kxEVkEKt0zvcZ3hCRYI8rqrgwtlIOFMWkbclACvjlDw8Li9S2hk/d51OI0nr/gIpdMHNepwgOKqZ/sy0Clpyg=="}},"hast-util-has-property@3.0.0":{"resolution":{"integrity":"sha512-MNilsvEKLFpV604hwfhVStK0usFY/QmM5zX16bo7EjnAEGofr5YyI37kzopBlZJkHD4t887i+q/C8/tr5Q94cA=="}},"hast-util-heading-rank@3.0.0":{"resolution":{"integrity":"sha512-EJKb8oMUXVHcWZTDepnr+WNbfnXKFNf9duMesmr4S8SXTJBJ9M4Yok08pu9vxdJwdlGRhVumk9mEhkEvKGifwA=="}},"hast-util-is-body-ok-link@3.0.1":{"resolution":{"integrity":"sha512-0qpnzOBLztXHbHQenVB8uNuxTnm/QBFUOmdOSsEn7GnBtyY07+ENTWVFBAnXd/zEgd9/SUG3lRY7hSIBWRgGpQ=="}},"hast-util-is-element@3.0.0":{"resolution":{"integrity":"sha512-Val9mnv2IWpLbNPqc/pUem+a7Ipj2aHacCwgNfTiK0vJKl0LF+4Ba4+v1oPHFpf3bLYmreq0/l3Gud9S5OH42g=="}},"hast-util-minify-whitespace@1.0.1":{"resolution":{"integrity":"sha512-L96fPOVpnclQE0xzdWb/D12VT5FabA7SnZOUMtL1DbXmYiHJMXZvFkIZfiMmTCNJHUeO2K9UYNXoVyfz+QHuOw=="}},"hast-util-parse-selector@4.0.0":{"resolution":{"integrity":"sha512-wkQCkSYoOGCRKERFWcxMVMOcYE2K1AaNLU8DXS9arxnLOUEWbOXKXiJUNzEpqZ3JOKpnha3jkFrumEjVliDe7A=="}},"hast-util-phrasing@3.0.1":{"resolution":{"integrity":"sha512-6h60VfI3uBQUxHqTyMymMZnEbNl1XmEGtOxxKYL7stY2o601COo62AWAYBQR9lZbYXYSBoxag8UpPRXK+9fqSQ=="}},"hast-util-raw@9.1.0":{"resolution":{"integrity":"sha512-Y8/SBAHkZGoNkpzqqfCldijcuUKh7/su31kEBp67cFY09Wy0mTRgtsLYsiIxMJxlu0f6AA5SUTbDR8K0rxnbUw=="}},"hast-util-sanitize@5.0.2":{"resolution":{"integrity":"sha512-3yTWghByc50aGS7JlGhk61SPenfE/p1oaFeNwkOOyrscaOkMGrcW9+Cy/QAIOBpZxP1yqDIzFMR0+Np0i0+usg=="}},"hast-util-to-html@9.0.5":{"resolution":{"integrity":"sha512-OguPdidb+fbHQSU4Q4ZiLKnzWo8Wwsf5bZfbvu7//a9oTYoqD/fWpe96NuHkoS9h0ccGOTe0C4NGXdtS0iObOw=="}},"hast-util-to-mdast@10.1.2":{"resolution":{"integrity":"sha512-FiCRI7NmOvM4y+f5w32jPRzcxDIz+PUqDwEqn1A+1q2cdp3B8Gx7aVrXORdOKjMNDQsD1ogOr896+0jJHW1EFQ=="}},"hast-util-to-parse5@8.0.0":{"resolution":{"integrity":"sha512-3KKrV5ZVI8if87DVSi1vDeByYrkGzg4mEfeu4alwgmmIeARiBLKCZS2uw5Gb6nU9x9Yufyj3iudm6i7nl52PFw=="}},"hast-util-to-string@3.0.1":{"resolution":{"integrity":"sha512-XelQVTDWvqcl3axRfI0xSeoVKzyIFPwsAGSLIsKdJKQMXDYJS4WYrBNF/8J7RdhIcFI2BOHgAifggsvsxp/3+A=="}},"hast-util-to-text@4.0.2":{"resolution":{"integrity":"sha512-KK6y/BN8lbaq654j7JgBydev7wuNMcID54lkRav1P0CaE1e47P72AWWPiGKXTJU271ooYzcvTAn/Zt0REnvc7A=="}},"hast-util-whitespace@3.0.0":{"resolution":{"integrity":"sha512-88JUN06ipLwsnv+dVn+OIYOvAuvBMy/Qoi6O7mQHxdPXpjy+Cd6xRkWwux7DKO+4sYILtLBRIKgsdpS2gQc7qw=="}},"hastscript@9.0.1":{"resolution":{"integrity":"sha512-g7df9rMFX/SPi34tyGCyUBREQoKkapwdY/T04Qn9TDWfHhAYt4/I0gMVirzK5wEzeUqIjEB+LXC/ypb7Aqno5w=="}},"headers-polyfill@4.0.3":{"resolution":{"integrity":"sha512-IScLbePpkvO846sIwOtOTDjutRMWdXdJmXdMvk6gCBHxFO8d+QKOQedyZSxFTTFYRSmlgSTDtXqqq4pcenBXLQ=="}},"highlight.js@11.11.1":{"resolution":{"integrity":"sha512-Xwwo44whKBVCYoliBQwaPvtd/2tYFkRQtXDWj1nackaV2JPXx3L0+Jvd8/qCJ2p+ML0/XVkJ2q+Mr+UVdpJK5w=="},"engines":{"node":">=12.0.0"}},"html-encoding-sniffer@4.0.0":{"resolution":{"integrity":"sha512-Y22oTqIU4uuPgEemfz7NDJz6OeKf12Lsu+QC+s3BVpda64lTiMYCyGwg5ki4vFxkMwQdeZDl2adZoqUgdFuTgQ=="},"engines":{"node":">=18"}},"html-escaper@2.0.2":{"resolution":{"integrity":"sha512-H2iMtd0I4Mt5eYiapRdIDjp+X
zelXQ0tFE4JS7YFwFevXXMmOp9myNrUvCg0D6ws8iqkRPBfKHgbwig1SmlLfg=="}},"html-void-elements@3.0.0":{"resolution":{"integrity":"sha512-bEqo66MRXsUGxWHV5IP0PUiAWwoEjba4VCzg0LjFJBpchPaTfyfCKTG6bc5F8ucKec3q5y6qOdGyYTSBEvhCrg=="}},"htmlfy@0.3.2":{"resolution":{"integrity":"sha512-FsxzfpeDYRqn1emox9VpxMPfGjADoUmmup8D604q497R0VNxiXs4ZZTN2QzkaMA5C9aHGUoe1iQRVSm+HK9xuA=="}},"htmlparser2@10.0.0":{"resolution":{"integrity":"sha512-TwAZM+zE5Tq3lrEHvOlvwgj1XLWQCtaaibSN11Q+gGBAS7Y1uZSWwXXRe4iF6OXnaq1riyQAPFOBtYc77Mxq0g=="}},"htmlparser2@10.1.0":{"resolution":{"integrity":"sha512-VTZkM9GWRAtEpveh7MSF6SjjrpNVNNVJfFup7xTY3UpFtm67foy9HDVXneLtFVt4pMz5kZtgNcvCniNFb1hlEQ=="}},"http-proxy-agent@7.0.2":{"resolution":{"integrity":"sha512-T1gkAiYYDWYx3V5Bmyu7HcfcvL7mUrTWiM6yOfa3PIphViJ/gFPbvidQ+veqSOHci/PxBcDabeUNCzpOODJZig=="},"engines":{"node":">= 14"}},"https-proxy-agent@7.0.2":{"resolution":{"integrity":"sha512-NmLNjm6ucYwtcUmL7JQC1ZQ57LmHP4lT15FQ8D61nak1rO6DH+fz5qNK2Ap5UN4ZapYICE3/0KodcLYSPsPbaA=="},"engines":{"node":">= 14"}},"https-proxy-agent@7.0.6":{"resolution":{"integrity":"sha512-vK9P5/iUfdl95AI+JVyUuIcVtd4ofvtrOr3HNtM2yxC9bnMbEdp3x01OhQNnjb8IJYi38VlTE3mBXwcfvywuSw=="},"engines":{"node":">= 14"}},"human-id@4.1.1":{"resolution":{"integrity":"sha512-3gKm/gCSUipeLsRYZbbdA1BD83lBoWUkZ7G9VFrhWPAU76KwYo5KR8V28bpoPm/ygy0x5/GCbpRQdY7VLYCoIg=="},"hasBin":true},"iconv-lite@0.6.3":{"resolution":{"integrity":"sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw=="},"engines":{"node":">=0.10.0"}},"ieee754@1.2.1":{"resolution":{"integrity":"sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA=="}},"ignore@5.3.2":{"resolution":{"integrity":"sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g=="},"engines":{"node":">= 4"}},"immediate@3.0.6":{"resolution":{"integrity":"sha512-XXOFtyqDjNDAQxVfYxuF7g9Il/IbWmmlQg2MYKOH8ExIT1qg6xc4zyS3HaEEATgs1btfzxq15ciUiY7gjSXRGQ=="}},"immutable@5.1.5":{"resolution":{"integrity":"sha512-t7xcm2siw+hlUM68I+UEOK+z84RzmN59as9DZ7P1l0994DKUWV7UXBMQZVxaoMSRQ+PBZbHCOoBt7a2wxOMt+A=="}},"import-meta-resolve@4.2.0":{"resolution":{"integrity":"sha512-Iqv2fzaTQN28s/FwZAoFq0ZSs/7hMAHJVX+w8PZl3cY19Pxk6jFFalxQoIfW2826i/fDLXv8IiEZRIT0lDuWcg=="}},"inherits@2.0.4":{"resolution":{"integrity":"sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ=="}},"ini@1.3.8":{"resolution":{"integrity":"sha512-JV/yugV2uzW5iMRSiZAyDtQd+nxtUnjeLt0acNdw98kKLrvuRVyB80tsREOE7yvGVgalhZ6RNXCmEHkUKBKxew=="}},"ini@4.1.3":{"resolution":{"integrity":"sha512-X7rqawQBvfdjS10YU1y1YVreA3SsLrW9dX2CewP2EbBJM4ypVNLDkO5y04gejPwKIY9lR+7r9gn3rFPt/kmWFg=="},"engines":{"node":"^14.17.0 || ^16.13.0 || >=18.0.0"}},"internmap@1.0.1":{"resolution":{"integrity":"sha512-lDB5YccMydFBtasVtxnZ3MRBHuaoE8GKsppq+EchKL2U4nK/DmEpPHNH8MZe5HkMtpSiTSOZwfN0tzYjO/lJEw=="}},"internmap@2.0.3":{"resolution":{"integrity":"sha512-5Hh7Y1wQbvY5ooGgPbDaL5iYLAPzMTUrjMulskHLH6wnv/A+1q5rgEaiuqEjB+oxGXIVZs1FF+R/KPN3ZSQYYg=="},"engines":{"node":">=12"}},"ip-address@10.2.0":{"resolution":{"integrity":"sha512-/+S6j4E9AHvW9SWMSEY9Xfy66O5PWvVEJ08O0y5JGyEKQpojb0K0GKpz/v5HJ/G0vi3D2sjGK78119oXZeE0qA=="},"engines":{"node":">= 
12"}},"is-binary-path@2.1.0":{"resolution":{"integrity":"sha512-ZMERYes6pDydyuGidse7OsHxtbI7WVeUEozgR/g7rd0xUimYNlvZRE/K2MgZTjWy725IfelLeVcEM97mmtRGXw=="},"engines":{"node":">=8"}},"is-docker@2.2.1":{"resolution":{"integrity":"sha512-F+i2BKsFrH66iaUFc0woD8sLy8getkwTwtOBjvs56Cx4CgJDeKQeqfz8wAYiSb8JOprWhHH5p77PbmYCvvUuXQ=="},"engines":{"node":">=8"},"hasBin":true},"is-extglob@2.1.1":{"resolution":{"integrity":"sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ=="},"engines":{"node":">=0.10.0"}},"is-fullwidth-code-point@3.0.0":{"resolution":{"integrity":"sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg=="},"engines":{"node":">=8"}},"is-glob@4.0.3":{"resolution":{"integrity":"sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg=="},"engines":{"node":">=0.10.0"}},"is-interactive@1.0.0":{"resolution":{"integrity":"sha512-2HvIEKRoqS62guEC+qBjpvRubdX910WCMuJTZ+I9yvqKU2/12eSL549HMwtabb4oupdj2sMP50k+XJfB/8JE6w=="},"engines":{"node":">=8"}},"is-node-process@1.2.0":{"resolution":{"integrity":"sha512-Vg4o6/fqPxIjtxgUH5QLJhwZ7gW5diGCVlXpuUfELC62CuxM1iHcRe51f2W1FDy04Ai4KJkagKjx3XaqyfRKXw=="}},"is-number@7.0.0":{"resolution":{"integrity":"sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng=="},"engines":{"node":">=0.12.0"}},"is-plain-obj@4.1.0":{"resolution":{"integrity":"sha512-+Pgi+vMuUNkJyExiMBt5IlFoMyKnr5zhJ4Uspz58WOhBF5QoIZkFyNHIbBAtHwzVAgk5RtndVNsDRN61/mmDqg=="},"engines":{"node":">=12"}},"is-potential-custom-element-name@1.0.1":{"resolution":{"integrity":"sha512-bCYeRA2rVibKZd+s2625gGnGF/t7DSqDs4dP7CrLA1m7jKWz6pps0LpYLJN8Q64HtmPKJ1hrN3nzPNKFEKOUiQ=="}},"is-stream@2.0.1":{"resolution":{"integrity":"sha512-hFoiJiTl63nn+kstHGBtewWSKnQLpyb155KHheA1l39uvtO9nWIop1p3udqPcUd/xbF1VLMO4n7OI6p7RbngDg=="},"engines":{"node":">=8"}},"is-subdir@1.2.0":{"resolution":{"integrity":"sha512-2AT6j+gXe/1ueqbW6fLZJiIw3F8iXGJtt0yDrZaBhAZEG1raiTxKWU+IPqMCzQAXOUCKdA4UDMgacKH25XG2Cw=="},"engines":{"node":">=4"}},"is-unicode-supported@0.1.0":{"resolution":{"integrity":"sha512-knxG2q4UC3u8stRGyAVJCOdxFmv5DZiRcdlIaAQXAbSfJya+OhopNotLQrstBhququ4ZpuKbDc/8S6mgXgPFPw=="},"engines":{"node":">=10"}},"is-windows@1.0.2":{"resolution":{"integrity":"sha512-eXK1UInq2bPmjyX6e3VHIzMLobc4J94i4AWn+Hpq3OU5KkrRC96OAcR3PRJ/pGu6m8TRnBHP9dkXQVsT/COVIA=="},"engines":{"node":">=0.10.0"}},"is-wsl@2.2.0":{"resolution":{"integrity":"sha512-fKzAra0rGJUUBwGBgNkHZuToZcn+TtXHpeCgmkMJMMYx1sQDYaCSyjJBSCa2nH1DGm7s3n1oBnohoVTBaN7Lww=="},"engines":{"node":">=8"}},"isarray@1.0.0":{"resolution":{"integrity":"sha512-VLghIWNM6ELQzo7zwmcg0NmTVyWKYjvIeM83yjp0wRDTmUnrM678fQbcKBo6n2CJEF0szoG//ytg+TKla89ALQ=="}},"isbot@5.1.28":{"resolution":{"integrity":"sha512-qrOp4g3xj8YNse4biorv6O5ZShwsJM0trsoda4y7j/Su7ZtTTfVXFzbKkpgcSoDrHS8FcTuUwcU04YimZlZOxw=="},"engines":{"node":">=18"}},"isexe@2.0.0":{"resolution":{"integrity":"sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw=="}},"isexe@3.1.5":{"resolution":{"integrity":"sha512-6B3tLtFqtQS4ekarvLVMZ+X+VlvQekbe4taUkf/rhVO3d/h0M2rfARm/pXLcPEsjjMsFgrFgSrhQIxcSVrBz8w=="},"engines":{"node":">=18"}},"istanbul-lib-coverage@3.2.2":{"resolution":{"integrity":"sha512-O8dpsF+r0WV/8MNRKfnmrtCWhuKjxrq2w+jpzBL5UZKTi2LeVWnWOmWRxFlesJONmc+wLAGvKQZEOanko0LFTg=="},"engines":{"node":">=8"}},"istanbul-lib-report@3.0.1":{"resolution":{"integrity":"sha512-GCfE1mtsHGOELCU8e/Z7YWzpmybrx/+dSTfLrvY8qRmaY6zXTKWn6WQIjaAFw069icm6GVMNkgu0NzI4iPZUN
w=="},"engines":{"node":">=10"}},"istanbul-lib-source-maps@5.0.6":{"resolution":{"integrity":"sha512-yg2d+Em4KizZC5niWhQaIomgf5WlL4vOOjZ5xGCmF8SnPE/mDWWXgvRExdcpCgh9lLRRa1/fSYp2ymmbJ1pI+A=="},"engines":{"node":">=10"}},"istanbul-reports@3.2.0":{"resolution":{"integrity":"sha512-HGYWWS/ehqTV3xN10i23tkPkpH46MLCIMFNCaaKNavAXTF1RkqxawEPtnjnGZ6XKSInBKkiOA5BKS+aZiY3AvA=="},"engines":{"node":">=8"}},"jackspeak@3.4.3":{"resolution":{"integrity":"sha512-OGlZQpz2yfahA/Rd1Y8Cd9SIEsqvXkLVoSw/cgwhnhFMDbsQFeZYoJJ7bIZBS9BcamUW96asq/npPWugM+RQBw=="}},"jest-diff@30.1.1":{"resolution":{"integrity":"sha512-LUU2Gx8EhYxpdzTR6BmjL1ifgOAQJQELTHOiPv9KITaKjZvJ9Jmgigx01tuZ49id37LorpGc9dPBPlXTboXScw=="},"engines":{"node":"^18.14.0 || ^20.0.0 || ^22.0.0 || >=24.0.0"}},"jest-worker@27.5.1":{"resolution":{"integrity":"sha512-7vuh85V5cdDofPyxn58nrPjBktZo0u9x1g8WtjQol+jZDaE+fhN+cIvTj11GndBnMnyfrUOG1sZQxCdjKh+DKg=="},"engines":{"node":">= 10.13.0"}},"jiti@2.6.1":{"resolution":{"integrity":"sha512-ekilCSN1jwRvIbgeg/57YFh8qQDNbwDb9xT/qu2DAHbFFZUicIl4ygVaAvzveMhMVr3LnpSKTNnwt8PoOfmKhQ=="},"hasBin":true},"js-tokens@10.0.0":{"resolution":{"integrity":"sha512-lM/UBzQmfJRo9ABXbPWemivdCW8V2G8FHaHdypQaIy523snUjog0W71ayWXTjiR+ixeMyVHN2XcpnTd/liPg/Q=="}},"js-tokens@4.0.0":{"resolution":{"integrity":"sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ=="}},"js-tokens@9.0.1":{"resolution":{"integrity":"sha512-mxa9E9ITFOt0ban3j6L5MpjwegGz6lBQmM1IJkWeBZGcMxto50+eWdjC/52xDbS2vy0k7vIMK0Fe2wfL9OQSpQ=="}},"js-yaml@3.14.1":{"resolution":{"integrity":"sha512-okMH7OXXJ7YrN9Ok3/SXrnu4iX9yOk+25nqX4imS2npuvTYDmo/QEZoqwZkYaIDk3jVvBOTOIEgEhaLOynBS9g=="},"hasBin":true},"js-yaml@4.1.1":{"resolution":{"integrity":"sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA=="},"hasBin":true},"jsdom@26.1.0":{"resolution":{"integrity":"sha512-Cvc9WUhxSMEo4McES3P7oK3QaXldCfNWp7pl2NNeiIFlCoLr3kfq9kb1fxftiwk1FLV7CvpvDfonxtzUDeSOPg=="},"engines":{"node":">=18"},"peerDependencies":{"canvas":"^3.0.0"},"peerDependenciesMeta":{"canvas":{"optional":true}}},"jsdom@27.3.0":{"resolution":{"integrity":"sha512-GtldT42B8+jefDUC4yUKAvsaOrH7PDHmZxZXNgF2xMmymjUbRYJvpAybZAKEmXDGTM0mCsz8duOa4vTm5AY2Kg=="},"engines":{"node":"^20.19.0 || ^22.12.0 || 
>=24.0.0"},"peerDependencies":{"canvas":"^3.0.0"},"peerDependenciesMeta":{"canvas":{"optional":true}}},"jsesc@3.1.0":{"resolution":{"integrity":"sha512-/sM3dO2FOzXjKQhJuo0Q173wf2KOo8t4I8vHy6lF9poUp7bKT0/NHE8fPX23PwfhnykfqnC2xRxOnVw5XuGIaA=="},"engines":{"node":">=6"},"hasBin":true},"json-parse-even-better-errors@2.3.1":{"resolution":{"integrity":"sha512-xyFwyhro/JEof6Ghe2iz2NcXoj2sloNsWr/XsERDK/oiPCfaNhl5ONfp+jQdAZRQQ0IJWNzH9zIZF7li91kh2w=="}},"json-schema-to-ts@3.1.1":{"resolution":{"integrity":"sha512-+DWg8jCJG2TEnpy7kOm/7/AxaYoaRbjVB4LFZLySZlWn8exGs3A4OLJR966cVvU26N7X9TWxl+Jsw7dzAqKT6g=="},"engines":{"node":">=16"}},"json-schema-traverse@1.0.0":{"resolution":{"integrity":"sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug=="}},"json5@2.2.3":{"resolution":{"integrity":"sha512-XmOWe7eyHYH14cLdVPoyg+GOH3rYX++KpzrylJwSW98t3Nk+U8XOl8FWKOgwtzdb8lXGf6zYwDUzeHMWfxasyg=="},"engines":{"node":">=6"},"hasBin":true},"jsonc-parser@3.2.0":{"resolution":{"integrity":"sha512-gfFQZrcTc8CnKXp6Y4/CBT3fTc0OVuDofpre4aEeEpSBPV5X5v4+Vmx+8snU7RLPrNHPKSgLxGo9YuQzz20o+w=="}},"jsonfile@4.0.0":{"resolution":{"integrity":"sha512-m6F1R3z8jjlf2imQHS2Qez5sjKWQzbuuhuJ/FKYFRZvPE3PuHcSMVZzfsLhGVOkfd20obL5SWEBew5ShlquNxg=="}},"jsonfile@6.2.0":{"resolution":{"integrity":"sha512-FGuPw30AdOIUTRMC2OMRtQV+jkVj2cfPqSeWXv1NEAJ1qZ5zb1X6z1mFhbfOB/iy3ssJCD+3KuZ8r8C3uVFlAg=="}},"jszip@3.10.1":{"resolution":{"integrity":"sha512-xXDvecyTpGLrqFrvkrUSoxxfJI5AH7U8zxxtVclpsUtMCq4JQ290LY8AW5c7Ggnr/Y/oK+bQMbqK2qmtk3pN4g=="}},"katex@0.16.22":{"resolution":{"integrity":"sha512-XCHRdUw4lf3SKBaJe4EvgqIuWwkPSo9XoeO8GjQW94Bp7TWv9hNhzZjZ+OH9yf1UmLygb7DIT5GSFQiyt16zYg=="},"hasBin":true},"khroma@2.1.0":{"resolution":{"integrity":"sha512-Ls993zuzfayK269Svk9hzpeGUKob/sIgZzyHYdjQoAdQetRKpOLj+k/QQQ/6Qi0Yz65mlROrfd+Ev+1+7dz9Kw=="}},"kleur@4.1.5":{"resolution":{"integrity":"sha512-o+NO+8WrRiQEE4/7nwRJhN1HWpVmJm511pBHUxPLtp0BUISzlBplORYSmTclCnJvQq2tKu/sgl3xVpkc7ZWuQQ=="},"engines":{"node":">=6"}},"kolorist@1.8.0":{"resolution":{"integrity":"sha512-Y+60/zizpJ3HRH8DCss+q95yr6145JXZo46OTpFvDZWLfRCE4qChOyk1b26nMaNpfHHgxagk9dXT5OP0Tfe+dQ=="}},"kysely@0.28.7":{"resolution":{"integrity":"sha512-u/cAuTL4DRIiO2/g4vNGRgklEKNIj5Q3CG7RoUB5DV5SfEC2hMvPxKi0GWPmnzwL2ryIeud2VTcEEmqzTzEPNw=="},"engines":{"node":">=20.0.0"}},"langium@3.3.1":{"resolution":{"integrity":"sha512-QJv/h939gDpvT+9SiLVlY7tZC3xB2qK57v0J04Sh9wpMb6MP1q8gB21L3WIo8T5P1MSMg3Ep14L7KkDCFG3y4w=="},"engines":{"node":">=16.0.0"}},"layout-base@1.0.2":{"resolution":{"integrity":"sha512-8h2oVEZNktL4BH2JCOI90iD1yXwL6iNW7KcCKT2QZgQJR2vbqDsldCTPRU9NifTCqHZci57XvQQ15YTu+sTYPg=="}},"layout-base@2.0.1":{"resolution":{"integrity":"sha512-dp3s92+uNI1hWIpPGH3jK2kxE2lMjdXdr+DH8ynZHpd6PUlH6x6cbuXnoMmiNumznqaNO31xu9e79F0uuZ0JFg=="}},"lazystream@1.0.1":{"resolution":{"integrity":"sha512-b94GiNHQNy6JNTrt5w6zNyffMrNkXZb3KTkCZJb2V1xaEGCk093vkZ2jk3tpaeP33/OiXC+WvK9AxUebnf5nbw=="},"engines":{"node":">= 0.6.3"}},"lie@3.3.0":{"resolution":{"integrity":"sha512-UaiMJzeWRlEujzAuw5LokY1L5ecNQYZKfmyZ9L7wDHb/p5etKaxXhohBcrw0EYby+G/NA52vRSN4N39dxHAIwQ=="}},"lightningcss-android-arm64@1.32.0":{"resolution":{"integrity":"sha512-YK7/ClTt4kAK0vo6w3X+Pnm0D2cf2vPHbhOXdoNti1Ga0al1P4TBZhwjATvjNwLEBCnKvjJc2jQgHXH0NEwlAg=="},"engines":{"node":">= 12.0.0"},"cpu":["arm64"],"os":["android"]},"lightningcss-darwin-arm64@1.32.0":{"resolution":{"integrity":"sha512-RzeG9Ju5bag2Bv1/lwlVJvBE3q6TtXskdZLLCyfg5pt+HLz9BqlICO7LZM7VHNTTn/5PRhHFBSjk5lc4cmscPQ=="},"engines":{"node":">= 
12.0.0"},"cpu":["arm64"],"os":["darwin"]},"lightningcss-darwin-x64@1.32.0":{"resolution":{"integrity":"sha512-U+QsBp2m/s2wqpUYT/6wnlagdZbtZdndSmut/NJqlCcMLTWp5muCrID+K5UJ6jqD2BFshejCYXniPDbNh73V8w=="},"engines":{"node":">= 12.0.0"},"cpu":["x64"],"os":["darwin"]},"lightningcss-freebsd-x64@1.32.0":{"resolution":{"integrity":"sha512-JCTigedEksZk3tHTTthnMdVfGf61Fky8Ji2E4YjUTEQX14xiy/lTzXnu1vwiZe3bYe0q+SpsSH/CTeDXK6WHig=="},"engines":{"node":">= 12.0.0"},"cpu":["x64"],"os":["freebsd"]},"lightningcss-linux-arm-gnueabihf@1.32.0":{"resolution":{"integrity":"sha512-x6rnnpRa2GL0zQOkt6rts3YDPzduLpWvwAF6EMhXFVZXD4tPrBkEFqzGowzCsIWsPjqSK+tyNEODUBXeeVHSkw=="},"engines":{"node":">= 12.0.0"},"cpu":["arm"],"os":["linux"]},"lightningcss-linux-arm64-gnu@1.32.0":{"resolution":{"integrity":"sha512-0nnMyoyOLRJXfbMOilaSRcLH3Jw5z9HDNGfT/gwCPgaDjnx0i8w7vBzFLFR1f6CMLKF8gVbebmkUN3fa/kQJpQ=="},"engines":{"node":">= 12.0.0"},"cpu":["arm64"],"os":["linux"]},"lightningcss-linux-arm64-musl@1.32.0":{"resolution":{"integrity":"sha512-UpQkoenr4UJEzgVIYpI80lDFvRmPVg6oqboNHfoH4CQIfNA+HOrZ7Mo7KZP02dC6LjghPQJeBsvXhJod/wnIBg=="},"engines":{"node":">= 12.0.0"},"cpu":["arm64"],"os":["linux"]},"lightningcss-linux-x64-gnu@1.32.0":{"resolution":{"integrity":"sha512-V7Qr52IhZmdKPVr+Vtw8o+WLsQJYCTd8loIfpDaMRWGUZfBOYEJeyJIkqGIDMZPwPx24pUMfwSxxI8phr/MbOA=="},"engines":{"node":">= 12.0.0"},"cpu":["x64"],"os":["linux"]},"lightningcss-linux-x64-musl@1.32.0":{"resolution":{"integrity":"sha512-bYcLp+Vb0awsiXg/80uCRezCYHNg1/l3mt0gzHnWV9XP1W5sKa5/TCdGWaR/zBM2PeF/HbsQv/j2URNOiVuxWg=="},"engines":{"node":">= 12.0.0"},"cpu":["x64"],"os":["linux"]},"lightningcss-win32-arm64-msvc@1.32.0":{"resolution":{"integrity":"sha512-8SbC8BR40pS6baCM8sbtYDSwEVQd4JlFTOlaD3gWGHfThTcABnNDBda6eTZeqbofalIJhFx0qKzgHJmcPTnGdw=="},"engines":{"node":">= 12.0.0"},"cpu":["arm64"],"os":["win32"]},"lightningcss-win32-x64-msvc@1.32.0":{"resolution":{"integrity":"sha512-Amq9B/SoZYdDi1kFrojnoqPLxYhQ4Wo5XiL8EVJrVsB8ARoC1PWW6VGtT0WKCemjy8aC+louJnjS7U18x3b06Q=="},"engines":{"node":">= 12.0.0"},"cpu":["x64"],"os":["win32"]},"lightningcss@1.32.0":{"resolution":{"integrity":"sha512-NXYBzinNrblfraPGyrbPoD19C1h9lfI/1mzgWYvXUTe414Gz/X1FD2XBZSZM7rRTrMA8JL3OtAaGifrIKhQ5yQ=="},"engines":{"node":">= 12.0.0"}},"lines-and-columns@2.0.3":{"resolution":{"integrity":"sha512-cNOjgCnLB+FnvWWtyRTzmB3POJ+cXxTA81LoW7u8JdmhfXzriropYwpjShnz1QLLWsQwY7nIxoDmcPTwphDK9w=="},"engines":{"node":"^12.20.0 || ^14.13.1 || 
>=16.0.0"}},"loader-runner@4.3.2":{"resolution":{"integrity":"sha512-DFEqQ3ihfS9blba08cLfYf1NRAIEm+dDjic073DRDc3/JspI/8wYmtDsHwd3+4hwvdxSK7PGaElfTmm0awWJ4w=="},"engines":{"node":">=6.11.5"}},"local-pkg@1.1.2":{"resolution":{"integrity":"sha512-arhlxbFRmoQHl33a0Zkle/YWlmNwoyt6QNZEIJcqNbdrsix5Lvc4HyyI3EnwxTYlZYc32EbYrQ8SzEZ7dqgg9A=="},"engines":{"node":">=14"}},"locate-app@2.5.0":{"resolution":{"integrity":"sha512-xIqbzPMBYArJRmPGUZD9CzV9wOqmVtQnaAn3wrj3s6WYW0bQvPI7x+sPYUGmDTYMHefVK//zc6HEYZ1qnxIK+Q=="}},"locate-path@5.0.0":{"resolution":{"integrity":"sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g=="},"engines":{"node":">=8"}},"lodash-es@4.17.21":{"resolution":{"integrity":"sha512-mKnC+QJ9pWVzv+C4/U3rRsHapFfHvQFoFB92e52xeyGMcX6/OlIl78je1u8vePzYZSkkogMPJ2yjxxsb89cxyw=="}},"lodash.clonedeep@4.5.0":{"resolution":{"integrity":"sha512-H5ZhCF25riFd9uB5UCkVKo61m3S/xZk1x4wA6yp/L3RFP6Z/eHH1ymQcGLo7J3GMPfm0V/7m1tryHuGVxpqEBQ=="}},"lodash.startcase@4.4.0":{"resolution":{"integrity":"sha512-+WKqsK294HMSc2jEbNgpHpd0JfIBhp7rEV4aqXWqFr6AlXov+SlcgB1Fv01y2kGe3Gc8nMW7VA0SrGuSkRfIEg=="}},"lodash.zip@4.2.0":{"resolution":{"integrity":"sha512-C7IOaBBK/0gMORRBd8OETNx3kmOkgIWIPvyDpZSCTwUrpYmgZwJkjZeOD8ww4xbOUOs4/attY+pciKvadNfFbg=="}},"lodash@4.18.1":{"resolution":{"integrity":"sha512-dMInicTPVE8d1e5otfwmmjlxkZoUpiVLwyeTdUsi/Caj/gfzzblBcCE5sRHV/AsjuCmxWrte2TNGSYuCeCq+0Q=="}},"log-symbols@4.1.0":{"resolution":{"integrity":"sha512-8XPvpAA8uyhfteu8pIvQxpJZ7SYYdpUivZpGy6sFsBuKRY/7rQGavedeB8aK+Zkyq6upMFVL/9AW6vOYzfRyLg=="},"engines":{"node":">=10"}},"loglevel-plugin-prefix@0.8.4":{"resolution":{"integrity":"sha512-WpG9CcFAOjz/FtNht+QJeGpvVl/cdR6P0z6OcXSkr8wFJOsV2GRj2j10JLfjuA4aYkcKCNIEqRGCyTife9R8/g=="}},"loglevel@1.9.2":{"resolution":{"integrity":"sha512-HgMmCqIJSAKqo68l0rS2AanEWfkxaZ5wNiEFb5ggm08lDs9Xl2KxBlX3PTcaD2chBM1gXAYf491/M2Rv8Jwayg=="},"engines":{"node":">= 0.6.0"}},"long@5.3.2":{"resolution":{"integrity":"sha512-mNAgZ1GmyNhD7AuqnTG3/VQ26o760+ZYBPKjPvugO8+nLbYfX6TVpJPseBvopbdY+qpZ/lKUnmEc1LeZYS3QAA=="}},"longest-streak@3.1.0":{"resolution":{"integrity":"sha512-9Ri+o0JYgehTaVBBDoMqIl8GXtbWg711O3srftcHhZ0dqnETqLaoIK0x17fUw9rFSlK/0NlsKe0Ahhyl5pXE2g=="}},"loupe@3.2.1":{"resolution":{"integrity":"sha512-CdzqowRJCeLU72bHvWqwRBBlLcMEtIvGrlvef74kMnV2AolS9Y8xUv1I0U/MNAWMhBlKIoyuEgoJ0t/bbwHbLQ=="}},"lowlight@3.3.0":{"resolution":{"integrity":"sha512-0JNhgFoPvP6U6lE/UdVsSq99tn6DhjjpAj5MxG49ewd2mOBVtwWYIT8ClyABhq198aXXODMU6Ox8DrGy/CpTZQ=="}},"lru-cache@10.4.3":{"resolution":{"integrity":"sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ=="}},"lru-cache@11.2.4":{"resolution":{"integrity":"sha512-B5Y16Jr9LB9dHVkh6ZevG+vAbOsNOYCX+sXvFWFu7B3Iz5mijW3zdbMyhsh8ANd2mSWBYdJgnqi+mL7/LrOPYg=="},"engines":{"node":"20 || >=22"}},"lru-cache@5.1.1":{"resolution":{"integrity":"sha512-KpNARQA3Iwv+jTA0utUVVbrh+Jlrr1Fv0e56GGzAFOXN7dk/FviaDW8LHmK52DlcH4WP2n6gI8vN1aesBFgo9w=="}},"lru-cache@7.18.3":{"resolution":{"integrity":"sha512-jumlc0BIUrS3qJGgIkWZsyfAM7NCWiBcCDhnd+3NNM5KbBmLTgHVfWBcg6W+rLUsIpzpERPsvwUP7CckAQSOoA=="},"engines":{"node":">=12"}},"lucide-react@0.544.0":{"resolution":{"integrity":"sha512-t5tS44bqd825zAW45UQxpG2CvcC4urOwn2TrwSH8u+MjeE+1NnWl6QqeQ/6NdjMqdOygyiT9p3Ev0p1NJykxjw=="},"peerDependencies":{"react":"^16.5.1 || ^17.0.0 || ^18.0.0 || 
^19.0.0"}},"lz-string@1.5.0":{"resolution":{"integrity":"sha512-h5bgJWpxJNswbU7qCrV0tIKQCaS3blPDrqKWx+QxzuzL1zGUzij9XCWLrSLsJPu5t+eWA/ycetzYAO5IOMcWAQ=="},"hasBin":true},"magic-string@0.30.18":{"resolution":{"integrity":"sha512-yi8swmWbO17qHhwIBNeeZxTceJMeBvWJaId6dyvTSOwTipqeHhMhOrz6513r1sOKnpvQ7zkhlG8tPrpilwTxHQ=="}},"magic-string@0.30.21":{"resolution":{"integrity":"sha512-vd2F4YUyEXKGcLHoq+TEyCjxueSeHnFxyyjNp80yg0XV4vUhnDer/lvvlqM/arB5bXQN5K2/3oinyCRyx8T2CQ=="}},"magicast@0.3.5":{"resolution":{"integrity":"sha512-L0WhttDl+2BOsybvEOLK7fW3UA0OQ0IQ2d6Zl2x/a6vVRs3bAY0ECOSHHeL5jD+SbOpOCUEi0y1DgHEn9Qn1AQ=="}},"magicast@0.5.2":{"resolution":{"integrity":"sha512-E3ZJh4J3S9KfwdjZhe2afj6R9lGIN5Pher1pF39UGrXRqq/VDaGVIGN13BjHd2u8B61hArAGOnso7nBOouW3TQ=="}},"make-dir@4.0.0":{"resolution":{"integrity":"sha512-hXdUTZYIVOt1Ex//jAQi+wTZZpUpwBj/0QsOzqegb3rGMMeJiSEu5xLHnYfBrRV4RH2+OCSOO95Is/7x1WJ4bw=="},"engines":{"node":">=10"}},"markdown-table@3.0.4":{"resolution":{"integrity":"sha512-wiYz4+JrLyb/DqW2hkFJxP7Vd7JuTDm77fvbM8VfEQdmSMqcImWeeRbHwZjBjIFki/VaMK2BhFi7oUUZeM5bqw=="}},"marked@16.4.2":{"resolution":{"integrity":"sha512-TI3V8YYWvkVf3KJe1dRkpnjs68JUPyEa5vjKrp1XEEJUAOaQc+Qj+L1qWbPd0SJuAdQkFU0h73sXXqwDYxsiDA=="},"engines":{"node":">= 20"},"hasBin":true},"math-intrinsics@1.1.0":{"resolution":{"integrity":"sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g=="},"engines":{"node":">= 0.4"}},"mdast-util-find-and-replace@3.0.2":{"resolution":{"integrity":"sha512-Tmd1Vg/m3Xz43afeNxDIhWRtFZgM2VLyaf4vSTYwudTyeuTneoL3qtWMA5jeLyz/O1vDJmmV4QuScFCA2tBPwg=="}},"mdast-util-from-markdown@2.0.2":{"resolution":{"integrity":"sha512-uZhTV/8NBuw0WHkPTrCqDOl0zVe1BIng5ZtHoDk49ME1qqcjYmmLmOf0gELgcRMxN4w2iuIeVso5/6QymSrgmA=="}},"mdast-util-frontmatter@2.0.1":{"resolution":{"integrity":"sha512-LRqI9+wdgC25P0URIJY9vwocIzCcksduHQ9OF2joxQoyTNVduwLAFUzjoopuRJbJAReaKrNQKAZKL3uCMugWJA=="}},"mdast-util-gfm-autolink-literal@2.0.1":{"resolution":{"integrity":"sha512-5HVP2MKaP6L+G6YaxPNjuL0BPrq9orG3TsrZ9YXbA3vDw/ACI4MEsnoDpn6ZNm7GnZgtAcONJyPhOP8tNJQavQ=="}},"mdast-util-gfm-footnote@2.0.0":{"resolution":{"integrity":"sha512-5jOT2boTSVkMnQ7LTrd6n/18kqwjmuYqo7JUPe+tRCY6O7dAuTFMtTPauYYrMPpox9hlN0uOx/FL8XvEfG9/mQ=="}},"mdast-util-gfm-strikethrough@2.0.0":{"resolution":{"integrity":"sha512-mKKb915TF+OC5ptj5bJ7WFRPdYtuHv0yTRxK2tJvi+BDqbkiG7h7u/9SI89nRAYcmap2xHQL9D+QG/6wSrTtXg=="}},"mdast-util-gfm-table@2.0.0":{"resolution":{"integrity":"sha512-78UEvebzz/rJIxLvE7ZtDd/vIQ0RHv+3Mh5DR96p7cS7HsBhYIICDBCu8csTNWNO6tBWfqXPWekRuj2FNOGOZg=="}},"mdast-util-gfm-task-list-item@2.0.0":{"resolution":{"integrity":"sha512-IrtvNvjxC1o06taBAVJznEnkiHxLFTzgonUdy8hzFVeDun0uTjxxrRGVaNFqkU1wJR3RBPEfsxmU6jDWPofrTQ=="}},"mdast-util-gfm@3.0.0":{"resolution":{"integrity":"sha512-dgQEX5Amaq+DuUqf26jJqSK9qgixgd6rYDHAv4aTBuA92cTknZlKpPfa86Z/s8Dj8xsAQpFfBmPUHWJBWqS4Bw=="}},"mdast-util-phrasing@4.1.0":{"resolution":{"integrity":"sha512-TqICwyvJJpBwvGAMZjj4J2n0X8QWp21b9l0o7eXyVJ25YNWYbJDVIyD1bZXE6WtV6RmKJVYmQAKWa0zWOABz2w=="}},"mdast-util-to-hast@13.2.0":{"resolution":{"integrity":"sha512-QGYKEuUsYT9ykKBCMOEDLsU5JRObWQusAolFMeko/tYPufNkRffBAQjIE+99jbA87xv6FgmjLtwjh9wBWajwAA=="}},"mdast-util-to-markdown@2.1.2":{"resolution":{"integrity":"sha512-xj68wMTvGXVOKonmog6LwyJKrYXZPvlwabaryTjLh9LuvovB/KAH+kvi8Gjj+7rJjsFi23nkUxRQv1KqSroMqA=="}},"mdast-util-to-string@4.0.0":{"resolution":{"integrity":"sha512-0H44vDimn51F0YwvxSJSm0eCDOJTRlmN0R1yBh4HLj9wiV1Dn0QoXGbvFAWj2hSItVTlCmBF1hqKlIyUBVFLPg=="}},"mdn-data@2.12.2":{"resolution":{"integrity":"sh
a512-IEn+pegP1aManZuckezWCO+XZQDplx1366JoVhTpMpBB1sPey/SbveZQUosKiKiGYjg1wH4pMlNgXbCiYgihQA=="}},"merge-stream@2.0.0":{"resolution":{"integrity":"sha512-abv/qOcuPfk3URPfDzmZU1LKmuw8kT+0nIHvKrKgFrwifol/doWcdA4ZqsWQ8ENrFKkd67Mfpo/LovbIUsbt3w=="}},"merge2@1.4.1":{"resolution":{"integrity":"sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg=="},"engines":{"node":">= 8"}},"mermaid@11.12.1":{"resolution":{"integrity":"sha512-UlIZrRariB11TY1RtTgUWp65tphtBv4CSq7vyS2ZZ2TgoMjs2nloq+wFqxiwcxlhHUvs7DPGgMjs2aeQxz5h9g=="}},"micromark-core-commonmark@2.0.2":{"resolution":{"integrity":"sha512-FKjQKbxd1cibWMM1P9N+H8TwlgGgSkWZMmfuVucLCHaYqeSvJ0hFeHsIa65pA2nYbes0f8LDHPMrd9X7Ujxg9w=="}},"micromark-extension-frontmatter@2.0.0":{"resolution":{"integrity":"sha512-C4AkuM3dA58cgZha7zVnuVxBhDsbttIMiytjgsM2XbHAB2faRVaHRle40558FBN+DJcrLNCoqG5mlrpdU4cRtg=="}},"micromark-extension-gfm-autolink-literal@2.1.0":{"resolution":{"integrity":"sha512-oOg7knzhicgQ3t4QCjCWgTmfNhvQbDDnJeVu9v81r7NltNCVmhPy1fJRX27pISafdjL+SVc4d3l48Gb6pbRypw=="}},"micromark-extension-gfm-footnote@2.1.0":{"resolution":{"integrity":"sha512-/yPhxI1ntnDNsiHtzLKYnE3vf9JZ6cAisqVDauhp4CEHxlb4uoOTxOCJ+9s51bIB8U1N1FJ1RXOKTIlD5B/gqw=="}},"micromark-extension-gfm-strikethrough@2.1.0":{"resolution":{"integrity":"sha512-ADVjpOOkjz1hhkZLlBiYA9cR2Anf8F4HqZUO6e5eDcPQd0Txw5fxLzzxnEkSkfnD0wziSGiv7sYhk/ktvbf1uw=="}},"micromark-extension-gfm-table@2.1.1":{"resolution":{"integrity":"sha512-t2OU/dXXioARrC6yWfJ4hqB7rct14e8f7m0cbI5hUmDyyIlwv5vEtooptH8INkbLzOatzKuVbQmAYcbWoyz6Dg=="}},"micromark-extension-gfm-tagfilter@2.0.0":{"resolution":{"integrity":"sha512-xHlTOmuCSotIA8TW1mDIM6X2O1SiX5P9IuDtqGonFhEK0qgRI4yeC6vMxEV2dgyr2TiD+2PQ10o+cOhdVAcwfg=="}},"micromark-extension-gfm-task-list-item@2.1.0":{"resolution":{"integrity":"sha512-qIBZhqxqI6fjLDYFTBIa4eivDMnP+OZqsNwmQ3xNLE4Cxwc+zfQEfbs6tzAo2Hjq+bh6q5F+Z8/cksrLFYWQQw=="}},"micromark-extension-gfm@3.0.0":{"resolution":{"integrity":"sha512-vsKArQsicm7t0z2GugkCKtZehqUm31oeGBV/KVSorWSy8ZlNAv7ytjFhvaryUiCUJYqs+NoE6AFhpQvBTM6Q4w=="}},"micromark-factory-destination@2.0.1":{"resolution":{"integrity":"sha512-Xe6rDdJlkmbFRExpTOmRj9N3MaWmbAgdpSrBQvCFqhezUn4AHqJHbaEnfbVYYiexVSs//tqOdY/DxhjdCiJnIA=="}},"micromark-factory-label@2.0.1":{"resolution":{"integrity":"sha512-VFMekyQExqIW7xIChcXn4ok29YE3rnuyveW3wZQWWqF4Nv9Wk5rgJ99KzPvHjkmPXF93FXIbBp6YdW3t71/7Vg=="}},"micromark-factory-space@2.0.1":{"resolution":{"integrity":"sha512-zRkxjtBxxLd2Sc0d+fbnEunsTj46SWXgXciZmHq0kDYGnck/ZSGj9/wULTV95uoeYiK5hRXP2mJ98Uo4cq/LQg=="}},"micromark-factory-title@2.0.1":{"resolution":{"integrity":"sha512-5bZ+3CjhAd9eChYTHsjy6TGxpOFSKgKKJPJxr293jTbfry2KDoWkhBb6TcPVB4NmzaPhMs1Frm9AZH7OD4Cjzw=="}},"micromark-factory-whitespace@2.0.1":{"resolution":{"integrity":"sha512-Ob0nuZ3PKt/n0hORHyvoD9uZhr+Za8sFoP+OnMcnWK5lngSzALgQYKMr9RJVOWLqQYuyn6ulqGWSXdwf6F80lQ=="}},"micromark-util-character@2.1.1":{"resolution":{"integrity":"sha512-wv8tdUTJ3thSFFFJKtpYKOYiGP2+v96Hvk4Tu8KpCAsTMs6yi+nVmGh1syvSCsaxz45J6Jbw+9DD6g97+NV67Q=="}},"micromark-util-chunked@2.0.1":{"resolution":{"integrity":"sha512-QUNFEOPELfmvv+4xiNg2sRYeS/P84pTW0TCgP5zc9FpXetHY0ab7SxKyAQCNCc1eK0459uoLI1y5oO5Vc1dbhA=="}},"micromark-util-classify-character@2.0.1":{"resolution":{"integrity":"sha512-K0kHzM6afW/MbeWYWLjoHQv1sgg2Q9EccHEDzSkxiP/EaagNzCm7T/WMKZ3rjMbvIpvBiZgwR3dKMygtA4mG1Q=="}},"micromark-util-combine-extensions@2.0.1":{"resolution":{"integrity":"sha512-OnAnH8Ujmy59JcyZw8JSbK9cGpdVY44NKgSM7E9Eh7DiLS2E9RNQf0dONaGDzEG9yjEl5hcqeIsj4hfRkLH/Bg=="}},"micromark-util-decode-
numeric-character-reference@2.0.2":{"resolution":{"integrity":"sha512-ccUbYk6CwVdkmCQMyr64dXz42EfHGkPQlBj5p7YVGzq8I7CtjXZJrubAYezf7Rp+bjPseiROqe7G6foFd+lEuw=="}},"micromark-util-decode-string@2.0.1":{"resolution":{"integrity":"sha512-nDV/77Fj6eH1ynwscYTOsbK7rR//Uj0bZXBwJZRfaLEJ1iGBR6kIfNmlNqaqJf649EP0F3NWNdeJi03elllNUQ=="}},"micromark-util-encode@2.0.1":{"resolution":{"integrity":"sha512-c3cVx2y4KqUnwopcO9b/SCdo2O67LwJJ/UyqGfbigahfegL9myoEFoDYZgkT7f36T0bLrM9hZTAaAyH+PCAXjw=="}},"micromark-util-html-tag-name@2.0.1":{"resolution":{"integrity":"sha512-2cNEiYDhCWKI+Gs9T0Tiysk136SnR13hhO8yW6BGNyhOC4qYFnwF1nKfD3HFAIXA5c45RrIG1ub11GiXeYd1xA=="}},"micromark-util-normalize-identifier@2.0.1":{"resolution":{"integrity":"sha512-sxPqmo70LyARJs0w2UclACPUUEqltCkJ6PhKdMIDuJ3gSf/Q+/GIe3WKl0Ijb/GyH9lOpUkRAO2wp0GVkLvS9Q=="}},"micromark-util-resolve-all@2.0.1":{"resolution":{"integrity":"sha512-VdQyxFWFT2/FGJgwQnJYbe1jjQoNTS4RjglmSjTUlpUMa95Htx9NHeYW4rGDJzbjvCsl9eLjMQwGeElsqmzcHg=="}},"micromark-util-sanitize-uri@2.0.1":{"resolution":{"integrity":"sha512-9N9IomZ/YuGGZZmQec1MbgxtlgougxTodVwDzzEouPKo3qFWvymFHWcnDi2vzV1ff6kas9ucW+o3yzJK9YB1AQ=="}},"micromark-util-subtokenize@2.0.3":{"resolution":{"integrity":"sha512-VXJJuNxYWSoYL6AJ6OQECCFGhIU2GGHMw8tahogePBrjkG8aCCas3ibkp7RnVOSTClg2is05/R7maAhF1XyQMg=="}},"micromark-util-symbol@2.0.1":{"resolution":{"integrity":"sha512-vs5t8Apaud9N28kgCrRUdEed4UJ+wWNvicHLPxCa9ENlYuAY31M0ETy5y1vA33YoNPDFTghEbnh6efaE8h4x0Q=="}},"micromark-util-types@2.0.1":{"resolution":{"integrity":"sha512-534m2WhVTddrcKVepwmVEVnUAmtrx9bfIjNoQHRqfnvdaHQiFytEhJoTgpWJvDEXCO5gLTQh3wYC1PgOJA4NSQ=="}},"micromark@4.0.1":{"resolution":{"integrity":"sha512-eBPdkcoCNvYcxQOAKAlceo5SNdzZWfF+FcSupREAzdAh9rRmE239CEQAiTwIgblwnoM8zzj35sZ5ZwvSEOF6Kw=="}},"micromatch@4.0.8":{"resolution":{"integrity":"sha512-PXwfBhYu0hBCPw8Dn0E+WDYb7af3dSLVWKi3HGv84IdF4TyFoC0ysxFd0Goxw7nSv4T/PzEJQxsYsEiFCKo2BA=="},"engines":{"node":">=8.6"}},"mime-db@1.52.0":{"resolution":{"integrity":"sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg=="},"engines":{"node":">= 0.6"}},"mime-types@2.1.35":{"resolution":{"integrity":"sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw=="},"engines":{"node":">= 0.6"}},"mimic-fn@2.1.0":{"resolution":{"integrity":"sha512-OqbOk5oEQeAZ8WXWydlu9HJjz9WVdEIvamMCcXmuqUYjTknH/sqsWvhQ3vgwKFRR1HpjvNBKQ37nbJgYzGqGcg=="},"engines":{"node":">=6"}},"mimic-response@3.1.0":{"resolution":{"integrity":"sha512-z0yWI+4FDrrweS8Zmt4Ej5HdJmky15+L2e6Wgn3+iK5fWzb6T3fhNFq2+MeTRb064c6Wr4N/wv0DzQTjNzHNGQ=="},"engines":{"node":">=10"}},"miniflare@4.20260504.0":{"resolution":{"integrity":"sha512-HeI/HLx+rbeo/UB4qb6NsNcFdUVD7xDzyCexZJTVtFMlfpfexUKEDmdeTRRpzeHrJseZFGua+v9JO1kfPublUw=="},"engines":{"node":">=22.0.0"},"hasBin":true},"minimatch@5.1.9":{"resolution":{"integrity":"sha512-7o1wEA2RyMP7Iu7GNba9vc0RWWGACJOCZBJX2GJWip0ikV+wcOsgVuY9uE8CPiyQhkGFSlhuSkZPavN7u1c2Fw=="},"engines":{"node":">=10"}},"minimatch@9.0.3":{"resolution":{"integrity":"sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="},"engines":{"node":">=16 || 14 >=14.17"}},"minimatch@9.0.5":{"resolution":{"integrity":"sha512-G6T0ZX48xgozx7587koeX9Ys2NYy6Gmv//P89sEte9V9whIapMNF4idKxnW2QtCcLiTWlb/wfCabAtAFWhhBow=="},"engines":{"node":">=16 || 14 >=14.17"}},"minimatch@9.0.9":{"resolution":{"integrity":"sha512-OBwBN9AL4dqmETlpS2zasx+vTeWclWzkblfZk7KTA5j3jeOONz/tRCnZomUyvNg83wL5Zv9Ss6HMJXAgL8R2Yg=="},"engines":{"node":">=16 || 14 
>=14.17"}},"minimist@1.2.8":{"resolution":{"integrity":"sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA=="}},"minipass@3.3.6":{"resolution":{"integrity":"sha512-DxiNidxSEK+tHG6zOIklvNOwm3hvCrbUrdtzY74U6HKTJxvIDfOUL5W5P2Ghd3DTkhhKPYGqeNUIh5qcM4YBfw=="},"engines":{"node":">=8"}},"minipass@5.0.0":{"resolution":{"integrity":"sha512-3FnjYuehv9k6ovOEbyOswadCDPX1piCfhV8ncmYtHOjuPwylVWsghTLo7rabjC3Rx5xD4HDx8Wm1xnMF7S5qFQ=="},"engines":{"node":">=8"}},"minipass@7.1.2":{"resolution":{"integrity":"sha512-qOOzS1cBTWYF4BH8fVePDBOO9iptMnGUEZwNc/cMWnTV2nVLZ7VoNWEPHkYczZA0pdoA7dl6e7FL659nX9S2aw=="},"engines":{"node":">=16 || 14 >=14.17"}},"minipass@7.1.3":{"resolution":{"integrity":"sha512-tEBHqDnIoM/1rXME1zgka9g6Q2lcoCkxHLuc7ODJ5BxbP5d4c2Z5cGgtXAku59200Cx7diuHTOYfSBD8n6mm8A=="},"engines":{"node":">=16 || 14 >=14.17"}},"minizlib@2.1.2":{"resolution":{"integrity":"sha512-bAxsR8BVfj60DWXHE3u30oHzfl4G7khkSuPW+qvpd7jFRHm7dLxOjUk1EHACJ/hxLY8phGJ0YhYHZo7jil7Qdg=="},"engines":{"node":">= 8"}},"mkdirp-classic@0.5.3":{"resolution":{"integrity":"sha512-gKLcREMhtuZRwRAfqP3RFW+TK4JqApVBtOIftVgjuABpAtpxhPGaDcfvbhNvD0B8iD1oUr/txX35NjcaY6Ns/A=="}},"mkdirp@1.0.4":{"resolution":{"integrity":"sha512-vVqVZQyf3WLx2Shd0qJ9xuvqgAyKPLAiqITEtqW0oIUjzo3PePDd6fW9iFz30ef7Ysp/oiWqbhszeGWW2T6Gzw=="},"engines":{"node":">=10"},"hasBin":true},"mlly@1.8.0":{"resolution":{"integrity":"sha512-l8D9ODSRWLe2KHJSifWGwBqpTZXIXTeo8mlKjY+E2HAakaTeNpqAyBZ8GSqLzHgw4XmHmC8whvpjJNMbFZN7/g=="}},"mri@1.2.0":{"resolution":{"integrity":"sha512-tzzskb3bG8LvYGFF/mDTpq3jpI6Q9wc3LEmBaghu+DdCssd1FakN7Bc0hVNmEyGq1bq3RgfkCb3cmQLpNPOroA=="},"engines":{"node":">=4"}},"mrmime@2.0.1":{"resolution":{"integrity":"sha512-Y3wQdFg2Va6etvQ5I82yUhGdsKrcYox6p7FfL1LbK2J4V01F9TGlepTIhnK24t7koZibmg82KGglhA1XK5IsLQ=="},"engines":{"node":">=10"}},"ms@2.1.3":{"resolution":{"integrity":"sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA=="}},"msw@2.10.2":{"resolution":{"integrity":"sha512-RCKM6IZseZQCWcSWlutdf590M8nVfRHG1ImwzOtwz8IYxgT4zhUO0rfTcTvDGiaFE0Rhcc+h43lcF3Jc9gFtwQ=="},"engines":{"node":">=18"},"hasBin":true,"peerDependencies":{"typescript":">= 4.8.x"},"peerDependenciesMeta":{"typescript":{"optional":true}}},"mute-stream@2.0.0":{"resolution":{"integrity":"sha512-WWdIxpyjEn+FhQJQQv9aQAYlHoNVdzIzUySNV1gHUPDSdZJ3yZn7pAAbQcV7B56Mvu881q9FZV+0Vx2xC44VWA=="},"engines":{"node":"^18.17.0 || >=20.5.0"}},"nanoid@3.3.11":{"resolution":{"integrity":"sha512-N8SpfPUnUp1bK+PMYW8qSWdl9U+wwNWI4QKxOYDy9JAro3WMX7p2OeVRF9v+347pnakNevPmiHhNmZ2HbFA76w=="},"engines":{"node":"^10 || ^12 || ^13.7 || ^14 || >=15.0.1"},"hasBin":true},"napi-build-utils@2.0.0":{"resolution":{"integrity":"sha512-GEbrYkbfF7MoNaoh2iGG84Mnf/WZfB0GdGEsM8wz7Expx/LlWf5U8t9nvJKXSp3qr5IsEbK04cBGhol/KwOsWA=="}},"neo-async@2.6.2":{"resolution":{"integrity":"sha512-Yd3UES5mWCSqR+qNT93S3UoYUkqAZ9lLg8a7g9rimsWmYGK8cVToA4/sF3RrshdyV3sAGMXVUmpMYOw+dLpOuw=="}},"netmask@2.1.1":{"resolution":{"integrity":"sha512-eonl3sLUha+S1GzTPxychyhnUzKyeQkZ7jLjKrBagJgPla13F+uQ71HgpFefyHgqrjEbCPkDArxYsjY8/+gLKA=="},"engines":{"node":">= 0.4.0"}},"node-abi@3.89.0":{"resolution":{"integrity":"sha512-6u9UwL0HlAl21+agMN3YAMXcKByMqwGx+pq+P76vii5f7hTPtKDp08/H9py6DY+cfDw7kQNTGEj/rly3IgbNQA=="},"engines":{"node":">=10"}},"node-domexception@1.0.0":{"resolution":{"integrity":"sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ=="},"engines":{"node":">=10.5.0"},"deprecated":"Use your platform's native DOMException 
instead"},"node-fetch@3.3.2":{"resolution":{"integrity":"sha512-dRB78srN/l6gqWulah9SrxeYnxeddIG30+GOqK/9OlLVyLg3HPnr6SqOWTWOXKRwC2eGYCkZ59NNuSgvSrpgOA=="},"engines":{"node":"^12.20.0 || ^14.13.1 || >=16.0.0"}},"node-machine-id@1.1.12":{"resolution":{"integrity":"sha512-QNABxbrPa3qEIfrE6GOJ7BYIuignnJw7iQ2YPbc3Nla1HzRJjXzZOiikfF8m7eAMfichLt3M4VgLOetqgDmgGQ=="}},"node-releases@2.0.19":{"resolution":{"integrity":"sha512-xxOWJsBKtzAq7DY0J+DTzuz58K8e7sJbdgwkbMWQe8UYB6ekmsQ45q0M/tJDsGaZmbC+l7n57UV8Hl5tHxO9uw=="}},"node-releases@2.0.38":{"resolution":{"integrity":"sha512-3qT/88Y3FbH/Kx4szpQQ4HzUbVrHPKTLVpVocKiLfoYvw9XSGOX2FmD2d6DrXbVYyAQTF2HeF6My8jmzx7/CRw=="}},"normalize-path@3.0.0":{"resolution":{"integrity":"sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA=="},"engines":{"node":">=0.10.0"}},"npm-run-path@4.0.1":{"resolution":{"integrity":"sha512-S48WzZW777zhNIrn7gxOlISNAqi9ZC/uQFnRdbeIHhZhCA6UqpkOT8T1G7BvfdgP4Er8gF4sUbaS0i7QvIfCWw=="},"engines":{"node":">=8"}},"nth-check@2.1.1":{"resolution":{"integrity":"sha512-lqjrjmaOoAnWfMmBPL+XNnynZh2+swxiX3WUE0s4yEHI6m+AwrK2UZOimIRl3X/4QctVqS8AiZjFqyOGrMXb/w=="}},"nwsapi@2.2.20":{"resolution":{"integrity":"sha512-/ieB+mDe4MrrKMT8z+mQL8klXydZWGR5Dowt4RAGKbJ3kIGEx3X4ljUo+6V73IXtUPWgfOlU5B9MlGxFO5T+cA=="}},"nx-cloud@19.1.0":{"resolution":{"integrity":"sha512-f24vd5/57/MFSXNMfkerdDiK0EvScGOKO71iOWgJNgI1xVweDRmOA/EfjnPMRd5m+pnoPs/4A7DzuwSW0jZVyw=="},"hasBin":true},"nx@21.4.1":{"resolution":{"integrity":"sha512-nD8NjJGYk5wcqiATzlsLauvyrSHV2S2YmM2HBIKqTTwVP2sey07MF3wDB9U2BwxIjboahiITQ6pfqFgB79TF2A=="},"hasBin":true,"peerDependencies":{"@swc-node/register":"^1.8.0","@swc/core":"^1.3.85"},"peerDependenciesMeta":{"@swc-node/register":{"optional":true},"@swc/core":{"optional":true}}},"obug@2.1.1":{"resolution":{"integrity":"sha512-uTqF9MuPraAQ+IsnPf366RG4cP9RtUi7MLO1N3KEc+wb0a6yKpeL0lmk2IB1jY5KHPAlTc6T/JRdC/YqxHNwkQ=="}},"once@1.4.0":{"resolution":{"integrity":"sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w=="}},"onetime@5.1.2":{"resolution":{"integrity":"sha512-kbpaSSGJTWdAY5KPVeMOKXSrPtr8C8C7wodJbcsd51jRnmD+GZu8Y0VoU6Dm5Z4vWr0Ig/1NKuWRKf7j5aaYSg=="},"engines":{"node":">=6"}},"oniguruma-parser@0.12.1":{"resolution":{"integrity":"sha512-8Unqkvk1RYc6yq2WBYRj4hdnsAxVze8i7iPfQr8e4uSP3tRv0rpZcbGUDvxfQQcdwHt/e9PrMvGCsa8OqG9X3w=="}},"oniguruma-to-es@4.3.3":{"resolution":{"integrity":"sha512-rPiZhzC3wXwE59YQMRDodUwwT9FZ9nNBwQQfsd1wfdtlKEyCdRV0avrTcSZ5xlIvGRVPd/cx6ZN45ECmS39xvg=="}},"open@8.4.2":{"resolution":{"integrity":"sha512-7x81NCL719oNbsq/3mh+hVrAWmFuEYUqrq/Iw3kUzH8ReypT9QQ0BLoJS7/G9k6N81XjW4qHWtjWwe/9eLy1EQ=="},"engines":{"node":">=12"}},"ora@5.3.0":{"resolution":{"integrity":"sha512-zAKMgGXUim0Jyd6CXK9lraBnD3H5yPGBPPOkC23a2BG6hsm4Zu6OQSjQuEtV0BHDf4aKHcUFvJiGRrFuW3MG8g=="},"engines":{"node":">=10"}},"outdent@0.5.0":{"resolution":{"integrity":"sha512-/jHxFIzoMXdqPzTaCpFzAAWhpkSjZPF4Vsn6jAfNpmbH/ymsmd7Qc6VE9BGn0L6YMj6uwpQLxCECpus4ukKS9Q=="}},"outvariant@1.4.3":{"resolution":{"integrity":"sha512-+Sl2UErvtsoajRDKCE5/dBz4DIvHXQQnAxtQTF04OJxY0+DyZXSo5P5Bb7XYWOh81syohlYL24hbDwxedPUJCA=="}},"oxlint@1.26.0":{"resolution":{"integrity":"sha512-KRpL+SMi07JQyggv5ldIF+wt2pnrKm8NLW0B+8bK+0HZsLmH9/qGA+qMWie5Vf7lnlMBllJmsuzHaKFEGY3rIA=="},"engines":{"node":"^20.19.0 || 
>=22.12.0"},"hasBin":true,"peerDependencies":{"oxlint-tsgolint":">=0.4.0"},"peerDependenciesMeta":{"oxlint-tsgolint":{"optional":true}}},"p-filter@2.1.0":{"resolution":{"integrity":"sha512-ZBxxZ5sL2HghephhpGAQdoskxplTwr7ICaehZwLIlfL6acuVgZPm8yBNuRAFBGEqtD/hmUeq9eqLg2ys9Xr/yw=="},"engines":{"node":">=8"}},"p-limit@2.3.0":{"resolution":{"integrity":"sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w=="},"engines":{"node":">=6"}},"p-locate@4.1.0":{"resolution":{"integrity":"sha512-R79ZZ/0wAxKGu3oYMlz8jy/kbhsNrS7SKZ7PxEHBgJ5+F2mtFW2fK2cOtBh1cHYkQsbzFV7I+EoRKe6Yt0oK7A=="},"engines":{"node":">=8"}},"p-map@2.1.0":{"resolution":{"integrity":"sha512-y3b8Kpd8OAN444hxfBbFfj1FY/RjtTd8tzYwhUqNYXx0fXx2iX4maP4Qr6qhIKbQXI02wTLAda4fYUbDagTUFw=="},"engines":{"node":">=6"}},"p-map@7.0.4":{"resolution":{"integrity":"sha512-tkAQEw8ysMzmkhgw8k+1U/iPhWNhykKnSk4Rd5zLoPJCuJaGRPo6YposrZgaxHKzDHdDWWZvE/Sk7hsL2X/CpQ=="},"engines":{"node":">=18"}},"p-try@2.2.0":{"resolution":{"integrity":"sha512-R4nPAVTAU0B9D35/Gk3uJf/7XYbQcyohSKdvAxIRSNghFl4e71hVoGnBNQz9cWaXxO2I10KTC+3jMdvvoKw6dQ=="},"engines":{"node":">=6"}},"pac-proxy-agent@7.2.0":{"resolution":{"integrity":"sha512-TEB8ESquiLMc0lV8vcd5Ql/JAKAoyzHFXaStwjkzpOpC5Yv+pIzLfHvjTSdf3vpa2bMiUQrg9i6276yn8666aA=="},"engines":{"node":">= 14"}},"pac-resolver@7.0.1":{"resolution":{"integrity":"sha512-5NPgf87AT2STgwa2ntRMr45jTKrYBGkVU36yT0ig/n/GMAa3oPqhZfIQ2kMEimReg0+t9kZViDVZ83qfVUlckg=="},"engines":{"node":">= 14"}},"package-json-from-dist@1.0.1":{"resolution":{"integrity":"sha512-UEZIS3/by4OC8vL3P2dTXRETpebLI2NiI5vIrjaD/5UtrkFX/tNbwjTSRAGC/+7CAo2pIcBaRgWmcBBHcsaCIw=="}},"package-manager-detector@0.2.11":{"resolution":{"integrity":"sha512-BEnLolu+yuz22S56CU1SUKq3XC3PkwD5wv4ikR4MfGvnRVcmzXR9DwSlW2fEamyTPyXHomBJRzgapeuBvRNzJQ=="}},"package-manager-detector@1.5.0":{"resolution":{"integrity":"sha512-uBj69dVlYe/+wxj8JOpr97XfsxH/eumMt6HqjNTmJDf/6NO9s+0uxeOneIz3AsPt2m6y9PqzDzd3ATcU17MNfw=="}},"pako@1.0.11":{"resolution":{"integrity":"sha512-4hLB8Py4zZce5s4yd9XzopqwVv/yGNhV1Bl8NTmCq1763HeK2+EwVTv+leGeL13Dnh2wfbqowVPXCIO0z4taYw=="}},"parse5-htmlparser2-tree-adapter@7.1.0":{"resolution":{"integrity":"sha512-ruw5xyKs6lrpo9x9rCZqZZnIUntICjQAd0Wsmp396Ul9lN/h+ifgVV1x1gZHi8euej6wTfpqX8j+BFQxF0NS/g=="}},"parse5-parser-stream@7.1.2":{"resolution":{"integrity":"sha512-JyeQc9iwFLn5TbvvqACIF/VXG6abODeB3Fwmv/TGdLk2LfbWkaySGY72at4+Ty7EkPZj854u4CrICqNk2qIbow=="}},"parse5@7.3.0":{"resolution":{"integrity":"sha512-IInvU7fabl34qmi9gY8XOVxhYyMyuH2xUNpb2q8/Y+7552KlejkRvqvD19nMoUW/uQGGbqNpA6Tufu5FL5BZgw=="}},"parse5@8.0.0":{"resolution":{"integrity":"sha512-9m4m5GSgXjL4AjumKzq1Fgfp3Z8rsvjRNbnkVwfu2ImRqE5D0LnY2QfDen18FSY9C573YU5XxSapdHZTZ2WolA=="}},"path-data-parser@0.1.0":{"resolution":{"integrity":"sha512-NOnmBpt5Y2RWbuv0LMzsayp3lVylAHLPUTut412ZA3l+C4uw4ZVkQbjShYCQ8TCpUMdPapr4YjUqLYD6v68j+w=="}},"path-exists@4.0.0":{"resolution":{"integrity":"sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w=="},"engines":{"node":">=8"}},"path-key@3.1.1":{"resolution":{"integrity":"sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q=="},"engines":{"node":">=8"}},"path-scurry@1.11.1":{"resolution":{"integrity":"sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA=="},"engines":{"node":">=16 || 14 
>=14.18"}},"path-to-regexp@6.3.0":{"resolution":{"integrity":"sha512-Yhpw4T9C6hPpgPeA28us07OJeqZ5EzQTkbfwuhsUg0c237RomFoETJgmp2sa3F/41gfLE6G5cqcYwznmeEeOlQ=="}},"path-type@4.0.0":{"resolution":{"integrity":"sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw=="},"engines":{"node":">=8"}},"pathe@2.0.3":{"resolution":{"integrity":"sha512-WUjGcAqP1gQacoQe+OBJsFA7Ld4DyXuUIjZ5cc75cLHvJ7dtNsTugphxIADwspS+AraAUePCKrSVtPLFj/F88w=="}},"pathval@2.0.1":{"resolution":{"integrity":"sha512-//nshmD55c46FuFw26xV/xFAaB5HF9Xdap7HJBBnrKdAd6/GxDBaNA1870O79+9ueg61cZLSVc+OaFlfmObYVQ=="},"engines":{"node":">= 14.16"}},"pend@1.2.0":{"resolution":{"integrity":"sha512-F3asv42UuXchdzt+xXqfW1OGlVBe+mxa2mqI0pg5yAHZPvFmY3Y6drSf/GQ1A86WgWEN9Kzh/WrgKa6iGcHXLg=="}},"picocolors@1.1.1":{"resolution":{"integrity":"sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA=="}},"picomatch@2.3.1":{"resolution":{"integrity":"sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA=="},"engines":{"node":">=8.6"}},"picomatch@4.0.3":{"resolution":{"integrity":"sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q=="},"engines":{"node":">=12"}},"picomatch@4.0.4":{"resolution":{"integrity":"sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A=="},"engines":{"node":">=12"}},"pify@4.0.1":{"resolution":{"integrity":"sha512-uB80kBFb/tfd68bVleG9T5GGsGPjJrLAUpR5PZIrhBnIaRTQRjqdJSsIKkOP6OAIFbj7GOrcudc5pNjZ+geV2g=="},"engines":{"node":">=6"}},"pkg-types@1.3.1":{"resolution":{"integrity":"sha512-/Jm5M4RvtBFVkKWRu2BLUTNP8/M2a+UwuAX+ae4770q1qVGtfjG+WTCupoZixokjmHiry8uI+dlY8KXYV5HVVQ=="}},"pkg-types@2.3.0":{"resolution":{"integrity":"sha512-SIqCzDRg0s9npO5XQ3tNZioRY1uK06lA41ynBC1YmFTmnY6FjUjVt6s4LoADmwoig1qqD0oK8h1p/8mlMx8Oig=="}},"playwright-core@1.55.0":{"resolution":{"integrity":"sha512-GvZs4vU3U5ro2nZpeiwyb0zuFaqb9sUiAJuyrWpcGouD8y9/HLgGbNRjIph7zU9D3hnPaisMl9zG9CgFi/biIg=="},"engines":{"node":">=18"},"hasBin":true},"playwright@1.55.0":{"resolution":{"integrity":"sha512-sdCWStblvV1YU909Xqx0DhOjPZE4/5lJsIS84IfN9dAZfcl/CIZ5O8l3o0j7hPMjDvqoTF8ZUcc+i/GL5erstA=="},"engines":{"node":">=18"},"hasBin":true},"pngjs@7.0.0":{"resolution":{"integrity":"sha512-LKWqWJRhstyYo9pGvgor/ivk2w94eSjE3RGVuzLGlr3NmD8bf7RcYGze1mNdEHRP6TRP6rMuDHk5t44hnTRyow=="},"engines":{"node":">=14.19.0"}},"points-on-curve@0.2.0":{"resolution":{"integrity":"sha512-0mYKnYYe9ZcqMCWhUjItv/oHjvgEsfKvnUTg8sAtnHr3GVy7rGkXCb6d5cSyqrWqL4k81b9CPg3urd+T7aop3A=="}},"points-on-path@0.2.1":{"resolution":{"integrity":"sha512-25ClnWWuw7JbWZcgqY/gJ4FQWadKxGWk+3kR/7kD0tCaDtPPMj7oHu2ToLaVhfpnHrZzYby2w6tUA0eOIuUg8g=="}},"postcss@8.5.14":{"resolution":{"integrity":"sha512-SoSL4+OSEtR99LHFZQiJLkT59C5B1amGO1NzTwj7TT1qCUgUO6hxOvzkOYxD+vMrXBM3XJIKzokoERdqQq/Zmg=="},"engines":{"node":"^10 || ^12 || >=14"}},"postcss@8.5.6":{"resolution":{"integrity":"sha512-3Ybi1tAuwAP9s0r1UQ2J4n5Y0G05bJkpUIO0/bI9MhwmD70S5aTWbXGBwxHrelT+XM1k6dM0pk+SwNkpTRN7Pg=="},"engines":{"node":"^10 || ^12 || 
>=14"}},"posthog-js@1.321.2":{"resolution":{"integrity":"sha512-h5852d9lYmSNjKWvjDkrmO9/awUU3jayNBEoEBUuMAdfDPc4yYYdxBJeDBxYnCFm6RjCLy4O+vmcwuCRC67EXA=="}},"preact@10.28.2":{"resolution":{"integrity":"sha512-lbteaWGzGHdlIuiJ0l2Jq454m6kcpI1zNje6d8MlGAFlYvP2GO4ibnat7P74Esfz4sPTdM6UxtTwh/d3pwM9JA=="}},"prebuild-install@7.1.3":{"resolution":{"integrity":"sha512-8Mf2cbV7x1cXPUILADGI3wuhfqWvtiLA1iclTDbFRZkgRQS0NqsPZphna9V+HyTEadheuPmjaJMsbzKQFOzLug=="},"engines":{"node":">=10"},"deprecated":"No longer maintained. Please contact the author of the relevant native addon; alternatives are available.","hasBin":true},"prettier@2.8.8":{"resolution":{"integrity":"sha512-tdN8qQGvNjw4CHbY+XXk0JgCXn9QiF21a55rBe5LJAU+kDyC4WQn4+awm2Xfk2lQMk5fKup9XgzTZtGkjBdP9Q=="},"engines":{"node":">=10.13.0"},"hasBin":true},"prettier@3.6.2":{"resolution":{"integrity":"sha512-I7AIg5boAr5R0FFtJ6rCfD+LFsWHp81dolrFD8S79U9tb8Az2nGrJncnMSnys+bpQJfRUzqs9hnA81OAA3hCuQ=="},"engines":{"node":">=14"},"hasBin":true},"pretty-format@27.5.1":{"resolution":{"integrity":"sha512-Qb1gy5OrP5+zDf2Bvnzdl3jsTf1qXVMazbvCoKhtKqVs4/YK4ozX4gKQJJVyNe+cajNPn0KoC0MC3FUmaHWEmQ=="},"engines":{"node":"^10.13.0 || ^12.13.0 || ^14.15.0 || >=15.0.0"}},"pretty-format@30.0.5":{"resolution":{"integrity":"sha512-D1tKtYvByrBkFLe2wHJl2bwMJIiT8rW+XA+TiataH79/FszLQMrpGEvzUVkzPau7OCO0Qnrhpe87PqtOAIB8Yw=="},"engines":{"node":"^18.14.0 || ^20.0.0 || ^22.0.0 || >=24.0.0"}},"process-nextick-args@2.0.1":{"resolution":{"integrity":"sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag=="}},"process@0.11.10":{"resolution":{"integrity":"sha512-cdGef/drWFoydD1JsMzuFf8100nZl+GT+yacc2bEced5f9Rjk4z+WtFUTBu9PhOi9j/jfmBPu0mMEY4wIdAF8A=="},"engines":{"node":">= 0.6.0"}},"progress@2.0.3":{"resolution":{"integrity":"sha512-7PiHtLll5LdnKIMw100I+8xJXR5gW2QwWYkT6iJva0bXitZKa/XMrSbdmg3r2Xnaidz9Qumd0VPaMrZlF9V9sA=="},"engines":{"node":">=0.4.0"}},"property-information@6.5.0":{"resolution":{"integrity":"sha512-PgTgs/BlvHxOu8QuEN7wi5A0OmXaBcHpmCSTehcs6Uuu9IkDIEo13Hy7n898RHfrQ49vKCoGeWZSaAK01nwVig=="}},"property-information@7.1.0":{"resolution":{"integrity":"sha512-TwEZ+X+yCJmYfL7TPUOcvBZ4QfoT5YenQiJuX//0th53DE6w0xxLEtfK3iyryQFddXuvkIk51EEgrJQ0WJkOmQ=="}},"protobufjs@7.5.4":{"resolution":{"integrity":"sha512-CvexbZtbov6jW2eXAvLukXjXUW1TzFaivC46BpWc/3BpcCysb5Vffu+B3XHMm8lVEuy2Mm4XGex8hBSg1yapPg=="},"engines":{"node":">=12.0.0"}},"proxy-agent@6.5.0":{"resolution":{"integrity":"sha512-TmatMXdr2KlRiA2CyDu8GqR8EjahTG3aY3nXjdzFyoZbmB8hrBsTyMezhULIXKnC0jpfjlmiZ3+EaCzoInSu/A=="},"engines":{"node":">= 
14"}},"proxy-from-env@1.1.0":{"resolution":{"integrity":"sha512-D+zkORCbA9f1tdWRK0RaCR3GPv50cMxcrz4X8k5LTSUD1Dkw47mKJEZQNunItRTkWwgtaUSo1RVFRIG9ZXiFYg=="}},"psl@1.15.0":{"resolution":{"integrity":"sha512-JZd3gMVBAVQkSs6HdNZo9Sdo0LNcQeMNP3CozBJb3JYC/QUYZTnKxP+f8oWRX4rHP5EurWxqAHTSwUCjlNKa1w=="}},"pump@3.0.3":{"resolution":{"integrity":"sha512-todwxLMY7/heScKmntwQG8CXVkWUOdYxIvY2s0VWAAMh/nd8SoYiRaKjlr7+iCs984f2P8zvrfWcDDYVb73NfA=="}},"pump@3.0.4":{"resolution":{"integrity":"sha512-VS7sjc6KR7e1ukRFhQSY5LM2uBWAUPiOPa/A3mkKmiMwSmRFUITt0xuj+/lesgnCv+dPIEYlkzrcyXgquIHMcA=="}},"punycode@2.3.1":{"resolution":{"integrity":"sha512-vYt7UD1U9Wg6138shLtLOvdAu+8DsC/ilFtEVHcH+wydcSpNE20AfSOduf6MkRFahL5FY7X1oU7nKVZFtfq8Fg=="},"engines":{"node":">=6"}},"quansync@0.2.11":{"resolution":{"integrity":"sha512-AifT7QEbW9Nri4tAwR5M/uzpBuqfZf+zwaEM/QkzEjj7NBuFD2rBuy0K3dE+8wltbezDV7JMA0WfnCPYRSYbXA=="}},"query-selector-shadow-dom@1.0.1":{"resolution":{"integrity":"sha512-lT5yCqEBgfoMYpf3F2xQRK7zEr1rhIIZuceDK6+xRkJQ4NMbHTwXqk4NkwDwQMNqXgG9r9fyHnzwNVs6zV5KRw=="}},"querystringify@2.2.0":{"resolution":{"integrity":"sha512-FIqgj2EUvTa7R50u0rGsyTftzjYmv/a3hO345bZNrqabNqjtgiDMgmo4mkUjd+nzU5oF3dClKqFIPUKybUyqoQ=="}},"queue-microtask@1.2.3":{"resolution":{"integrity":"sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A=="}},"rc@1.2.8":{"resolution":{"integrity":"sha512-y3bGgqKj3QBdxLbLkomlohkvsA8gdAiUQlSBJnBhfn+BPxg4bc62d8TcBW15wavDfgexCgccckhcZvywyQYPOw=="},"hasBin":true},"react-dom@19.2.0":{"resolution":{"integrity":"sha512-UlbRu4cAiGaIewkPyiRGJk0imDN2T3JjieT6spoL2UeSf5od4n5LB/mQ4ejmxhCFT1tYe8IvaFulzynWovsEFQ=="},"peerDependencies":{"react":"^19.2.0"}},"react-is@17.0.2":{"resolution":{"integrity":"sha512-w2GsyukL62IJnlaff/nRegPQR94C/XXamvMWmSHRJ4y7Ts/4ocGRmTHvOs8PSE6pB3dWOrD/nueuU5sduBsQ4w=="}},"react-is@18.3.1":{"resolution":{"integrity":"sha512-/LLMVyas0ljjAtoYiPqYiL8VWXzUUdThrmU5+n20DZv+a+ClRoevUzw5JxU+Ieh5/c87ytoTBV9G1FiKfNJdmg=="}},"react@19.2.0":{"resolution":{"integrity":"sha512-tmbWg6W31tQLeB5cdIBOicJDJRR2KzXsV7uSK9iNfLWQ5bIZfxuPEHp7M8wiHyHnn0DD1i7w3Zmin0FtkrwoCQ=="},"engines":{"node":">=0.10.0"}},"read-yaml-file@1.1.0":{"resolution":{"integrity":"sha512-VIMnQi/Z4HT2Fxuwg5KrY174U1VdUIASQVWXXyqtNRtxSr9IYkn1rsI6Tb6HsrHCmB7gVpNwX6JxPTHcH6IoTA=="},"engines":{"node":">=6"}},"readable-stream@2.3.8":{"resolution":{"integrity":"sha512-8p0AUk4XODgIewSi0l8Epjs+EVnWiK7NoDIEGU0HhE7+ZyY8D1IMY7odu5lRrFXGg71L15KG8QrPmum45RTtdA=="}},"readable-stream@3.6.2":{"resolution":{"integrity":"sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA=="},"engines":{"node":">= 6"}},"readable-stream@4.7.0":{"resolution":{"integrity":"sha512-oIGGmcpTLwPga8Bn6/Z75SVaH1z5dUut2ibSyAMVhmUggWpmDn2dapB0n7f8nwaSiRtepAsfJyfXIO5DCVAODg=="},"engines":{"node":"^12.22.0 || ^14.17.0 || 
>=16.0.0"}},"readdir-glob@1.1.3":{"resolution":{"integrity":"sha512-v05I2k7xN8zXvPD9N+z/uhXPaj0sUFCe2rcWZIpBsqxfP7xXFQ0tipAd/wjj1YxWyWtUS5IDJpOG82JKt2EAVA=="}},"readdirp@3.6.0":{"resolution":{"integrity":"sha512-hOS089on8RduqdbhvQ5Z37A0ESjsqz6qnRcffsMU3495FuTdqSm+7bhJ29JvIOsBDEEnan5DPu9t3To9VRlMzA=="},"engines":{"node":">=8.10.0"}},"regex-recursion@6.0.2":{"resolution":{"integrity":"sha512-0YCaSCq2VRIebiaUviZNs0cBz1kg5kVS2UKUfNIx8YVs1cN3AV7NTctO5FOKBA+UT2BPJIWZauYHPqJODG50cg=="}},"regex-utilities@2.3.0":{"resolution":{"integrity":"sha512-8VhliFJAWRaUiVvREIiW2NXXTmHs4vMNnSzuJVhscgmGav3g9VDxLrQndI3dZZVVdp0ZO/5v0xmX516/7M9cng=="}},"regex@6.0.1":{"resolution":{"integrity":"sha512-uorlqlzAKjKQZ5P+kTJr3eeJGSVroLKoHmquUj4zHWuR+hEyNqlXsSKlYYF5F4NI6nl7tWCs0apKJ0lmfsXAPA=="}},"rehype-autolink-headings@7.1.0":{"resolution":{"integrity":"sha512-rItO/pSdvnvsP4QRB1pmPiNHUskikqtPojZKJPPPAVx9Hj8i8TwMBhofrrAYRhYOOBZH9tgmG5lPqDLuIWPWmw=="}},"rehype-highlight@7.0.2":{"resolution":{"integrity":"sha512-k158pK7wdC2qL3M5NcZROZ2tR/l7zOzjxXd5VGdcfIyoijjQqpHd3JKtYSBDpDZ38UI2WJWuFAtkMDxmx5kstA=="}},"rehype-minify-whitespace@6.0.2":{"resolution":{"integrity":"sha512-Zk0pyQ06A3Lyxhe9vGtOtzz3Z0+qZ5+7icZ/PL/2x1SHPbKao5oB/g/rlc6BCTajqBb33JcOe71Ye1oFsuYbnw=="}},"rehype-parse@9.0.1":{"resolution":{"integrity":"sha512-ksCzCD0Fgfh7trPDxr2rSylbwq9iYDkSn8TCDmEJ49ljEUBxDVCzCHv7QNzZOfODanX4+bWQ4WZqLCRWYLfhag=="}},"rehype-raw@7.0.0":{"resolution":{"integrity":"sha512-/aE8hCfKlQeA8LmyeyQvQF3eBiLRGNlfBJEvWH7ivp9sBqs7TNqBL5X3v157rM4IFETqDnIOO+z5M/biZbo9Ww=="}},"rehype-remark@10.0.1":{"resolution":{"integrity":"sha512-EmDndlb5NVwXGfUa4c9GPK+lXeItTilLhE6ADSaQuHr4JUlKw9MidzGzx4HpqZrNCt6vnHmEifXQiiA+CEnjYQ=="}},"rehype-sanitize@6.0.0":{"resolution":{"integrity":"sha512-CsnhKNsyI8Tub6L4sm5ZFsme4puGfc6pYylvXo1AeqaGbjOYyzNv3qZPwvs0oMJ39eryyeOdmxwUIo94IpEhqg=="}},"rehype-slug@6.0.0":{"resolution":{"integrity":"sha512-lWyvf/jwu+oS5+hL5eClVd3hNdmwM1kAC0BUvEGD19pajQMIzcNUd/k9GsfQ+FfECvX+JE+e9/btsKH0EjJT6A=="}},"rehype-stringify@10.0.1":{"resolution":{"integrity":"sha512-k9ecfXHmIPuFVI61B9DeLPN0qFHfawM6RsuX48hoqlaKSF61RskNjSm1lI8PhBEM0MRdLxVVm4WmTqJQccH9mA=="}},"remark-frontmatter@5.0.0":{"resolution":{"integrity":"sha512-XTFYvNASMe5iPN0719nPrdItC9aU0ssC4v14mH1BCi1u0n1gAocqcujWUrByftZTbLhRtiKRyjYTSIOcr69UVQ=="}},"remark-gfm@4.0.1":{"resolution":{"integrity":"sha512-1quofZ2RQ9EWdeN34S79+KExV1764+wCUGop5CPL1WGdD0ocPpu91lzPGbwWMECpEpd42kJGQwzRfyov9j4yNg=="}},"remark-parse@11.0.0":{"resolution":{"integrity":"sha512-FCxlKLNGknS5ba/1lmpYijMUzX2esxW5xQqjWxw2eHFfS2MSdaHVINFmhjo+qN1WhZhNimq0dZATN9pH0IDrpA=="}},"remark-rehype@11.1.2":{"resolution":{"integrity":"sha512-Dh7l57ianaEoIpzbp0PC9UKAdCSVklD8E5Rpw7ETfbTl3FqcOOgq5q2LVDhgGCkaBv7p24JXikPdvhhmHvKMsw=="}},"remark-stringify@11.0.0":{"resolution":{"integrity":"sha512-1OSmLd3awB/t8qdoEOMazZkNsfVTeY4fTsgzcQFdXNq8ToTN4ZGwrMnlda4K6smTFKD+GRV6O48i6Z4iKgPPpw=="}},"require-directory@2.1.1":{"resolution":{"integrity":"sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q=="},"engines":{"node":">=0.10.0"}},"require-from-string@2.0.2":{"resolution":{"integrity":"sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw=="},"engines":{"node":">=0.10.0"}},"requires-port@1.0.0":{"resolution":{"integrity":"sha512-KigOCHcocU3XODJxsu8i/j8T9tzT4adHiecwORRQ0ZZFcp7ahwXuRU1m+yuO90C5ZUyGeGfocHDI14M3L3yDAQ=="}},"resolve-from@5.0.0":{"resolution":{"integrity":"sha512-qYg9KP24dD5qka9J47d0aVky0N+b4fTU89LN9iDnjB5waksiC49rvMB0PrUJQGoTmH50XPiqOvAjDfaijGxYZw
=="},"engines":{"node":">=8"}},"resolve-pkg-maps@1.0.0":{"resolution":{"integrity":"sha512-seS2Tj26TBVOC2NIc2rOe2y2ZO7efxITtLZcGSOnHHNOQ7CkiUBfw0Iw2ck6xkIhPwLhKNLS8BO+hEpngQlqzw=="}},"resolve.exports@2.0.3":{"resolution":{"integrity":"sha512-OcXjMsGdhL4XnbShKpAcSqPMzQoYkYyhbEaeSko47MjRP9NfEQMhZkXL1DoFlt9LWQn4YttrdnV6X2OiyzBi+A=="},"engines":{"node":">=10"}},"resq@1.11.0":{"resolution":{"integrity":"sha512-G10EBz+zAAy3zUd/CDoBbXRL6ia9kOo3xRHrMDsHljI0GDkhYlyjwoCx5+3eCC4swi1uCoZQhskuJkj7Gp57Bw=="}},"restore-cursor@3.1.0":{"resolution":{"integrity":"sha512-l+sSefzHpj5qimhFSE5a8nufZYAM3sBSVMAPtYkmC+4EH2anSGaEMXSD0izRQbu9nfyQ9y5JrVmp7E8oZrUjvA=="},"engines":{"node":">=8"}},"reusify@1.0.4":{"resolution":{"integrity":"sha512-U9nH88a3fc/ekCF1l0/UP1IosiuIjyTh7hBvXVMHYgVcfGvt897Xguj2UOLDeI5BG2m7/uwyaLVT6fbtCwTyzw=="},"engines":{"iojs":">=1.0.0","node":">=0.10.0"}},"rgb2hex@0.2.5":{"resolution":{"integrity":"sha512-22MOP1Rh7sAo1BZpDG6R5RFYzR2lYEgwq7HEmyW2qcsOqR2lQKmn+O//xV3YG/0rrhMC6KVX2hU+ZXuaw9a5bw=="}},"robust-predicates@3.0.2":{"resolution":{"integrity":"sha512-IXgzBWvWQwE6PrDI05OvmXUIruQTcoMDzRsOd5CDvHCVLcLHMTSYvOK5Cm46kWqlV3yAbuSpBZdJ5oP5OUoStg=="}},"rolldown@1.0.0-rc.17":{"resolution":{"integrity":"sha512-ZrT53oAKrtA4+YtBWPQbtPOxIbVDbxT0orcYERKd63VJTF13zPcgXTvD4843L8pcsI7M6MErt8QtON6lrB9tyA=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"hasBin":true},"rollup@4.53.2":{"resolution":{"integrity":"sha512-MHngMYwGJVi6Fmnk6ISmnk7JAHRNF0UkuucA0CUW3N3a4KnONPEZz+vUanQP/ZC/iY1Qkf3bwPWzyY84wEks1g=="},"engines":{"node":">=18.0.0","npm":">=8.0.0"},"hasBin":true},"rou3@0.8.1":{"resolution":{"integrity":"sha512-ePa+XGk00/3HuCqrEnK3LxJW7I0SdNg6EFzKUJG73hMAdDcOUC/i/aSz7LSDwLrGr33kal/rqOGydzwl6U7zBA=="}},"roughjs@4.6.6":{"resolution":{"integrity":"sha512-ZUz/69+SYpFN/g/lUlo2FXcIjRkSu3nDarreVdGGndHEBJ6cXPdKguS8JGxwj5HA5xIbVKSmLgr5b3AWxtRfvQ=="}},"rrweb-cssom@0.8.0":{"resolution":{"integrity":"sha512-guoltQEx+9aMf2gDZ0s62EcV8lsXR+0w8915TC3ITdn2YueuNjdAYh/levpU9nFaoChh9RUS5ZdQMrKfVEN9tw=="}},"run-parallel@1.2.0":{"resolution":{"integrity":"sha512-5l4VyZR86LZ/lDxZTR6jqL8AFE2S0IFLMP26AbjsLVADxHdhB/c0GUsH+y39UfCi3dzz8OlQuPmnaJOMoDHQBA=="}},"rw@1.3.3":{"resolution":{"integrity":"sha512-PdhdWy89SiZogBLaw42zdeqtRJ//zFd2PgQavcICDUgJT5oW10QCRKbJ6bg4r0/UY2M6BWd5tkxuGFRvCkgfHQ=="}},"rxjs@7.8.2":{"resolution":{"integrity":"sha512-dhKf903U/PQZY6boNNtAGdWbG85WAbjT/1xYoZIC7FAY0yWapOBQVsVrDl58W86//e1VpMNBtRV4MaXfdMySFA=="}},"safaridriver@0.1.2":{"resolution":{"integrity":"sha512-4R309+gWflJktzPXBQCobbWEHlzC4aK3a+Ov3tz2Ib2aBxiwd11phkdIBH1l0EO22x24CJMUQkpKFumRriCSRg=="}},"safe-buffer@5.1.2":{"resolution":{"integrity":"sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g=="}},"safe-buffer@5.2.1":{"resolution":{"integrity":"sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ=="}},"safer-buffer@2.1.2":{"resolution":{"integrity":"sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg=="}},"sass-embedded-android-arm64@1.89.2":{"resolution":{"integrity":"sha512-+pq7a7AUpItNyPu61sRlP6G2A8pSPpyazASb+8AK2pVlFayCSPAEgpwpCE9A2/Xj86xJZeMizzKUHxM2CBCUxA=="},"engines":{"node":">=14.0.0"},"cpu":["arm64"],"os":["android"]},"sass-embedded-android-arm@1.89.2":{"resolution":{"integrity":"sha512-oHAPTboBHRZlDBhyRB6dvDKh4KvFs+DZibDHXbkSI6dBZxMTT+Yb2ivocHnctVGucKTLQeT7+OM5DjWHyynL/A=="},"engines":{"node":">=14.0.0"},"cpu":["arm"],"os":["android"]},"sass-embedded-android-riscv64@1.89.2":{"resolution":{"integrity":"sha512-HfJJWp/S6XSYv
lGAqNdakeEMPOdhBkj2s2lN6SHnON54rahKem+z9pUbCriUJfM65Z90lakdGuOfidY61R9TYg=="},"engines":{"node":">=14.0.0"},"cpu":["riscv64"],"os":["android"]},"sass-embedded-android-x64@1.89.2":{"resolution":{"integrity":"sha512-BGPzq53VH5z5HN8de6jfMqJjnRe1E6sfnCWFd4pK+CAiuM7iw5Fx6BQZu3ikfI1l2GY0y6pRXzsVLdp/j4EKEA=="},"engines":{"node":">=14.0.0"},"cpu":["x64"],"os":["android"]},"sass-embedded-darwin-arm64@1.89.2":{"resolution":{"integrity":"sha512-UCm3RL/tzMpG7DsubARsvGUNXC5pgfQvP+RRFJo9XPIi6elopY5B6H4m9dRYDpHA+scjVthdiDwkPYr9+S/KGw=="},"engines":{"node":">=14.0.0"},"cpu":["arm64"],"os":["darwin"]},"sass-embedded-darwin-x64@1.89.2":{"resolution":{"integrity":"sha512-D9WxtDY5VYtMApXRuhQK9VkPHB8R79NIIR6xxVlN2MIdEid/TZWi1MHNweieETXhWGrKhRKglwnHxxyKdJYMnA=="},"engines":{"node":">=14.0.0"},"cpu":["x64"],"os":["darwin"]},"sass-embedded-linux-arm64@1.89.2":{"resolution":{"integrity":"sha512-2N4WW5LLsbtrWUJ7iTpjvhajGIbmDR18ZzYRywHdMLpfdPApuHPMDF5CYzHbS+LLx2UAx7CFKBnj5LLjY6eFgQ=="},"engines":{"node":">=14.0.0"},"cpu":["arm64"],"os":["linux"]},"sass-embedded-linux-arm@1.89.2":{"resolution":{"integrity":"sha512-leP0t5U4r95dc90o8TCWfxNXwMAsQhpWxTkdtySDpngoqtTy3miMd7EYNYd1znI0FN1CBaUvbdCMbnbPwygDlA=="},"engines":{"node":">=14.0.0"},"cpu":["arm"],"os":["linux"]},"sass-embedded-linux-musl-arm64@1.89.2":{"resolution":{"integrity":"sha512-nTyuaBX6U1A/cG7WJh0pKD1gY8hbg1m2SnzsyoFG+exQ0lBX/lwTLHq3nyhF+0atv7YYhYKbmfz+sjPP8CZ9lw=="},"engines":{"node":">=14.0.0"},"cpu":["arm64"],"os":["linux"]},"sass-embedded-linux-musl-arm@1.89.2":{"resolution":{"integrity":"sha512-Z6gG2FiVEEdxYHRi2sS5VIYBmp17351bWtOCUZ/thBM66+e70yiN6Eyqjz80DjL8haRUegNQgy9ZJqsLAAmr9g=="},"engines":{"node":">=14.0.0"},"cpu":["arm"],"os":["linux"]},"sass-embedded-linux-musl-riscv64@1.89.2":{"resolution":{"integrity":"sha512-N6oul+qALO0SwGY8JW7H/Vs0oZIMrRMBM4GqX3AjM/6y8JsJRxkAwnfd0fDyK+aICMFarDqQonQNIx99gdTZqw=="},"engines":{"node":">=14.0.0"},"cpu":["riscv64"],"os":["linux"]},"sass-embedded-linux-musl-x64@1.89.2":{"resolution":{"integrity":"sha512-K+FmWcdj/uyP8GiG9foxOCPfb5OAZG0uSVq80DKgVSC0U44AdGjvAvVZkrgFEcZ6cCqlNC2JfYmslB5iqdL7tg=="},"engines":{"node":">=14.0.0"},"cpu":["x64"],"os":["linux"]},"sass-embedded-linux-riscv64@1.89.2":{"resolution":{"integrity":"sha512-g9nTbnD/3yhOaskeqeBQETbtfDQWRgsjHok6bn7DdAuwBsyrR3JlSFyqKc46pn9Xxd9SQQZU8AzM4IR+sY0A0w=="},"engines":{"node":">=14.0.0"},"cpu":["riscv64"],"os":["linux"]},"sass-embedded-linux-x64@1.89.2":{"resolution":{"integrity":"sha512-Ax7dKvzncyQzIl4r7012KCMBvJzOz4uwSNoyoM5IV6y5I1f5hEwI25+U4WfuTqdkv42taCMgpjZbh9ERr6JVMQ=="},"engines":{"node":">=14.0.0"},"cpu":["x64"],"os":["linux"]},"sass-embedded-win32-arm64@1.89.2":{"resolution":{"integrity":"sha512-j96iJni50ZUsfD6tRxDQE2QSYQ2WrfHxeiyAXf41Kw0V4w5KYR/Sf6rCZQLMTUOHnD16qTMVpQi20LQSqf4WGg=="},"engines":{"node":">=14.0.0"},"cpu":["arm64"],"os":["win32"]},"sass-embedded-win32-x64@1.89.2":{"resolution":{"integrity":"sha512-cS2j5ljdkQsb4PaORiClaVYynE9OAPZG/XjbOMxpQmjRIf7UroY4PEIH+Waf+y47PfXFX9SyxhYuw2NIKGbEng=="},"engines":{"node":">=14.0.0"},"cpu":["x64"],"os":["win32"]},"sass-embedded@1.89.2":{"resolution":{"integrity":"sha512-Ack2K8rc57kCFcYlf3HXpZEJFNUX8xd8DILldksREmYXQkRHI879yy8q4mRDJgrojkySMZqmmmW1NxrFxMsYaA=="},"engines":{"node":">=16.0.0"},"hasBin":true},"saxes@6.0.0":{"resolution":{"integrity":"sha512-xAg7SOnEhrm5zI3puOOKyy1OMcMlIJZYNJY7xLBwSze0UjhPLnWfj2GF2EpT0jmzaJKIWKHLsaSSajf35bcYnA=="},"engines":{"node":">=v12.22.7"}},"scheduler@0.27.0":{"resolution":{"integrity":"sha512-eNv+WrVbKu1f3vbYJT/xtiF5syA5HPIMtf9IgY/nKg0sWqzAUEvqY/xm7OcZc/qafLx/iO9FgOmeSAp4v5t
i/Q=="}},"schema-utils@4.3.3":{"resolution":{"integrity":"sha512-eflK8wEtyOE6+hsaRVPxvUKYCpRgzLqDTb8krvAsRIwOGlHoSgYLgBXoubGgLd2fT41/OUYdb48v4k4WWHQurA=="},"engines":{"node":">= 10.13.0"}},"semver@6.3.1":{"resolution":{"integrity":"sha512-BR7VvDCVHO+q2xBEWskxS6DJE1qRnb7DxzUrogb71CWoSficBxYsiAGd+Kl0mmq/MprG9yArRkyrQxTO6XjMzA=="},"hasBin":true},"semver@7.7.2":{"resolution":{"integrity":"sha512-RF0Fw+rO5AMf9MAyaRXI4AV0Ulj5lMHqVxxdSgiVbixSCXoEmmX/jk0CuJw4+3SqroYO9VoUh+HcuJivvtJemA=="},"engines":{"node":">=10"},"hasBin":true},"semver@7.7.3":{"resolution":{"integrity":"sha512-SdsKMrI9TdgjdweUSR9MweHA4EJ8YxHn8DFaDisvhVlUOe4BF1tLD7GAj0lIqWVl+dPb/rExr0Btby5loQm20Q=="},"engines":{"node":">=10"},"hasBin":true},"semver@7.7.4":{"resolution":{"integrity":"sha512-vFKC2IEtQnVhpT78h1Yp8wzwrf8CM+MzKMHGJZfBtzhZNycRFnXsHk6E5TxIkkMsgNS7mdX3AGB7x2QM2di4lA=="},"engines":{"node":">=10"},"hasBin":true},"serialize-error@11.0.3":{"resolution":{"integrity":"sha512-2G2y++21dhj2R7iHAdd0FIzjGwuKZld+7Pl/bTU6YIkrC2ZMbVUjm+luj6A6V34Rv9XfKJDKpTWu9W4Gse1D9g=="},"engines":{"node":">=14.16"}},"seroval-plugins@1.5.4":{"resolution":{"integrity":"sha512-S0xQPhUTefAhNvNWFg0c1J8qJArHt5KdtJ/cFAofo06KD1MVSeFWyl4iiu+ApDIuw0WhjpOfCdgConOfAnLgkw=="},"engines":{"node":">=10"},"peerDependencies":{"seroval":"^1.0"}},"seroval@1.5.4":{"resolution":{"integrity":"sha512-46uFvgrXTVxZcUorgSSRZ4y+ieqLLQRMlG4bnCZKW3qI6BZm7Rg4ntMW4p1mILEEBZWrFlcpp0AyIIlM6jD9iw=="},"engines":{"node":">=10"}},"setimmediate@1.0.5":{"resolution":{"integrity":"sha512-MATJdZp8sLqDl/68LfQmbP8zKPLQNV6BIZoIgrscFDQ+RsvK/BxeDQOgyxKKoh0y/8h3BqVFnCqQ/gd+reiIXA=="}},"sharp@0.34.5":{"resolution":{"integrity":"sha512-Ou9I5Ft9WNcCbXrU9cMgPBcCK8LiwLqcbywW3t4oDV37n1pzpuNLsYiAV8eODnjbtQlSDwZ2cUEeQz4E54Hltg=="},"engines":{"node":"^18.17.0 || ^20.3.0 || >=21.0.0"}},"shebang-command@2.0.0":{"resolution":{"integrity":"sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA=="},"engines":{"node":">=8"}},"shebang-regex@3.0.0":{"resolution":{"integrity":"sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A=="},"engines":{"node":">=8"}},"shiki@3.15.0":{"resolution":{"integrity":"sha512-kLdkY6iV3dYbtPwS9KXU7mjfmDm25f5m0IPNFnaXO7TBPcvbUOY72PYXSuSqDzwp+vlH/d7MXpHlKO/x+QoLXw=="}},"siginfo@2.0.0":{"resolution":{"integrity":"sha512-ybx0WO1/8bSBLEWXZvEd7gMW3Sn3JFlW3TvX1nREbDLRNQNaeNN8WK0meBwPdAaOI7TtRRRJn/Es1zhrrCHu7g=="}},"signal-exit@3.0.7":{"resolution":{"integrity":"sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ=="}},"signal-exit@4.1.0":{"resolution":{"integrity":"sha512-bzyZ1e88w9O1iNJbKnOlvYTrWPDl46O1bG0D3XInv+9tkPrxrN8jUUTiFlDkkmKWgn1M6CfIA13SuGqOa9Korw=="},"engines":{"node":">=14"}},"simple-concat@1.0.1":{"resolution":{"integrity":"sha512-cSFtAPtRhljv69IK0hTVZQ+OfE9nePi/rtJmw5UjHeVyVroEqJXP1sFztKUy1qU+xvz3u/sfYJLa947b7nAN2Q=="}},"simple-get@4.0.1":{"resolution":{"integrity":"sha512-brv7p5WgH0jmQJr1ZDDfKDOSeWWg+OVypG99A/5vYGPqJ6pxiaHLy8nxtFjBA7oMa01ebA9gfh1uMCFqOuXxvA=="}},"sirv@3.0.2":{"resolution":{"integrity":"sha512-2wcC/oGxHis/BoHkkPwldgiPSYcpZK3JU28WoMVv55yHJgcZ8rlXvuG9iZggz+sU1d4bRgIGASwyWqjxu3FM0g=="},"engines":{"node":">=18"}},"slash@3.0.0":{"resolution":{"integrity":"sha512-g9Q1haeby36OSStwb4ntCGGGaKsaVSjQ68fBxoQcutl5fS1vuY18H3wSt3jFyFtrkx+Kz0V1G85A4MyAdDMi2Q=="},"engines":{"node":">=8"}},"smart-buffer@4.2.0":{"resolution":{"integrity":"sha512-94hK0Hh8rPqQl2xXc3HsaBoOXKV20MToPkcXvwbISWLEs+64sBq5kFgn2kJDHb1Pry9yrP0dxrCI9RRci7RXKg=="},"engines":{"node":">= 
6.0.0","npm":">= 3.0.0"}},"socks-proxy-agent@8.0.5":{"resolution":{"integrity":"sha512-HehCEsotFqbPW9sJ8WVYB6UbmIMv7kUUORIF2Nncq4VQvBfNBLibW9YZR5dlYCSUhwcD628pRllm7n+E+YTzJw=="},"engines":{"node":">= 14"}},"socks@2.8.8":{"resolution":{"integrity":"sha512-NlGELfPrgX2f1TAAcz0WawlLn+0r3FyhhCRpFFK2CemXenPYvzMWWZINv3eDNo9ucdwme7oCHRY0Jnbs4aIkog=="},"engines":{"node":">= 10.0.0","npm":">= 3.0.0"}},"source-map-js@1.2.1":{"resolution":{"integrity":"sha512-UXWMKhLOwVKb728IUtQPXxfYU+usdybtUrK/8uGE8CQMvrhOpwvzDBwj0QhSL7MQc7vIsISBG8VQ8+IDQxpfQA=="},"engines":{"node":">=0.10.0"}},"source-map-support@0.5.21":{"resolution":{"integrity":"sha512-uBHU3L3czsIyYXKX88fdrGovxdSCoTGDRZ6SYXtSRxLZUzHg5P/66Ht6uoUlHu9EZod+inXhKo3qQgwXUT/y1w=="}},"source-map@0.6.1":{"resolution":{"integrity":"sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g=="},"engines":{"node":">=0.10.0"}},"source-map@0.7.6":{"resolution":{"integrity":"sha512-i5uvt8C3ikiWeNZSVZNWcfZPItFQOsYTUAOkcUPGd8DqDy1uOUikjt5dG+uRlwyvR108Fb9DOd4GvXfT0N2/uQ=="},"engines":{"node":">= 12"}},"space-separated-tokens@2.0.2":{"resolution":{"integrity":"sha512-PEGlAwrG8yXGXRjW32fGbg66JAlOAwbObuqVoJpv/mRgoWDQfgH1wDPvtzWyUSNAXBGSk8h755YDbbcEy3SH2Q=="}},"spacetrim@0.11.59":{"resolution":{"integrity":"sha512-lLYsktklSRKprreOm7NXReW8YiX2VBjbgmXYEziOoGf/qsJqAEACaDvoTtUOycwjpaSh+bT8eu0KrJn7UNxiCg=="}},"spawndamnit@3.0.1":{"resolution":{"integrity":"sha512-MmnduQUuHCoFckZoWnXsTg7JaiLBJrKFj9UI2MbRPGaJeVpsLcVBu6P/IGZovziM/YBsellCmsprgNA+w0CzVg=="}},"split2@4.2.0":{"resolution":{"integrity":"sha512-UcjcJOWknrNkF6PLX83qcHM6KHgVKNkV62Y8a5uYDVv9ydGQVwAHMKqHdJje1VTWpljG0WYpCDhrCdAOYH4TWg=="},"engines":{"node":">= 10.x"}},"sprintf-js@1.0.3":{"resolution":{"integrity":"sha512-D9cPgkvLlV3t3IzL0D0YLvGA9Ahk4PcvVwUbN0dSGr1aP0Nrt4AEnTUbuGvquEC0mA64Gqt1fzirlRs5ibXx8g=="}},"srvx@0.11.15":{"resolution":{"integrity":"sha512-iXsux0UcOjdvs0LCMa2Ws3WwcDUozA3JN3BquNXkaFPP7TpRqgunKdEgoZ/uwb1J6xaYHfxtz9Twlh6yzwM6Tg=="},"engines":{"node":">=20.16.0"},"hasBin":true},"stackback@0.0.2":{"resolution":{"integrity":"sha512-1XMJE5fQo1jGH6Y/7ebnwPOBEkIEnT4QF32d5R1+VXdXveM0IBMJt8zfaxX1P3QhVwrYe+576+jkANtSS2mBbw=="}},"statuses@2.0.2":{"resolution":{"integrity":"sha512-DvEy55V3DB7uknRo+4iOGT5fP1slR8wQohVdknigZPMpMstaKJQWhwiYBACJE3Ul2pTnATihhBYnRhZQHGBiRw=="},"engines":{"node":">= 
0.8"}},"std-env@3.10.0":{"resolution":{"integrity":"sha512-5GS12FdOZNliM5mAOxFRg7Ir0pWz8MdpYm6AY6VPkGpbA7ZzmbzNcBJQ0GPvvyWgcY7QAhCgf9Uy89I03faLkg=="}},"std-env@3.9.0":{"resolution":{"integrity":"sha512-UGvjygr6F6tpH7o2qyqR6QYpwraIjKSdtzyBdyytFOHmPZY917kwdwLG0RbOjWOnKmnm3PeHjaoLLMie7kPLQw=="}},"std-env@4.1.0":{"resolution":{"integrity":"sha512-Rq7ybcX2RuC55r9oaPVEW7/xu3tj8u4GeBYHBWCychFtzMIr86A7e3PPEBPT37sHStKX3+TiX/Fr/ACmJLVlLQ=="}},"streamx@2.25.0":{"resolution":{"integrity":"sha512-0nQuG6jf1w+wddNEEXCF4nTg3LtufWINB5eFEN+5TNZW7KWJp6x87+JFL43vaAUPyCfH1wID+mNVyW6OHtFamg=="}},"strict-event-emitter@0.5.1":{"resolution":{"integrity":"sha512-vMgjE/GGEPEFnhFub6pa4FmJBRBVOLpIII2hvCZ8Kzb7K0hlHo7mQv6xYrBvCL2LtAIBwFUK8wvuJgTVSQ5MFQ=="}},"string-width@4.2.3":{"resolution":{"integrity":"sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g=="},"engines":{"node":">=8"}},"string-width@5.1.2":{"resolution":{"integrity":"sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA=="},"engines":{"node":">=12"}},"string_decoder@1.1.1":{"resolution":{"integrity":"sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg=="}},"string_decoder@1.3.0":{"resolution":{"integrity":"sha512-hkRX8U1WjJFd8LsDJ2yQ/wWWxaopEsABU1XfkM8A+j0+85JAGppt16cr1Whg6KIbb4okU6Mql6BOj+uup/wKeA=="}},"stringify-entities@4.0.4":{"resolution":{"integrity":"sha512-IwfBptatlO+QCJUo19AqvrPNqlVMpW9YEL2LIVY+Rpv2qsjCGxaDLNRgeGsQWJhfItebuJhsGSLjaBbNSQ+ieg=="}},"strip-ansi@6.0.1":{"resolution":{"integrity":"sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A=="},"engines":{"node":">=8"}},"strip-ansi@7.1.2":{"resolution":{"integrity":"sha512-gmBGslpoQJtgnMAvOVqGZpEz9dyoKTCzy2nfz/n8aIFhN/jCE/rCmcxabB6jOOHV+0WNnylOxaxBQPSvcWklhA=="},"engines":{"node":">=12"}},"strip-ansi@7.2.0":{"resolution":{"integrity":"sha512-yDPMNjp4WyfYBkHnjIRLfca1i6KMyGCtsVgoKe/z1+6vukgaENdgGBZt+ZmKPc4gavvEZ5OgHfHdrazhgNyG7w=="},"engines":{"node":">=12"}},"strip-bom@3.0.0":{"resolution":{"integrity":"sha512-vavAMRXOgBVNF6nyEEmL3DBK19iRpDcoIwW+swQ+CbGiu7lju6t+JklA1MHweoWtadgt4ISVUsXLyDq34ddcwA=="},"engines":{"node":">=4"}},"strip-json-comments@2.0.1":{"resolution":{"integrity":"sha512-4gB8na07fecVVkOI6Rs4e7T6NOTki5EmL7TUduTs6bu3EdnSycntVJ4re8kgZA+wx9IueI2Y11bfbgwtzuE0KQ=="},"engines":{"node":">=0.10.0"}},"strip-literal@3.0.0":{"resolution":{"integrity":"sha512-TcccoMhJOM3OebGhSBEmp3UZ2SfDMZUEBdRA/9ynfLi8yYajyWX3JiXArcJt4Umh4vISpspkQIY8ZZoCqjbviA=="}},"strnum@1.1.2":{"resolution":{"integrity":"sha512-vrN+B7DBIoTTZjnPNewwhx6cBA/H+IS7rfW68n7XxC1y7uoiGQBxaKzqucGUgavX15dJgiGztLJ8vxuEzwqBdA=="}},"stylis@4.3.6":{"resolution":{"integrity":"sha512-yQ3rwFWRfwNUY7H5vpU0wfdkNSnvnJinhF9830Swlaxl03zsOjCfmX0ugac+3LtK0lYSgwL/KXc8oYL3mG4YFQ=="}},"supports-color@10.2.2":{"resolution":{"integrity":"sha512-SS+jx45GF1QjgEXQx4NJZV9ImqmO2NPz5FNsIHrsDjh2YsHnawpan7SNQ1o8NuhrbHZy9AZhIoCUiCeaW/C80g=="},"engines":{"node":">=18"}},"supports-color@7.2.0":{"resolution":{"integrity":"sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw=="},"engines":{"node":">=8"}},"supports-color@8.1.1":{"resolution":{"integrity":"sha512-MpUEN2OodtUzxvKQl72cUF7RQ5EiHsGvSsVG0ia9c5RbWGL2CI4C7EpPS8UTBIplnlzZiNuV56w+FuNxy3ty2Q=="},"engines":{"node":">=10"}},"symbol-tree@3.2.4":{"resolution":{"integrity":"sha512-9QNk5KwDF+Bvz+PyObkmSYjI5ksVUYtjW7AU22r2NKcfLJcXp96hkDWU3+XndOsUb+AQ9QhfzfCT2O+CNWT5Tw=="}},"sync-child-process@1.0.2":{"resolution
":{"integrity":"sha512-8lD+t2KrrScJ/7KXCSyfhT3/hRq78rC0wBFqNJXv3mZyn6hW2ypM05JmlSvtqRbeq6jqA94oHbxAr2vYsJ8vDA=="},"engines":{"node":">=16.0.0"}},"sync-message-port@1.2.0":{"resolution":{"integrity":"sha512-gAQ9qrUN/UCypHtGFbbe7Rc/f9bzO88IwrG8TDo/aMKAApKyD6E3W4Cm0EfhfBb6Z6SKt59tTCTfD+n1xmAvMg=="},"engines":{"node":">=16.0.0"}},"tailwindcss@4.2.4":{"resolution":{"integrity":"sha512-HhKppgO81FQof5m6TEnuBWCZGgfRAWbaeOaGT00KOy/Pf/j6oUihdvBpA7ltCeAvZpFhW3j0PTclkxsd4IXYDA=="}},"tapable@2.3.3":{"resolution":{"integrity":"sha512-uxc/zpqFg6x7C8vOE7lh6Lbda8eEL9zmVm/PLeTPBRhh1xCgdWaQ+J1CUieGpIfm2HdtsUpRv+HshiasBMcc6A=="},"engines":{"node":">=6"}},"tar-fs@2.1.4":{"resolution":{"integrity":"sha512-mDAjwmZdh7LTT6pNleZ05Yt65HC3E+NiQzl672vQG38jIrehtJk/J3mNwIg+vShQPcLF/LV7CMnDW6vjj6sfYQ=="}},"tar-fs@3.1.2":{"resolution":{"integrity":"sha512-QGxxTxxyleAdyM3kpFs14ymbYmNFrfY+pHj7Z8FgtbZ7w2//VAgLMac7sT6nRpIHjppXO2AwwEOg0bPFVRcmXw=="}},"tar-stream@2.2.0":{"resolution":{"integrity":"sha512-ujeqbceABgwMZxEJnk2HDY2DlnUZ+9oEcb1KzTVfYHio0UE6dG71n60d8D2I4qNvleWrrXpmjpt7vZeF1LnMZQ=="},"engines":{"node":">=6"}},"tar-stream@3.2.0":{"resolution":{"integrity":"sha512-ojzvCvVaNp6aOTFmG7jaRD0meowIAuPc3cMMhSgKiVWws1GyHbGd/xvnyuRKcKlMpt3qvxx6r0hreCNITP9hIg=="}},"tar@6.2.1":{"resolution":{"integrity":"sha512-DZ4yORTwrbTj/7MZYq2w+/ZFdI6OZ/f9SFHR+71gIVUZhOQPHzVCLpvRnPgyaMpfWxxk/4ONva3GQSyNIKRv6A=="},"engines":{"node":">=10"},"deprecated":"Old versions of tar are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me"},"teex@1.0.1":{"resolution":{"integrity":"sha512-eYE6iEI62Ni1H8oIa7KlDU6uQBtqr4Eajni3wX7rpfXD8ysFx8z0+dri+KWEPWpBsxXfxu58x/0jvTVT1ekOSg=="}},"term-size@2.2.1":{"resolution":{"integrity":"sha512-wK0Ri4fOGjv/XPy8SBHZChl8CM7uMc5VML7SqiQ0zG7+J5Vr+RMQDoHa2CNT6KHUnTGIXH34UDMkPzAUyapBZg=="},"engines":{"node":">=8"}},"terser-webpack-plugin@5.5.0":{"resolution":{"integrity":"sha512-UYhptBwhWvfIjKd/UuFo6D8uq9xpGLDK+z8EDsj/zWhrTaH34cKEbrkMKfV5YWqGBvAYA3tlzZbs2R+qYrbQJA=="},"engines":{"node":">= 
10.13.0"},"peerDependencies":{"@swc/core":"*","esbuild":"*","uglify-js":"*","webpack":"^5.1.0"},"peerDependenciesMeta":{"@swc/core":{"optional":true},"esbuild":{"optional":true},"uglify-js":{"optional":true}}},"terser@5.36.0":{"resolution":{"integrity":"sha512-IYV9eNMuFAV4THUspIRXkLakHnV6XO7FEdtKjf/mDyrnqUg9LnlOn6/RwRvM9SZjR4GUq8Nk8zj67FzVARr74w=="},"engines":{"node":">=10"},"hasBin":true},"test-exclude@7.0.1":{"resolution":{"integrity":"sha512-pFYqmTw68LXVjeWJMST4+borgQP2AyMNbg1BpZh9LbyhUeNkeaPF9gzfPGUAnSMV3qPYdWUwDIjjCLiSDOl7vg=="},"engines":{"node":">=18"}},"text-decoder@1.2.7":{"resolution":{"integrity":"sha512-vlLytXkeP4xvEq2otHeJfSQIRyWxo/oZGEbXrtEEF9Hnmrdly59sUbzZ/QgyWuLYHctCHxFF4tRQZNQ9k60ExQ=="}},"tinybench@2.9.0":{"resolution":{"integrity":"sha512-0+DUvqWMValLmha6lr4kD8iAMK1HzV0/aKnCtWb9v9641TnP/MFb7Pc2bxoxQjTXAErryXVgUOfv2YqNllqGeg=="}},"tinyexec@0.3.2":{"resolution":{"integrity":"sha512-KQQR9yN7R5+OSwaK0XQoj22pwHoTlgYqmUscPYoknOoWCWfj/5/ABTMRi69FrKU5ffPVh5QcFikpWJI/P1ocHA=="}},"tinyexec@1.0.2":{"resolution":{"integrity":"sha512-W/KYk+NFhkmsYpuHq5JykngiOCnxeVL8v8dFnqxSD8qEEdRfXk1SDM6JzNqcERbcGYj9tMrDQBYV9cjgnunFIg=="},"engines":{"node":">=18"}},"tinyglobby@0.2.14":{"resolution":{"integrity":"sha512-tX5e7OM1HnYr2+a2C/4V0htOcSQcoSTH9KgJnVvNm5zm/cyEWKJ7j7YutsH9CxMdtOkkLFy2AHrMci9IM8IPZQ=="},"engines":{"node":">=12.0.0"}},"tinyglobby@0.2.15":{"resolution":{"integrity":"sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ=="},"engines":{"node":">=12.0.0"}},"tinyglobby@0.2.16":{"resolution":{"integrity":"sha512-pn99VhoACYR8nFHhxqix+uvsbXineAasWm5ojXoN8xEwK5Kd3/TrhNn1wByuD52UxWRLy8pu+kRMniEi6Eq9Zg=="},"engines":{"node":">=12.0.0"}},"tinypool@1.1.1":{"resolution":{"integrity":"sha512-Zba82s87IFq9A9XmjiX5uZA/ARWDrB03OHlq+Vw1fSdt0I+4/Kutwy8BP4Y/y/aORMo61FQ0vIb5j44vSo5Pkg=="},"engines":{"node":"^18.0.0 || 
>=20.0.0"}},"tinyrainbow@2.0.0":{"resolution":{"integrity":"sha512-op4nsTR47R6p0vMUUoYl/a+ljLFVtlfaXkLQmqfLR1qHma1h/ysYk4hEXZ880bf2CYgTskvTa/e196Vd5dDQXw=="},"engines":{"node":">=14.0.0"}},"tinyrainbow@3.0.3":{"resolution":{"integrity":"sha512-PSkbLUoxOFRzJYjjxHJt9xro7D+iilgMX/C9lawzVuYiIdcihh9DXmVibBe8lmcFrRi/VzlPjBxbN7rH24q8/Q=="},"engines":{"node":">=14.0.0"}},"tinyrainbow@3.1.0":{"resolution":{"integrity":"sha512-Bf+ILmBgretUrdJxzXM0SgXLZ3XfiaUuOj/IKQHuTXip+05Xn+uyEYdVg0kYDipTBcLrCVyUzAPz7QmArb0mmw=="},"engines":{"node":">=14.0.0"}},"tinyspy@4.0.3":{"resolution":{"integrity":"sha512-t2T/WLB2WRgZ9EpE4jgPJ9w+i66UZfDc8wHh0xrwiRNN+UwH98GIJkTeZqX9rg0i0ptwzqW+uYeIF0T4F8LR7A=="},"engines":{"node":">=14.0.0"}},"tldts-core@6.1.52":{"resolution":{"integrity":"sha512-j4OxQI5rc1Ve/4m/9o2WhWSC4jGc4uVbCINdOEJRAraCi0YqTqgMcxUx7DbmuP0G3PCixoof/RZB0Q5Kh9tagw=="}},"tldts-core@7.0.19":{"resolution":{"integrity":"sha512-lJX2dEWx0SGH4O6p+7FPwYmJ/bu1JbcGJ8RLaG9b7liIgZ85itUVEPbMtWRVrde/0fnDPEPHW10ZsKW3kVsE9A=="}},"tldts@6.1.52":{"resolution":{"integrity":"sha512-fgrDJXDjbAverY6XnIt0lNfv8A0cf7maTEaZxNykLGsLG7XP+5xhjBTrt/ieAsFjAlZ+G5nmXomLcZDkxXnDzw=="},"hasBin":true},"tldts@7.0.19":{"resolution":{"integrity":"sha512-8PWx8tvC4jDB39BQw1m4x8y5MH1BcQ5xHeL2n7UVFulMPH/3Q0uiamahFJ3lXA0zO2SUyRXuVVbWSDmstlt9YA=="},"hasBin":true},"tmp@0.2.5":{"resolution":{"integrity":"sha512-voyz6MApa1rQGUxT3E+BK7/ROe8itEx7vD8/HEvt4xwXucvQ5G5oeEiHkmHZJuBO21RpOf+YYm9MOivj709jow=="},"engines":{"node":">=14.14"}},"to-regex-range@5.0.1":{"resolution":{"integrity":"sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ=="},"engines":{"node":">=8.0"}},"totalist@3.0.1":{"resolution":{"integrity":"sha512-sf4i37nQ2LBx4m3wB74y+ubopq6W/dIzXg0FDGjsYnZHVa1Da8FH853wlL2gtUhg+xJXjfk3kUZS3BRoQeoQBQ=="},"engines":{"node":">=6"}},"tough-cookie@4.1.4":{"resolution":{"integrity":"sha512-Loo5UUvLD9ScZ6jh8beX1T6sO1w2/MpCRpEP7V280GKMVUQ0Jzar2U3UJPsrdbziLEMMhu3Ujnq//rhiFuIeag=="},"engines":{"node":">=6"}},"tough-cookie@5.1.2":{"resolution":{"integrity":"sha512-FVDYdxtnj0G6Qm/DhNPSb8Ju59ULcup3tuJxkFb5K8Bv2pUXILbf0xZWU8PX8Ov19OXljbUyveOFwRMwkXzO+A=="},"engines":{"node":">=16"}},"tough-cookie@6.0.0":{"resolution":{"integrity":"sha512-kXuRi1mtaKMrsLUxz3sQYvVl37B0Ns6MzfrtV5DvJceE9bPyspOqk9xxv7XbZWcfLWbFmm997vl83qUWVJA64w=="},"engines":{"node":">=16"}},"tr46@5.1.1":{"resolution":{"integrity":"sha512-hdF5ZgjTqgAntKkklYw0R03MG2x/bSzTtkxmIRw/sTNV8YXsCJ1tfLAX23lhxhHJlEf3CRCOCGGWw3vI3GaSPw=="},"engines":{"node":">=18"}},"tr46@6.0.0":{"resolution":{"integrity":"sha512-bLVMLPtstlZ4iMQHpFHTR7GAGj2jxi8Dg0s2h2MafAE4uSWF98FC/3MomU51iQAMf8/qDUbKWf5GxuvvVcXEhw=="},"engines":{"node":">=20"}},"tree-kill@1.2.2":{"resolution":{"integrity":"sha512-L0Orpi8qGpRG//Nd+H90vFB+3iHnue1zSSGmNOOCh1GLJ7rUKVwV2HvijphGQS2UmhUZewS9VgvxYIdgr+fG1A=="},"hasBin":true},"trim-lines@3.0.1":{"resolution":{"integrity":"sha512-kRj8B+YHZCc9kQYdWfJB2/oUl9rA99qbowYYBtr4ui4mZyAQ2JpvVBd/6U2YloATfqBhBTSMhTpgBHtU0Mf3Rg=="}},"trim-trailing-lines@2.1.0":{"resolution":{"integrity":"sha512-5UR5Biq4VlVOtzqkm2AZlgvSlDJtME46uV0br0gENbwN4l5+mMKT4b9gJKqWtuL2zAIqajGJGuvbCbcAJUZqBg=="}},"trough@2.2.0":{"resolution":{"integrity":"sha512-tmMpK00BjZiUyVyvrBK7knerNgmgvcV/KLVyuma/SC+TQN167GrMRciANTz09+k3zW8L8t60jWO1GpfkZdjTaw=="}},"ts-algebra@2.0.0":{"resolution":{"integrity":"sha512-FPAhNPFMrkwz76P7cdjdmiShwMynZYN6SgOujD1urY4oNm80Ou9oMdmbR45LotcKOXoy7wSmHkRFE6Mxbrhefw=="}},"ts-dedent@2.2.0":{"resolution":{"integrity":"sha512-q5W7tVM71e2xjHZTlgfTDoPF/SmqKG5hddq9SzR49CH2hayqRKJtQ4mtRlSxKaJlR
/+9rEM+mnBHf7I2/BQcpQ=="},"engines":{"node":">=6.10"}},"tsconfig-paths@4.2.0":{"resolution":{"integrity":"sha512-NoZ4roiN7LnbKn9QqE1amc9DJfzvZXxF4xDavcOWt1BPkdx+m+0gJuPM+S0vCe7zTJMYUP0R8pO2XMr+Y8oLIg=="},"engines":{"node":">=6"}},"tslib@2.8.1":{"resolution":{"integrity":"sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w=="}},"tsx@4.20.5":{"resolution":{"integrity":"sha512-+wKjMNU9w/EaQayHXb7WA7ZaHY6hN8WgfvHNQ3t1PnU91/7O8TcTnIhCDYTZwnt8JsO9IBqZ30Ln1r7pPF52Aw=="},"engines":{"node":">=18.0.0"},"hasBin":true},"tunnel-agent@0.6.0":{"resolution":{"integrity":"sha512-McnNiV1l8RYeY8tBgEpuodCC1mLUdbSN+CYBL7kJsJNInOP8UjDDEwdk6Mw60vdLLrr5NHKZhMAOSrR2NZuQ+w=="}},"type-fest@2.19.0":{"resolution":{"integrity":"sha512-RAH822pAdBgcNMAfWnCBU3CFZcfZ/i1eZjwFU/dsLKumyuuP3niueg2UAukXYF0E2AAoc82ZSSf9J0WQBinzHA=="},"engines":{"node":">=12.20"}},"type-fest@4.26.0":{"resolution":{"integrity":"sha512-OduNjVJsFbifKb57UqZ2EMP1i4u64Xwow3NYXUtBbD4vIwJdQd4+xl8YDou1dlm4DVrtwT/7Ky8z8WyCULVfxw=="},"engines":{"node":">=16"}},"type-fest@4.41.0":{"resolution":{"integrity":"sha512-TeTSQ6H5YHvpqVwBRcnLDCBnDOHWYu7IvGbHT6N8AOymcr9PJGjc1GTtiWZTYg0NCgYwvnYWEkVChQAr9bjfwA=="},"engines":{"node":">=16"}},"typescript@5.8.3":{"resolution":{"integrity":"sha512-p1diW6TqL9L07nNxvRMM7hMMw4c5XOo/1ibL4aAIGmSAt9slTE1Xgw5KWuof2uTOvCg9BY7ZRi+GaF+7sfgPeQ=="},"engines":{"node":">=14.17"},"hasBin":true},"typescript@5.9.3":{"resolution":{"integrity":"sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw=="},"engines":{"node":">=14.17"},"hasBin":true},"ufo@1.6.1":{"resolution":{"integrity":"sha512-9a4/uxlTWJ4+a5i0ooc1rU7C7YOw3wT+UGqdeNNHWnOF9qcMBgLRS+4IYUqbczewFx4mLEig6gawh7X6mFlEkA=="}},"undici-types@6.21.0":{"resolution":{"integrity":"sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ=="}},"undici-types@7.16.0":{"resolution":{"integrity":"sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw=="}},"undici@7.16.0":{"resolution":{"integrity":"sha512-QEg3HPMll0o3t2ourKwOeUAZ159Kn9mx5pnzHRQO8+Wixmh88YdZRiIwat0iNzNNXn0yoEtXJqFpyW7eM8BV7g=="},"engines":{"node":">=20.18.1"}},"undici@7.24.8":{"resolution":{"integrity":"sha512-6KQ/+QxK49Z/p3HO6E5ZCZWNnCasyZLa5ExaVYyvPxUwKtbCPMKELJOqh7EqOle0t9cH/7d2TaaTRRa6Nhs4YQ=="},"engines":{"node":">=20.18.1"}},"undici@7.25.0":{"resolution":{"integrity":"sha512-xXnp4kTyor2Zq+J1FfPI6Eq3ew5h6Vl0F/8d9XU5zZQf1tX9s2Su1/3PiMmUANFULpmksxkClamIZcaUqryHsQ=="},"engines":{"node":">=20.18.1"}},"unenv@2.0.0-rc.24":{"resolution":{"integrity":"sha512-i7qRCmY42zmCwnYlh9H2SvLEypEFGye5iRmEMKjcGi7zk9UquigRjFtTLz0TYqr0ZGLZhaMHl/foy1bZR+Cwlw=="}},"unified@11.0.5":{"resolution":{"integrity":"sha512-xKvGhPWw3k84Qjh8bI3ZeJjqnyadK+GEFtazSfZv/rKeTkTjOJho6mFqh2SM96iIcZokxiOpg78GazTSg8+KHA=="}},"unist-util-find-after@5.0.0":{"resolution":{"integrity":"sha512-amQa0Ep2m6hE2g72AugUItjbuM8X8cGQnFoHk0pGfrFeT9GZhzN5SW8nRsiGKK7Aif4CrACPENkA6P/Lw6fHGQ=="}},"unist-util-is@6.0.0":{"resolution":{"integrity":"sha512-2qCTHimwdxLfz+YzdGfkqNlH0tLi9xjTnHddPmJwtIG9MGsdbutfTc4P+haPD7l7Cjxf/WZj+we5qfVPvvxfYw=="}},"unist-util-position@5.0.0":{"resolution":{"integrity":"sha512-fucsC7HjXvkB5R3kTCO7kUjRdrS0BJt3M/FPxmHMBOm8JQi2BsHAHFsy27E0EolP8rp0NzXsJ+jNPyDWvOJZPA=="}},"unist-util-stringify-position@4.0.0":{"resolution":{"integrity":"sha512-0ASV06AAoKCDkS2+xw5RXJywruurpbC4JZSm7nr7MOt1ojAzvyyaO+UxZf18j8FCF6kmzCZKcAgN/yu2gm2XgQ=="}},"unist-util-visit-parents@6.0.1":{"resolution":{"integrity":"sha512-L/PqWzfTP9l
zzEa6CKs0k2nARxTdZduw3zyh8d2NVBnsyvHjSX4TWse388YrrQKbvI8w20fGjGlhgT96WwKykw=="}},"unist-util-visit@5.0.0":{"resolution":{"integrity":"sha512-MR04uvD+07cwl/yhVuVWAtw+3GOR/knlL55Nd/wAdblk27GCVt3lqpTivy/tkJcZoNPzTwS1Y+KMojlLDhoTzg=="}},"universalify@0.1.2":{"resolution":{"integrity":"sha512-rBJeI5CXAlmy1pV+617WB9J63U6XcazHHF2f2dbJix4XzpUF0RS3Zbj0FGIOCAva5P/d/GBOYaACQ1w+0azUkg=="},"engines":{"node":">= 4.0.0"}},"universalify@0.2.0":{"resolution":{"integrity":"sha512-CJ1QgKmNg3CwvAv/kOFmtnEN05f0D/cn9QntgNOQlQF9dgvVTHj3t+8JPdjqawCHk7V/KA+fbUqzZ9XWhcqPUg=="},"engines":{"node":">= 4.0.0"}},"universalify@2.0.1":{"resolution":{"integrity":"sha512-gptHNQghINnc/vTGIk0SOFGFNXw7JVrlRUtConJRlvaw6DuX0wO5Jeko9sWrMBhh+PsYAZ7oXAiOnf/UKogyiw=="},"engines":{"node":">= 10.0.0"}},"unplugin@3.0.0":{"resolution":{"integrity":"sha512-0Mqk3AT2TZCXWKdcoaufeXNukv2mTrEZExeXlHIOZXdqYoHHr4n51pymnwV8x2BOVxwXbK2HLlI7usrqMpycdg=="},"engines":{"node":"^20.19.0 || >=22.12.0"}},"update-browserslist-db@1.1.3":{"resolution":{"integrity":"sha512-UxhIZQ+QInVdunkDAaiazvvT/+fXL5Osr0JZlJulepYu6Jd7qJtDZjlur0emRlT71EN3ScPoE7gvsuIKKNavKw=="},"hasBin":true,"peerDependencies":{"browserslist":">= 4.21.0"}},"update-browserslist-db@1.2.3":{"resolution":{"integrity":"sha512-Js0m9cx+qOgDxo0eMiFGEueWztz+d4+M3rGlmKPT+T4IS/jP4ylw3Nwpu6cpTTP8R1MAC1kF4VbdLt3ARf209w=="},"hasBin":true,"peerDependencies":{"browserslist":">= 4.21.0"}},"url-parse@1.5.10":{"resolution":{"integrity":"sha512-WypcfiRhfeUP9vvF0j6rw0J3hrWrw6iZv3+22h6iRMJ/8z1Tj6XfLP4DsUix5MhMPnXpiHDoKyoZ/bdCkwBCiQ=="}},"urlpattern-polyfill@10.1.0":{"resolution":{"integrity":"sha512-IGjKp/o0NL3Bso1PymYURCJxMPNAf/ILOpendP9f5B6e1rTJgdgiOvgfoT8VxCAdY+Wisb9uhGaJJf3yZ2V9nw=="}},"use-sync-external-store@1.6.0":{"resolution":{"integrity":"sha512-Pp6GSwGP/NrPIrxVFAIkOQeyw8lFenOHijQWkUTrDvrF4ALqylP2C/KCkeS9dpUM3KvYRQhna5vt7IL95+ZQ9w=="},"peerDependencies":{"react":"^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"}},"userhome@1.0.1":{"resolution":{"integrity":"sha512-5cnLm4gseXjAclKowC4IjByaGsjtAoV6PrOQOljplNB54ReUYJP8HdAFq2muHinSDAh09PPX/uXDPfdxRHvuSA=="},"engines":{"node":">= 0.8.0"}},"util-deprecate@1.0.2":{"resolution":{"integrity":"sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw=="}},"uuid@11.1.0":{"resolution":{"integrity":"sha512-0/A9rDy9P7cJ+8w1c9WD9V//9Wj15Ce2MPz8Ri6032usz+NfePxx5AcN3bN+r6ZL6jEo066/yNYB3tn4pQEx+A=="},"hasBin":true},"varint@6.0.0":{"resolution":{"integrity":"sha512-cXEIW6cfr15lFv563k4GuVuW/fiwjknytD37jIOLSdSWuOI6WnO/oKwmP2FQTU2l01LP8/M5TSAJpzUaGe3uWg=="}},"vfile-location@5.0.3":{"resolution":{"integrity":"sha512-5yXvWDEgqeiYiBe1lbxYF7UMAIm/IcopxMHrMQDq3nvKcjPKIhZklUKL+AE7J7uApI4kwe2snsK+eI6UTj9EHg=="}},"vfile-message@4.0.2":{"resolution":{"integrity":"sha512-jRDZ1IMLttGj41KcZvlrYAaI3CfqpLpfpf+Mfig13viT6NKvRzWZ+lXz0Y5D60w6uJIBAOGq9mSHf0gktF0duw=="}},"vfile@6.0.3":{"resolution":{"integrity":"sha512-KzIbH/9tXat2u30jf+smMwFCsno4wHVdNmzFyL+T/L3UGqqk6JKfVqOFOZEpZSHADH1k40ab6NUIXZq422ov3Q=="}},"vite-node@3.2.4":{"resolution":{"integrity":"sha512-EbKSKh+bh1E1IFxeO0pg1n4dvoOTt0UDiXMd/qn++r98+jPO1xtJilvXldeuQ8giIB5IkpjCgMleHMNEsGH6pg=="},"engines":{"node":"^18.0.0 || ^20.0.0 || >=22.0.0"},"hasBin":true},"vite-plugin-static-copy@4.1.0":{"resolution":{"integrity":"sha512-9XOarNV7LgP0KBB7AApxdgFikLXx3daZdqjC3AevYsL6MrUH62zphonLUs2a6LZc1HN1GY+vQdheZ8VVJb6dQQ=="},"engines":{"node":"^22.0.0 || >=24.0.0"},"peerDependencies":{"vite":"^6.0.0 || ^7.0.0 || 
^8.0.0"}},"vite@7.2.7":{"resolution":{"integrity":"sha512-ITcnkFeR3+fI8P1wMgItjGrR10170d8auB4EpMLPqmx6uxElH3a/hHGQabSHKdqd4FXWO1nFIp9rRn7JQ34ACQ=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"hasBin":true,"peerDependencies":{"@types/node":"^20.19.0 || >=22.12.0","jiti":">=1.21.0","less":"^4.0.0","lightningcss":"^1.21.0","sass":"^1.70.0","sass-embedded":"^1.70.0","stylus":">=0.54.8","sugarss":"^5.0.0","terser":"^5.16.0","tsx":"^4.8.1","yaml":"^2.4.2"},"peerDependenciesMeta":{"@types/node":{"optional":true},"jiti":{"optional":true},"less":{"optional":true},"lightningcss":{"optional":true},"sass":{"optional":true},"sass-embedded":{"optional":true},"stylus":{"optional":true},"sugarss":{"optional":true},"terser":{"optional":true},"tsx":{"optional":true},"yaml":{"optional":true}}},"vite@8.0.10":{"resolution":{"integrity":"sha512-rZuUu9j6J5uotLDs+cAA4O5H4K1SfPliUlQwqa6YEwSrWDZzP4rhm00oJR5snMewjxF5V/K3D4kctsUTsIU9Mw=="},"engines":{"node":"^20.19.0 || >=22.12.0"},"hasBin":true,"peerDependencies":{"@types/node":"^20.19.0 || >=22.12.0","@vitejs/devtools":"^0.1.0","esbuild":"^0.27.0 || ^0.28.0","jiti":">=1.21.0","less":"^4.0.0","sass":"^1.70.0","sass-embedded":"^1.70.0","stylus":">=0.54.8","sugarss":"^5.0.0","terser":"^5.16.0","tsx":"^4.8.1","yaml":"^2.4.2"},"peerDependenciesMeta":{"@types/node":{"optional":true},"@vitejs/devtools":{"optional":true},"esbuild":{"optional":true},"jiti":{"optional":true},"less":{"optional":true},"sass":{"optional":true},"sass-embedded":{"optional":true},"stylus":{"optional":true},"sugarss":{"optional":true},"terser":{"optional":true},"tsx":{"optional":true},"yaml":{"optional":true}}},"vitefu@1.1.1":{"resolution":{"integrity":"sha512-B/Fegf3i8zh0yFbpzZ21amWzHmuNlLlmJT6n7bu5e+pCHUKQIfXSYokrqOBGEMMe9UG2sostKQF9mml/vYaWJQ=="},"peerDependencies":{"vite":"^3.0.0 || ^4.0.0 || ^5.0.0 || ^6.0.0 || ^7.0.0-beta.0"},"peerDependenciesMeta":{"vite":{"optional":true}}},"vitest@3.2.4":{"resolution":{"integrity":"sha512-LUCP5ev3GURDysTWiP47wRRUpLKMOfPh+yKTx3kVIEiu5KOMeqzpnYNsKyOoVrULivR8tLcks4+lga33Whn90A=="},"engines":{"node":"^18.0.0 || ^20.0.0 || >=22.0.0"},"hasBin":true,"peerDependencies":{"@edge-runtime/vm":"*","@types/debug":"^4.1.12","@types/node":"^18.0.0 || ^20.0.0 || >=22.0.0","@vitest/browser":"3.2.4","@vitest/ui":"3.2.4","happy-dom":"*","jsdom":"*"},"peerDependenciesMeta":{"@edge-runtime/vm":{"optional":true},"@types/debug":{"optional":true},"@types/node":{"optional":true},"@vitest/browser":{"optional":true},"@vitest/ui":{"optional":true},"happy-dom":{"optional":true},"jsdom":{"optional":true}}},"vitest@4.0.18":{"resolution":{"integrity":"sha512-hOQuK7h0FGKgBAas7v0mSAsnvrIgAvWmRFjmzpJ7SwFHH3g1k2u37JtYwOwmEKhK6ZO3v9ggDBBm0La1LCK4uQ=="},"engines":{"node":"^20.0.0 || ^22.0.0 || >=24.0.0"},"hasBin":true,"peerDependencies":{"@edge-runtime/vm":"*","@opentelemetry/api":"^1.9.0","@types/node":"^20.0.0 || ^22.0.0 || 
>=24.0.0","@vitest/browser-playwright":"4.0.18","@vitest/browser-preview":"4.0.18","@vitest/browser-webdriverio":"4.0.18","@vitest/ui":"4.0.18","happy-dom":"*","jsdom":"*"},"peerDependenciesMeta":{"@edge-runtime/vm":{"optional":true},"@opentelemetry/api":{"optional":true},"@types/node":{"optional":true},"@vitest/browser-playwright":{"optional":true},"@vitest/browser-preview":{"optional":true},"@vitest/browser-webdriverio":{"optional":true},"@vitest/ui":{"optional":true},"happy-dom":{"optional":true},"jsdom":{"optional":true}}},"vitest@4.1.5":{"resolution":{"integrity":"sha512-9Xx1v3/ih3m9hN+SbfkUyy0JAs72ap3r7joc87XL6jwF0jGg6mFBvQ1SrwaX+h8BlkX6Hz9shdd1uo6AF+ZGpg=="},"engines":{"node":"^20.0.0 || ^22.0.0 || >=24.0.0"},"hasBin":true,"peerDependencies":{"@edge-runtime/vm":"*","@opentelemetry/api":"^1.9.0","@types/node":"^20.0.0 || ^22.0.0 || >=24.0.0","@vitest/browser-playwright":"4.1.5","@vitest/browser-preview":"4.1.5","@vitest/browser-webdriverio":"4.1.5","@vitest/coverage-istanbul":"4.1.5","@vitest/coverage-v8":"4.1.5","@vitest/ui":"4.1.5","happy-dom":"*","jsdom":"*","vite":"^6.0.0 || ^7.0.0 || ^8.0.0"},"peerDependenciesMeta":{"@edge-runtime/vm":{"optional":true},"@opentelemetry/api":{"optional":true},"@types/node":{"optional":true},"@vitest/browser-playwright":{"optional":true},"@vitest/browser-preview":{"optional":true},"@vitest/browser-webdriverio":{"optional":true},"@vitest/coverage-istanbul":{"optional":true},"@vitest/coverage-v8":{"optional":true},"@vitest/ui":{"optional":true},"happy-dom":{"optional":true},"jsdom":{"optional":true}}},"vscode-jsonrpc@8.2.0":{"resolution":{"integrity":"sha512-C+r0eKJUIfiDIfwJhria30+TYWPtuHJXHtI7J0YlOmKAo7ogxP20T0zxB7HZQIFhIyvoBPwWskjxrvAtfjyZfA=="},"engines":{"node":">=14.0.0"}},"vscode-languageserver-protocol@3.17.5":{"resolution":{"integrity":"sha512-mb1bvRJN8SVznADSGWM9u/b07H7Ecg0I3OgXDuLdn307rl/J3A9YD6/eYOssqhecL27hK1IPZAsaqh00i/Jljg=="}},"vscode-languageserver-textdocument@1.0.12":{"resolution":{"integrity":"sha512-cxWNPesCnQCcMPeenjKKsOCKQZ/L6Tv19DTRIGuLWe32lyzWhihGVJ/rcckZXJxfdKCFvRLS3fpBIsV/ZGX4zA=="}},"vscode-languageserver-types@3.17.5":{"resolution":{"integrity":"sha512-Ld1VelNuX9pdF39h2Hgaeb5hEZM2Z3jUrrMgWQAu82jMtZp7p3vJT3BzToKtZI7NgQssZje5o0zryOrhQvzQAg=="}},"vscode-languageserver@9.0.1":{"resolution":{"integrity":"sha512-woByF3PDpkHFUreUa7Hos7+pUWdeWMXRd26+ZX2A8cFx6v/JPTtd4/uN0/jB6XQHYaOlHbio03NTHCqrgG5n7g=="},"hasBin":true},"vscode-uri@3.0.8":{"resolution":{"integrity":"sha512-AyFQ0EVmsOZOlAnxoFOGOq1SQDWAB7C6aqMGS23svWAllfOaxbuFvcT8D1i8z3Gyn8fraVeZNNmN6e9bxxXkKw=="}},"w3c-xmlserializer@5.0.0":{"resolution":{"integrity":"sha512-o8qghlI8NZHU1lLPrpi2+Uq7abh4GGPpYANlalzWxyWteJOCsr/P+oPBA49TOLu5FTZO4d3F9MnWJfiMo4BkmA=="},"engines":{"node":">=18"}},"wait-port@1.1.0":{"resolution":{"integrity":"sha512-3e04qkoN3LxTMLakdqeWth8nih8usyg+sf1Bgdf9wwUkp05iuK1eSY/QpLvscT/+F/gA89+LpUmmgBtesbqI2Q=="},"engines":{"node":">=10"},"hasBin":true},"watchpack@2.5.1":{"resolution":{"integrity":"sha512-Zn5uXdcFNIA1+1Ei5McRd+iRzfhENPCe7LeABkJtNulSxjma+l7ltNx55BWZkRlwRnpOgHqxnjyaDgJnNXnqzg=="},"engines":{"node":">=10.13.0"}},"wcwidth@1.0.1":{"resolution":{"integrity":"sha512-XHPEwS0q6TaxcvG85+8EYkbiCux2XtWG2mkc47Ng2A77BQu9+DqIOJldST4HgPkuea7dvKSj5VgX3P1d4rW8Tg=="}},"web-namespaces@2.0.1":{"resolution":{"integrity":"sha512-bKr1DkiNa2krS7qxNtdrtHAmzuYGFQLiQ13TsorsdT6ULTkPLKuu5+GsFpDlg6JFjUTwX2DyhMPG2be8uPrqsQ=="}},"web-streams-polyfill@3.3.3":{"resolution":{"integrity":"sha512-d2JWLCivmZYTSIoge9MsgFCZrt571BikcWGYkjC1khllbTeDlGqZ2D8vD8E/lJa8WGWbb7Plm8/XJYV7IJHZZw=="},"
engines":{"node":">= 8"}},"web-vitals@4.2.4":{"resolution":{"integrity":"sha512-r4DIlprAGwJ7YM11VZp4R884m0Vmgr6EAKe3P+kO0PPj3Unqyvv59rczf6UiGcb9Z8QxZVcqKNwv/g0WNdWwsw=="}},"web-vitals@5.1.0":{"resolution":{"integrity":"sha512-ArI3kx5jI0atlTtmV0fWU3fjpLmq/nD3Zr1iFFlJLaqa5wLBkUSzINwBPySCX/8jRyjlmy1Volw1kz1g9XE4Jg=="}},"webdriver@9.2.0":{"resolution":{"integrity":"sha512-UrhuHSLq4m3OgncvX75vShfl5w3gmjAy8LvLb6/L6V+a+xcqMRelFx/DQ72Mr84F4m8Li6wjtebrOH1t9V/uOQ=="},"engines":{"node":">=18.20.0"}},"webdriverio@9.2.1":{"resolution":{"integrity":"sha512-AI7xzqTmFiU7oAx4fpEF1U1MA7smhCPVDeM0gxPqG5qWepzib3WDX2SsRtcmhdVW+vLJ3m4bf8rAXxZ2M1msWA=="},"engines":{"node":">=18.20.0"},"peerDependencies":{"puppeteer-core":"^22.3.0"},"peerDependenciesMeta":{"puppeteer-core":{"optional":true}}},"webidl-conversions@7.0.0":{"resolution":{"integrity":"sha512-VwddBukDzu71offAQR975unBIGqfKZpM+8ZX6ySk8nYhVoo5CYaZyzt3YBvYtRtO+aoGlqxPg/B87NGVZ/fu6g=="},"engines":{"node":">=12"}},"webidl-conversions@8.0.0":{"resolution":{"integrity":"sha512-n4W4YFyz5JzOfQeA8oN7dUYpR+MBP3PIUsn2jLjWXwK5ASUzt0Jc/A5sAUZoCYFJRGF0FBKJ+1JjN43rNdsQzA=="},"engines":{"node":">=20"}},"webpack-sources@3.4.1":{"resolution":{"integrity":"sha512-eACpxRN02yaawnt+uUNIF7Qje6A9zArxBbcAJjK1PK3S9Ycg5jIuJ8pW4q8EMnwNZCEGltcjkRx1QzOxOkKD8A=="},"engines":{"node":">=10.13.0"}},"webpack-virtual-modules@0.6.2":{"resolution":{"integrity":"sha512-66/V2i5hQanC51vBQKPH4aI8NMAcBW59FVBs+rC7eGHupMyfn34q7rZIE+ETlJ+XTevqfUhVVBgSUNSW2flEUQ=="}},"webpack@5.99.9":{"resolution":{"integrity":"sha512-brOPwM3JnmOa+7kd3NsmOUOwbDAj8FT9xDsG3IW0MgbN9yZV7Oi/s/+MNQ/EcSMqw7qfoRyXPoeEWT8zLVdVGg=="},"engines":{"node":">=10.13.0"},"hasBin":true,"peerDependencies":{"webpack-cli":"*"},"peerDependenciesMeta":{"webpack-cli":{"optional":true}}},"whatwg-encoding@3.1.1":{"resolution":{"integrity":"sha512-6qN4hJdMwfYBtE3YBTTHhoeuUrDBPZmbQaxWAqSALV/MeEnR5z1xd8UKud2RAkFoPkmB+hli1TZSnyi84xz1vQ=="},"engines":{"node":">=18"},"deprecated":"Use @exodus/bytes instead for a more spec-conformant and faster implementation"},"whatwg-mimetype@3.0.0":{"resolution":{"integrity":"sha512-nt+N2dzIutVRxARx1nghPKGv1xHikU7HKdfafKkLNLindmPU/ch3U31NOCGGA/dmPcmb1VlofO0vnKAcsm0o/Q=="},"engines":{"node":">=12"}},"whatwg-mimetype@4.0.0":{"resolution":{"integrity":"sha512-QaKxh0eNIi2mE9p2vEdzfagOKHCcj1pJ56EEHGQOVxp8r9/iszLUUV7v89x9O1p/T+NlTM5W7jW6+cz4Fq1YVg=="},"engines":{"node":">=18"}},"whatwg-url@14.2.0":{"resolution":{"integrity":"sha512-De72GdQZzNTUBBChsXueQUnPKDkg/5A5zp7pFDuQAj5UFoENpiACU0wlCvzpAGnTkj++ihpKwKyYewn/XNUbKw=="},"engines":{"node":">=18"}},"whatwg-url@15.1.0":{"resolution":{"integrity":"sha512-2ytDk0kiEj/yu90JOAp44PVPUkO9+jVhyf+SybKlRHSDlvOOZhdPIrr7xTH64l4WixO2cP+wQIcgujkGBPPz6g=="},"engines":{"node":">=20"}},"which@2.0.2":{"resolution":{"integrity":"sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA=="},"engines":{"node":">= 8"},"hasBin":true},"which@4.0.0":{"resolution":{"integrity":"sha512-GlaYyEb07DPxYCKhKzplCWBJtvxZcZMrL+4UkrTSJHHPyZU4mYYTv3qaOe77H7EODLSSopAUFAc6W8U4yqvscg=="},"engines":{"node":"^16.13.0 || 
>=18.0.0"},"hasBin":true},"why-is-node-running@2.3.0":{"resolution":{"integrity":"sha512-hUrmaWBdVDcxvYqnyh09zunKzROWjbZTiNy8dBEjkS7ehEDQibXJ7XvlmtbwuTclUiIyN+CyXQD4Vmko8fNm8w=="},"engines":{"node":">=8"},"hasBin":true},"workerd@1.20260504.1":{"resolution":{"integrity":"sha512-AQTXSHbYNP9tLPgJNn0TmizyE4aDh2VuZZXlTAL0uu4fbCY436NAnQSJIzZbaFHM3DnAtVs9G8tkiJztSdYqDg=="},"engines":{"node":">=16"},"hasBin":true},"wrangler@4.88.0":{"resolution":{"integrity":"sha512-f470QwbeT/JM1S0duq+sLtkss7UBxIFDtYHgujv9tdQUyA/dLGDq51am0rqrsuFtCi97lTM1P5sqtt8xra1AlA=="},"engines":{"node":">=22.0.0"},"hasBin":true,"peerDependencies":{"@cloudflare/workers-types":"^4.20260504.1"},"peerDependenciesMeta":{"@cloudflare/workers-types":{"optional":true}}},"wrap-ansi@6.2.0":{"resolution":{"integrity":"sha512-r6lPcBGxZXlIcymEu7InxDMhdW0KDxpLgoFLcguasxCaJ/SOIZwINatK9KY/tf+ZrlywOKU0UDj3ATXUBfxJXA=="},"engines":{"node":">=8"}},"wrap-ansi@7.0.0":{"resolution":{"integrity":"sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q=="},"engines":{"node":">=10"}},"wrap-ansi@8.1.0":{"resolution":{"integrity":"sha512-si7QWI6zUMq56bESFvagtmzMdGOtoxfR+Sez11Mobfc7tm+VkUckk9bW2UeffTGVUbOksxmSw0AA2gs8g71NCQ=="},"engines":{"node":">=12"}},"wrappy@1.0.2":{"resolution":{"integrity":"sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ=="}},"ws@8.18.0":{"resolution":{"integrity":"sha512-8VbfWfHLbbwu3+N6OKsOMpBdT4kXPDDB9cJk2bJ6mh9ucxdlnNvH1e+roYkKmN9Nxw2yjz7VzeO9oOz2zJ04Pw=="},"engines":{"node":">=10.0.0"},"peerDependencies":{"bufferutil":"^4.0.1","utf-8-validate":">=5.0.2"},"peerDependenciesMeta":{"bufferutil":{"optional":true},"utf-8-validate":{"optional":true}}},"ws@8.18.3":{"resolution":{"integrity":"sha512-PEIGCY5tSlUt50cqyMXfCzX+oOPqN0vuGqWzbcJ2xvnkzkq46oOpz7dQaTDBdfICb4N14+GARUDw2XV2N4tvzg=="},"engines":{"node":">=10.0.0"},"peerDependencies":{"bufferutil":"^4.0.1","utf-8-validate":">=5.0.2"},"peerDependenciesMeta":{"bufferutil":{"optional":true},"utf-8-validate":{"optional":true}}},"ws@8.20.0":{"resolution":{"integrity":"sha512-sAt8BhgNbzCtgGbt2OxmpuryO63ZoDk/sqaB/znQm94T4fCEsy/yV+7CdC1kJhOU9lboAEU7R3kquuycDoibVA=="},"engines":{"node":">=10.0.0"},"peerDependencies":{"bufferutil":"^4.0.1","utf-8-validate":">=5.0.2"},"peerDependenciesMeta":{"bufferutil":{"optional":true},"utf-8-validate":{"optional":true}}},"xml-name-validator@5.0.0":{"resolution":{"integrity":"sha512-EvGK8EJ3DhaHfbRlETOWAS5pO9MZITeauHKJyb8wyajUfQUenkIg2MvLDTZ4T/TgIcm3HU0TFBgWWboAZ30UHg=="},"engines":{"node":">=18"}},"xmlbuilder2@4.0.3":{"resolution":{"integrity":"sha512-bx8Q1STctnNaaDymWnkfQLKofs0mGNN7rLLapJlGuV3VlvegD7Ls4ggMjE3aUSWItCCzU0PEv45lI87iSigiCA=="},"engines":{"node":">=20.0"}},"xmlchars@2.2.0":{"resolution":{"integrity":"sha512-JZnDKK8B0RCDw84FNdDAIpZK+JuJw+s7Lz8nksI7SIuU3UXJJslUthsi+uWBUYOwPFwW7W7PRLRfUKpxjtjFCw=="}},"y18n@5.0.8":{"resolution":{"integrity":"sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA=="},"engines":{"node":">=10"}},"yallist@3.1.1":{"resolution":{"integrity":"sha512-a4UGQaWPH59mOXUYnAG2ewncQS4i4F43Tv3JoAM+s2VDAmS9NsK8GpDMLrCHPksFT7h3K6TOoUNn2pb7RoXx4g=="}},"yallist@4.0.0":{"resolution":{"integrity":"sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A=="}},"yaml@2.8.1":{"resolution":{"integrity":"sha512-lcYcMxX2PO9XMGvAJkJ3OsNMw+/7FKes7/hgerGUYWIoWu5j/+YQqcZr5JnPZWzOsEBgMbSbiSTn/dv/69Mkpw=="},"engines":{"node":">= 
14.6"},"hasBin":true},"yargs-parser@21.1.1":{"resolution":{"integrity":"sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw=="},"engines":{"node":">=12"}},"yargs-parser@22.0.0":{"resolution":{"integrity":"sha512-rwu/ClNdSMpkSrUb+d6BRsSkLUq1fmfsY6TOpYzTwvwkg1/NRG85KBy3kq++A8LKQwX6lsu+aWad+2khvuXrqw=="},"engines":{"node":"^20.19.0 || ^22.12.0 || >=23"}},"yargs@17.7.2":{"resolution":{"integrity":"sha512-7dSzzRQ++CKnNI/krKnYRV7JKKPUXMEh61soaHKg9mrWEhzFWhFnxPxGl+69cD1Ou63C13NUPCnmIcrvqCuM6w=="},"engines":{"node":">=12"}},"yauzl@2.10.0":{"resolution":{"integrity":"sha512-p4a9I6X6nu6IhoGmBqAcbJy1mlC4j27vEPZX9F4L4/vZT3Lyq1VkFHw/V/PUcB9Buo+DG3iHkT0x3Qya58zc3g=="}},"yoctocolors-cjs@2.1.3":{"resolution":{"integrity":"sha512-U/PBtDf35ff0D8X8D0jfdzHYEPFxAI7jJlxZXwCSez5M3190m+QobIfh+sWDWSHMCWWJN2AWamkegn6vr6YBTw=="},"engines":{"node":">=18"}},"youch-core@0.3.3":{"resolution":{"integrity":"sha512-ho7XuGjLaJ2hWHoK8yFnsUGy2Y5uDpqSTq1FkHLK4/oqKtyUU1AFbOOxY4IpC9f0fTLjwYbslUz0Po5BpD1wrA=="}},"youch@4.1.0-beta.10":{"resolution":{"integrity":"sha512-rLfVLB4FgQneDr0dv1oddCVZmKjcJ6yX6mS4pU82Mq/Dt9a3cLZQ62pDBL4AUO+uVrCvtWz3ZFUL2HFAFJ/BXQ=="}},"zip-stream@6.0.1":{"resolution":{"integrity":"sha512-zK7YHHz4ZXpW89AHXUPbQVGKI7uvkd3hzusTdotCg1UxyaVtg0zFJSTfW/Dq5f7OBBVnq6cZIaC8Ti4hb6dtCA=="},"engines":{"node":">= 14"}},"zod@3.25.76":{"resolution":{"integrity":"sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ=="}},"zwitch@2.0.4":{"resolution":{"integrity":"sha512-bXE4cR/kVZhKZX/RjPEflHaKVhUVl85noU3v6b8apfQEc1x4A+zBxjZ4lN8LqGd6WZ3dl98pY4o717VFmoPp+A=="}}},"snapshots":{"@acemir/cssom@0.9.28":{},"@ampproject/remapping@2.3.0":{"dependencies":{"@jridgewell/gen-mapping":"0.3.13","@jridgewell/trace-mapping":"0.3.30"}},"@antfu/install-pkg@1.1.0":{"dependencies":{"package-manager-detector":"1.5.0","tinyexec":"1.0.2"}},"@antfu/utils@9.3.0":{},"@asamuzakjp/css-color@3.1.4":{"dependencies":{"@csstools/css-calc":"2.1.4(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)","@csstools/css-color-parser":"3.1.0(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)","@csstools/css-parser-algorithms":"3.0.5(@csstools/css-tokenizer@3.0.4)","@csstools/css-tokenizer":"3.0.4","lru-cache":"10.4.3"}},"@asamuzakjp/css-color@4.1.0":{"dependencies":{"@csstools/css-calc":"2.1.4(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)","@csstools/css-color-parser":"3.1.0(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)","@csstools/css-parser-algorithms":"3.0.5(@csstools/css-tokenizer@3.0.4)","@csstools/css-tokenizer":"3.0.4","lru-cache":"11.2.4"}},"@asamuzakjp/dom-selector@6.7.6":{"dependencies":{"@asamuzakjp/nwsapi":"2.3.9","bidi-js":"1.0.3","css-tree":"3.1.0","is-potential-custom-element-name":"1.0.1","lru-cache":"11.2.4"}},"@asamuzakjp/nwsapi@2.3.9":{},"@babel/code-frame@7.27.1":{"dependencies":{"@babel/helper-validator-identifier":"7.28.5","js-tokens":"4.0.0","picocolors":"1.1.1"}},"@babel/compat-data@7.28.0":{},"@babel/core@7.28.5":{"dependencies":{"@babel/code-frame":"7.27.1","@babel/generator":"7.28.5","@babel/helper-compilation-targets":"7.27.2","@babel/helper-module-transforms":"7.28.3(@babel/core@7.28.5)","@babel/helpers":"7.28.4","@babel/parser":"7.28.5","@babel/template":"7.27.2","@babel/traverse":"7.28.5","@babel/types":"7.28.5","@jridgewell/remapping":"2.3.5","conve
rt-source-map":"2.0.0","debug":"4.4.3","gensync":"1.0.0-beta.2","json5":"2.2.3","semver":"6.3.1"},"transitivePeerDependencies":["supports-color"]},"@babel/generator@7.28.5":{"dependencies":{"@babel/parser":"7.28.5","@babel/types":"7.28.5","@jridgewell/gen-mapping":"0.3.13","@jridgewell/trace-mapping":"0.3.31","jsesc":"3.1.0"}},"@babel/helper-compilation-targets@7.27.2":{"dependencies":{"@babel/compat-data":"7.28.0","@babel/helper-validator-option":"7.27.1","browserslist":"4.25.3","lru-cache":"5.1.1","semver":"6.3.1"}},"@babel/helper-globals@7.28.0":{},"@babel/helper-module-imports@7.27.1":{"dependencies":{"@babel/traverse":"7.28.5","@babel/types":"7.28.5"},"transitivePeerDependencies":["supports-color"]},"@babel/helper-module-transforms@7.28.3(@babel/core@7.28.5)":{"dependencies":{"@babel/core":"7.28.5","@babel/helper-module-imports":"7.27.1","@babel/helper-validator-identifier":"7.28.5","@babel/traverse":"7.28.5"},"transitivePeerDependencies":["supports-color"]},"@babel/helper-plugin-utils@7.27.1":{},"@babel/helper-string-parser@7.27.1":{},"@babel/helper-validator-identifier@7.28.5":{},"@babel/helper-validator-option@7.27.1":{},"@babel/helpers@7.28.4":{"dependencies":{"@babel/template":"7.27.2","@babel/types":"7.28.5"}},"@babel/parser@7.28.5":{"dependencies":{"@babel/types":"7.28.5"}},"@babel/parser@7.29.3":{"dependencies":{"@babel/types":"7.29.0"}},"@babel/plugin-syntax-jsx@7.27.1(@babel/core@7.28.5)":{"dependencies":{"@babel/core":"7.28.5","@babel/helper-plugin-utils":"7.27.1"}},"@babel/plugin-syntax-typescript@7.27.1(@babel/core@7.28.5)":{"dependencies":{"@babel/core":"7.28.5","@babel/helper-plugin-utils":"7.27.1"}},"@babel/runtime@7.28.4":{},"@babel/template@7.27.2":{"dependencies":{"@babel/code-frame":"7.27.1","@babel/parser":"7.28.5","@babel/types":"7.28.5"}},"@babel/traverse@7.28.5":{"dependencies":{"@babel/code-frame":"7.27.1","@babel/generator":"7.28.5","@babel/helper-globals":"7.28.0","@babel/parser":"7.28.5","@babel/template":"7.27.2","@babel/types":"7.28.5","debug":"4.4.3"},"transitivePeerDependencies":["supports-color"]},"@babel/types@7.28.5":{"dependencies":{"@babel/helper-string-parser":"7.27.1","@babel/helper-validator-identifier":"7.28.5"}},"@babel/types@7.29.0":{"dependencies":{"@babel/helper-string-parser":"7.27.1","@babel/helper-validator-identifier":"7.28.5"}},"@bcoe/v8-coverage@1.0.2":{},"@blazediff/core@1.9.1":{},"@braintree/sanitize-url@7.1.1":{},"@bufbuild/protobuf@2.12.0":{"optional":true},"@bundled-es-modules/cookie@2.0.1":{"dependencies":{"cookie":"0.7.2"},"optional":true},"@bundled-es-modules/statuses@1.0.1":{"dependencies":{"statuses":"2.0.2"},"optional":true},"@bundled-es-modules/tough-cookie@0.1.6":{"dependencies":{"@types/tough-cookie":"4.0.5","tough-cookie":"4.1.4"},"optional":true},"@changesets/apply-release-plan@7.0.13":{"dependencies":{"@changesets/config":"3.1.1","@changesets/get-version-range-type":"0.4.0","@changesets/git":"3.0.4","@changesets/should-skip-package":"0.1.2","@changesets/types":"6.1.0","@manypkg/get-packages":"1.1.3","detect-indent":"6.1.0","fs-extra":"7.0.1","lodash.startcase":"4.4.0","outdent":"0.5.0","prettier":"2.8.8","resolve-from":"5.0.0","semver":"7.7.3"}},"@changesets/assemble-release-plan@6.0.9":{"dependencies":{"@changesets/errors":"0.2.0","@changesets/get-dependents-graph":"2.1.3","@changesets/should-skip-package":"0.1.2","@changesets/types":"6.1.0","@manypkg/get-packages":"1.1.3","semver":"7.7.3"}},"@changesets/changelog-git@0.2.1":{"dependencies":{"@changesets/types":"6.1.0"}},"@changesets/cli@2.29.7(@types/node@24.10.2)":{
"dependencies":{"@changesets/apply-release-plan":"7.0.13","@changesets/assemble-release-plan":"6.0.9","@changesets/changelog-git":"0.2.1","@changesets/config":"3.1.1","@changesets/errors":"0.2.0","@changesets/get-dependents-graph":"2.1.3","@changesets/get-release-plan":"4.0.13","@changesets/git":"3.0.4","@changesets/logger":"0.1.1","@changesets/pre":"2.0.2","@changesets/read":"0.6.5","@changesets/should-skip-package":"0.1.2","@changesets/types":"6.1.0","@changesets/write":"0.4.0","@inquirer/external-editor":"1.0.1(@types/node@24.10.2)","@manypkg/get-packages":"1.1.3","ansi-colors":"4.1.3","ci-info":"3.9.0","enquirer":"2.4.1","fs-extra":"7.0.1","mri":"1.2.0","p-limit":"2.3.0","package-manager-detector":"0.2.11","picocolors":"1.1.1","resolve-from":"5.0.0","semver":"7.7.3","spawndamnit":"3.0.1","term-size":"2.2.1"},"transitivePeerDependencies":["@types/node"]},"@changesets/config@3.1.1":{"dependencies":{"@changesets/errors":"0.2.0","@changesets/get-dependents-graph":"2.1.3","@changesets/logger":"0.1.1","@changesets/types":"6.1.0","@manypkg/get-packages":"1.1.3","fs-extra":"7.0.1","micromatch":"4.0.8"}},"@changesets/errors@0.2.0":{"dependencies":{"extendable-error":"0.1.7"}},"@changesets/get-dependents-graph@2.1.3":{"dependencies":{"@changesets/types":"6.1.0","@manypkg/get-packages":"1.1.3","picocolors":"1.1.1","semver":"7.7.3"}},"@changesets/get-release-plan@4.0.13":{"dependencies":{"@changesets/assemble-release-plan":"6.0.9","@changesets/config":"3.1.1","@changesets/pre":"2.0.2","@changesets/read":"0.6.5","@changesets/types":"6.1.0","@manypkg/get-packages":"1.1.3"}},"@changesets/get-version-range-type@0.4.0":{},"@changesets/git@3.0.4":{"dependencies":{"@changesets/errors":"0.2.0","@manypkg/get-packages":"1.1.3","is-subdir":"1.2.0","micromatch":"4.0.8","spawndamnit":"3.0.1"}},"@changesets/logger@0.1.1":{"dependencies":{"picocolors":"1.1.1"}},"@changesets/parse@0.4.1":{"dependencies":{"@changesets/types":"6.1.0","js-yaml":"3.14.1"}},"@changesets/pre@2.0.2":{"dependencies":{"@changesets/errors":"0.2.0","@changesets/types":"6.1.0","@manypkg/get-packages":"1.1.3","fs-extra":"7.0.1"}},"@changesets/read@0.6.5":{"dependencies":{"@changesets/git":"3.0.4","@changesets/logger":"0.1.1","@changesets/parse":"0.4.1","@changesets/types":"6.1.0","fs-extra":"7.0.1","p-filter":"2.1.0","picocolors":"1.1.1"}},"@changesets/should-skip-package@0.1.2":{"dependencies":{"@changesets/types":"6.1.0","@manypkg/get-packages":"1.1.3"}},"@changesets/types@4.1.0":{},"@changesets/types@6.1.0":{},"@changesets/write@0.4.0":{"dependencies":{"@changesets/types":"6.1.0","fs-extra":"7.0.1","human-id":"4.1.1","prettier":"2.8.8"}},"@chevrotain/cst-dts-gen@11.0.3":{"dependencies":{"@chevrotain/gast":"11.0.3","@chevrotain/types":"11.0.3","lodash-es":"4.17.21"}},"@chevrotain/gast@11.0.3":{"dependencies":{"@chevrotain/types":"11.0.3","lodash-es":"4.17.21"}},"@chevrotain/regexp-to-ast@11.0.3":{},"@chevrotain/types@11.0.3":{},"@chevrotain/utils@11.0.3":{},"@cloudflare/kv-asset-handler@0.5.0":{},"@cloudflare/unenv-preset@2.16.1(unenv@2.0.0-rc.24)(workerd@1.20260504.1)":{"dependencies":{"unenv":"2.0.0-rc.24"},"optionalDependencies":{"workerd":"1.20260504.1"}},"@cloudflare/vite-plugin@1.36.0(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(workerd@1.20260504.1)(wrangler@4.88.0)":{"dependencies":{"@cloudflare/unenv-preset":"2.16.1(unenv@2.0.0-rc.24)(workerd@1.20260504.1)","miniflare":"4.20260504.0","unenv":"2.0.0-rc.24","vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.
3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)","wrangler":"4.88.0","ws":"8.18.0"},"transitivePeerDependencies":["bufferutil","utf-8-validate","workerd"]},"@cloudflare/workerd-darwin-64@1.20260504.1":{"optional":true},"@cloudflare/workerd-darwin-arm64@1.20260504.1":{"optional":true},"@cloudflare/workerd-linux-64@1.20260504.1":{"optional":true},"@cloudflare/workerd-linux-arm64@1.20260504.1":{"optional":true},"@cloudflare/workerd-windows-64@1.20260504.1":{"optional":true},"@cspotcode/source-map-support@0.8.1":{"dependencies":{"@jridgewell/trace-mapping":"0.3.9"}},"@csstools/color-helpers@5.1.0":{},"@csstools/css-calc@2.1.4(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)":{"dependencies":{"@csstools/css-parser-algorithms":"3.0.5(@csstools/css-tokenizer@3.0.4)","@csstools/css-tokenizer":"3.0.4"}},"@csstools/css-color-parser@3.1.0(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)":{"dependencies":{"@csstools/color-helpers":"5.1.0","@csstools/css-calc":"2.1.4(@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4))(@csstools/css-tokenizer@3.0.4)","@csstools/css-parser-algorithms":"3.0.5(@csstools/css-tokenizer@3.0.4)","@csstools/css-tokenizer":"3.0.4"}},"@csstools/css-parser-algorithms@3.0.5(@csstools/css-tokenizer@3.0.4)":{"dependencies":{"@csstools/css-tokenizer":"3.0.4"}},"@csstools/css-syntax-patches-for-csstree@1.0.14(postcss@8.5.14)":{"dependencies":{"postcss":"8.5.14"}},"@csstools/css-tokenizer@3.0.4":{},"@emnapi/core@1.10.0":{"dependencies":{"@emnapi/wasi-threads":"1.2.1","tslib":"2.8.1"},"optional":true},"@emnapi/core@1.4.5":{"dependencies":{"@emnapi/wasi-threads":"1.0.4","tslib":"2.8.1"}},"@emnapi/runtime@1.10.0":{"dependencies":{"tslib":"2.8.1"},"optional":true},"@emnapi/runtime@1.4.5":{"dependencies":{"tslib":"2.8.1"}},"@emnapi/wasi-threads@1.0.4":{"dependencies":{"tslib":"2.8.1"}},"@emnapi/wasi-threads@1.2.1":{"dependencies":{"tslib":"2.8.1"},"optional":true},"@esbuild/aix-ppc64@0.25.12":{"optional":true},"@esbuild/aix-ppc64@0.27.3":{"optional":true},"@esbuild/android-arm64@0.25.12":{"optional":true},"@esbuild/android-arm64@0.27.3":{"optional":true},"@esbuild/android-arm@0.25.12":{"optional":true},"@esbuild/android-arm@0.27.3":{"optional":true},"@esbuild/android-x64@0.25.12":{"optional":true},"@esbuild/android-x64@0.27.3":{"optional":true},"@esbuild/darwin-arm64@0.25.12":{"optional":true},"@esbuild/darwin-arm64@0.27.3":{"optional":true},"@esbuild/darwin-x64@0.25.12":{"optional":true},"@esbuild/darwin-x64@0.27.3":{"optional":true},"@esbuild/freebsd-arm64@0.25.12":{"optional":true},"@esbuild/freebsd-arm64@0.27.3":{"optional":true},"@esbuild/freebsd-x64@0.25.12":{"optional":true},"@esbuild/freebsd-x64@0.27.3":{"optional":true},"@esbuild/linux-arm64@0.25.12":{"optional":true},"@esbuild/linux-arm64@0.27.3":{"optional":true},"@esbuild/linux-arm@0.25.12":{"optional":true},"@esbuild/linux-arm@0.27.3":{"optional":true},"@esbuild/linux-ia32@0.25.12":{"optional":true},"@esbuild/linux-ia32@0.27.3":{"optional":true},"@esbuild/linux-loong64@0.25.12":{"optional":true},"@esbuild/linux-loong64@0.27.3":{"optional":true},"@esbuild/linux-mips64el@0.25.12":{"optional":true},"@esbuild/linux-mips64el@0.27.3":{"optional":true},"@esbuild/linux-ppc64@0.25.12":{"optional":true},"@esbuild/linux-ppc64@0.27.3":{"optional":true},"@esbuild/linux-riscv64@0.25.12":{"optional":true},"@esbuild/linux-riscv64@0.27.3":{"optional":true},"@esbuild/linux-s390x@0.25.12":{"optional":
true},"@esbuild/linux-s390x@0.27.3":{"optional":true},"@esbuild/linux-x64@0.25.12":{"optional":true},"@esbuild/linux-x64@0.27.3":{"optional":true},"@esbuild/netbsd-arm64@0.25.12":{"optional":true},"@esbuild/netbsd-arm64@0.27.3":{"optional":true},"@esbuild/netbsd-x64@0.25.12":{"optional":true},"@esbuild/netbsd-x64@0.27.3":{"optional":true},"@esbuild/openbsd-arm64@0.25.12":{"optional":true},"@esbuild/openbsd-arm64@0.27.3":{"optional":true},"@esbuild/openbsd-x64@0.25.12":{"optional":true},"@esbuild/openbsd-x64@0.27.3":{"optional":true},"@esbuild/openharmony-arm64@0.25.12":{"optional":true},"@esbuild/openharmony-arm64@0.27.3":{"optional":true},"@esbuild/sunos-x64@0.25.12":{"optional":true},"@esbuild/sunos-x64@0.27.3":{"optional":true},"@esbuild/win32-arm64@0.25.12":{"optional":true},"@esbuild/win32-arm64@0.27.3":{"optional":true},"@esbuild/win32-ia32@0.25.12":{"optional":true},"@esbuild/win32-ia32@0.27.3":{"optional":true},"@esbuild/win32-x64@0.25.12":{"optional":true},"@esbuild/win32-x64@0.27.3":{"optional":true},"@iconify/types@2.0.0":{},"@iconify/utils@3.0.2":{"dependencies":{"@antfu/install-pkg":"1.1.0","@antfu/utils":"9.3.0","@iconify/types":"2.0.0","debug":"4.4.3","globals":"15.15.0","kolorist":"1.8.0","local-pkg":"1.1.2","mlly":"1.8.0"},"transitivePeerDependencies":["supports-color"]},"@img/colour@1.1.0":{},"@img/sharp-darwin-arm64@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-darwin-arm64":"1.2.4"},"optional":true},"@img/sharp-darwin-x64@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-darwin-x64":"1.2.4"},"optional":true},"@img/sharp-libvips-darwin-arm64@1.2.4":{"optional":true},"@img/sharp-libvips-darwin-x64@1.2.4":{"optional":true},"@img/sharp-libvips-linux-arm64@1.2.4":{"optional":true},"@img/sharp-libvips-linux-arm@1.2.4":{"optional":true},"@img/sharp-libvips-linux-ppc64@1.2.4":{"optional":true},"@img/sharp-libvips-linux-riscv64@1.2.4":{"optional":true},"@img/sharp-libvips-linux-s390x@1.2.4":{"optional":true},"@img/sharp-libvips-linux-x64@1.2.4":{"optional":true},"@img/sharp-libvips-linuxmusl-arm64@1.2.4":{"optional":true},"@img/sharp-libvips-linuxmusl-x64@1.2.4":{"optional":true},"@img/sharp-linux-arm64@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-linux-arm64":"1.2.4"},"optional":true},"@img/sharp-linux-arm@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-linux-arm":"1.2.4"},"optional":true},"@img/sharp-linux-ppc64@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-linux-ppc64":"1.2.4"},"optional":true},"@img/sharp-linux-riscv64@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-linux-riscv64":"1.2.4"},"optional":true},"@img/sharp-linux-s390x@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-linux-s390x":"1.2.4"},"optional":true},"@img/sharp-linux-x64@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-linux-x64":"1.2.4"},"optional":true},"@img/sharp-linuxmusl-arm64@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-linuxmusl-arm64":"1.2.4"},"optional":true},"@img/sharp-linuxmusl-x64@0.34.5":{"optionalDependencies":{"@img/sharp-libvips-linuxmusl-x64":"1.2.4"},"optional":true},"@img/sharp-wasm32@0.34.5":{"dependencies":{"@emnapi/runtime":"1.10.0"},"optional":true},"@img/sharp-win32-arm64@0.34.5":{"optional":true},"@img/sharp-win32-ia32@0.34.5":{"optional":true},"@img/sharp-win32-x64@0.34.5":{"optional":true},"@inquirer/ansi@1.0.2":{"optional":true},"@inquirer/confirm@5.1.21(@types/node@22.15.33)":{"dependencies":{"@inquirer/core":"10.3.2(@types/node@22.15.33)","@inquirer/type":"3.0.10(@types/node@22.15.33)"},"optionalDependencies":{"@types/node":"22
.15.33"},"optional":true},"@inquirer/confirm@5.1.21(@types/node@24.10.2)":{"dependencies":{"@inquirer/core":"10.3.2(@types/node@24.10.2)","@inquirer/type":"3.0.10(@types/node@24.10.2)"},"optionalDependencies":{"@types/node":"24.10.2"},"optional":true},"@inquirer/core@10.3.2(@types/node@22.15.33)":{"dependencies":{"@inquirer/ansi":"1.0.2","@inquirer/figures":"1.0.15","@inquirer/type":"3.0.10(@types/node@22.15.33)","cli-width":"4.1.0","mute-stream":"2.0.0","signal-exit":"4.1.0","wrap-ansi":"6.2.0","yoctocolors-cjs":"2.1.3"},"optionalDependencies":{"@types/node":"22.15.33"},"optional":true},"@inquirer/core@10.3.2(@types/node@24.10.2)":{"dependencies":{"@inquirer/ansi":"1.0.2","@inquirer/figures":"1.0.15","@inquirer/type":"3.0.10(@types/node@24.10.2)","cli-width":"4.1.0","mute-stream":"2.0.0","signal-exit":"4.1.0","wrap-ansi":"6.2.0","yoctocolors-cjs":"2.1.3"},"optionalDependencies":{"@types/node":"24.10.2"},"optional":true},"@inquirer/external-editor@1.0.1(@types/node@24.10.2)":{"dependencies":{"chardet":"2.1.0","iconv-lite":"0.6.3"},"optionalDependencies":{"@types/node":"24.10.2"}},"@inquirer/figures@1.0.15":{"optional":true},"@inquirer/type@3.0.10(@types/node@22.15.33)":{"optionalDependencies":{"@types/node":"22.15.33"},"optional":true},"@inquirer/type@3.0.10(@types/node@24.10.2)":{"optionalDependencies":{"@types/node":"24.10.2"},"optional":true},"@isaacs/cliui@8.0.2":{"dependencies":{"string-width":"5.1.2","string-width-cjs":"string-width@4.2.3","strip-ansi":"7.1.2","strip-ansi-cjs":"strip-ansi@6.0.1","wrap-ansi":"8.1.0","wrap-ansi-cjs":"wrap-ansi@7.0.0"}},"@istanbuljs/schema@0.1.3":{},"@jest/diff-sequences@30.0.1":{},"@jest/get-type@30.1.0":{},"@jest/schemas@30.0.5":{"dependencies":{"@sinclair/typebox":"0.34.40"}},"@jridgewell/gen-mapping@0.3.13":{"dependencies":{"@jridgewell/sourcemap-codec":"1.5.5","@jridgewell/trace-mapping":"0.3.31"}},"@jridgewell/remapping@2.3.5":{"dependencies":{"@jridgewell/gen-mapping":"0.3.13","@jridgewell/trace-mapping":"0.3.31"}},"@jridgewell/resolve-uri@3.1.2":{},"@jridgewell/source-map@0.3.11":{"dependencies":{"@jridgewell/gen-mapping":"0.3.13","@jridgewell/trace-mapping":"0.3.31"},"optional":true},"@jridgewell/sourcemap-codec@1.5.5":{},"@jridgewell/trace-mapping@0.3.30":{"dependencies":{"@jridgewell/resolve-uri":"3.1.2","@jridgewell/sourcemap-codec":"1.5.5"}},"@jridgewell/trace-mapping@0.3.31":{"dependencies":{"@jridgewell/resolve-uri":"3.1.2","@jridgewell/sourcemap-codec":"1.5.5"}},"@jridgewell/trace-mapping@0.3.9":{"dependencies":{"@jridgewell/resolve-uri":"3.1.2","@jridgewell/sourcemap-codec":"1.5.5"}},"@jsonjoy.com/buffers@17.63.0(tslib@2.8.1)":{"dependencies":{"tslib":"2.8.1"}},"@jsonjoy.com/codegen@17.63.0(tslib@2.8.1)":{"dependencies":{"tslib":"2.8.1"}},"@jsonjoy.com/json-pointer@17.63.0(tslib@2.8.1)":{"dependencies":{"@jsonjoy.com/util":"17.63.0(tslib@2.8.1)","tslib":"2.8.1"}},"@jsonjoy.com/util@17.63.0(tslib@2.8.1)":{"dependencies":{"@jsonjoy.com/buffers":"17.63.0(tslib@2.8.1)","@jsonjoy.com/codegen":"17.63.0(tslib@2.8.1)","tslib":"2.8.1"}},"@lix-js/plugin-json@1.0.1(tslib@2.8.1)":{"dependencies":{"@jsonjoy.com/json-pointer":"17.63.0(tslib@2.8.1)","@lix-js/sdk":"0.5.1"},"transitivePeerDependencies":["tslib"]},"@lix-js/sdk@0.5.1":{"dependencies":{"@lix-js/server-protocol-schema":"0.1.1","@marcbachmann/cel-js":"2.5.2","@opral/zettel-ast":"0.1.0","@sqlite.org/sqlite-wasm":"3.50.4-build1","ajv":"8.17.1","chevrotain":"11.0.3","kysely":"0.28.7","uuid":"11.1.0"}},"@lix-js/server-protocol-schema@0.1.1":{},"@manypkg/find-root@1.1.0":{"dependencies":{"@babel/r
untime":"7.28.4","@types/node":"12.20.55","find-up":"4.1.0","fs-extra":"8.1.0"}},"@manypkg/get-packages@1.1.3":{"dependencies":{"@babel/runtime":"7.28.4","@changesets/types":"4.1.0","@manypkg/find-root":"1.1.0","fs-extra":"8.1.0","globby":"11.1.0","read-yaml-file":"1.1.0"}},"@marcbachmann/cel-js@2.5.2":{},"@mermaid-js/parser@0.6.3":{"dependencies":{"langium":"3.3.1"}},"@mswjs/interceptors@0.39.8":{"dependencies":{"@open-draft/deferred-promise":"2.2.0","@open-draft/logger":"0.3.0","@open-draft/until":"2.1.0","is-node-process":"1.2.0","outvariant":"1.4.3","strict-event-emitter":"0.5.1"},"optional":true},"@napi-rs/wasm-runtime@0.2.4":{"dependencies":{"@emnapi/core":"1.4.5","@emnapi/runtime":"1.4.5","@tybys/wasm-util":"0.9.0"}},"@napi-rs/wasm-runtime@1.1.4(@emnapi/core@1.10.0)(@emnapi/runtime@1.10.0)":{"dependencies":{"@emnapi/core":"1.10.0","@emnapi/runtime":"1.10.0","@tybys/wasm-util":"0.10.2"},"optional":true},"@nodelib/fs.scandir@2.1.5":{"dependencies":{"@nodelib/fs.stat":"2.0.5","run-parallel":"1.2.0"}},"@nodelib/fs.stat@2.0.5":{},"@nodelib/fs.walk@1.2.8":{"dependencies":{"@nodelib/fs.scandir":"2.1.5","fastq":"1.17.1"}},"@nrwl/nx-cloud@19.1.0":{"dependencies":{"nx-cloud":"19.1.0"},"transitivePeerDependencies":["debug"]},"@nx/nx-darwin-arm64@21.4.1":{"optional":true},"@nx/nx-darwin-x64@21.4.1":{"optional":true},"@nx/nx-freebsd-x64@21.4.1":{"optional":true},"@nx/nx-linux-arm-gnueabihf@21.4.1":{"optional":true},"@nx/nx-linux-arm64-gnu@21.4.1":{"optional":true},"@nx/nx-linux-arm64-musl@21.4.1":{"optional":true},"@nx/nx-linux-x64-gnu@21.4.1":{"optional":true},"@nx/nx-linux-x64-musl@21.4.1":{"optional":true},"@nx/nx-win32-arm64-msvc@21.4.1":{"optional":true},"@nx/nx-win32-x64-msvc@21.4.1":{"optional":true},"@oozcitak/dom@2.0.2":{"dependencies":{"@oozcitak/infra":"2.0.2","@oozcitak/url":"3.0.0","@oozcitak/util":"10.0.0"}},"@oozcitak/infra@2.0.2":{"dependencies":{"@oozcitak/util":"10.0.0"}},"@oozcitak/url@3.0.0":{"dependencies":{"@oozcitak/infra":"2.0.2","@oozcitak/util":"10.0.0"}},"@oozcitak/util@10.0.0":{},"@open-draft/deferred-promise@2.2.0":{"optional":true},"@open-draft/logger@0.3.0":{"dependencies":{"is-node-process":"1.2.0","outvariant":"1.4.3"},"optional":true},"@open-draft/until@2.1.0":{"optional":true},"@opentelemetry/api-logs@0.208.0":{"dependencies":{"@opentelemetry/api":"1.9.0"}},"@opentelemetry/api@1.9.0":{},"@opentelemetry/core@2.2.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/semantic-conventions":"1.38.0"}},"@opentelemetry/core@2.4.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/semantic-conventions":"1.38.0"}},"@opentelemetry/exporter-logs-otlp-http@0.208.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/api-logs":"0.208.0","@opentelemetry/core":"2.2.0(@opentelemetry/api@1.9.0)","@opentelemetry/otlp-exporter-base":"0.208.0(@opentelemetry/api@1.9.0)","@opentelemetry/otlp-transformer":"0.208.0(@opentelemetry/api@1.9.0)","@opentelemetry/sdk-logs":"0.208.0(@opentelemetry/api@1.9.0)"}},"@opentelemetry/otlp-exporter-base@0.208.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/core":"2.2.0(@opentelemetry/api@1.9.0)","@opentelemetry/otlp-transformer":"0.208.0(@opentelemetry/api@1.9.0)"}},"@opentelemetry/otlp-transformer@0.208.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/api-logs":"0.208.0","@opentelemetry/core":"2.2.0(@opentelemetry/api@1.9.0)","@opentelemetry/resources":"2.2
.0(@opentelemetry/api@1.9.0)","@opentelemetry/sdk-logs":"0.208.0(@opentelemetry/api@1.9.0)","@opentelemetry/sdk-metrics":"2.2.0(@opentelemetry/api@1.9.0)","@opentelemetry/sdk-trace-base":"2.2.0(@opentelemetry/api@1.9.0)","protobufjs":"7.5.4"}},"@opentelemetry/resources@2.2.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/core":"2.2.0(@opentelemetry/api@1.9.0)","@opentelemetry/semantic-conventions":"1.38.0"}},"@opentelemetry/resources@2.4.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/core":"2.4.0(@opentelemetry/api@1.9.0)","@opentelemetry/semantic-conventions":"1.38.0"}},"@opentelemetry/sdk-logs@0.208.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/api-logs":"0.208.0","@opentelemetry/core":"2.2.0(@opentelemetry/api@1.9.0)","@opentelemetry/resources":"2.2.0(@opentelemetry/api@1.9.0)"}},"@opentelemetry/sdk-metrics@2.2.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/core":"2.2.0(@opentelemetry/api@1.9.0)","@opentelemetry/resources":"2.2.0(@opentelemetry/api@1.9.0)"}},"@opentelemetry/sdk-trace-base@2.2.0(@opentelemetry/api@1.9.0)":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/core":"2.2.0(@opentelemetry/api@1.9.0)","@opentelemetry/resources":"2.2.0(@opentelemetry/api@1.9.0)","@opentelemetry/semantic-conventions":"1.38.0"}},"@opentelemetry/semantic-conventions@1.38.0":{},"@opral/markdown-wc@0.9.0":{"dependencies":{"mermaid":"11.12.1","rehype-autolink-headings":"7.1.0","rehype-highlight":"7.0.2","rehype-parse":"9.0.1","rehype-raw":"7.0.0","rehype-remark":"10.0.1","rehype-sanitize":"6.0.0","rehype-slug":"6.0.0","rehype-stringify":"10.0.1","remark-frontmatter":"5.0.0","remark-gfm":"4.0.1","remark-parse":"11.0.0","remark-rehype":"11.1.2","remark-stringify":"11.0.0","unified":"11.0.5","unist-util-visit":"5.0.0","yaml":"2.8.1"},"transitivePeerDependencies":["supports-color"]},"@opral/zettel-ast@0.1.0":{"dependencies":{"@sinclair/typebox":"0.34.40"}},"@oxc-project/types@0.127.0":{},"@oxlint/darwin-arm64@1.26.0":{"optional":true},"@oxlint/darwin-x64@1.26.0":{"optional":true},"@oxlint/linux-arm64-gnu@1.26.0":{"optional":true},"@oxlint/linux-arm64-musl@1.26.0":{"optional":true},"@oxlint/linux-x64-gnu@1.26.0":{"optional":true},"@oxlint/linux-x64-musl@1.26.0":{"optional":true},"@oxlint/win32-arm64@1.26.0":{"optional":true},"@oxlint/win32-x64@1.26.0":{"optional":true},"@pkgjs/parseargs@0.11.0":{"optional":true},"@polka/url@1.0.0-next.29":{},"@poppinss/colors@4.1.5":{"dependencies":{"kleur":"4.1.5"}},"@poppinss/dumper@0.6.5":{"dependencies":{"@poppinss/colors":"4.1.5","@sindresorhus/is":"7.1.1","supports-color":"10.2.2"}},"@poppinss/exception@1.2.2":{},"@posthog/core@1.9.1":{"dependencies":{"cross-spawn":"7.0.6"}},"@posthog/types@1.321.2":{},"@promptbook/utils@0.69.5":{"dependencies":{"spacetrim":"0.11.59"},"optional":true},"@protobufjs/aspromise@1.1.2":{},"@protobufjs/base64@1.1.2":{},"@protobufjs/codegen@2.0.4":{},"@protobufjs/eventemitter@1.1.0":{},"@protobufjs/fetch@1.1.0":{"dependencies":{"@protobufjs/aspromise":"1.1.2","@protobufjs/inquire":"1.1.0"}},"@protobufjs/float@1.0.2":{},"@protobufjs/inquire@1.1.0":{},"@protobufjs/path@1.1.2":{},"@protobufjs/pool@1.1.0":{},"@protobufjs/utf8@1.1.0":{},"@puppeteer/browsers@2.13.1":{"dependencies":{"debug":"4.4.3","extract-zip":"2.0.1","progress":"2.0.3","proxy-agent":"6.5.0","semver":"7.7.4","tar-fs":"3.1.2","yargs":"17.7.2"},"transitivePeerDependencies":["bare-abort-controller",
"bare-buffer","react-native-b4a","supports-color"],"optional":true},"@rolldown/binding-android-arm64@1.0.0-rc.17":{"optional":true},"@rolldown/binding-darwin-arm64@1.0.0-rc.17":{"optional":true},"@rolldown/binding-darwin-x64@1.0.0-rc.17":{"optional":true},"@rolldown/binding-freebsd-x64@1.0.0-rc.17":{"optional":true},"@rolldown/binding-linux-arm-gnueabihf@1.0.0-rc.17":{"optional":true},"@rolldown/binding-linux-arm64-gnu@1.0.0-rc.17":{"optional":true},"@rolldown/binding-linux-arm64-musl@1.0.0-rc.17":{"optional":true},"@rolldown/binding-linux-ppc64-gnu@1.0.0-rc.17":{"optional":true},"@rolldown/binding-linux-s390x-gnu@1.0.0-rc.17":{"optional":true},"@rolldown/binding-linux-x64-gnu@1.0.0-rc.17":{"optional":true},"@rolldown/binding-linux-x64-musl@1.0.0-rc.17":{"optional":true},"@rolldown/binding-openharmony-arm64@1.0.0-rc.17":{"optional":true},"@rolldown/binding-wasm32-wasi@1.0.0-rc.17":{"dependencies":{"@emnapi/core":"1.10.0","@emnapi/runtime":"1.10.0","@napi-rs/wasm-runtime":"1.1.4(@emnapi/core@1.10.0)(@emnapi/runtime@1.10.0)"},"optional":true},"@rolldown/binding-win32-arm64-msvc@1.0.0-rc.17":{"optional":true},"@rolldown/binding-win32-x64-msvc@1.0.0-rc.17":{"optional":true},"@rolldown/pluginutils@1.0.0-beta.40":{},"@rolldown/pluginutils@1.0.0-rc.17":{},"@rolldown/pluginutils@1.0.0-rc.7":{},"@rollup/rollup-android-arm-eabi@4.53.2":{"optional":true},"@rollup/rollup-android-arm64@4.53.2":{"optional":true},"@rollup/rollup-darwin-arm64@4.53.2":{"optional":true},"@rollup/rollup-darwin-x64@4.53.2":{"optional":true},"@rollup/rollup-freebsd-arm64@4.53.2":{"optional":true},"@rollup/rollup-freebsd-x64@4.53.2":{"optional":true},"@rollup/rollup-linux-arm-gnueabihf@4.53.2":{"optional":true},"@rollup/rollup-linux-arm-musleabihf@4.53.2":{"optional":true},"@rollup/rollup-linux-arm64-gnu@4.53.2":{"optional":true},"@rollup/rollup-linux-arm64-musl@4.53.2":{"optional":true},"@rollup/rollup-linux-loong64-gnu@4.53.2":{"optional":true},"@rollup/rollup-linux-ppc64-gnu@4.53.2":{"optional":true},"@rollup/rollup-linux-riscv64-gnu@4.53.2":{"optional":true},"@rollup/rollup-linux-riscv64-musl@4.53.2":{"optional":true},"@rollup/rollup-linux-s390x-gnu@4.53.2":{"optional":true},"@rollup/rollup-linux-x64-gnu@4.53.2":{"optional":true},"@rollup/rollup-linux-x64-musl@4.53.2":{"optional":true},"@rollup/rollup-openharmony-arm64@4.53.2":{"optional":true},"@rollup/rollup-win32-arm64-msvc@4.53.2":{"optional":true},"@rollup/rollup-win32-ia32-msvc@4.53.2":{"optional":true},"@rollup/rollup-win32-x64-gnu@4.53.2":{"optional":true},"@rollup/rollup-win32-x64-msvc@4.53.2":{"optional":true},"@shikijs/core@3.15.0":{"dependencies":{"@shikijs/types":"3.15.0","@shikijs/vscode-textmate":"10.0.2","@types/hast":"3.0.4","hast-util-to-html":"9.0.5"}},"@shikijs/engine-javascript@3.15.0":{"dependencies":{"@shikijs/types":"3.15.0","@shikijs/vscode-textmate":"10.0.2","oniguruma-to-es":"4.3.3"}},"@shikijs/engine-oniguruma@3.15.0":{"dependencies":{"@shikijs/types":"3.15.0","@shikijs/vscode-textmate":"10.0.2"}},"@shikijs/langs@3.15.0":{"dependencies":{"@shikijs/types":"3.15.0"}},"@shikijs/themes@3.15.0":{"dependencies":{"@shikijs/types":"3.15.0"}},"@shikijs/types@3.15.0":{"dependencies":{"@shikijs/vscode-textmate":"10.0.2","@types/hast":"3.0.4"}},"@shikijs/vscode-textmate@10.0.2":{},"@sinclair/typebox@0.34.40":{},"@sindresorhus/is@7.1.1":{},"@speed-highlight/core@1.2.12":{},"@sqlite.org/sqlite-wasm@3.50.4-build1":{},"@standard-schema/spec@1.0.0":{},"@standard-schema/spec@1.1.0":{},"@tailwindcss/node@4.2.4":{"dependencies":{"@jridgewell/remapping":"2.3.5","enhan
ced-resolve":"5.21.0","jiti":"2.6.1","lightningcss":"1.32.0","magic-string":"0.30.21","source-map-js":"1.2.1","tailwindcss":"4.2.4"}},"@tailwindcss/oxide-android-arm64@4.2.4":{"optional":true},"@tailwindcss/oxide-darwin-arm64@4.2.4":{"optional":true},"@tailwindcss/oxide-darwin-x64@4.2.4":{"optional":true},"@tailwindcss/oxide-freebsd-x64@4.2.4":{"optional":true},"@tailwindcss/oxide-linux-arm-gnueabihf@4.2.4":{"optional":true},"@tailwindcss/oxide-linux-arm64-gnu@4.2.4":{"optional":true},"@tailwindcss/oxide-linux-arm64-musl@4.2.4":{"optional":true},"@tailwindcss/oxide-linux-x64-gnu@4.2.4":{"optional":true},"@tailwindcss/oxide-linux-x64-musl@4.2.4":{"optional":true},"@tailwindcss/oxide-wasm32-wasi@4.2.4":{"optional":true},"@tailwindcss/oxide-win32-arm64-msvc@4.2.4":{"optional":true},"@tailwindcss/oxide-win32-x64-msvc@4.2.4":{"optional":true},"@tailwindcss/oxide@4.2.4":{"optionalDependencies":{"@tailwindcss/oxide-android-arm64":"4.2.4","@tailwindcss/oxide-darwin-arm64":"4.2.4","@tailwindcss/oxide-darwin-x64":"4.2.4","@tailwindcss/oxide-freebsd-x64":"4.2.4","@tailwindcss/oxide-linux-arm-gnueabihf":"4.2.4","@tailwindcss/oxide-linux-arm64-gnu":"4.2.4","@tailwindcss/oxide-linux-arm64-musl":"4.2.4","@tailwindcss/oxide-linux-x64-gnu":"4.2.4","@tailwindcss/oxide-linux-x64-musl":"4.2.4","@tailwindcss/oxide-wasm32-wasi":"4.2.4","@tailwindcss/oxide-win32-arm64-msvc":"4.2.4","@tailwindcss/oxide-win32-x64-msvc":"4.2.4"}},"@tailwindcss/vite@4.2.4(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))":{"dependencies":{"@tailwindcss/node":"4.2.4","@tailwindcss/oxide":"4.2.4","tailwindcss":"4.2.4","vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}},"@tanstack/history@1.161.6":{},"@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)":{"dependencies":{"@tanstack/history":"1.161.6","@tanstack/react-store":"0.9.3(react-dom@19.2.0(react@19.2.0))(react@19.2.0)","@tanstack/router-core":"1.169.2","isbot":"5.1.28","react":"19.2.0","react-dom":"19.2.0(react@19.2.0)"}},"@tanstack/react-start-client@1.166.48(react-dom@19.2.0(react@19.2.0))(react@19.2.0)":{"dependencies":{"@tanstack/react-router":"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)","@tanstack/router-core":"1.169.2","@tanstack/start-client-core":"1.168.2","react":"19.2.0","react-dom":"19.2.0(react@19.2.0)"}},"@tanstack/react-start-rsc@0.0.43(react-dom@19.2.0(react@19.2.0))(react@19.2.0)(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))":{"dependencies":{"@tanstack/react-router":"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)","@tanstack/react-start-server":"1.166.52(react-dom@19.2.0(react@19.2.0))(react@19.2.0)","@tanstack/router-core":"1.169.2","@tanstack/router-utils":"1.161.8","@tanstack/start-client-core":"1.168.2","@tanstack/start-fn-stubs":"1.161.6","@tanstack/start-plugin-core":"1.169.19(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))","@tanstack/start-server-core":"1.167.30","@tanstack/start-storage-context":"1.166.35","pathe":"2.0.3","react":"19.2.0","react-dom":"19.2.0(react@19.2.0)"},"transitivePeerDependencies":["@rsbuild/core","crossws","supports-color","vite","vite-plugin-solid","
webpack"]},"@tanstack/react-start-server@1.166.52(react-dom@19.2.0(react@19.2.0))(react@19.2.0)":{"dependencies":{"@tanstack/history":"1.161.6","@tanstack/react-router":"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)","@tanstack/router-core":"1.169.2","@tanstack/start-client-core":"1.168.2","@tanstack/start-server-core":"1.167.30","react":"19.2.0","react-dom":"19.2.0(react@19.2.0)"},"transitivePeerDependencies":["crossws"]},"@tanstack/react-start@1.167.64(react-dom@19.2.0(react@19.2.0))(react@19.2.0)(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))":{"dependencies":{"@tanstack/react-router":"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)","@tanstack/react-start-client":"1.166.48(react-dom@19.2.0(react@19.2.0))(react@19.2.0)","@tanstack/react-start-rsc":"0.0.43(react-dom@19.2.0(react@19.2.0))(react@19.2.0)(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))","@tanstack/react-start-server":"1.166.52(react-dom@19.2.0(react@19.2.0))(react@19.2.0)","@tanstack/router-utils":"1.161.8","@tanstack/start-client-core":"1.168.2","@tanstack/start-plugin-core":"1.169.19(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))","@tanstack/start-server-core":"1.167.30","pathe":"2.0.3","react":"19.2.0","react-dom":"19.2.0(react@19.2.0)"},"optionalDependencies":{"vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"},"transitivePeerDependencies":["@rspack/core","crossws","react-server-dom-rspack","supports-color","vite-plugin-solid","webpack"]},"@tanstack/react-store@0.9.3(react-dom@19.2.0(react@19.2.0))(react@19.2.0)":{"dependencies":{"@tanstack/store":"0.9.3","react":"19.2.0","react-dom":"19.2.0(react@19.2.0)","use-sync-external-store":"1.6.0(react@19.2.0)"}},"@tanstack/router-core@1.169.2":{"dependencies":{"@tanstack/history":"1.161.6","cookie-es":"3.1.1","seroval":"1.5.4","seroval-plugins":"1.5.4(seroval@1.5.4)"}},"@tanstack/router-generator@1.166.41":{"dependencies":{"@babel/types":"7.28.5","@tanstack/router-core":"1.169.2","@tanstack/router-utils":"1.161.8","@tanstack/virtual-file-routes":"1.161.7","jiti":"2.6.1","magic-string":"0.30.21","prettier":"3.6.2","zod":"3.25.76"},"transitivePeerDependencies":["supports-color"]},"@tanstack/router-plugin@1.167.34(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))":{"dependencies":{"@babel/core":"7.28.5","@babel/plugin-syntax-jsx":"7.27.1(@babel/core@7.28.5)","@babel/plugin-syntax-typescript":"7.27.1(@babel/core@7.28.5)","@babel/template":"7.27.2","@babel/traverse":"7.28.5","@babel/types":"7.28.5","@tanstack/router-core":"1.169.2","@tanstack/router-generator":"1.166.41","@tanstack/router-utils":"1.161.8","@tanstack/virtual-file-routes":"1.161.7","chokidar":"3.6.0","unplugin":"3.0.0","zod":"3.25.76"},"optionalDependencies":{"@tanstack/react-router":"1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0)","vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yam
l@2.8.1)","webpack":"5.99.9(esbuild@0.27.3)"},"transitivePeerDependencies":["supports-color"]},"@tanstack/router-utils@1.161.8":{"dependencies":{"@babel/core":"7.28.5","@babel/generator":"7.28.5","@babel/parser":"7.28.5","@babel/types":"7.28.5","ansis":"4.1.0","babel-dead-code-elimination":"1.0.12","diff":"8.0.2","pathe":"2.0.3","tinyglobby":"0.2.16"},"transitivePeerDependencies":["supports-color"]},"@tanstack/start-client-core@1.168.2":{"dependencies":{"@tanstack/router-core":"1.169.2","@tanstack/start-fn-stubs":"1.161.6","@tanstack/start-storage-context":"1.166.35","seroval":"1.5.4"}},"@tanstack/start-fn-stubs@1.161.6":{},"@tanstack/start-plugin-core@1.169.19(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))":{"dependencies":{"@babel/code-frame":"7.27.1","@babel/core":"7.28.5","@babel/types":"7.28.5","@rolldown/pluginutils":"1.0.0-beta.40","@tanstack/router-core":"1.169.2","@tanstack/router-generator":"1.166.41","@tanstack/router-plugin":"1.167.34(@tanstack/react-router@1.169.2(react-dom@19.2.0(react@19.2.0))(react@19.2.0))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(webpack@5.99.9(esbuild@0.27.3))","@tanstack/router-utils":"1.161.8","@tanstack/start-client-core":"1.168.2","@tanstack/start-server-core":"1.167.30","cheerio":"1.1.2","exsolve":"1.0.8","lightningcss":"1.32.0","pathe":"2.0.3","picomatch":"4.0.3","seroval":"1.5.4","source-map":"0.7.6","srvx":"0.11.15","tinyglobby":"0.2.16","ufo":"1.6.1","vitefu":"1.1.1(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))","xmlbuilder2":"4.0.3","zod":"3.25.76"},"optionalDependencies":{"vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"},"transitivePeerDependencies":["@tanstack/react-router","crossws","supports-color","vite-plugin-solid","webpack"]},"@tanstack/start-server-core@1.167.30":{"dependencies":{"@tanstack/history":"1.161.6","@tanstack/router-core":"1.169.2","@tanstack/start-client-core":"1.168.2","@tanstack/start-storage-context":"1.166.35","fetchdts":"0.1.7","h3-v2":"h3@2.0.1-rc.20","seroval":"1.5.4"},"transitivePeerDependencies":["crossws"]},"@tanstack/start-storage-context@1.166.35":{"dependencies":{"@tanstack/router-core":"1.169.2"}},"@tanstack/store@0.9.3":{},"@tanstack/virtual-file-routes@1.161.7":{},"@testing-library/dom@10.4.1":{"dependencies":{"@babel/code-frame":"7.27.1","@babel/runtime":"7.28.4","@types/aria-query":"5.0.4","aria-query":"5.3.0","dom-accessibility-api":"0.5.16","lz-string":"1.5.0","picocolors":"1.1.1","pretty-format":"27.5.1"}},"@testing-library/react@16.3.0(@testing-library/dom@10.4.1)(@types/react-dom@19.2.3(@types/react@19.2.7))(@types/react@19.2.7)(react-dom@19.2.0(react@19.2.0))(react@19.2.0)":{"dependencies":{"@babel/runtime":"7.28.4","@testing-library/dom":"10.4.1","react":"19.2.0","react-dom":"19.2.0(react@19.2.0)"},"optionalDependencies":{"@types/react":"19.2.7","@types/react-dom":"19.2.3(@types/react@19.2.7)"}},"@testing-library/user-event@14.6.1(@testing-library/dom@10.4.1)":{"dependencies":{"@testing-library/dom":"10.4.1"},"optional":true},"@tootallnate/quickjs-emscripten@0.23.0":{"optional":true},"@tybys/wasm-util@0.10.2":{"dependencies":{"tslib":"2.8.1"},"optional":true},"@tybys/wasm-util@0.9.0"
:{"dependencies":{"tslib":"2.8.1"}},"@types/aria-query@5.0.4":{},"@types/chai@5.2.2":{"dependencies":{"@types/deep-eql":"4.0.2"}},"@types/chai@5.2.3":{"dependencies":{"@types/deep-eql":"4.0.2","assertion-error":"2.0.1"}},"@types/cookie@0.6.0":{"optional":true},"@types/d3-array@3.2.1":{},"@types/d3-axis@3.0.6":{"dependencies":{"@types/d3-selection":"3.0.11"}},"@types/d3-brush@3.0.6":{"dependencies":{"@types/d3-selection":"3.0.11"}},"@types/d3-chord@3.0.6":{},"@types/d3-color@3.1.3":{},"@types/d3-contour@3.0.6":{"dependencies":{"@types/d3-array":"3.2.1","@types/geojson":"7946.0.15"}},"@types/d3-delaunay@6.0.4":{},"@types/d3-dispatch@3.0.6":{},"@types/d3-drag@3.0.7":{"dependencies":{"@types/d3-selection":"3.0.11"}},"@types/d3-dsv@3.0.7":{},"@types/d3-ease@3.0.2":{},"@types/d3-fetch@3.0.7":{"dependencies":{"@types/d3-dsv":"3.0.7"}},"@types/d3-force@3.0.10":{},"@types/d3-format@3.0.4":{},"@types/d3-geo@3.1.0":{"dependencies":{"@types/geojson":"7946.0.15"}},"@types/d3-hierarchy@3.1.7":{},"@types/d3-interpolate@3.0.4":{"dependencies":{"@types/d3-color":"3.1.3"}},"@types/d3-path@3.1.0":{},"@types/d3-polygon@3.0.2":{},"@types/d3-quadtree@3.0.6":{},"@types/d3-random@3.0.3":{},"@types/d3-scale-chromatic@3.1.0":{},"@types/d3-scale@4.0.8":{"dependencies":{"@types/d3-time":"3.0.4"}},"@types/d3-selection@3.0.11":{},"@types/d3-shape@3.1.7":{"dependencies":{"@types/d3-path":"3.1.0"}},"@types/d3-time-format@4.0.3":{},"@types/d3-time@3.0.4":{},"@types/d3-timer@3.0.2":{},"@types/d3-transition@3.0.9":{"dependencies":{"@types/d3-selection":"3.0.11"}},"@types/d3-zoom@3.0.8":{"dependencies":{"@types/d3-interpolate":"3.0.4","@types/d3-selection":"3.0.11"}},"@types/d3@7.4.3":{"dependencies":{"@types/d3-array":"3.2.1","@types/d3-axis":"3.0.6","@types/d3-brush":"3.0.6","@types/d3-chord":"3.0.6","@types/d3-color":"3.1.3","@types/d3-contour":"3.0.6","@types/d3-delaunay":"6.0.4","@types/d3-dispatch":"3.0.6","@types/d3-drag":"3.0.7","@types/d3-dsv":"3.0.7","@types/d3-ease":"3.0.2","@types/d3-fetch":"3.0.7","@types/d3-force":"3.0.10","@types/d3-format":"3.0.4","@types/d3-geo":"3.1.0","@types/d3-hierarchy":"3.1.7","@types/d3-interpolate":"3.0.4","@types/d3-path":"3.1.0","@types/d3-polygon":"3.0.2","@types/d3-quadtree":"3.0.6","@types/d3-random":"3.0.3","@types/d3-scale":"4.0.8","@types/d3-scale-chromatic":"3.1.0","@types/d3-selection":"3.0.11","@types/d3-shape":"3.1.7","@types/d3-time":"3.0.4","@types/d3-time-format":"4.0.3","@types/d3-timer":"3.0.2","@types/d3-transition":"3.0.9","@types/d3-zoom":"3.0.8"}},"@types/debug@4.1.12":{"dependencies":{"@types/ms":"2.1.0"}},"@types/deep-eql@4.0.2":{},"@types/eslint-scope@3.7.7":{"dependencies":{"@types/eslint":"9.6.1","@types/estree":"1.0.9"},"optional":true},"@types/eslint@9.6.1":{"dependencies":{"@types/estree":"1.0.9","@types/json-schema":"7.0.15"},"optional":true},"@types/estree@1.0.8":{},"@types/estree@1.0.9":{"optional":true},"@types/geojson@7946.0.15":{},"@types/hast@3.0.4":{"dependencies":{"@types/unist":"3.0.3"}},"@types/json-schema@7.0.15":{"optional":true},"@types/mdast@4.0.4":{"dependencies":{"@types/unist":"3.0.3"}},"@types/ms@2.1.0":{},"@types/node@12.20.55":{},"@types/node@20.19.39":{"dependencies":{"undici-types":"6.21.0"},"optional":true},"@types/node@22.15.33":{"dependencies":{"undici-types":"6.21.0"}},"@types/node@22.19.17":{"dependencies":{"undici-types":"6.21.0"},"optional":true},"@types/node@24.10.2":{"dependencies":{"undici-types":"7.16.0"},"optional":true},"@types/react-dom@19.2.3(@types/react@19.2.7)":{"dependencies":{"@types/react":"19.2.7"}},"@types/reac
t@19.2.7":{"dependencies":{"csstype":"3.2.3"}},"@types/sinonjs__fake-timers@8.1.5":{"optional":true},"@types/statuses@2.0.6":{"optional":true},"@types/tough-cookie@4.0.5":{"optional":true},"@types/trusted-types@2.0.7":{"optional":true},"@types/unist@3.0.3":{},"@types/whatwg-mimetype@3.0.2":{"optional":true},"@types/which@2.0.2":{"optional":true},"@types/ws@8.18.1":{"dependencies":{"@types/node":"22.19.17"},"optional":true},"@types/yauzl@2.10.3":{"dependencies":{"@types/node":"22.19.17"},"optional":true},"@ungap/structured-clone@1.2.1":{},"@vitejs/plugin-react@6.0.1(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))":{"dependencies":{"@rolldown/pluginutils":"1.0.0-rc.7","vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}},"@vitest/browser@3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)":{"dependencies":{"@testing-library/dom":"10.4.1","@testing-library/user-event":"14.6.1(@testing-library/dom@10.4.1)","@vitest/mocker":"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))","@vitest/utils":"3.2.4","magic-string":"0.30.21","sirv":"3.0.2","tinyrainbow":"2.0.0","vitest":"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@26.1.0)(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)","ws":"8.20.0"},"optionalDependencies":{"playwright":"1.55.0","webdriverio":"9.2.1"},"transitivePeerDependencies":["bufferutil","msw","utf-8-validate","vite"],"optional":true},"@vitest/browser@3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)":{"dependencies":{"@testing-library/dom":"10.4.1","@testing-library/user-event":"14.6.1(@testing-library/dom@10.4.1)","@vitest/mocker":"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))","@vitest/utils":"3.2.4","magic-string":"0.30.21","sirv":"3.0.2","tinyrainbow":"2.0.0","vitest":"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)","ws":"8.20.0"},"optionalDependencies":{"playwright":"1.55.0","webdriverio":"9.2.1"},"transitivePeerDependencies":["bufferutil","msw","utf-8-validate","vite"],"optional":true},"@vitest/browser@4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@4.1.5)":{"dependencies":{"@blazediff/core":"1.9.1","@vitest/mocker":"4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))","@vitest/util
s":"4.1.5","magic-string":"0.30.21","pngjs":"7.0.0","sirv":"3.0.2","tinyrainbow":"3.1.0","vitest":"4.1.5(@opentelemetry/api@1.9.0)(@types/node@22.15.33)(@vitest/coverage-v8@4.1.5)(happy-dom@18.0.1)(jsdom@27.3.0(postcss@8.5.14))(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))","ws":"8.20.0"},"transitivePeerDependencies":["bufferutil","msw","utf-8-validate","vite"]},"@vitest/coverage-v8@3.2.4(@vitest/browser@3.2.4)(vitest@3.2.4)":{"dependencies":{"@ampproject/remapping":"2.3.0","@bcoe/v8-coverage":"1.0.2","ast-v8-to-istanbul":"0.3.4","debug":"4.4.1","istanbul-lib-coverage":"3.2.2","istanbul-lib-report":"3.0.1","istanbul-lib-source-maps":"5.0.6","istanbul-reports":"3.2.0","magic-string":"0.30.18","magicast":"0.3.5","std-env":"3.9.0","test-exclude":"7.0.1","tinyrainbow":"2.0.0","vitest":"3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"},"optionalDependencies":{"@vitest/browser":"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)"},"transitivePeerDependencies":["supports-color"]},"@vitest/coverage-v8@4.1.5(@vitest/browser@4.1.5)(vitest@4.1.5)":{"dependencies":{"@bcoe/v8-coverage":"1.0.2","@vitest/utils":"4.1.5","ast-v8-to-istanbul":"1.0.0","istanbul-lib-coverage":"3.2.2","istanbul-lib-report":"3.0.1","istanbul-reports":"3.2.0","magicast":"0.5.2","obug":"2.1.1","std-env":"4.1.0","tinyrainbow":"3.1.0","vitest":"4.1.5(@opentelemetry/api@1.9.0)(@types/node@22.15.33)(@vitest/coverage-v8@4.1.5)(happy-dom@18.0.1)(jsdom@27.3.0(postcss@8.5.14))(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))"},"optionalDependencies":{"@vitest/browser":"4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@4.1.5)"}},"@vitest/expect@3.2.4":{"dependencies":{"@types/chai":"5.2.2","@vitest/spy":"3.2.4","@vitest/utils":"3.2.4","chai":"5.3.3","tinyrainbow":"2.0.0"}},"@vitest/expect@4.0.18":{"dependencies":{"@standard-schema/spec":"1.0.0","@types/chai":"5.2.3","@vitest/spy":"4.0.18","@vitest/utils":"4.0.18","chai":"6.2.2","tinyrainbow":"3.1.0"}},"@vitest/expect@4.1.5":{"dependencies":{"@standard-schema/spec":"1.1.0","@types/chai":"5.2.3","@vitest/spy":"4.1.5","@vitest/utils":"4.1.5","chai":"6.2.2","tinyrainbow":"3.1.0"}},"@vitest/mocker@3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))":{"dependencies":{"@vitest/spy":"3.2.4","estree-walker":"3.0.3","magic-string":"0.30.21"},"optionalDependencies":{"msw":"2.10.2(@types/node@24.10.2)(typescript@5.8.3)","vite":"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}},"@vitest/mocker@3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-
embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))":{"dependencies":{"@vitest/spy":"3.2.4","estree-walker":"3.0.3","magic-string":"0.30.21"},"optionalDependencies":{"msw":"2.10.2(@types/node@24.10.2)(typescript@5.9.3)","vite":"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}},"@vitest/mocker@4.0.18(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))":{"dependencies":{"@vitest/spy":"4.0.18","estree-walker":"3.0.3","magic-string":"0.30.21"},"optionalDependencies":{"msw":"2.10.2(@types/node@24.10.2)(typescript@5.9.3)","vite":"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}},"@vitest/mocker@4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))":{"dependencies":{"@vitest/spy":"4.1.5","estree-walker":"3.0.3","magic-string":"0.30.21"},"optionalDependencies":{"msw":"2.10.2(@types/node@22.15.33)(typescript@5.8.3)","vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}},"@vitest/pretty-format@3.2.4":{"dependencies":{"tinyrainbow":"2.0.0"}},"@vitest/pretty-format@4.0.18":{"dependencies":{"tinyrainbow":"3.1.0"}},"@vitest/pretty-format@4.1.5":{"dependencies":{"tinyrainbow":"3.1.0"}},"@vitest/runner@3.2.4":{"dependencies":{"@vitest/utils":"3.2.4","pathe":"2.0.3","strip-literal":"3.0.0"}},"@vitest/runner@4.0.18":{"dependencies":{"@vitest/utils":"4.0.18","pathe":"2.0.3"}},"@vitest/runner@4.1.5":{"dependencies":{"@vitest/utils":"4.1.5","pathe":"2.0.3"}},"@vitest/snapshot@3.2.4":{"dependencies":{"@vitest/pretty-format":"3.2.4","magic-string":"0.30.21","pathe":"2.0.3"}},"@vitest/snapshot@4.0.18":{"dependencies":{"@vitest/pretty-format":"4.0.18","magic-string":"0.30.21","pathe":"2.0.3"}},"@vitest/snapshot@4.1.5":{"dependencies":{"@vitest/pretty-format":"4.1.5","@vitest/utils":"4.1.5","magic-string":"0.30.21","pathe":"2.0.3"}},"@vitest/spy@3.2.4":{"dependencies":{"tinyspy":"4.0.3"}},"@vitest/spy@4.0.18":{},"@vitest/spy@4.1.5":{},"@vitest/utils@3.2.4":{"dependencies":{"@vitest/pretty-format":"3.2.4","loupe":"3.2.1","tinyrainbow":"2.0.0"}},"@vitest/utils@4.0.18":{"dependencies":{"@vitest/pretty-format":"4.0.18","tinyrainbow":"3.1.0"}},"@vitest/utils@4.1.5":{"dependencies":{"@vitest/pretty-format":"4.1.5","convert-source-map":"2.0.0","tinyrainbow":"3.1.0"}},"@wdio/config@9.1.3":{"dependencies":{"@wdio/logger":"9.1.3","@wdio/types":"9.1.3","@wdio/utils":"9.1.3","decamelize":"6.0.1","deepmerge-ts":"7.1.5","glob":"10.5.0","import-meta-resolve":"4.2.0"},"transitivePeerDependencies":["bare-abort-controller","bare-buffer","react-native-b4a","supports-color"],"optional":true},"@wdio/logger@8.38.0":{"dependencies":{"chalk":"5.6.2","loglevel":"1.9.2","loglevel-plugin-prefix":"0.8.4","strip-ansi":"7.2.0"},"optional":true},"@wdio/logger@9.1.3":{"dependencies":{"chalk":"5.6.2","loglevel":"1.9.2","loglevel-plugin-prefix":"0.8.4","strip-ansi":"7.2.0"},"optional":true},"@wdio/protocols@9.2.0":{"optional":true},"@wdio/repl@9.0.8":{"dependencies":{"@types/node":"20.19.39"},"optional":true},"@wdio/types@9.1.3":{"dependencies":{"@types/node":"20.19.39"},"optional":true},"@wdio/utils@9.1.3":{"dependencies":{"@puppeteer/browsers":"2.13.1","@wdio/logger":"9.1.3",
"@wdio/types":"9.1.3","decamelize":"6.0.1","deepmerge-ts":"7.1.5","edgedriver":"5.6.1","geckodriver":"4.5.1","get-port":"7.2.0","import-meta-resolve":"4.2.0","locate-app":"2.5.0","safaridriver":"0.1.2","split2":"4.2.0","wait-port":"1.1.0"},"transitivePeerDependencies":["bare-abort-controller","bare-buffer","react-native-b4a","supports-color"],"optional":true},"@webassemblyjs/ast@1.14.1":{"dependencies":{"@webassemblyjs/helper-numbers":"1.13.2","@webassemblyjs/helper-wasm-bytecode":"1.13.2"},"optional":true},"@webassemblyjs/floating-point-hex-parser@1.13.2":{"optional":true},"@webassemblyjs/helper-api-error@1.13.2":{"optional":true},"@webassemblyjs/helper-buffer@1.14.1":{"optional":true},"@webassemblyjs/helper-numbers@1.13.2":{"dependencies":{"@webassemblyjs/floating-point-hex-parser":"1.13.2","@webassemblyjs/helper-api-error":"1.13.2","@xtuc/long":"4.2.2"},"optional":true},"@webassemblyjs/helper-wasm-bytecode@1.13.2":{"optional":true},"@webassemblyjs/helper-wasm-section@1.14.1":{"dependencies":{"@webassemblyjs/ast":"1.14.1","@webassemblyjs/helper-buffer":"1.14.1","@webassemblyjs/helper-wasm-bytecode":"1.13.2","@webassemblyjs/wasm-gen":"1.14.1"},"optional":true},"@webassemblyjs/ieee754@1.13.2":{"dependencies":{"@xtuc/ieee754":"1.2.0"},"optional":true},"@webassemblyjs/leb128@1.13.2":{"dependencies":{"@xtuc/long":"4.2.2"},"optional":true},"@webassemblyjs/utf8@1.13.2":{"optional":true},"@webassemblyjs/wasm-edit@1.14.1":{"dependencies":{"@webassemblyjs/ast":"1.14.1","@webassemblyjs/helper-buffer":"1.14.1","@webassemblyjs/helper-wasm-bytecode":"1.13.2","@webassemblyjs/helper-wasm-section":"1.14.1","@webassemblyjs/wasm-gen":"1.14.1","@webassemblyjs/wasm-opt":"1.14.1","@webassemblyjs/wasm-parser":"1.14.1","@webassemblyjs/wast-printer":"1.14.1"},"optional":true},"@webassemblyjs/wasm-gen@1.14.1":{"dependencies":{"@webassemblyjs/ast":"1.14.1","@webassemblyjs/helper-wasm-bytecode":"1.13.2","@webassemblyjs/ieee754":"1.13.2","@webassemblyjs/leb128":"1.13.2","@webassemblyjs/utf8":"1.13.2"},"optional":true},"@webassemblyjs/wasm-opt@1.14.1":{"dependencies":{"@webassemblyjs/ast":"1.14.1","@webassemblyjs/helper-buffer":"1.14.1","@webassemblyjs/wasm-gen":"1.14.1","@webassemblyjs/wasm-parser":"1.14.1"},"optional":true},"@webassemblyjs/wasm-parser@1.14.1":{"dependencies":{"@webassemblyjs/ast":"1.14.1","@webassemblyjs/helper-api-error":"1.13.2","@webassemblyjs/helper-wasm-bytecode":"1.13.2","@webassemblyjs/ieee754":"1.13.2","@webassemblyjs/leb128":"1.13.2","@webassemblyjs/utf8":"1.13.2"},"optional":true},"@webassemblyjs/wast-printer@1.14.1":{"dependencies":{"@webassemblyjs/ast":"1.14.1","@xtuc/long":"4.2.2"},"optional":true},"@xtuc/ieee754@1.2.0":{"optional":true},"@xtuc/long@4.2.2":{"optional":true},"@yarnpkg/lockfile@1.1.0":{},"@yarnpkg/parsers@3.0.2":{"dependencies":{"js-yaml":"3.14.1","tslib":"2.8.1"}},"@zip.js/zip.js@2.8.26":{"optional":true},"@zkochan/js-yaml@0.0.7":{"dependencies":{"argparse":"2.0.1"}},"abort-controller@3.0.0":{"dependencies":{"event-target-shim":"5.0.1"},"optional":true},"acorn@8.16.0":{},"agent-base@7.1.3":{},"agent-base@7.1.4":{"optional":true},"ajv-formats@2.1.1(ajv@8.20.0)":{"optionalDependencies":{"ajv":"8.20.0"},"optional":true},"ajv-keywords@5.1.0(ajv@8.20.0)":{"dependencies":{"ajv":"8.20.0","fast-deep-equal":"3.1.3"},"optional":true},"ajv@8.17.1":{"dependencies":{"fast-deep-equal":"3.1.3","fast-uri":"3.0.3","json-schema-traverse":"1.0.0","require-from-string":"2.0.2"}},"ajv@8.20.0":{"dependencies":{"fast-deep-equal":"3.1.3","fast-uri":"3.1.2","json-schema-traverse":"1.0.0","requir
e-from-string":"2.0.2"},"optional":true},"ansi-colors@4.1.3":{},"ansi-regex@5.0.1":{},"ansi-regex@6.1.0":{},"ansi-regex@6.2.2":{"optional":true},"ansi-styles@4.3.0":{"dependencies":{"color-convert":"2.0.1"}},"ansi-styles@5.2.0":{},"ansi-styles@6.2.1":{},"ansis@4.1.0":{},"anymatch@3.1.3":{"dependencies":{"normalize-path":"3.0.0","picomatch":"2.3.1"}},"archiver-utils@5.0.2":{"dependencies":{"glob":"10.5.0","graceful-fs":"4.2.11","is-stream":"2.0.1","lazystream":"1.0.1","lodash":"4.18.1","normalize-path":"3.0.0","readable-stream":"4.7.0"},"optional":true},"archiver@7.0.1":{"dependencies":{"archiver-utils":"5.0.2","async":"3.2.6","buffer-crc32":"1.0.0","readable-stream":"4.7.0","readdir-glob":"1.1.3","tar-stream":"3.2.0","zip-stream":"6.0.1"},"transitivePeerDependencies":["bare-abort-controller","bare-buffer","react-native-b4a"],"optional":true},"argparse@1.0.10":{"dependencies":{"sprintf-js":"1.0.3"}},"argparse@2.0.1":{},"aria-query@5.3.0":{"dependencies":{"dequal":"2.0.3"}},"aria-query@5.3.2":{"optional":true},"array-union@2.1.0":{},"assertion-error@2.0.1":{},"ast-types@0.13.4":{"dependencies":{"tslib":"2.8.1"},"optional":true},"ast-v8-to-istanbul@0.3.4":{"dependencies":{"@jridgewell/trace-mapping":"0.3.30","estree-walker":"3.0.3","js-tokens":"9.0.1"}},"ast-v8-to-istanbul@1.0.0":{"dependencies":{"@jridgewell/trace-mapping":"0.3.31","estree-walker":"3.0.3","js-tokens":"10.0.0"}},"async@3.2.6":{"optional":true},"asynckit@0.4.0":{},"axios@1.11.0":{"dependencies":{"follow-redirects":"1.15.11","form-data":"4.0.4","proxy-from-env":"1.1.0"},"transitivePeerDependencies":["debug"]},"b4a@1.8.1":{"optional":true},"babel-dead-code-elimination@1.0.12":{"dependencies":{"@babel/core":"7.28.5","@babel/parser":"7.28.5","@babel/traverse":"7.28.5","@babel/types":"7.28.5"},"transitivePeerDependencies":["supports-color"]},"bail@2.0.2":{},"balanced-match@1.0.2":{},"bare-events@2.8.2":{"optional":true},"bare-fs@4.7.1":{"dependencies":{"bare-events":"2.8.2","bare-path":"3.0.0","bare-stream":"2.13.1(bare-events@2.8.2)","bare-url":"2.4.3","fast-fifo":"1.3.2"},"transitivePeerDependencies":["bare-abort-controller","react-native-b4a"],"optional":true},"bare-os@3.9.1":{"optional":true},"bare-path@3.0.0":{"dependencies":{"bare-os":"3.9.1"},"optional":true},"bare-stream@2.13.1(bare-events@2.8.2)":{"dependencies":{"streamx":"2.25.0","teex":"1.0.1"},"optionalDependencies":{"bare-events":"2.8.2"},"transitivePeerDependencies":["react-native-b4a"],"optional":true},"bare-url@2.4.3":{"dependencies":{"bare-path":"3.0.0"},"optional":true},"base64-js@1.5.1":{},"baseline-browser-mapping@2.10.27":{"optional":true},"basic-ftp@5.3.1":{"optional":true},"better-path-resolve@1.0.0":{"dependencies":{"is-windows":"1.0.2"}},"better-sqlite3@12.9.0":{"dependencies":{"bindings":"1.5.0","prebuild-install":"7.1.3"}},"bidi-js@1.0.3":{"dependencies":{"require-from-string":"2.0.2"}},"binary-extensions@2.3.0":{},"bindings@1.5.0":{"dependencies":{"file-uri-to-path":"1.0.0"}},"bl@4.1.0":{"dependencies":{"buffer":"5.7.1","inherits":"2.0.4","readable-stream":"3.6.2"}},"blake3-wasm@2.1.5":{},"boolbase@1.0.0":{},"brace-expansion@2.0.2":{"dependencies":{"balanced-match":"1.0.2"}},"brace-expansion@2.1.0":{"dependencies":{"balanced-match":"1.0.2"},"optional":true},"braces@3.0.3":{"dependencies":{"fill-range":"7.1.1"}},"browserslist@4.25.3":{"dependencies":{"caniuse-lite":"1.0.30001737","electron-to-chromium":"1.5.211","node-releases":"2.0.19","update-browserslist-db":"1.1.3(browserslist@4.25.3)"}},"browserslist@4.28.2":{"dependencies":{"baseline-browser-mapping
":"2.10.27","caniuse-lite":"1.0.30001792","electron-to-chromium":"1.5.352","node-releases":"2.0.38","update-browserslist-db":"1.2.3(browserslist@4.28.2)"},"optional":true},"buffer-builder@0.2.0":{"optional":true},"buffer-crc32@0.2.13":{"optional":true},"buffer-crc32@1.0.0":{"optional":true},"buffer-from@1.1.2":{"optional":true},"buffer@5.7.1":{"dependencies":{"base64-js":"1.5.1","ieee754":"1.2.1"}},"buffer@6.0.3":{"dependencies":{"base64-js":"1.5.1","ieee754":"1.2.1"},"optional":true},"cac@6.7.14":{},"call-bind-apply-helpers@1.0.2":{"dependencies":{"es-errors":"1.3.0","function-bind":"1.1.2"}},"caniuse-lite@1.0.30001737":{},"caniuse-lite@1.0.30001792":{"optional":true},"ccount@2.0.1":{},"chai@5.3.3":{"dependencies":{"assertion-error":"2.0.1","check-error":"2.1.1","deep-eql":"5.0.2","loupe":"3.2.1","pathval":"2.0.1"}},"chai@6.2.2":{},"chalk@4.1.2":{"dependencies":{"ansi-styles":"4.3.0","supports-color":"7.2.0"}},"chalk@5.6.2":{"optional":true},"character-entities-html4@2.1.0":{},"character-entities-legacy@3.0.0":{},"character-entities@2.0.2":{},"chardet@2.1.0":{},"check-error@2.1.1":{},"cheerio-select@2.1.0":{"dependencies":{"boolbase":"1.0.0","css-select":"5.1.0","css-what":"6.1.0","domelementtype":"2.3.0","domhandler":"5.0.3","domutils":"3.2.2"}},"cheerio@1.1.2":{"dependencies":{"cheerio-select":"2.1.0","dom-serializer":"2.0.0","domhandler":"5.0.3","domutils":"3.2.2","encoding-sniffer":"0.2.1","htmlparser2":"10.0.0","parse5":"7.3.0","parse5-htmlparser2-tree-adapter":"7.1.0","parse5-parser-stream":"7.1.2","undici":"7.16.0","whatwg-mimetype":"4.0.0"}},"cheerio@1.2.0":{"dependencies":{"cheerio-select":"2.1.0","dom-serializer":"2.0.0","domhandler":"5.0.3","domutils":"3.2.2","encoding-sniffer":"0.2.1","htmlparser2":"10.1.0","parse5":"7.3.0","parse5-htmlparser2-tree-adapter":"7.1.0","parse5-parser-stream":"7.1.2","undici":"7.25.0","whatwg-mimetype":"4.0.0"},"optional":true},"chevrotain-allstar@0.3.1(chevrotain@11.0.3)":{"dependencies":{"chevrotain":"11.0.3","lodash-es":"4.17.21"}},"chevrotain@11.0.3":{"dependencies":{"@chevrotain/cst-dts-gen":"11.0.3","@chevrotain/gast":"11.0.3","@chevrotain/regexp-to-ast":"11.0.3","@chevrotain/types":"11.0.3","@chevrotain/utils":"11.0.3","lodash-es":"4.17.21"}},"chokidar@3.6.0":{"dependencies":{"anymatch":"3.1.3","braces":"3.0.3","glob-parent":"5.1.2","is-binary-path":"2.1.0","is-glob":"4.0.3","normalize-path":"3.0.0","readdirp":"3.6.0"},"optionalDependencies":{"fsevents":"2.3.3"}},"chownr@1.1.4":{},"chownr@2.0.0":{},"chrome-trace-event@1.0.4":{"optional":true},"ci-info@3.9.0":{},"cli-cursor@3.1.0":{"dependencies":{"restore-cursor":"3.1.0"}},"cli-spinners@2.6.1":{},"cli-spinners@2.9.2":{},"cli-width@4.1.0":{"optional":true},"cliui@8.0.1":{"dependencies":{"string-width":"4.2.3","strip-ansi":"6.0.1","wrap-ansi":"7.0.0"}},"clone@1.0.4":{},"color-convert@2.0.1":{"dependencies":{"color-name":"1.1.4"}},"color-name@1.1.4":{},"colorjs.io@0.5.2":{"optional":true},"combined-stream@1.0.8":{"dependencies":{"delayed-stream":"1.0.0"}},"comma-separated-tokens@2.0.3":{},"commander@2.20.3":{"optional":true},"commander@7.2.0":{},"commander@8.3.0":{},"commander@9.5.0":{"optional":true},"compress-commons@6.0.2":{"dependencies":{"crc-32":"1.2.2","crc32-stream":"6.0.0","is-stream":"2.0.1","normalize-path":"3.0.0","readable-stream":"4.7.0"},"optional":true},"confbox@0.1.8":{},"confbox@0.2.2":{},"convert-source-map@2.0.0":{},"cookie-es@3.1.1":{},"cookie@0.7.2":{"optional":true},"cookie@1.0.2":{},"core-js@3.46.0":{},"core-util-is@1.0.3":{"optional":true},"cose-base@1.0.3":{"dependencie
s":{"layout-base":"1.0.2"}},"cose-base@2.2.0":{"dependencies":{"layout-base":"2.0.1"}},"crc-32@1.2.2":{"optional":true},"crc32-stream@6.0.0":{"dependencies":{"crc-32":"1.2.2","readable-stream":"4.7.0"},"optional":true},"cross-spawn@7.0.6":{"dependencies":{"path-key":"3.1.1","shebang-command":"2.0.0","which":"2.0.2"}},"css-select@5.1.0":{"dependencies":{"boolbase":"1.0.0","css-what":"6.1.0","domhandler":"5.0.3","domutils":"3.2.2","nth-check":"2.1.1"}},"css-shorthand-properties@1.1.2":{"optional":true},"css-tree@3.1.0":{"dependencies":{"mdn-data":"2.12.2","source-map-js":"1.2.1"}},"css-value@0.0.1":{"optional":true},"css-what@6.1.0":{},"cssstyle@4.3.1":{"dependencies":{"@asamuzakjp/css-color":"3.1.4","rrweb-cssom":"0.8.0"}},"cssstyle@5.3.4(postcss@8.5.14)":{"dependencies":{"@asamuzakjp/css-color":"4.1.0","@csstools/css-syntax-patches-for-csstree":"1.0.14(postcss@8.5.14)","css-tree":"3.1.0"},"transitivePeerDependencies":["postcss"]},"csstype@3.2.3":{},"cytoscape-cose-bilkent@4.1.0(cytoscape@3.30.4)":{"dependencies":{"cose-base":"1.0.3","cytoscape":"3.30.4"}},"cytoscape-fcose@2.2.0(cytoscape@3.30.4)":{"dependencies":{"cose-base":"2.2.0","cytoscape":"3.30.4"}},"cytoscape@3.30.4":{},"d3-array@2.12.1":{"dependencies":{"internmap":"1.0.1"}},"d3-array@3.2.4":{"dependencies":{"internmap":"2.0.3"}},"d3-axis@3.0.0":{},"d3-brush@3.0.0":{"dependencies":{"d3-dispatch":"3.0.1","d3-drag":"3.0.0","d3-interpolate":"3.0.1","d3-selection":"3.0.0","d3-transition":"3.0.1(d3-selection@3.0.0)"}},"d3-chord@3.0.1":{"dependencies":{"d3-path":"3.1.0"}},"d3-color@3.1.0":{},"d3-contour@4.0.2":{"dependencies":{"d3-array":"3.2.4"}},"d3-delaunay@6.0.4":{"dependencies":{"delaunator":"5.0.1"}},"d3-dispatch@3.0.1":{},"d3-drag@3.0.0":{"dependencies":{"d3-dispatch":"3.0.1","d3-selection":"3.0.0"}},"d3-dsv@3.0.1":{"dependencies":{"commander":"7.2.0","iconv-lite":"0.6.3","rw":"1.3.3"}},"d3-ease@3.0.1":{},"d3-fetch@3.0.1":{"dependencies":{"d3-dsv":"3.0.1"}},"d3-force@3.0.0":{"dependencies":{"d3-dispatch":"3.0.1","d3-quadtree":"3.0.1","d3-timer":"3.0.1"}},"d3-format@3.1.0":{},"d3-geo@3.1.1":{"dependencies":{"d3-array":"3.2.4"}},"d3-hierarchy@3.1.2":{},"d3-interpolate@3.0.1":{"dependencies":{"d3-color":"3.1.0"}},"d3-path@1.0.9":{},"d3-path@3.1.0":{},"d3-polygon@3.0.1":{},"d3-quadtree@3.0.1":{},"d3-random@3.0.1":{},"d3-sankey@0.12.3":{"dependencies":{"d3-array":"2.12.1","d3-shape":"1.3.7"}},"d3-scale-chromatic@3.1.0":{"dependencies":{"d3-color":"3.1.0","d3-interpolate":"3.0.1"}},"d3-scale@4.0.2":{"dependencies":{"d3-array":"3.2.4","d3-format":"3.1.0","d3-interpolate":"3.0.1","d3-time":"3.1.0","d3-time-format":"4.1.0"}},"d3-selection@3.0.0":{},"d3-shape@1.3.7":{"dependencies":{"d3-path":"1.0.9"}},"d3-shape@3.2.0":{"dependencies":{"d3-path":"3.1.0"}},"d3-time-format@4.1.0":{"dependencies":{"d3-time":"3.1.0"}},"d3-time@3.1.0":{"dependencies":{"d3-array":"3.2.4"}},"d3-timer@3.0.1":{},"d3-transition@3.0.1(d3-selection@3.0.0)":{"dependencies":{"d3-color":"3.1.0","d3-dispatch":"3.0.1","d3-ease":"3.0.1","d3-interpolate":"3.0.1","d3-selection":"3.0.0","d3-timer":"3.0.1"}},"d3-zoom@3.0.0":{"dependencies":{"d3-dispatch":"3.0.1","d3-drag":"3.0.0","d3-interpolate":"3.0.1","d3-selection":"3.0.0","d3-transition":"3.0.1(d3-selection@3.0.0)"}},"d3@7.9.0":{"dependencies":{"d3-array":"3.2.4","d3-axis":"3.0.0","d3-brush":"3.0.0","d3-chord":"3.0.1","d3-color":"3.1.0","d3-contour":"4.0.2","d3-delaunay":"6.0.4","d3-dispatch":"3.0.1","d3-drag":"3.0.0","d3-dsv":"3.0.1","d3-ease":"3.0.1","d3-fetch":"3.0.1","d3-force":"3.0.0","d3-format":"3.1.0","d3-geo":"3.1.1
","d3-hierarchy":"3.1.2","d3-interpolate":"3.0.1","d3-path":"3.1.0","d3-polygon":"3.0.1","d3-quadtree":"3.0.1","d3-random":"3.0.1","d3-scale":"4.0.2","d3-scale-chromatic":"3.1.0","d3-selection":"3.0.0","d3-shape":"3.2.0","d3-time":"3.1.0","d3-time-format":"4.1.0","d3-timer":"3.0.1","d3-transition":"3.0.1(d3-selection@3.0.0)","d3-zoom":"3.0.0"}},"dagre-d3-es@7.0.13":{"dependencies":{"d3":"7.9.0","lodash-es":"4.17.21"}},"data-uri-to-buffer@4.0.1":{"optional":true},"data-uri-to-buffer@6.0.2":{"optional":true},"data-urls@5.0.0":{"dependencies":{"whatwg-mimetype":"4.0.0","whatwg-url":"14.2.0"}},"data-urls@6.0.0":{"dependencies":{"whatwg-mimetype":"4.0.0","whatwg-url":"15.1.0"}},"dayjs@1.11.19":{},"debug@4.4.1":{"dependencies":{"ms":"2.1.3"}},"debug@4.4.3":{"dependencies":{"ms":"2.1.3"}},"decamelize@6.0.1":{"optional":true},"decimal.js@10.6.0":{},"decode-named-character-reference@1.0.2":{"dependencies":{"character-entities":"2.0.2"}},"decompress-response@6.0.0":{"dependencies":{"mimic-response":"3.1.0"}},"deep-eql@5.0.2":{},"deep-extend@0.6.0":{},"deepmerge-ts@7.1.5":{"optional":true},"defaults@1.0.4":{"dependencies":{"clone":"1.0.4"}},"define-lazy-prop@2.0.0":{},"degenerator@5.0.1":{"dependencies":{"ast-types":"0.13.4","escodegen":"2.1.0","esprima":"4.0.1"},"optional":true},"delaunator@5.0.1":{"dependencies":{"robust-predicates":"3.0.2"}},"delayed-stream@1.0.0":{},"dequal@2.0.3":{},"detect-indent@6.1.0":{},"detect-libc@2.1.2":{},"devlop@1.1.0":{"dependencies":{"dequal":"2.0.3"}},"diff@8.0.2":{},"dir-glob@3.0.1":{"dependencies":{"path-type":"4.0.0"}},"dom-accessibility-api@0.5.16":{},"dom-serializer@2.0.0":{"dependencies":{"domelementtype":"2.3.0","domhandler":"5.0.3","entities":"4.5.0"}},"domelementtype@2.3.0":{},"domhandler@5.0.3":{"dependencies":{"domelementtype":"2.3.0"}},"dompurify@3.3.1":{"optionalDependencies":{"@types/trusted-types":"2.0.7"}},"domutils@3.2.2":{"dependencies":{"dom-serializer":"2.0.0","domelementtype":"2.3.0","domhandler":"5.0.3"}},"dotenv-expand@11.0.7":{"dependencies":{"dotenv":"16.5.0"}},"dotenv@10.0.0":{},"dotenv@16.4.7":{},"dotenv@16.5.0":{},"dunder-proto@1.0.1":{"dependencies":{"call-bind-apply-helpers":"1.0.2","es-errors":"1.3.0","gopd":"1.2.0"}},"eastasianwidth@0.2.0":{},"edge-paths@3.0.5":{"dependencies":{"@types/which":"2.0.2","which":"2.0.2"},"optional":true},"edgedriver@5.6.1":{"dependencies":{"@wdio/logger":"8.38.0","@zip.js/zip.js":"2.8.26","decamelize":"6.0.1","edge-paths":"3.0.5","fast-xml-parser":"4.5.6","node-fetch":"3.3.2","which":"4.0.0"},"optional":true},"electron-to-chromium@1.5.211":{},"electron-to-chromium@1.5.352":{"optional":true},"emoji-regex@8.0.0":{},"emoji-regex@9.2.2":{},"encoding-sniffer@0.2.1":{"dependencies":{"iconv-lite":"0.6.3","whatwg-encoding":"3.1.1"}},"end-of-stream@1.4.5":{"dependencies":{"once":"1.4.0"}},"enhanced-resolve@5.21.0":{"dependencies":{"graceful-fs":"4.2.11","tapable":"2.3.3"}},"enquirer@2.3.6":{"dependencies":{"ansi-colors":"4.1.3"}},"enquirer@2.4.1":{"dependencies":{"ansi-colors":"4.1.3","strip-ansi":"6.0.1"}},"entities@4.5.0":{},"entities@6.0.1":{},"entities@7.0.1":{"optional":true},"error-stack-parser-es@1.0.5":{},"es-define-property@1.0.1":{},"es-errors@1.3.0":{},"es-module-lexer@1.7.0":{},"es-module-lexer@2.1.0":{},"es-object-atoms@1.1.1":{"dependencies":{"es-errors":"1.3.0"}},"es-set-tostringtag@2.1.0":{"dependencies":{"es-errors":"1.3.0","get-intrinsic":"1.3.0","has-tostringtag":"1.0.2","hasown":"2.0.2"}},"esbuild@0.25.12":{"optionalDependencies":{"@esbuild/aix-ppc64":"0.25.12","@esbuild/android-arm":"0.25.12","@
esbuild/android-arm64":"0.25.12","@esbuild/android-x64":"0.25.12","@esbuild/darwin-arm64":"0.25.12","@esbuild/darwin-x64":"0.25.12","@esbuild/freebsd-arm64":"0.25.12","@esbuild/freebsd-x64":"0.25.12","@esbuild/linux-arm":"0.25.12","@esbuild/linux-arm64":"0.25.12","@esbuild/linux-ia32":"0.25.12","@esbuild/linux-loong64":"0.25.12","@esbuild/linux-mips64el":"0.25.12","@esbuild/linux-ppc64":"0.25.12","@esbuild/linux-riscv64":"0.25.12","@esbuild/linux-s390x":"0.25.12","@esbuild/linux-x64":"0.25.12","@esbuild/netbsd-arm64":"0.25.12","@esbuild/netbsd-x64":"0.25.12","@esbuild/openbsd-arm64":"0.25.12","@esbuild/openbsd-x64":"0.25.12","@esbuild/openharmony-arm64":"0.25.12","@esbuild/sunos-x64":"0.25.12","@esbuild/win32-arm64":"0.25.12","@esbuild/win32-ia32":"0.25.12","@esbuild/win32-x64":"0.25.12"}},"esbuild@0.27.3":{"optionalDependencies":{"@esbuild/aix-ppc64":"0.27.3","@esbuild/android-arm":"0.27.3","@esbuild/android-arm64":"0.27.3","@esbuild/android-x64":"0.27.3","@esbuild/darwin-arm64":"0.27.3","@esbuild/darwin-x64":"0.27.3","@esbuild/freebsd-arm64":"0.27.3","@esbuild/freebsd-x64":"0.27.3","@esbuild/linux-arm":"0.27.3","@esbuild/linux-arm64":"0.27.3","@esbuild/linux-ia32":"0.27.3","@esbuild/linux-loong64":"0.27.3","@esbuild/linux-mips64el":"0.27.3","@esbuild/linux-ppc64":"0.27.3","@esbuild/linux-riscv64":"0.27.3","@esbuild/linux-s390x":"0.27.3","@esbuild/linux-x64":"0.27.3","@esbuild/netbsd-arm64":"0.27.3","@esbuild/netbsd-x64":"0.27.3","@esbuild/openbsd-arm64":"0.27.3","@esbuild/openbsd-x64":"0.27.3","@esbuild/openharmony-arm64":"0.27.3","@esbuild/sunos-x64":"0.27.3","@esbuild/win32-arm64":"0.27.3","@esbuild/win32-ia32":"0.27.3","@esbuild/win32-x64":"0.27.3"}},"escalade@3.2.0":{},"escape-string-regexp@1.0.5":{},"escape-string-regexp@5.0.0":{},"escodegen@2.1.0":{"dependencies":{"esprima":"4.0.1","estraverse":"5.3.0","esutils":"2.0.3"},"optionalDependencies":{"source-map":"0.6.1"},"optional":true},"eslint-scope@5.1.1":{"dependencies":{"esrecurse":"4.3.0","estraverse":"4.3.0"},"optional":true},"esprima@4.0.1":{},"esrecurse@4.3.0":{"dependencies":{"estraverse":"5.3.0"},"optional":true},"estraverse@4.3.0":{"optional":true},"estraverse@5.3.0":{"optional":true},"estree-walker@3.0.3":{"dependencies":{"@types/estree":"1.0.8"}},"esutils@2.0.3":{"optional":true},"event-target-shim@5.0.1":{"optional":true},"events-universal@1.0.1":{"dependencies":{"bare-events":"2.8.2"},"transitivePeerDependencies":["bare-abort-controller"],"optional":true},"events@3.3.0":{"optional":true},"expand-template@2.0.3":{},"expect-type@1.2.2":{},"expect-type@1.3.0":{},"exsolve@1.0.8":{},"extend@3.0.2":{},"extendable-error@0.1.7":{},"extract-zip@2.0.1":{"dependencies":{"debug":"4.4.3","get-stream":"5.2.0","yauzl":"2.10.0"},"optionalDependencies":{"@types/yauzl":"2.10.3"},"transitivePeerDependencies":["supports-color"],"optional":true},"fast-deep-equal@2.0.1":{"optional":true},"fast-deep-equal@3.1.3":{},"fast-fifo@1.3.2":{"optional":true},"fast-glob@3.3.3":{"dependencies":{"@nodelib/fs.stat":"2.0.5","@nodelib/fs.walk":"1.2.8","glob-parent":"5.1.2","merge2":"1.4.1","micromatch":"4.0.8"}},"fast-uri@3.0.3":{},"fast-uri@3.1.2":{"optional":true},"fast-xml-parser@4.5.6":{"dependencies":{"strnum":"1.1.2"},"optional":true},"fastq@1.17.1":{"dependencies":{"reusify":"1.0.4"}},"fault@2.0.1":{"dependencies":{"format":"0.2.2"}},"fd-slicer@1.1.0":{"dependencies":{"pend":"1.2.0"},"optional":true},"fdir@6.5.0(picomatch@4.0.4)":{"optionalDependencies":{"picomatch":"4.0.4"}},"fetch-blob@3.2.0":{"dependencies":{"node-domexception":"1.0.0","web-streams
-polyfill":"3.3.3"},"optional":true},"fetchdts@0.1.7":{},"fflate@0.4.8":{},"figures@3.2.0":{"dependencies":{"escape-string-regexp":"1.0.5"}},"file-uri-to-path@1.0.0":{},"fill-range@7.1.1":{"dependencies":{"to-regex-range":"5.0.1"}},"find-up@4.1.0":{"dependencies":{"locate-path":"5.0.0","path-exists":"4.0.0"}},"flat@5.0.2":{},"follow-redirects@1.15.11":{},"foreground-child@3.3.1":{"dependencies":{"cross-spawn":"7.0.6","signal-exit":"4.1.0"}},"form-data@4.0.4":{"dependencies":{"asynckit":"0.4.0","combined-stream":"1.0.8","es-set-tostringtag":"2.1.0","hasown":"2.0.2","mime-types":"2.1.35"}},"format@0.2.2":{},"formdata-polyfill@4.0.10":{"dependencies":{"fetch-blob":"3.2.0"},"optional":true},"front-matter@4.0.2":{"dependencies":{"js-yaml":"3.14.1"}},"fs-constants@1.0.0":{},"fs-extra@11.3.1":{"dependencies":{"graceful-fs":"4.2.11","jsonfile":"6.2.0","universalify":"2.0.1"}},"fs-extra@7.0.1":{"dependencies":{"graceful-fs":"4.2.11","jsonfile":"4.0.0","universalify":"0.1.2"}},"fs-extra@8.1.0":{"dependencies":{"graceful-fs":"4.2.11","jsonfile":"4.0.0","universalify":"0.1.2"}},"fs-minipass@2.1.0":{"dependencies":{"minipass":"3.3.6"}},"fsevents@2.3.2":{"optional":true},"fsevents@2.3.3":{"optional":true},"function-bind@1.1.2":{},"geckodriver@4.5.1":{"dependencies":{"@wdio/logger":"9.1.3","@zip.js/zip.js":"2.8.26","decamelize":"6.0.1","http-proxy-agent":"7.0.2","https-proxy-agent":"7.0.6","node-fetch":"3.3.2","tar-fs":"3.1.2","which":"4.0.0"},"transitivePeerDependencies":["bare-abort-controller","bare-buffer","react-native-b4a","supports-color"],"optional":true},"gensync@1.0.0-beta.2":{},"get-caller-file@2.0.5":{},"get-intrinsic@1.3.0":{"dependencies":{"call-bind-apply-helpers":"1.0.2","es-define-property":"1.0.1","es-errors":"1.3.0","es-object-atoms":"1.1.1","function-bind":"1.1.2","get-proto":"1.0.1","gopd":"1.2.0","has-symbols":"1.1.0","hasown":"2.0.2","math-intrinsics":"1.1.0"}},"get-port@7.2.0":{"optional":true},"get-proto@1.0.1":{"dependencies":{"dunder-proto":"1.0.1","es-object-atoms":"1.1.1"}},"get-stream@5.2.0":{"dependencies":{"pump":"3.0.4"},"optional":true},"get-tsconfig@4.14.0":{"dependencies":{"resolve-pkg-maps":"1.0.0"},"optional":true},"get-uri@6.0.5":{"dependencies":{"basic-ftp":"5.3.1","data-uri-to-buffer":"6.0.2","debug":"4.4.3"},"transitivePeerDependencies":["supports-color"],"optional":true},"github-from-package@0.0.0":{},"github-slugger@2.0.0":{},"glob-parent@5.1.2":{"dependencies":{"is-glob":"4.0.3"}},"glob-to-regexp@0.4.1":{"optional":true},"glob@10.4.5":{"dependencies":{"foreground-child":"3.3.1","jackspeak":"3.4.3","minimatch":"9.0.5","minipass":"7.1.2","package-json-from-dist":"1.0.1","path-scurry":"1.11.1"}},"glob@10.5.0":{"dependencies":{"foreground-child":"3.3.1","jackspeak":"3.4.3","minimatch":"9.0.9","minipass":"7.1.3","package-json-from-dist":"1.0.1","path-scurry":"1.11.1"},"optional":true},"globals@15.15.0":{},"globby@11.1.0":{"dependencies":{"array-union":"2.1.0","dir-glob":"3.0.1","fast-glob":"3.3.3","ignore":"5.3.2","merge2":"1.4.1","slash":"3.0.0"}},"gopd@1.2.0":{},"graceful-fs@4.2.11":{},"grapheme-splitter@1.0.4":{"optional":true},"graphql@16.14.0":{"optional":true},"h3@2.0.1-rc.20":{"dependencies":{"rou3":"0.8.1","srvx":"0.11.15"}},"hachure-fill@0.5.2":{},"happy-dom@18.0.1":{"dependencies":{"@types/node":"20.19.39","@types/whatwg-mimetype":"3.0.2","whatwg-mimetype":"3.0.0"},"optional":true},"has-flag@4.0.0":{},"has-symbols@1.1.0":{},"has-tostringtag@1.0.2":{"dependencies":{"has-symbols":"1.1.0"}},"hasown@2.0.2":{"dependencies":{"function-bind":"1.1.2"}},"hast-util-
embedded@3.0.0":{"dependencies":{"@types/hast":"3.0.4","hast-util-is-element":"3.0.0"}},"hast-util-from-html@2.0.3":{"dependencies":{"@types/hast":"3.0.4","devlop":"1.1.0","hast-util-from-parse5":"8.0.3","parse5":"7.3.0","vfile":"6.0.3","vfile-message":"4.0.2"}},"hast-util-from-parse5@8.0.3":{"dependencies":{"@types/hast":"3.0.4","@types/unist":"3.0.3","devlop":"1.1.0","hastscript":"9.0.1","property-information":"7.1.0","vfile":"6.0.3","vfile-location":"5.0.3","web-namespaces":"2.0.1"}},"hast-util-has-property@3.0.0":{"dependencies":{"@types/hast":"3.0.4"}},"hast-util-heading-rank@3.0.0":{"dependencies":{"@types/hast":"3.0.4"}},"hast-util-is-body-ok-link@3.0.1":{"dependencies":{"@types/hast":"3.0.4"}},"hast-util-is-element@3.0.0":{"dependencies":{"@types/hast":"3.0.4"}},"hast-util-minify-whitespace@1.0.1":{"dependencies":{"@types/hast":"3.0.4","hast-util-embedded":"3.0.0","hast-util-is-element":"3.0.0","hast-util-whitespace":"3.0.0","unist-util-is":"6.0.0"}},"hast-util-parse-selector@4.0.0":{"dependencies":{"@types/hast":"3.0.4"}},"hast-util-phrasing@3.0.1":{"dependencies":{"@types/hast":"3.0.4","hast-util-embedded":"3.0.0","hast-util-has-property":"3.0.0","hast-util-is-body-ok-link":"3.0.1","hast-util-is-element":"3.0.0"}},"hast-util-raw@9.1.0":{"dependencies":{"@types/hast":"3.0.4","@types/unist":"3.0.3","@ungap/structured-clone":"1.2.1","hast-util-from-parse5":"8.0.3","hast-util-to-parse5":"8.0.0","html-void-elements":"3.0.0","mdast-util-to-hast":"13.2.0","parse5":"7.3.0","unist-util-position":"5.0.0","unist-util-visit":"5.0.0","vfile":"6.0.3","web-namespaces":"2.0.1","zwitch":"2.0.4"}},"hast-util-sanitize@5.0.2":{"dependencies":{"@types/hast":"3.0.4","@ungap/structured-clone":"1.2.1","unist-util-position":"5.0.0"}},"hast-util-to-html@9.0.5":{"dependencies":{"@types/hast":"3.0.4","@types/unist":"3.0.3","ccount":"2.0.1","comma-separated-tokens":"2.0.3","hast-util-whitespace":"3.0.0","html-void-elements":"3.0.0","mdast-util-to-hast":"13.2.0","property-information":"7.1.0","space-separated-tokens":"2.0.2","stringify-entities":"4.0.4","zwitch":"2.0.4"}},"hast-util-to-mdast@10.1.2":{"dependencies":{"@types/hast":"3.0.4","@types/mdast":"4.0.4","@ungap/structured-clone":"1.2.1","hast-util-phrasing":"3.0.1","hast-util-to-html":"9.0.5","hast-util-to-text":"4.0.2","hast-util-whitespace":"3.0.0","mdast-util-phrasing":"4.1.0","mdast-util-to-hast":"13.2.0","mdast-util-to-string":"4.0.0","rehype-minify-whitespace":"6.0.2","trim-trailing-lines":"2.1.0","unist-util-position":"5.0.0","unist-util-visit":"5.0.0"}},"hast-util-to-parse5@8.0.0":{"dependencies":{"@types/hast":"3.0.4","comma-separated-tokens":"2.0.3","devlop":"1.1.0","property-information":"6.5.0","space-separated-tokens":"2.0.2","web-namespaces":"2.0.1","zwitch":"2.0.4"}},"hast-util-to-string@3.0.1":{"dependencies":{"@types/hast":"3.0.4"}},"hast-util-to-text@4.0.2":{"dependencies":{"@types/hast":"3.0.4","@types/unist":"3.0.3","hast-util-is-element":"3.0.0","unist-util-find-after":"5.0.0"}},"hast-util-whitespace@3.0.0":{"dependencies":{"@types/hast":"3.0.4"}},"hastscript@9.0.1":{"dependencies":{"@types/hast":"3.0.4","comma-separated-tokens":"2.0.3","hast-util-parse-selector":"4.0.0","property-information":"7.1.0","space-separated-tokens":"2.0.2"}},"headers-polyfill@4.0.3":{"optional":true},"highlight.js@11.11.1":{},"html-encoding-sniffer@4.0.0":{"dependencies":{"whatwg-encoding":"3.1.1"}},"html-escaper@2.0.2":{},"html-void-elements@3.0.0":{},"htmlfy@0.3.2":{"optional":true},"htmlparser2@10.0.0":{"dependencies":{"domelementtype":"2.3.0","domhand
ler":"5.0.3","domutils":"3.2.2","entities":"6.0.1"}},"htmlparser2@10.1.0":{"dependencies":{"domelementtype":"2.3.0","domhandler":"5.0.3","domutils":"3.2.2","entities":"7.0.1"},"optional":true},"http-proxy-agent@7.0.2":{"dependencies":{"agent-base":"7.1.3","debug":"4.4.3"},"transitivePeerDependencies":["supports-color"]},"https-proxy-agent@7.0.2":{"dependencies":{"agent-base":"7.1.3","debug":"4.4.3"},"transitivePeerDependencies":["supports-color"]},"https-proxy-agent@7.0.6":{"dependencies":{"agent-base":"7.1.3","debug":"4.4.3"},"transitivePeerDependencies":["supports-color"]},"human-id@4.1.1":{},"iconv-lite@0.6.3":{"dependencies":{"safer-buffer":"2.1.2"}},"ieee754@1.2.1":{},"ignore@5.3.2":{},"immediate@3.0.6":{"optional":true},"immutable@5.1.5":{"optional":true},"import-meta-resolve@4.2.0":{"optional":true},"inherits@2.0.4":{},"ini@1.3.8":{},"ini@4.1.3":{},"internmap@1.0.1":{},"internmap@2.0.3":{},"ip-address@10.2.0":{"optional":true},"is-binary-path@2.1.0":{"dependencies":{"binary-extensions":"2.3.0"}},"is-docker@2.2.1":{},"is-extglob@2.1.1":{},"is-fullwidth-code-point@3.0.0":{},"is-glob@4.0.3":{"dependencies":{"is-extglob":"2.1.1"}},"is-interactive@1.0.0":{},"is-node-process@1.2.0":{"optional":true},"is-number@7.0.0":{},"is-plain-obj@4.1.0":{},"is-potential-custom-element-name@1.0.1":{},"is-stream@2.0.1":{"optional":true},"is-subdir@1.2.0":{"dependencies":{"better-path-resolve":"1.0.0"}},"is-unicode-supported@0.1.0":{},"is-windows@1.0.2":{},"is-wsl@2.2.0":{"dependencies":{"is-docker":"2.2.1"}},"isarray@1.0.0":{"optional":true},"isbot@5.1.28":{},"isexe@2.0.0":{},"isexe@3.1.5":{"optional":true},"istanbul-lib-coverage@3.2.2":{},"istanbul-lib-report@3.0.1":{"dependencies":{"istanbul-lib-coverage":"3.2.2","make-dir":"4.0.0","supports-color":"7.2.0"}},"istanbul-lib-source-maps@5.0.6":{"dependencies":{"@jridgewell/trace-mapping":"0.3.31","debug":"4.4.3","istanbul-lib-coverage":"3.2.2"},"transitivePeerDependencies":["supports-color"]},"istanbul-reports@3.2.0":{"dependencies":{"html-escaper":"2.0.2","istanbul-lib-report":"3.0.1"}},"jackspeak@3.4.3":{"dependencies":{"@isaacs/cliui":"8.0.2"},"optionalDependencies":{"@pkgjs/parseargs":"0.11.0"}},"jest-diff@30.1.1":{"dependencies":{"@jest/diff-sequences":"30.0.1","@jest/get-type":"30.1.0","chalk":"4.1.2","pretty-format":"30.0.5"}},"jest-worker@27.5.1":{"dependencies":{"@types/node":"22.19.17","merge-stream":"2.0.0","supports-color":"8.1.1"},"optional":true},"jiti@2.6.1":{},"js-tokens@10.0.0":{},"js-tokens@4.0.0":{},"js-tokens@9.0.1":{},"js-yaml@3.14.1":{"dependencies":{"argparse":"1.0.10","esprima":"4.0.1"}},"js-yaml@4.1.1":{"dependencies":{"argparse":"2.0.1"}},"jsdom@26.1.0":{"dependencies":{"cssstyle":"4.3.1","data-urls":"5.0.0","decimal.js":"10.6.0","html-encoding-sniffer":"4.0.0","http-proxy-agent":"7.0.2","https-proxy-agent":"7.0.6","is-potential-custom-element-name":"1.0.1","nwsapi":"2.2.20","parse5":"7.3.0","rrweb-cssom":"0.8.0","saxes":"6.0.0","symbol-tree":"3.2.4","tough-cookie":"5.1.2","w3c-xmlserializer":"5.0.0","webidl-conversions":"7.0.0","whatwg-encoding":"3.1.1","whatwg-mimetype":"4.0.0","whatwg-url":"14.2.0","ws":"8.18.3","xml-name-validator":"5.0.0"},"transitivePeerDependencies":["bufferutil","supports-color","utf-8-validate"]},"jsdom@27.3.0(postcss@8.5.14)":{"dependencies":{"@acemir/cssom":"0.9.28","@asamuzakjp/dom-selector":"6.7.6","cssstyle":"5.3.4(postcss@8.5.14)","data-urls":"6.0.0","decimal.js":"10.6.0","html-encoding-sniffer":"4.0.0","http-proxy-agent":"7.0.2","https-proxy-agent":"7.0.6","is-potential-custom-element-name":"1.0.1
","parse5":"8.0.0","saxes":"6.0.0","symbol-tree":"3.2.4","tough-cookie":"6.0.0","w3c-xmlserializer":"5.0.0","webidl-conversions":"8.0.0","whatwg-encoding":"3.1.1","whatwg-mimetype":"4.0.0","whatwg-url":"15.1.0","ws":"8.18.3","xml-name-validator":"5.0.0"},"transitivePeerDependencies":["bufferutil","postcss","supports-color","utf-8-validate"]},"jsesc@3.1.0":{},"json-parse-even-better-errors@2.3.1":{"optional":true},"json-schema-to-ts@3.1.1":{"dependencies":{"@babel/runtime":"7.28.4","ts-algebra":"2.0.0"}},"json-schema-traverse@1.0.0":{},"json5@2.2.3":{},"jsonc-parser@3.2.0":{},"jsonfile@4.0.0":{"optionalDependencies":{"graceful-fs":"4.2.11"}},"jsonfile@6.2.0":{"dependencies":{"universalify":"2.0.1"},"optionalDependencies":{"graceful-fs":"4.2.11"}},"jszip@3.10.1":{"dependencies":{"lie":"3.3.0","pako":"1.0.11","readable-stream":"2.3.8","setimmediate":"1.0.5"},"optional":true},"katex@0.16.22":{"dependencies":{"commander":"8.3.0"}},"khroma@2.1.0":{},"kleur@4.1.5":{},"kolorist@1.8.0":{},"kysely@0.28.7":{},"langium@3.3.1":{"dependencies":{"chevrotain":"11.0.3","chevrotain-allstar":"0.3.1(chevrotain@11.0.3)","vscode-languageserver":"9.0.1","vscode-languageserver-textdocument":"1.0.12","vscode-uri":"3.0.8"}},"layout-base@1.0.2":{},"layout-base@2.0.1":{},"lazystream@1.0.1":{"dependencies":{"readable-stream":"2.3.8"},"optional":true},"lie@3.3.0":{"dependencies":{"immediate":"3.0.6"},"optional":true},"lightningcss-android-arm64@1.32.0":{"optional":true},"lightningcss-darwin-arm64@1.32.0":{"optional":true},"lightningcss-darwin-x64@1.32.0":{"optional":true},"lightningcss-freebsd-x64@1.32.0":{"optional":true},"lightningcss-linux-arm-gnueabihf@1.32.0":{"optional":true},"lightningcss-linux-arm64-gnu@1.32.0":{"optional":true},"lightningcss-linux-arm64-musl@1.32.0":{"optional":true},"lightningcss-linux-x64-gnu@1.32.0":{"optional":true},"lightningcss-linux-x64-musl@1.32.0":{"optional":true},"lightningcss-win32-arm64-msvc@1.32.0":{"optional":true},"lightningcss-win32-x64-msvc@1.32.0":{"optional":true},"lightningcss@1.32.0":{"dependencies":{"detect-libc":"2.1.2"},"optionalDependencies":{"lightningcss-android-arm64":"1.32.0","lightningcss-darwin-arm64":"1.32.0","lightningcss-darwin-x64":"1.32.0","lightningcss-freebsd-x64":"1.32.0","lightningcss-linux-arm-gnueabihf":"1.32.0","lightningcss-linux-arm64-gnu":"1.32.0","lightningcss-linux-arm64-musl":"1.32.0","lightningcss-linux-x64-gnu":"1.32.0","lightningcss-linux-x64-musl":"1.32.0","lightningcss-win32-arm64-msvc":"1.32.0","lightningcss-win32-x64-msvc":"1.32.0"}},"lines-and-columns@2.0.3":{},"loader-runner@4.3.2":{"optional":true},"local-pkg@1.1.2":{"dependencies":{"mlly":"1.8.0","pkg-types":"2.3.0","quansync":"0.2.11"}},"locate-app@2.5.0":{"dependencies":{"@promptbook/utils":"0.69.5","type-fest":"4.26.0","userhome":"1.0.1"},"optional":true},"locate-path@5.0.0":{"dependencies":{"p-locate":"4.1.0"}},"lodash-es@4.17.21":{},"lodash.clonedeep@4.5.0":{"optional":true},"lodash.startcase@4.4.0":{},"lodash.zip@4.2.0":{"optional":true},"lodash@4.18.1":{"optional":true},"log-symbols@4.1.0":{"dependencies":{"chalk":"4.1.2","is-unicode-supported":"0.1.0"}},"loglevel-plugin-prefix@0.8.4":{"optional":true},"loglevel@1.9.2":{"optional":true},"long@5.3.2":{},"longest-streak@3.1.0":{},"loupe@3.2.1":{},"lowlight@3.3.0":{"dependencies":{"@types/hast":"3.0.4","devlop":"1.1.0","highlight.js":"11.11.1"}},"lru-cache@10.4.3":{},"lru-cache@11.2.4":{},"lru-cache@5.1.1":{"dependencies":{"yallist":"3.1.1"}},"lru-cache@7.18.3":{"optional":true},"lucide-react@0.544.0(react@19.2.0)":{"dependencies"
:{"react":"19.2.0"}},"lz-string@1.5.0":{},"magic-string@0.30.18":{"dependencies":{"@jridgewell/sourcemap-codec":"1.5.5"}},"magic-string@0.30.21":{"dependencies":{"@jridgewell/sourcemap-codec":"1.5.5"}},"magicast@0.3.5":{"dependencies":{"@babel/parser":"7.28.5","@babel/types":"7.28.5","source-map-js":"1.2.1"}},"magicast@0.5.2":{"dependencies":{"@babel/parser":"7.29.3","@babel/types":"7.29.0","source-map-js":"1.2.1"}},"make-dir@4.0.0":{"dependencies":{"semver":"7.7.3"}},"markdown-table@3.0.4":{},"marked@16.4.2":{},"math-intrinsics@1.1.0":{},"mdast-util-find-and-replace@3.0.2":{"dependencies":{"@types/mdast":"4.0.4","escape-string-regexp":"5.0.0","unist-util-is":"6.0.0","unist-util-visit-parents":"6.0.1"}},"mdast-util-from-markdown@2.0.2":{"dependencies":{"@types/mdast":"4.0.4","@types/unist":"3.0.3","decode-named-character-reference":"1.0.2","devlop":"1.1.0","mdast-util-to-string":"4.0.0","micromark":"4.0.1","micromark-util-decode-numeric-character-reference":"2.0.2","micromark-util-decode-string":"2.0.1","micromark-util-normalize-identifier":"2.0.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1","unist-util-stringify-position":"4.0.0"},"transitivePeerDependencies":["supports-color"]},"mdast-util-frontmatter@2.0.1":{"dependencies":{"@types/mdast":"4.0.4","devlop":"1.1.0","escape-string-regexp":"5.0.0","mdast-util-from-markdown":"2.0.2","mdast-util-to-markdown":"2.1.2","micromark-extension-frontmatter":"2.0.0"},"transitivePeerDependencies":["supports-color"]},"mdast-util-gfm-autolink-literal@2.0.1":{"dependencies":{"@types/mdast":"4.0.4","ccount":"2.0.1","devlop":"1.1.0","mdast-util-find-and-replace":"3.0.2","micromark-util-character":"2.1.1"}},"mdast-util-gfm-footnote@2.0.0":{"dependencies":{"@types/mdast":"4.0.4","devlop":"1.1.0","mdast-util-from-markdown":"2.0.2","mdast-util-to-markdown":"2.1.2","micromark-util-normalize-identifier":"2.0.1"},"transitivePeerDependencies":["supports-color"]},"mdast-util-gfm-strikethrough@2.0.0":{"dependencies":{"@types/mdast":"4.0.4","mdast-util-from-markdown":"2.0.2","mdast-util-to-markdown":"2.1.2"},"transitivePeerDependencies":["supports-color"]},"mdast-util-gfm-table@2.0.0":{"dependencies":{"@types/mdast":"4.0.4","devlop":"1.1.0","markdown-table":"3.0.4","mdast-util-from-markdown":"2.0.2","mdast-util-to-markdown":"2.1.2"},"transitivePeerDependencies":["supports-color"]},"mdast-util-gfm-task-list-item@2.0.0":{"dependencies":{"@types/mdast":"4.0.4","devlop":"1.1.0","mdast-util-from-markdown":"2.0.2","mdast-util-to-markdown":"2.1.2"},"transitivePeerDependencies":["supports-color"]},"mdast-util-gfm@3.0.0":{"dependencies":{"mdast-util-from-markdown":"2.0.2","mdast-util-gfm-autolink-literal":"2.0.1","mdast-util-gfm-footnote":"2.0.0","mdast-util-gfm-strikethrough":"2.0.0","mdast-util-gfm-table":"2.0.0","mdast-util-gfm-task-list-item":"2.0.0","mdast-util-to-markdown":"2.1.2"},"transitivePeerDependencies":["supports-color"]},"mdast-util-phrasing@4.1.0":{"dependencies":{"@types/mdast":"4.0.4","unist-util-is":"6.0.0"}},"mdast-util-to-hast@13.2.0":{"dependencies":{"@types/hast":"3.0.4","@types/mdast":"4.0.4","@ungap/structured-clone":"1.2.1","devlop":"1.1.0","micromark-util-sanitize-uri":"2.0.1","trim-lines":"3.0.1","unist-util-position":"5.0.0","unist-util-visit":"5.0.0","vfile":"6.0.3"}},"mdast-util-to-markdown@2.1.2":{"dependencies":{"@types/mdast":"4.0.4","@types/unist":"3.0.3","longest-streak":"3.1.0","mdast-util-phrasing":"4.1.0","mdast-util-to-string":"4.0.0","micromark-util-classify-character":"2.0.1","micromark-util-decode-string":"2.0.1","un
ist-util-visit":"5.0.0","zwitch":"2.0.4"}},"mdast-util-to-string@4.0.0":{"dependencies":{"@types/mdast":"4.0.4"}},"mdn-data@2.12.2":{},"merge-stream@2.0.0":{"optional":true},"merge2@1.4.1":{},"mermaid@11.12.1":{"dependencies":{"@braintree/sanitize-url":"7.1.1","@iconify/utils":"3.0.2","@mermaid-js/parser":"0.6.3","@types/d3":"7.4.3","cytoscape":"3.30.4","cytoscape-cose-bilkent":"4.1.0(cytoscape@3.30.4)","cytoscape-fcose":"2.2.0(cytoscape@3.30.4)","d3":"7.9.0","d3-sankey":"0.12.3","dagre-d3-es":"7.0.13","dayjs":"1.11.19","dompurify":"3.3.1","katex":"0.16.22","khroma":"2.1.0","lodash-es":"4.17.21","marked":"16.4.2","roughjs":"4.6.6","stylis":"4.3.6","ts-dedent":"2.2.0","uuid":"11.1.0"},"transitivePeerDependencies":["supports-color"]},"micromark-core-commonmark@2.0.2":{"dependencies":{"decode-named-character-reference":"1.0.2","devlop":"1.1.0","micromark-factory-destination":"2.0.1","micromark-factory-label":"2.0.1","micromark-factory-space":"2.0.1","micromark-factory-title":"2.0.1","micromark-factory-whitespace":"2.0.1","micromark-util-character":"2.1.1","micromark-util-chunked":"2.0.1","micromark-util-classify-character":"2.0.1","micromark-util-html-tag-name":"2.0.1","micromark-util-normalize-identifier":"2.0.1","micromark-util-resolve-all":"2.0.1","micromark-util-subtokenize":"2.0.3","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-extension-frontmatter@2.0.0":{"dependencies":{"fault":"2.0.1","micromark-util-character":"2.1.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-extension-gfm-autolink-literal@2.1.0":{"dependencies":{"micromark-util-character":"2.1.1","micromark-util-sanitize-uri":"2.0.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-extension-gfm-footnote@2.1.0":{"dependencies":{"devlop":"1.1.0","micromark-core-commonmark":"2.0.2","micromark-factory-space":"2.0.1","micromark-util-character":"2.1.1","micromark-util-normalize-identifier":"2.0.1","micromark-util-sanitize-uri":"2.0.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-extension-gfm-strikethrough@2.1.0":{"dependencies":{"devlop":"1.1.0","micromark-util-chunked":"2.0.1","micromark-util-classify-character":"2.0.1","micromark-util-resolve-all":"2.0.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-extension-gfm-table@2.1.1":{"dependencies":{"devlop":"1.1.0","micromark-factory-space":"2.0.1","micromark-util-character":"2.1.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-extension-gfm-tagfilter@2.0.0":{"dependencies":{"micromark-util-types":"2.0.1"}},"micromark-extension-gfm-task-list-item@2.1.0":{"dependencies":{"devlop":"1.1.0","micromark-factory-space":"2.0.1","micromark-util-character":"2.1.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-extension-gfm@3.0.0":{"dependencies":{"micromark-extension-gfm-autolink-literal":"2.1.0","micromark-extension-gfm-footnote":"2.1.0","micromark-extension-gfm-strikethrough":"2.1.0","micromark-extension-gfm-table":"2.1.1","micromark-extension-gfm-tagfilter":"2.0.0","micromark-extension-gfm-task-list-item":"2.1.0","micromark-util-combine-extensions":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-factory-destination@2.0.1":{"dependencies":{"micromark-util-character":"2.1.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-factory-label@2.0.1":{"dependencies":{"devlop":"1.1.0","micromark-util-character":"2.1.1","micromark-util-symbol":"2.0.1","micromark-util-ty
pes":"2.0.1"}},"micromark-factory-space@2.0.1":{"dependencies":{"micromark-util-character":"2.1.1","micromark-util-types":"2.0.1"}},"micromark-factory-title@2.0.1":{"dependencies":{"micromark-factory-space":"2.0.1","micromark-util-character":"2.1.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-factory-whitespace@2.0.1":{"dependencies":{"micromark-factory-space":"2.0.1","micromark-util-character":"2.1.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-util-character@2.1.1":{"dependencies":{"micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-util-chunked@2.0.1":{"dependencies":{"micromark-util-symbol":"2.0.1"}},"micromark-util-classify-character@2.0.1":{"dependencies":{"micromark-util-character":"2.1.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-util-combine-extensions@2.0.1":{"dependencies":{"micromark-util-chunked":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-util-decode-numeric-character-reference@2.0.2":{"dependencies":{"micromark-util-symbol":"2.0.1"}},"micromark-util-decode-string@2.0.1":{"dependencies":{"decode-named-character-reference":"1.0.2","micromark-util-character":"2.1.1","micromark-util-decode-numeric-character-reference":"2.0.2","micromark-util-symbol":"2.0.1"}},"micromark-util-encode@2.0.1":{},"micromark-util-html-tag-name@2.0.1":{},"micromark-util-normalize-identifier@2.0.1":{"dependencies":{"micromark-util-symbol":"2.0.1"}},"micromark-util-resolve-all@2.0.1":{"dependencies":{"micromark-util-types":"2.0.1"}},"micromark-util-sanitize-uri@2.0.1":{"dependencies":{"micromark-util-character":"2.1.1","micromark-util-encode":"2.0.1","micromark-util-symbol":"2.0.1"}},"micromark-util-subtokenize@2.0.3":{"dependencies":{"devlop":"1.1.0","micromark-util-chunked":"2.0.1","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"}},"micromark-util-symbol@2.0.1":{},"micromark-util-types@2.0.1":{},"micromark@4.0.1":{"dependencies":{"@types/debug":"4.1.12","debug":"4.4.3","decode-named-character-reference":"1.0.2","devlop":"1.1.0","micromark-core-commonmark":"2.0.2","micromark-factory-space":"2.0.1","micromark-util-character":"2.1.1","micromark-util-chunked":"2.0.1","micromark-util-combine-extensions":"2.0.1","micromark-util-decode-numeric-character-reference":"2.0.2","micromark-util-encode":"2.0.1","micromark-util-normalize-identifier":"2.0.1","micromark-util-resolve-all":"2.0.1","micromark-util-sanitize-uri":"2.0.1","micromark-util-subtokenize":"2.0.3","micromark-util-symbol":"2.0.1","micromark-util-types":"2.0.1"},"transitivePeerDependencies":["supports-color"]},"micromatch@4.0.8":{"dependencies":{"braces":"3.0.3","picomatch":"2.3.1"}},"mime-db@1.52.0":{},"mime-types@2.1.35":{"dependencies":{"mime-db":"1.52.0"}},"mimic-fn@2.1.0":{},"mimic-response@3.1.0":{},"miniflare@4.20260504.0":{"dependencies":{"@cspotcode/source-map-support":"0.8.1","sharp":"0.34.5","undici":"7.24.8","workerd":"1.20260504.1","ws":"8.18.0","youch":"4.1.0-beta.10"},"transitivePeerDependencies":["bufferutil","utf-8-validate"]},"minimatch@5.1.9":{"dependencies":{"brace-expansion":"2.1.0"},"optional":true},"minimatch@9.0.3":{"dependencies":{"brace-expansion":"2.0.2"}},"minimatch@9.0.5":{"dependencies":{"brace-expansion":"2.0.2"}},"minimatch@9.0.9":{"dependencies":{"brace-expansion":"2.1.0"},"optional":true},"minimist@1.2.8":{},"minipass@3.3.6":{"dependencies":{"yallist":"4.0.0"}},"minipass@5.0.0":{},"minipass@7.1.2":{},"minipass@7.1.3":{"optional":true},"minizlib@2.1.2":{"dependencies":{"minipas
s":"3.3.6","yallist":"4.0.0"}},"mkdirp-classic@0.5.3":{},"mkdirp@1.0.4":{},"mlly@1.8.0":{"dependencies":{"acorn":"8.16.0","pathe":"2.0.3","pkg-types":"1.3.1","ufo":"1.6.1"}},"mri@1.2.0":{},"mrmime@2.0.1":{},"ms@2.1.3":{},"msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3)":{"dependencies":{"@bundled-es-modules/cookie":"2.0.1","@bundled-es-modules/statuses":"1.0.1","@bundled-es-modules/tough-cookie":"0.1.6","@inquirer/confirm":"5.1.21(@types/node@22.15.33)","@mswjs/interceptors":"0.39.8","@open-draft/deferred-promise":"2.2.0","@open-draft/until":"2.1.0","@types/cookie":"0.6.0","@types/statuses":"2.0.6","graphql":"16.14.0","headers-polyfill":"4.0.3","is-node-process":"1.2.0","outvariant":"1.4.3","path-to-regexp":"6.3.0","picocolors":"1.1.1","strict-event-emitter":"0.5.1","type-fest":"4.41.0","yargs":"17.7.2"},"optionalDependencies":{"typescript":"5.8.3"},"transitivePeerDependencies":["@types/node"],"optional":true},"msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3)":{"dependencies":{"@bundled-es-modules/cookie":"2.0.1","@bundled-es-modules/statuses":"1.0.1","@bundled-es-modules/tough-cookie":"0.1.6","@inquirer/confirm":"5.1.21(@types/node@24.10.2)","@mswjs/interceptors":"0.39.8","@open-draft/deferred-promise":"2.2.0","@open-draft/until":"2.1.0","@types/cookie":"0.6.0","@types/statuses":"2.0.6","graphql":"16.14.0","headers-polyfill":"4.0.3","is-node-process":"1.2.0","outvariant":"1.4.3","path-to-regexp":"6.3.0","picocolors":"1.1.1","strict-event-emitter":"0.5.1","type-fest":"4.41.0","yargs":"17.7.2"},"optionalDependencies":{"typescript":"5.8.3"},"transitivePeerDependencies":["@types/node"],"optional":true},"msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3)":{"dependencies":{"@bundled-es-modules/cookie":"2.0.1","@bundled-es-modules/statuses":"1.0.1","@bundled-es-modules/tough-cookie":"0.1.6","@inquirer/confirm":"5.1.21(@types/node@24.10.2)","@mswjs/interceptors":"0.39.8","@open-draft/deferred-promise":"2.2.0","@open-draft/until":"2.1.0","@types/cookie":"0.6.0","@types/statuses":"2.0.6","graphql":"16.14.0","headers-polyfill":"4.0.3","is-node-process":"1.2.0","outvariant":"1.4.3","path-to-regexp":"6.3.0","picocolors":"1.1.1","strict-event-emitter":"0.5.1","type-fest":"4.41.0","yargs":"17.7.2"},"optionalDependencies":{"typescript":"5.9.3"},"transitivePeerDependencies":["@types/node"],"optional":true},"mute-stream@2.0.0":{"optional":true},"nanoid@3.3.11":{},"napi-build-utils@2.0.0":{},"neo-async@2.6.2":{"optional":true},"netmask@2.1.1":{"optional":true},"node-abi@3.89.0":{"dependencies":{"semver":"7.7.3"}},"node-domexception@1.0.0":{"optional":true},"node-fetch@3.3.2":{"dependencies":{"data-uri-to-buffer":"4.0.1","fetch-blob":"3.2.0","formdata-polyfill":"4.0.10"},"optional":true},"node-machine-id@1.1.12":{},"node-releases@2.0.19":{},"node-releases@2.0.38":{"optional":true},"normalize-path@3.0.0":{},"npm-run-path@4.0.1":{"dependencies":{"path-key":"3.1.1"}},"nth-check@2.1.1":{"dependencies":{"boolbase":"1.0.0"}},"nwsapi@2.2.20":{},"nx-cloud@19.1.0":{"dependencies":{"@nrwl/nx-cloud":"19.1.0","axios":"1.11.0","chalk":"4.1.2","dotenv":"10.0.0","fs-extra":"11.3.1","ini":"4.1.3","node-machine-id":"1.1.12","open":"8.4.2","tar":"6.2.1","yargs-parser":"22.0.0"},"transitivePeerDependencies":["debug"]},"nx@21.4.1":{"dependencies":{"@napi-rs/wasm-runtime":"0.2.4","@yarnpkg/lockfile":"1.1.0","@yarnpkg/parsers":"3.0.2","@zkochan/js-yaml":"0.0.7","axios":"1.11.0","chalk":"4.1.2","cli-cursor":"3.1.0","cli-spinners":"2.6.1","cliui":"8.0.1","dotenv":"16.4.7","dotenv-expand":"11.0.7","enquirer":"2.3.6","figures":"3.2
.0","flat":"5.0.2","front-matter":"4.0.2","ignore":"5.3.2","jest-diff":"30.1.1","jsonc-parser":"3.2.0","lines-and-columns":"2.0.3","minimatch":"9.0.3","node-machine-id":"1.1.12","npm-run-path":"4.0.1","open":"8.4.2","ora":"5.3.0","resolve.exports":"2.0.3","semver":"7.7.2","string-width":"4.2.3","tar-stream":"2.2.0","tmp":"0.2.5","tree-kill":"1.2.2","tsconfig-paths":"4.2.0","tslib":"2.8.1","yaml":"2.8.1","yargs":"17.7.2","yargs-parser":"21.1.1"},"optionalDependencies":{"@nx/nx-darwin-arm64":"21.4.1","@nx/nx-darwin-x64":"21.4.1","@nx/nx-freebsd-x64":"21.4.1","@nx/nx-linux-arm-gnueabihf":"21.4.1","@nx/nx-linux-arm64-gnu":"21.4.1","@nx/nx-linux-arm64-musl":"21.4.1","@nx/nx-linux-x64-gnu":"21.4.1","@nx/nx-linux-x64-musl":"21.4.1","@nx/nx-win32-arm64-msvc":"21.4.1","@nx/nx-win32-x64-msvc":"21.4.1"},"transitivePeerDependencies":["debug"]},"obug@2.1.1":{},"once@1.4.0":{"dependencies":{"wrappy":"1.0.2"}},"onetime@5.1.2":{"dependencies":{"mimic-fn":"2.1.0"}},"oniguruma-parser@0.12.1":{},"oniguruma-to-es@4.3.3":{"dependencies":{"oniguruma-parser":"0.12.1","regex":"6.0.1","regex-recursion":"6.0.2"}},"open@8.4.2":{"dependencies":{"define-lazy-prop":"2.0.0","is-docker":"2.2.1","is-wsl":"2.2.0"}},"ora@5.3.0":{"dependencies":{"bl":"4.1.0","chalk":"4.1.2","cli-cursor":"3.1.0","cli-spinners":"2.9.2","is-interactive":"1.0.0","log-symbols":"4.1.0","strip-ansi":"6.0.1","wcwidth":"1.0.1"}},"outdent@0.5.0":{},"outvariant@1.4.3":{"optional":true},"oxlint@1.26.0":{"optionalDependencies":{"@oxlint/darwin-arm64":"1.26.0","@oxlint/darwin-x64":"1.26.0","@oxlint/linux-arm64-gnu":"1.26.0","@oxlint/linux-arm64-musl":"1.26.0","@oxlint/linux-x64-gnu":"1.26.0","@oxlint/linux-x64-musl":"1.26.0","@oxlint/win32-arm64":"1.26.0","@oxlint/win32-x64":"1.26.0"}},"p-filter@2.1.0":{"dependencies":{"p-map":"2.1.0"}},"p-limit@2.3.0":{"dependencies":{"p-try":"2.2.0"}},"p-locate@4.1.0":{"dependencies":{"p-limit":"2.3.0"}},"p-map@2.1.0":{},"p-map@7.0.4":{},"p-try@2.2.0":{},"pac-proxy-agent@7.2.0":{"dependencies":{"@tootallnate/quickjs-emscripten":"0.23.0","agent-base":"7.1.4","debug":"4.4.3","get-uri":"6.0.5","http-proxy-agent":"7.0.2","https-proxy-agent":"7.0.6","pac-resolver":"7.0.1","socks-proxy-agent":"8.0.5"},"transitivePeerDependencies":["supports-color"],"optional":true},"pac-resolver@7.0.1":{"dependencies":{"degenerator":"5.0.1","netmask":"2.1.1"},"optional":true},"package-json-from-dist@1.0.1":{},"package-manager-detector@0.2.11":{"dependencies":{"quansync":"0.2.11"}},"package-manager-detector@1.5.0":{},"pako@1.0.11":{"optional":true},"parse5-htmlparser2-tree-adapter@7.1.0":{"dependencies":{"domhandler":"5.0.3","parse5":"7.3.0"}},"parse5-parser-stream@7.1.2":{"dependencies":{"parse5":"7.3.0"}},"parse5@7.3.0":{"dependencies":{"entities":"6.0.1"}},"parse5@8.0.0":{"dependencies":{"entities":"6.0.1"}},"path-data-parser@0.1.0":{},"path-exists@4.0.0":{},"path-key@3.1.1":{},"path-scurry@1.11.1":{"dependencies":{"lru-cache":"10.4.3","minipass":"7.1.2"}},"path-to-regexp@6.3.0":{},"path-type@4.0.0":{},"pathe@2.0.3":{},"pathval@2.0.1":{},"pend@1.2.0":{"optional":true},"picocolors@1.1.1":{},"picomatch@2.3.1":{},"picomatch@4.0.3":{},"picomatch@4.0.4":{},"pify@4.0.1":{},"pkg-types@1.3.1":{"dependencies":{"confbox":"0.1.8","mlly":"1.8.0","pathe":"2.0.3"}},"pkg-types@2.3.0":{"dependencies":{"confbox":"0.2.2","exsolve":"1.0.8","pathe":"2.0.3"}},"playwright-core@1.55.0":{"optional":true},"playwright@1.55.0":{"dependencies":{"playwright-core":"1.55.0"},"optionalDependencies":{"fsevents":"2.3.2"},"optional":true},"pngjs@7.0.0":{},"points-on-curve@0.2
.0":{},"points-on-path@0.2.1":{"dependencies":{"path-data-parser":"0.1.0","points-on-curve":"0.2.0"}},"postcss@8.5.14":{"dependencies":{"nanoid":"3.3.11","picocolors":"1.1.1","source-map-js":"1.2.1"}},"postcss@8.5.6":{"dependencies":{"nanoid":"3.3.11","picocolors":"1.1.1","source-map-js":"1.2.1"}},"posthog-js@1.321.2":{"dependencies":{"@opentelemetry/api":"1.9.0","@opentelemetry/api-logs":"0.208.0","@opentelemetry/exporter-logs-otlp-http":"0.208.0(@opentelemetry/api@1.9.0)","@opentelemetry/resources":"2.4.0(@opentelemetry/api@1.9.0)","@opentelemetry/sdk-logs":"0.208.0(@opentelemetry/api@1.9.0)","@posthog/core":"1.9.1","@posthog/types":"1.321.2","core-js":"3.46.0","dompurify":"3.3.1","fflate":"0.4.8","preact":"10.28.2","query-selector-shadow-dom":"1.0.1","web-vitals":"4.2.4"}},"preact@10.28.2":{},"prebuild-install@7.1.3":{"dependencies":{"detect-libc":"2.1.2","expand-template":"2.0.3","github-from-package":"0.0.0","minimist":"1.2.8","mkdirp-classic":"0.5.3","napi-build-utils":"2.0.0","node-abi":"3.89.0","pump":"3.0.3","rc":"1.2.8","simple-get":"4.0.1","tar-fs":"2.1.4","tunnel-agent":"0.6.0"}},"prettier@2.8.8":{},"prettier@3.6.2":{},"pretty-format@27.5.1":{"dependencies":{"ansi-regex":"5.0.1","ansi-styles":"5.2.0","react-is":"17.0.2"}},"pretty-format@30.0.5":{"dependencies":{"@jest/schemas":"30.0.5","ansi-styles":"5.2.0","react-is":"18.3.1"}},"process-nextick-args@2.0.1":{"optional":true},"process@0.11.10":{"optional":true},"progress@2.0.3":{"optional":true},"property-information@6.5.0":{},"property-information@7.1.0":{},"protobufjs@7.5.4":{"dependencies":{"@protobufjs/aspromise":"1.1.2","@protobufjs/base64":"1.1.2","@protobufjs/codegen":"2.0.4","@protobufjs/eventemitter":"1.1.0","@protobufjs/fetch":"1.1.0","@protobufjs/float":"1.0.2","@protobufjs/inquire":"1.1.0","@protobufjs/path":"1.1.2","@protobufjs/pool":"1.1.0","@protobufjs/utf8":"1.1.0","@types/node":"22.15.33","long":"5.3.2"}},"proxy-agent@6.5.0":{"dependencies":{"agent-base":"7.1.4","debug":"4.4.3","http-proxy-agent":"7.0.2","https-proxy-agent":"7.0.6","lru-cache":"7.18.3","pac-proxy-agent":"7.2.0","proxy-from-env":"1.1.0","socks-proxy-agent":"8.0.5"},"transitivePeerDependencies":["supports-color"],"optional":true},"proxy-from-env@1.1.0":{},"psl@1.15.0":{"dependencies":{"punycode":"2.3.1"},"optional":true},"pump@3.0.3":{"dependencies":{"end-of-stream":"1.4.5","once":"1.4.0"}},"pump@3.0.4":{"dependencies":{"end-of-stream":"1.4.5","once":"1.4.0"},"optional":true},"punycode@2.3.1":{},"quansync@0.2.11":{},"query-selector-shadow-dom@1.0.1":{},"querystringify@2.2.0":{"optional":true},"queue-microtask@1.2.3":{},"rc@1.2.8":{"dependencies":{"deep-extend":"0.6.0","ini":"1.3.8","minimist":"1.2.8","strip-json-comments":"2.0.1"}},"react-dom@19.2.0(react@19.2.0)":{"dependencies":{"react":"19.2.0","scheduler":"0.27.0"}},"react-is@17.0.2":{},"react-is@18.3.1":{},"react@19.2.0":{},"read-yaml-file@1.1.0":{"dependencies":{"graceful-fs":"4.2.11","js-yaml":"3.14.1","pify":"4.0.1","strip-bom":"3.0.0"}},"readable-stream@2.3.8":{"dependencies":{"core-util-is":"1.0.3","inherits":"2.0.4","isarray":"1.0.0","process-nextick-args":"2.0.1","safe-buffer":"5.1.2","string_decoder":"1.1.1","util-deprecate":"1.0.2"},"optional":true},"readable-stream@3.6.2":{"dependencies":{"inherits":"2.0.4","string_decoder":"1.3.0","util-deprecate":"1.0.2"}},"readable-stream@4.7.0":{"dependencies":{"abort-controller":"3.0.0","buffer":"6.0.3","events":"3.3.0","process":"0.11.10","string_decoder":"1.3.0"},"optional":true},"readdir-glob@1.1.3":{"dependencies":{"minimatch":"5.1.9"},"opti
onal":true},"readdirp@3.6.0":{"dependencies":{"picomatch":"2.3.1"}},"regex-recursion@6.0.2":{"dependencies":{"regex-utilities":"2.3.0"}},"regex-utilities@2.3.0":{},"regex@6.0.1":{"dependencies":{"regex-utilities":"2.3.0"}},"rehype-autolink-headings@7.1.0":{"dependencies":{"@types/hast":"3.0.4","@ungap/structured-clone":"1.2.1","hast-util-heading-rank":"3.0.0","hast-util-is-element":"3.0.0","unified":"11.0.5","unist-util-visit":"5.0.0"}},"rehype-highlight@7.0.2":{"dependencies":{"@types/hast":"3.0.4","hast-util-to-text":"4.0.2","lowlight":"3.3.0","unist-util-visit":"5.0.0","vfile":"6.0.3"}},"rehype-minify-whitespace@6.0.2":{"dependencies":{"@types/hast":"3.0.4","hast-util-minify-whitespace":"1.0.1"}},"rehype-parse@9.0.1":{"dependencies":{"@types/hast":"3.0.4","hast-util-from-html":"2.0.3","unified":"11.0.5"}},"rehype-raw@7.0.0":{"dependencies":{"@types/hast":"3.0.4","hast-util-raw":"9.1.0","vfile":"6.0.3"}},"rehype-remark@10.0.1":{"dependencies":{"@types/hast":"3.0.4","@types/mdast":"4.0.4","hast-util-to-mdast":"10.1.2","unified":"11.0.5","vfile":"6.0.3"}},"rehype-sanitize@6.0.0":{"dependencies":{"@types/hast":"3.0.4","hast-util-sanitize":"5.0.2"}},"rehype-slug@6.0.0":{"dependencies":{"@types/hast":"3.0.4","github-slugger":"2.0.0","hast-util-heading-rank":"3.0.0","hast-util-to-string":"3.0.1","unist-util-visit":"5.0.0"}},"rehype-stringify@10.0.1":{"dependencies":{"@types/hast":"3.0.4","hast-util-to-html":"9.0.5","unified":"11.0.5"}},"remark-frontmatter@5.0.0":{"dependencies":{"@types/mdast":"4.0.4","mdast-util-frontmatter":"2.0.1","micromark-extension-frontmatter":"2.0.0","unified":"11.0.5"},"transitivePeerDependencies":["supports-color"]},"remark-gfm@4.0.1":{"dependencies":{"@types/mdast":"4.0.4","mdast-util-gfm":"3.0.0","micromark-extension-gfm":"3.0.0","remark-parse":"11.0.0","remark-stringify":"11.0.0","unified":"11.0.5"},"transitivePeerDependencies":["supports-color"]},"remark-parse@11.0.0":{"dependencies":{"@types/mdast":"4.0.4","mdast-util-from-markdown":"2.0.2","micromark-util-types":"2.0.1","unified":"11.0.5"},"transitivePeerDependencies":["supports-color"]},"remark-rehype@11.1.2":{"dependencies":{"@types/hast":"3.0.4","@types/mdast":"4.0.4","mdast-util-to-hast":"13.2.0","unified":"11.0.5","vfile":"6.0.3"}},"remark-stringify@11.0.0":{"dependencies":{"@types/mdast":"4.0.4","mdast-util-to-markdown":"2.1.2","unified":"11.0.5"}},"require-directory@2.1.1":{},"require-from-string@2.0.2":{},"requires-port@1.0.0":{"optional":true},"resolve-from@5.0.0":{},"resolve-pkg-maps@1.0.0":{"optional":true},"resolve.exports@2.0.3":{},"resq@1.11.0":{"dependencies":{"fast-deep-equal":"2.0.1"},"optional":true},"restore-cursor@3.1.0":{"dependencies":{"onetime":"5.1.2","signal-exit":"3.0.7"}},"reusify@1.0.4":{},"rgb2hex@0.2.5":{"optional":true},"robust-predicates@3.0.2":{},"rolldown@1.0.0-rc.17":{"dependencies":{"@oxc-project/types":"0.127.0","@rolldown/pluginutils":"1.0.0-rc.17"},"optionalDependencies":{"@rolldown/binding-android-arm64":"1.0.0-rc.17","@rolldown/binding-darwin-arm64":"1.0.0-rc.17","@rolldown/binding-darwin-x64":"1.0.0-rc.17","@rolldown/binding-freebsd-x64":"1.0.0-rc.17","@rolldown/binding-linux-arm-gnueabihf":"1.0.0-rc.17","@rolldown/binding-linux-arm64-gnu":"1.0.0-rc.17","@rolldown/binding-linux-arm64-musl":"1.0.0-rc.17","@rolldown/binding-linux-ppc64-gnu":"1.0.0-rc.17","@rolldown/binding-linux-s390x-gnu":"1.0.0-rc.17","@rolldown/binding-linux-x64-gnu":"1.0.0-rc.17","@rolldown/binding-linux-x64-musl":"1.0.0-rc.17","@rolldown/binding-openharmony-arm64":"1.0.0-rc.17","@rolldown/binding-wasm
32-wasi":"1.0.0-rc.17","@rolldown/binding-win32-arm64-msvc":"1.0.0-rc.17","@rolldown/binding-win32-x64-msvc":"1.0.0-rc.17"}},"rollup@4.53.2":{"dependencies":{"@types/estree":"1.0.8"},"optionalDependencies":{"@rollup/rollup-android-arm-eabi":"4.53.2","@rollup/rollup-android-arm64":"4.53.2","@rollup/rollup-darwin-arm64":"4.53.2","@rollup/rollup-darwin-x64":"4.53.2","@rollup/rollup-freebsd-arm64":"4.53.2","@rollup/rollup-freebsd-x64":"4.53.2","@rollup/rollup-linux-arm-gnueabihf":"4.53.2","@rollup/rollup-linux-arm-musleabihf":"4.53.2","@rollup/rollup-linux-arm64-gnu":"4.53.2","@rollup/rollup-linux-arm64-musl":"4.53.2","@rollup/rollup-linux-loong64-gnu":"4.53.2","@rollup/rollup-linux-ppc64-gnu":"4.53.2","@rollup/rollup-linux-riscv64-gnu":"4.53.2","@rollup/rollup-linux-riscv64-musl":"4.53.2","@rollup/rollup-linux-s390x-gnu":"4.53.2","@rollup/rollup-linux-x64-gnu":"4.53.2","@rollup/rollup-linux-x64-musl":"4.53.2","@rollup/rollup-openharmony-arm64":"4.53.2","@rollup/rollup-win32-arm64-msvc":"4.53.2","@rollup/rollup-win32-ia32-msvc":"4.53.2","@rollup/rollup-win32-x64-gnu":"4.53.2","@rollup/rollup-win32-x64-msvc":"4.53.2","fsevents":"2.3.3"}},"rou3@0.8.1":{},"roughjs@4.6.6":{"dependencies":{"hachure-fill":"0.5.2","path-data-parser":"0.1.0","points-on-curve":"0.2.0","points-on-path":"0.2.1"}},"rrweb-cssom@0.8.0":{},"run-parallel@1.2.0":{"dependencies":{"queue-microtask":"1.2.3"}},"rw@1.3.3":{},"rxjs@7.8.2":{"dependencies":{"tslib":"2.8.1"},"optional":true},"safaridriver@0.1.2":{"optional":true},"safe-buffer@5.1.2":{"optional":true},"safe-buffer@5.2.1":{},"safer-buffer@2.1.2":{},"sass-embedded-android-arm64@1.89.2":{"optional":true},"sass-embedded-android-arm@1.89.2":{"optional":true},"sass-embedded-android-riscv64@1.89.2":{"optional":true},"sass-embedded-android-x64@1.89.2":{"optional":true},"sass-embedded-darwin-arm64@1.89.2":{"optional":true},"sass-embedded-darwin-x64@1.89.2":{"optional":true},"sass-embedded-linux-arm64@1.89.2":{"optional":true},"sass-embedded-linux-arm@1.89.2":{"optional":true},"sass-embedded-linux-musl-arm64@1.89.2":{"optional":true},"sass-embedded-linux-musl-arm@1.89.2":{"optional":true},"sass-embedded-linux-musl-riscv64@1.89.2":{"optional":true},"sass-embedded-linux-musl-x64@1.89.2":{"optional":true},"sass-embedded-linux-riscv64@1.89.2":{"optional":true},"sass-embedded-linux-x64@1.89.2":{"optional":true},"sass-embedded-win32-arm64@1.89.2":{"optional":true},"sass-embedded-win32-x64@1.89.2":{"optional":true},"sass-embedded@1.89.2":{"dependencies":{"@bufbuild/protobuf":"2.12.0","buffer-builder":"0.2.0","colorjs.io":"0.5.2","immutable":"5.1.5","rxjs":"7.8.2","supports-color":"8.1.1","sync-child-process":"1.0.2","varint":"6.0.0"},"optionalDependencies":{"sass-embedded-android-arm":"1.89.2","sass-embedded-android-arm64":"1.89.2","sass-embedded-android-riscv64":"1.89.2","sass-embedded-android-x64":"1.89.2","sass-embedded-darwin-arm64":"1.89.2","sass-embedded-darwin-x64":"1.89.2","sass-embedded-linux-arm":"1.89.2","sass-embedded-linux-arm64":"1.89.2","sass-embedded-linux-musl-arm":"1.89.2","sass-embedded-linux-musl-arm64":"1.89.2","sass-embedded-linux-musl-riscv64":"1.89.2","sass-embedded-linux-musl-x64":"1.89.2","sass-embedded-linux-riscv64":"1.89.2","sass-embedded-linux-x64":"1.89.2","sass-embedded-win32-arm64":"1.89.2","sass-embedded-win32-x64":"1.89.2"},"optional":true},"saxes@6.0.0":{"dependencies":{"xmlchars":"2.2.0"}},"scheduler@0.27.0":{},"schema-utils@4.3.3":{"dependencies":{"@types/json-schema":"7.0.15","ajv":"8.20.0","ajv-formats":"2.1.1(ajv@8.20.0)","ajv-keywords":"5.1.0(ajv
@8.20.0)"},"optional":true},"semver@6.3.1":{},"semver@7.7.2":{},"semver@7.7.3":{},"semver@7.7.4":{"optional":true},"serialize-error@11.0.3":{"dependencies":{"type-fest":"2.19.0"},"optional":true},"seroval-plugins@1.5.4(seroval@1.5.4)":{"dependencies":{"seroval":"1.5.4"}},"seroval@1.5.4":{},"setimmediate@1.0.5":{"optional":true},"sharp@0.34.5":{"dependencies":{"@img/colour":"1.1.0","detect-libc":"2.1.2","semver":"7.7.3"},"optionalDependencies":{"@img/sharp-darwin-arm64":"0.34.5","@img/sharp-darwin-x64":"0.34.5","@img/sharp-libvips-darwin-arm64":"1.2.4","@img/sharp-libvips-darwin-x64":"1.2.4","@img/sharp-libvips-linux-arm":"1.2.4","@img/sharp-libvips-linux-arm64":"1.2.4","@img/sharp-libvips-linux-ppc64":"1.2.4","@img/sharp-libvips-linux-riscv64":"1.2.4","@img/sharp-libvips-linux-s390x":"1.2.4","@img/sharp-libvips-linux-x64":"1.2.4","@img/sharp-libvips-linuxmusl-arm64":"1.2.4","@img/sharp-libvips-linuxmusl-x64":"1.2.4","@img/sharp-linux-arm":"0.34.5","@img/sharp-linux-arm64":"0.34.5","@img/sharp-linux-ppc64":"0.34.5","@img/sharp-linux-riscv64":"0.34.5","@img/sharp-linux-s390x":"0.34.5","@img/sharp-linux-x64":"0.34.5","@img/sharp-linuxmusl-arm64":"0.34.5","@img/sharp-linuxmusl-x64":"0.34.5","@img/sharp-wasm32":"0.34.5","@img/sharp-win32-arm64":"0.34.5","@img/sharp-win32-ia32":"0.34.5","@img/sharp-win32-x64":"0.34.5"}},"shebang-command@2.0.0":{"dependencies":{"shebang-regex":"3.0.0"}},"shebang-regex@3.0.0":{},"shiki@3.15.0":{"dependencies":{"@shikijs/core":"3.15.0","@shikijs/engine-javascript":"3.15.0","@shikijs/engine-oniguruma":"3.15.0","@shikijs/langs":"3.15.0","@shikijs/themes":"3.15.0","@shikijs/types":"3.15.0","@shikijs/vscode-textmate":"10.0.2","@types/hast":"3.0.4"}},"siginfo@2.0.0":{},"signal-exit@3.0.7":{},"signal-exit@4.1.0":{},"simple-concat@1.0.1":{},"simple-get@4.0.1":{"dependencies":{"decompress-response":"6.0.0","once":"1.4.0","simple-concat":"1.0.1"}},"sirv@3.0.2":{"dependencies":{"@polka/url":"1.0.0-next.29","mrmime":"2.0.1","totalist":"3.0.1"}},"slash@3.0.0":{},"smart-buffer@4.2.0":{"optional":true},"socks-proxy-agent@8.0.5":{"dependencies":{"agent-base":"7.1.4","debug":"4.4.3","socks":"2.8.8"},"transitivePeerDependencies":["supports-color"],"optional":true},"socks@2.8.8":{"dependencies":{"ip-address":"10.2.0","smart-buffer":"4.2.0"},"optional":true},"source-map-js@1.2.1":{},"source-map-support@0.5.21":{"dependencies":{"buffer-from":"1.1.2","source-map":"0.6.1"},"optional":true},"source-map@0.6.1":{"optional":true},"source-map@0.7.6":{},"space-separated-tokens@2.0.2":{},"spacetrim@0.11.59":{"optional":true},"spawndamnit@3.0.1":{"dependencies":{"cross-spawn":"7.0.6","signal-exit":"4.1.0"}},"split2@4.2.0":{"optional":true},"sprintf-js@1.0.3":{},"srvx@0.11.15":{},"stackback@0.0.2":{},"statuses@2.0.2":{"optional":true},"std-env@3.10.0":{},"std-env@3.9.0":{},"std-env@4.1.0":{},"streamx@2.25.0":{"dependencies":{"events-universal":"1.0.1","fast-fifo":"1.3.2","text-decoder":"1.2.7"},"transitivePeerDependencies":["bare-abort-controller","react-native-b4a"],"optional":true},"strict-event-emitter@0.5.1":{"optional":true},"string-width@4.2.3":{"dependencies":{"emoji-regex":"8.0.0","is-fullwidth-code-point":"3.0.0","strip-ansi":"6.0.1"}},"string-width@5.1.2":{"dependencies":{"eastasianwidth":"0.2.0","emoji-regex":"9.2.2","strip-ansi":"7.1.2"}},"string_decoder@1.1.1":{"dependencies":{"safe-buffer":"5.1.2"},"optional":true},"string_decoder@1.3.0":{"dependencies":{"safe-buffer":"5.2.1"}},"stringify-entities@4.0.4":{"dependencies":{"character-entities-html4":"2.1.0","character-entities-legacy"
:"3.0.0"}},"strip-ansi@6.0.1":{"dependencies":{"ansi-regex":"5.0.1"}},"strip-ansi@7.1.2":{"dependencies":{"ansi-regex":"6.1.0"}},"strip-ansi@7.2.0":{"dependencies":{"ansi-regex":"6.2.2"},"optional":true},"strip-bom@3.0.0":{},"strip-json-comments@2.0.1":{},"strip-literal@3.0.0":{"dependencies":{"js-tokens":"9.0.1"}},"strnum@1.1.2":{"optional":true},"stylis@4.3.6":{},"supports-color@10.2.2":{},"supports-color@7.2.0":{"dependencies":{"has-flag":"4.0.0"}},"supports-color@8.1.1":{"dependencies":{"has-flag":"4.0.0"},"optional":true},"symbol-tree@3.2.4":{},"sync-child-process@1.0.2":{"dependencies":{"sync-message-port":"1.2.0"},"optional":true},"sync-message-port@1.2.0":{"optional":true},"tailwindcss@4.2.4":{},"tapable@2.3.3":{},"tar-fs@2.1.4":{"dependencies":{"chownr":"1.1.4","mkdirp-classic":"0.5.3","pump":"3.0.3","tar-stream":"2.2.0"}},"tar-fs@3.1.2":{"dependencies":{"pump":"3.0.4","tar-stream":"3.2.0"},"optionalDependencies":{"bare-fs":"4.7.1","bare-path":"3.0.0"},"transitivePeerDependencies":["bare-abort-controller","bare-buffer","react-native-b4a"],"optional":true},"tar-stream@2.2.0":{"dependencies":{"bl":"4.1.0","end-of-stream":"1.4.5","fs-constants":"1.0.0","inherits":"2.0.4","readable-stream":"3.6.2"}},"tar-stream@3.2.0":{"dependencies":{"b4a":"1.8.1","bare-fs":"4.7.1","fast-fifo":"1.3.2","streamx":"2.25.0"},"transitivePeerDependencies":["bare-abort-controller","bare-buffer","react-native-b4a"],"optional":true},"tar@6.2.1":{"dependencies":{"chownr":"2.0.0","fs-minipass":"2.1.0","minipass":"5.0.0","minizlib":"2.1.2","mkdirp":"1.0.4","yallist":"4.0.0"}},"teex@1.0.1":{"dependencies":{"streamx":"2.25.0"},"transitivePeerDependencies":["bare-abort-controller","react-native-b4a"],"optional":true},"term-size@2.2.1":{},"terser-webpack-plugin@5.5.0(esbuild@0.27.3)(webpack@5.99.9(esbuild@0.27.3))":{"dependencies":{"@jridgewell/trace-mapping":"0.3.31","jest-worker":"27.5.1","schema-utils":"4.3.3","terser":"5.36.0","webpack":"5.99.9(esbuild@0.27.3)"},"optionalDependencies":{"esbuild":"0.27.3"},"optional":true},"terser@5.36.0":{"dependencies":{"@jridgewell/source-map":"0.3.11","acorn":"8.16.0","commander":"2.20.3","source-map-support":"0.5.21"},"optional":true},"test-exclude@7.0.1":{"dependencies":{"@istanbuljs/schema":"0.1.3","glob":"10.4.5","minimatch":"9.0.5"}},"text-decoder@1.2.7":{"dependencies":{"b4a":"1.8.1"},"transitivePeerDependencies":["react-native-b4a"],"optional":true},"tinybench@2.9.0":{},"tinyexec@0.3.2":{},"tinyexec@1.0.2":{},"tinyglobby@0.2.14":{"dependencies":{"fdir":"6.5.0(picomatch@4.0.4)","picomatch":"4.0.4"}},"tinyglobby@0.2.15":{"dependencies":{"fdir":"6.5.0(picomatch@4.0.4)","picomatch":"4.0.4"}},"tinyglobby@0.2.16":{"dependencies":{"fdir":"6.5.0(picomatch@4.0.4)","picomatch":"4.0.4"}},"tinypool@1.1.1":{},"tinyrainbow@2.0.0":{},"tinyrainbow@3.0.3":{},"tinyrainbow@3.1.0":{},"tinyspy@4.0.3":{},"tldts-core@6.1.52":{},"tldts-core@7.0.19":{},"tldts@6.1.52":{"dependencies":{"tldts-core":"6.1.52"}},"tldts@7.0.19":{"dependencies":{"tldts-core":"7.0.19"}},"tmp@0.2.5":{},"to-regex-range@5.0.1":{"dependencies":{"is-number":"7.0.0"}},"totalist@3.0.1":{},"tough-cookie@4.1.4":{"dependencies":{"psl":"1.15.0","punycode":"2.3.1","universalify":"0.2.0","url-parse":"1.5.10"},"optional":true},"tough-cookie@5.1.2":{"dependencies":{"tldts":"6.1.52"}},"tough-cookie@6.0.0":{"dependencies":{"tldts":"7.0.19"}},"tr46@5.1.1":{"dependencies":{"punycode":"2.3.1"}},"tr46@6.0.0":{"dependencies":{"punycode":"2.3.1"}},"tree-kill@1.2.2":{},"trim-lines@3.0.1":{},"trim-trailing-lines@2.1.0":{},"trough@2.2.0":{},"ts
-algebra@2.0.0":{},"ts-dedent@2.2.0":{},"tsconfig-paths@4.2.0":{"dependencies":{"json5":"2.2.3","minimist":"1.2.8","strip-bom":"3.0.0"}},"tslib@2.8.1":{},"tsx@4.20.5":{"dependencies":{"esbuild":"0.25.12","get-tsconfig":"4.14.0"},"optionalDependencies":{"fsevents":"2.3.3"},"optional":true},"tunnel-agent@0.6.0":{"dependencies":{"safe-buffer":"5.2.1"}},"type-fest@2.19.0":{"optional":true},"type-fest@4.26.0":{"optional":true},"type-fest@4.41.0":{"optional":true},"typescript@5.8.3":{},"typescript@5.9.3":{},"ufo@1.6.1":{},"undici-types@6.21.0":{},"undici-types@7.16.0":{"optional":true},"undici@7.16.0":{},"undici@7.24.8":{},"undici@7.25.0":{"optional":true},"unenv@2.0.0-rc.24":{"dependencies":{"pathe":"2.0.3"}},"unified@11.0.5":{"dependencies":{"@types/unist":"3.0.3","bail":"2.0.2","devlop":"1.1.0","extend":"3.0.2","is-plain-obj":"4.1.0","trough":"2.2.0","vfile":"6.0.3"}},"unist-util-find-after@5.0.0":{"dependencies":{"@types/unist":"3.0.3","unist-util-is":"6.0.0"}},"unist-util-is@6.0.0":{"dependencies":{"@types/unist":"3.0.3"}},"unist-util-position@5.0.0":{"dependencies":{"@types/unist":"3.0.3"}},"unist-util-stringify-position@4.0.0":{"dependencies":{"@types/unist":"3.0.3"}},"unist-util-visit-parents@6.0.1":{"dependencies":{"@types/unist":"3.0.3","unist-util-is":"6.0.0"}},"unist-util-visit@5.0.0":{"dependencies":{"@types/unist":"3.0.3","unist-util-is":"6.0.0","unist-util-visit-parents":"6.0.1"}},"universalify@0.1.2":{},"universalify@0.2.0":{"optional":true},"universalify@2.0.1":{},"unplugin@3.0.0":{"dependencies":{"@jridgewell/remapping":"2.3.5","picomatch":"4.0.3","webpack-virtual-modules":"0.6.2"}},"update-browserslist-db@1.1.3(browserslist@4.25.3)":{"dependencies":{"browserslist":"4.25.3","escalade":"3.2.0","picocolors":"1.1.1"}},"update-browserslist-db@1.2.3(browserslist@4.28.2)":{"dependencies":{"browserslist":"4.28.2","escalade":"3.2.0","picocolors":"1.1.1"},"optional":true},"url-parse@1.5.10":{"dependencies":{"querystringify":"2.2.0","requires-port":"1.0.0"},"optional":true},"urlpattern-polyfill@10.1.0":{"optional":true},"use-sync-external-store@1.6.0(react@19.2.0)":{"dependencies":{"react":"19.2.0"}},"userhome@1.0.1":{"optional":true},"util-deprecate@1.0.2":{},"uuid@11.1.0":{},"varint@6.0.0":{"optional":true},"vfile-location@5.0.3":{"dependencies":{"@types/unist":"3.0.3","vfile":"6.0.3"}},"vfile-message@4.0.2":{"dependencies":{"@types/unist":"3.0.3","unist-util-stringify-position":"4.0.0"}},"vfile@6.0.3":{"dependencies":{"@types/unist":"3.0.3","vfile-message":"4.0.2"}},"vite-node@3.2.4(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)":{"dependencies":{"cac":"6.7.14","debug":"4.4.3","es-module-lexer":"1.7.0","pathe":"2.0.3","vite":"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"},"transitivePeerDependencies":["@types/node","jiti","less","lightningcss","sass","sass-embedded","stylus","sugarss","supports-color","terser","tsx","yaml"]},"vite-plugin-static-copy@4.1.0(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))":{"dependencies":{"chokidar":"3.6.0","p-map":"7.0.4","picocolors":"1.1.1","tinyglobby":"0.2.16","vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}},"vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)":{"dependencies":{"esb
uild":"0.25.12","fdir":"6.5.0(picomatch@4.0.4)","picomatch":"4.0.4","postcss":"8.5.6","rollup":"4.53.2","tinyglobby":"0.2.16"},"optionalDependencies":{"@types/node":"24.10.2","fsevents":"2.3.3","jiti":"2.6.1","lightningcss":"1.32.0","sass-embedded":"1.89.2","terser":"5.36.0","tsx":"4.20.5","yaml":"2.8.1"}},"vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)":{"dependencies":{"lightningcss":"1.32.0","picomatch":"4.0.4","postcss":"8.5.14","rolldown":"1.0.0-rc.17","tinyglobby":"0.2.16"},"optionalDependencies":{"@types/node":"22.15.33","esbuild":"0.27.3","fsevents":"2.3.3","jiti":"2.6.1","sass-embedded":"1.89.2","terser":"5.36.0","tsx":"4.20.5","yaml":"2.8.1"}},"vitefu@1.1.1(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))":{"optionalDependencies":{"vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)"}},"vitest@3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@26.1.0)(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)":{"dependencies":{"@types/chai":"5.2.2","@vitest/expect":"3.2.4","@vitest/mocker":"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))","@vitest/pretty-format":"3.2.4","@vitest/runner":"3.2.4","@vitest/snapshot":"3.2.4","@vitest/spy":"3.2.4","@vitest/utils":"3.2.4","chai":"5.3.3","debug":"4.4.1","expect-type":"1.2.2","magic-string":"0.30.18","pathe":"2.0.3","picomatch":"4.0.3","std-env":"3.9.0","tinybench":"2.9.0","tinyexec":"0.3.2","tinyglobby":"0.2.14","tinypool":"1.1.1","tinyrainbow":"2.0.0","vite":"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)","vite-node":"3.2.4(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)","why-is-node-running":"2.3.0"},"optionalDependencies":{"@types/debug":"4.1.12","@types/node":"24.10.2","@vitest/browser":"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.8.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)","happy-dom":"18.0.1","jsdom":"26.1.0"},"transitivePeerDependencies":["jiti","less","lightningcss","msw","sass","sass-embedded","stylus","sugarss","supports-color","terser","tsx","yaml"]},"vitest@3.2.4(@types/debug@4.1.12)(@types/node@24.10.2)(@vitest/browser@3.2.4)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)":{"dependencies":{"@types/chai":"5.2.2","@vitest/expect":"3.2.4","@vitest/mocker":"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))","@vitest/pretty-format":"3.2.4","@vitest/runner":"3.2.4","@vitest/snapshot":"3.2.4","@vitest/spy":"3.2.4","@vitest/utils":"3.2.4","chai":"5.3.3","debug":"4.4.1","expect-type":"1.2.2","magic-string":"0.30.18","pathe":"2.0.3","picomatch":"4.0.3","std-env":"3.9.0","tinybench":"2.9.0","ti
nyexec":"0.3.2","tinyglobby":"0.2.14","tinypool":"1.1.1","tinyrainbow":"2.0.0","vite":"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)","vite-node":"3.2.4(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)","why-is-node-running":"2.3.0"},"optionalDependencies":{"@types/debug":"4.1.12","@types/node":"24.10.2","@vitest/browser":"3.2.4(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(playwright@1.55.0)(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))(vitest@3.2.4)(webdriverio@9.2.1)","happy-dom":"18.0.1","jsdom":"27.3.0(postcss@8.5.14)"},"transitivePeerDependencies":["jiti","less","lightningcss","msw","sass","sass-embedded","stylus","sugarss","supports-color","terser","tsx","yaml"]},"vitest@4.0.18(@opentelemetry/api@1.9.0)(@types/node@24.10.2)(happy-dom@18.0.1)(jiti@2.6.1)(jsdom@27.3.0(postcss@8.5.14))(lightningcss@1.32.0)(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)":{"dependencies":{"@vitest/expect":"4.0.18","@vitest/mocker":"4.0.18(msw@2.10.2(@types/node@24.10.2)(typescript@5.9.3))(vite@7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))","@vitest/pretty-format":"4.0.18","@vitest/runner":"4.0.18","@vitest/snapshot":"4.0.18","@vitest/spy":"4.0.18","@vitest/utils":"4.0.18","es-module-lexer":"1.7.0","expect-type":"1.2.2","magic-string":"0.30.21","obug":"2.1.1","pathe":"2.0.3","picomatch":"4.0.3","std-env":"3.10.0","tinybench":"2.9.0","tinyexec":"1.0.2","tinyglobby":"0.2.15","tinyrainbow":"3.0.3","vite":"7.2.7(@types/node@24.10.2)(jiti@2.6.1)(lightningcss@1.32.0)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)","why-is-node-running":"2.3.0"},"optionalDependencies":{"@opentelemetry/api":"1.9.0","@types/node":"24.10.2","happy-dom":"18.0.1","jsdom":"27.3.0(postcss@8.5.14)"},"transitivePeerDependencies":["jiti","less","lightningcss","msw","sass","sass-embedded","stylus","sugarss","terser","tsx","yaml"]},"vitest@4.1.5(@opentelemetry/api@1.9.0)(@types/node@22.15.33)(@vitest/coverage-v8@4.1.5)(happy-dom@18.0.1)(jsdom@27.3.0(postcss@8.5.14))(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))":{"dependencies":{"@vitest/expect":"4.1.5","@vitest/mocker":"4.1.5(msw@2.10.2(@types/node@22.15.33)(typescript@5.8.3))(vite@8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1))","@vitest/pretty-format":"4.1.5","@vitest/runner":"4.1.5","@vitest/snapshot":"4.1.5","@vitest/spy":"4.1.5","@vitest/utils":"4.1.5","es-module-lexer":"2.1.0","expect-type":"1.3.0","magic-string":"0.30.21","obug":"2.1.1","pathe":"2.0.3","picomatch":"4.0.4","std-env":"4.1.0","tinybench":"2.9.0","tinyexec":"1.0.2","tinyglobby":"0.2.16","tinyrainbow":"3.1.0","vite":"8.0.10(@types/node@22.15.33)(esbuild@0.27.3)(jiti@2.6.1)(sass-embedded@1.89.2)(terser@5.36.0)(tsx@4.20.5)(yaml@2.8.1)","why-is-node-running":"2.3.0"},"optionalDependencies":{"@opentelemetry/api":"1.9.0","@types/node":"22.15.33","@vitest/coverage-v8":"4.1.5(@vitest/browser@4.1.5)(vitest@4.1.5)","happy-dom":"18.0.1","jsdom":"27.3.0(postcss@8.5.14)"},"transitivePeerDependencies":["msw"]},"vscode-jsonrpc@8.2.0":{},"vscode-languageserver-prot
ocol@3.17.5":{"dependencies":{"vscode-jsonrpc":"8.2.0","vscode-languageserver-types":"3.17.5"}},"vscode-languageserver-textdocument@1.0.12":{},"vscode-languageserver-types@3.17.5":{},"vscode-languageserver@9.0.1":{"dependencies":{"vscode-languageserver-protocol":"3.17.5"}},"vscode-uri@3.0.8":{},"w3c-xmlserializer@5.0.0":{"dependencies":{"xml-name-validator":"5.0.0"}},"wait-port@1.1.0":{"dependencies":{"chalk":"4.1.2","commander":"9.5.0","debug":"4.4.3"},"transitivePeerDependencies":["supports-color"],"optional":true},"watchpack@2.5.1":{"dependencies":{"glob-to-regexp":"0.4.1","graceful-fs":"4.2.11"},"optional":true},"wcwidth@1.0.1":{"dependencies":{"defaults":"1.0.4"}},"web-namespaces@2.0.1":{},"web-streams-polyfill@3.3.3":{"optional":true},"web-vitals@4.2.4":{},"web-vitals@5.1.0":{},"webdriver@9.2.0":{"dependencies":{"@types/node":"20.19.39","@types/ws":"8.18.1","@wdio/config":"9.1.3","@wdio/logger":"9.1.3","@wdio/protocols":"9.2.0","@wdio/types":"9.1.3","@wdio/utils":"9.1.3","deepmerge-ts":"7.1.5","ws":"8.20.0"},"transitivePeerDependencies":["bare-abort-controller","bare-buffer","bufferutil","react-native-b4a","supports-color","utf-8-validate"],"optional":true},"webdriverio@9.2.1":{"dependencies":{"@types/node":"20.19.39","@types/sinonjs__fake-timers":"8.1.5","@wdio/config":"9.1.3","@wdio/logger":"9.1.3","@wdio/protocols":"9.2.0","@wdio/repl":"9.0.8","@wdio/types":"9.1.3","@wdio/utils":"9.1.3","archiver":"7.0.1","aria-query":"5.3.2","cheerio":"1.2.0","css-shorthand-properties":"1.1.2","css-value":"0.0.1","grapheme-splitter":"1.0.4","htmlfy":"0.3.2","import-meta-resolve":"4.2.0","is-plain-obj":"4.1.0","jszip":"3.10.1","lodash.clonedeep":"4.5.0","lodash.zip":"4.2.0","minimatch":"9.0.9","query-selector-shadow-dom":"1.0.1","resq":"1.11.0","rgb2hex":"0.2.5","serialize-error":"11.0.3","urlpattern-polyfill":"10.1.0","webdriver":"9.2.0"},"transitivePeerDependencies":["bare-abort-controller","bare-buffer","bufferutil","react-native-b4a","supports-color","utf-8-validate"],"optional":true},"webidl-conversions@7.0.0":{},"webidl-conversions@8.0.0":{},"webpack-sources@3.4.1":{"optional":true},"webpack-virtual-modules@0.6.2":{},"webpack@5.99.9(esbuild@0.27.3)":{"dependencies":{"@types/eslint-scope":"3.7.7","@types/estree":"1.0.9","@types/json-schema":"7.0.15","@webassemblyjs/ast":"1.14.1","@webassemblyjs/wasm-edit":"1.14.1","@webassemblyjs/wasm-parser":"1.14.1","acorn":"8.16.0","browserslist":"4.28.2","chrome-trace-event":"1.0.4","enhanced-resolve":"5.21.0","es-module-lexer":"1.7.0","eslint-scope":"5.1.1","events":"3.3.0","glob-to-regexp":"0.4.1","graceful-fs":"4.2.11","json-parse-even-better-errors":"2.3.1","loader-runner":"4.3.2","mime-types":"2.1.35","neo-async":"2.6.2","schema-utils":"4.3.3","tapable":"2.3.3","terser-webpack-plugin":"5.5.0(esbuild@0.27.3)(webpack@5.99.9(esbuild@0.27.3))","watchpack":"2.5.1","webpack-sources":"3.4.1"},"transitivePeerDependencies":["@swc/core","esbuild","uglify-js"],"optional":true},"whatwg-encoding@3.1.1":{"dependencies":{"iconv-lite":"0.6.3"}},"whatwg-mimetype@3.0.0":{"optional":true},"whatwg-mimetype@4.0.0":{},"whatwg-url@14.2.0":{"dependencies":{"tr46":"5.1.1","webidl-conversions":"7.0.0"}},"whatwg-url@15.1.0":{"dependencies":{"tr46":"6.0.0","webidl-conversions":"8.0.0"}},"which@2.0.2":{"dependencies":{"isexe":"2.0.0"}},"which@4.0.0":{"dependencies":{"isexe":"3.1.5"},"optional":true},"why-is-node-running@2.3.0":{"dependencies":{"siginfo":"2.0.0","stackback":"0.0.2"}},"workerd@1.20260504.1":{"optionalDependencies":{"@cloudflare/workerd-darwin-64":"1.20260504.1","@c
loudflare/workerd-darwin-arm64":"1.20260504.1","@cloudflare/workerd-linux-64":"1.20260504.1","@cloudflare/workerd-linux-arm64":"1.20260504.1","@cloudflare/workerd-windows-64":"1.20260504.1"}},"wrangler@4.88.0":{"dependencies":{"@cloudflare/kv-asset-handler":"0.5.0","@cloudflare/unenv-preset":"2.16.1(unenv@2.0.0-rc.24)(workerd@1.20260504.1)","blake3-wasm":"2.1.5","esbuild":"0.27.3","miniflare":"4.20260504.0","path-to-regexp":"6.3.0","unenv":"2.0.0-rc.24","workerd":"1.20260504.1"},"optionalDependencies":{"fsevents":"2.3.3"},"transitivePeerDependencies":["bufferutil","utf-8-validate"]},"wrap-ansi@6.2.0":{"dependencies":{"ansi-styles":"4.3.0","string-width":"4.2.3","strip-ansi":"6.0.1"},"optional":true},"wrap-ansi@7.0.0":{"dependencies":{"ansi-styles":"4.3.0","string-width":"4.2.3","strip-ansi":"6.0.1"}},"wrap-ansi@8.1.0":{"dependencies":{"ansi-styles":"6.2.1","string-width":"5.1.2","strip-ansi":"7.1.2"}},"wrappy@1.0.2":{},"ws@8.18.0":{},"ws@8.18.3":{},"ws@8.20.0":{},"xml-name-validator@5.0.0":{},"xmlbuilder2@4.0.3":{"dependencies":{"@oozcitak/dom":"2.0.2","@oozcitak/infra":"2.0.2","@oozcitak/util":"10.0.0","js-yaml":"4.1.1"}},"xmlchars@2.2.0":{},"y18n@5.0.8":{},"yallist@3.1.1":{},"yallist@4.0.0":{},"yaml@2.8.1":{},"yargs-parser@21.1.1":{},"yargs-parser@22.0.0":{},"yargs@17.7.2":{"dependencies":{"cliui":"8.0.1","escalade":"3.2.0","get-caller-file":"2.0.5","require-directory":"2.1.1","string-width":"4.2.3","y18n":"5.0.8","yargs-parser":"21.1.1"}},"yauzl@2.10.0":{"dependencies":{"buffer-crc32":"0.2.13","fd-slicer":"1.1.0"},"optional":true},"yoctocolors-cjs@2.1.3":{"optional":true},"youch-core@0.3.3":{"dependencies":{"@poppinss/exception":"1.2.2","error-stack-parser-es":"1.0.5"}},"youch@4.1.0-beta.10":{"dependencies":{"@poppinss/colors":"4.1.5","@poppinss/dumper":"0.6.5","@speed-highlight/core":"1.2.12","cookie":"1.0.2","youch-core":"0.3.3"}},"zip-stream@6.0.1":{"dependencies":{"archiver-utils":"5.0.2","compress-commons":"6.0.2","readable-stream":"4.7.0"},"optional":true},"zod@3.25.76":{},"zwitch@2.0.4":{}}} ================================================ FILE: packages/engine/benches/physical_layout/backend_kv.rs ================================================ use std::sync::Arc; use criterion::{black_box, BatchSize, Criterion}; use lix_engine::storage_bench::{self, StorageBenchSelectivity}; use lix_engine::Backend; use tokio::runtime::Runtime; use crate::{Args, RocksDbBenchBackend, SqliteBenchBackend}; type BackendFactory = fn() -> Arc; #[derive(Clone, Copy)] struct BackendProfile { name: &'static str, create: BackendFactory, } pub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) { for profile in physical_backends() { bench_fast(c, runtime, args, profile); bench_full(c, runtime, args, profile); } } fn bench_fast(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) { let mut group = c.benchmark_group(format!("physical_layout/backend_kv/fast/{}", profile.name)); group.bench_function("write_batch_put/10k", |b| { b.iter_batched( || (profile.create)(), |backend| { black_box( runtime .block_on(storage_bench::storage_api_write_kv_batch_puts( backend, args.rows, )) .expect("physical_layout/backend_kv write_batch_put succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("mixed_put_delete/10k", |b| { b.iter_batched( || (profile.create)(), |backend| { black_box( runtime .block_on(storage_bench::storage_api_write_kv_batch_mixed_put_delete( backend, args.rows, )) .expect("physical_layout/backend_kv mixed_put_delete succeeds"), ) }, BatchSize::LargeInput, 
) }); group.bench_function("get_values_hit/10k", |b| { b.iter_batched( || prepare_read(runtime, profile, args.rows), |fixture| { black_box( runtime .block_on(storage_bench::storage_api_get_values_hits_prepared( &fixture, args.rows, )) .expect("physical_layout/backend_kv get_values_hit succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("scan_keys_prefix/10k", |b| { b.iter_batched( || prepare_read(runtime, profile, args.rows), |fixture| { black_box( runtime .block_on(storage_bench::storage_api_scan_keys_prefix_prepared( &fixture, args.rows, )) .expect("physical_layout/backend_kv scan_keys_prefix succeeds"), ) }, BatchSize::LargeInput, ) }); group.finish(); } fn bench_full(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) { let mut group = c.benchmark_group(format!("physical_layout/backend_kv/full/{}", profile.name)); for rows in [1_000usize, 10_000, 50_000] { group.bench_function(format!("write_batch_put/{}", label(rows)), |b| { b.iter_batched( || (profile.create)(), |backend| { black_box( runtime .block_on(storage_bench::storage_api_write_kv_batch_puts( backend, rows, )) .expect("physical_layout/backend_kv full write_batch_put succeeds"), ) }, BatchSize::LargeInput, ) }); } group.bench_function("write_batch_value_size_1k/10k", |b| { b.iter_batched( || (profile.create)(), |backend| { black_box( runtime .block_on(storage_bench::storage_api_write_kv_batch_value_size( backend, args.rows, 1024, )) .expect("physical_layout/backend_kv value_size_1k succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("get_values_miss/10k", |b| { b.iter_batched( || prepare_read(runtime, profile, args.rows), |fixture| { black_box( runtime .block_on(storage_bench::storage_api_get_values_misses_prepared( &fixture, args.rows, )) .expect("physical_layout/backend_kv get_values_miss succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("get_values_mixed_hit_miss/10k", |b| { b.iter_batched( || prepare_read(runtime, profile, args.rows), |fixture| { black_box( runtime .block_on( storage_bench::storage_api_get_values_mixed_hit_miss_prepared( &fixture, args.rows, ), ) .expect("physical_layout/backend_kv get_values_mixed succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("scan_keys_after_pages/10k", |b| { b.iter_batched( || prepare_read(runtime, profile, args.rows), |fixture| { black_box( runtime .block_on(storage_bench::storage_api_scan_keys_after_pages_prepared( &fixture, 1024, )) .expect("physical_layout/backend_kv scan_keys_after_pages succeeds"), ) }, BatchSize::LargeInput, ) }); for selectivity in [ StorageBenchSelectivity::Percent1, StorageBenchSelectivity::Percent10, ] { let label = match selectivity { StorageBenchSelectivity::Percent1 => "1pct", StorageBenchSelectivity::Percent10 => "10pct", StorageBenchSelectivity::Percent100 => "100pct", }; group.bench_function(format!("scan_keys_selective_prefix_{label}/10k"), |b| { b.iter_batched( || prepare_selective_scan(runtime, profile, args.rows, selectivity), |fixture| { black_box( runtime .block_on( storage_bench::storage_api_scan_keys_selective_prefix_prepared( &fixture, selectivity, ), ) .expect("physical_layout/backend_kv selective scan succeeds"), ) }, BatchSize::LargeInput, ) }); } group.finish(); } fn prepare_read( runtime: &Runtime, profile: BackendProfile, rows: usize, ) -> storage_bench::StorageApiFixture { runtime .block_on(storage_bench::prepare_storage_api_read( (profile.create)(), rows, )) .expect("prepare physical_layout/backend_kv read") } fn prepare_selective_scan( runtime: 
&Runtime, profile: BackendProfile, rows: usize, selectivity: StorageBenchSelectivity, ) -> storage_bench::StorageApiFixture { runtime .block_on(storage_bench::prepare_storage_api_selective_scan( (profile.create)(), rows, selectivity, )) .expect("prepare physical_layout/backend_kv selective scan") } fn physical_backends() -> [BackendProfile; 2] { [ BackendProfile { name: "sqlite_tempfile", create: sqlite_tempfile_backend, }, BackendProfile { name: "rocksdb_tempdir", create: rocksdb_backend, }, ] } fn sqlite_tempfile_backend() -> Arc { Arc::new(SqliteBenchBackend::tempfile().expect("create sqlite tempfile bench backend")) } fn rocksdb_backend() -> Arc { Arc::new(RocksDbBenchBackend::new().expect("create rocksdb bench backend")) } fn label(rows: usize) -> &'static str { match rows { 1_000 => "1k", 10_000 => "10k", 50_000 => "50k", _ => "rows", } } ================================================ FILE: packages/engine/benches/physical_layout/changelog.rs ================================================ use std::sync::Arc; use std::time::Duration; use criterion::{black_box, BatchSize, Criterion}; use lix_engine::storage_bench::{self, StorageBenchConfig}; use lix_engine::Backend; use tokio::runtime::Runtime; use crate::{Args, RocksDbBenchBackend, SqliteBenchBackend}; type BackendFactory = fn() -> Arc; #[derive(Clone, Copy)] struct BackendProfile { name: &'static str, create: BackendFactory, } pub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) { for profile in physical_backends() { bench_smoke(c, runtime, args, profile); bench_fast(c, runtime, args, profile); bench_full(c, runtime, args, profile); } } fn bench_smoke(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) { let smoke = args.config().with_rows(1_000); let mut group = c.benchmark_group(format!("physical_layout/changelog/smoke/{}", profile.name)); group.sample_size(10); group.warm_up_time(Duration::from_millis(250)); group.measurement_time(Duration::from_secs(1)); group.bench_function("append_changes/1k", |b| { b.iter_batched( || prepare_append(runtime, smoke, profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_append_changes_prepared( &backend, &fixture, )) .expect("physical_layout/changelog smoke append_changes succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("scan_change_set/1k", |b| { b.iter_batched( || prepare_read(runtime, smoke, profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_scan_change_set_prepared( &backend, &fixture, )) .expect("physical_layout/changelog smoke scan_change_set succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("load_changes_hit/1k", |b| { b.iter_batched( || prepare_read(runtime, smoke, profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_load_changes_hit_prepared( &backend, &fixture, )) .expect("physical_layout/changelog smoke load_changes_hit succeeds"), ) }, BatchSize::LargeInput, ) }); group.finish(); } fn bench_fast(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) { let mut group = c.benchmark_group(format!("physical_layout/changelog/fast/{}", profile.name)); group.bench_function("append_changes/10k", |b| { b.iter_batched( || prepare_append(runtime, args.config(), profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_append_changes_prepared( &backend, &fixture, )) .expect("physical_layout/changelog append_changes succeeds"), ) }, BatchSize::LargeInput, ) }); 
group.bench_function("scan_change_set/10k", |b| { b.iter_batched( || prepare_read(runtime, args.config(), profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_scan_change_set_prepared( &backend, &fixture, )) .expect("physical_layout/changelog scan_change_set succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("load_changes_hit/10k", |b| { b.iter_batched( || prepare_read(runtime, args.config(), profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_load_changes_hit_prepared( &backend, &fixture, )) .expect("physical_layout/changelog load_changes_hit succeeds"), ) }, BatchSize::LargeInput, ) }); group.finish(); } fn bench_full(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) { let mut group = c.benchmark_group(format!("physical_layout/changelog/full/{}", profile.name)); for rows in [1_000usize, 10_000, 50_000] { let config = args.config().with_rows(rows); group.bench_function(format!("append_changes/{}", label(rows)), |b| { b.iter_batched( || prepare_append(runtime, config, profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_append_changes_prepared( &backend, &fixture, )) .expect("physical_layout/changelog full append succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function(format!("scan_change_set/{}", label(rows)), |b| { b.iter_batched( || prepare_read(runtime, config, profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_scan_change_set_prepared( &backend, &fixture, )) .expect("physical_layout/changelog full scan_change_set succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function(format!("scan_all/{}", label(rows)), |b| { b.iter_batched( || prepare_read(runtime, config, profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_scan_all_prepared( &backend, &fixture, )) .expect("physical_layout/changelog full scan_all succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function(format!("load_changes_hit/{}", label(rows)), |b| { b.iter_batched( || prepare_read(runtime, config, profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_load_changes_hit_prepared( &backend, &fixture, )) .expect("physical_layout/changelog full load_changes_hit succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function(format!("load_changes_miss/{}", label(rows)), |b| { b.iter_batched( || prepare_read(runtime, config, profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_load_changes_miss_prepared( &backend, &fixture, )) .expect("physical_layout/changelog full load_changes_miss succeeds"), ) }, BatchSize::LargeInput, ) }); } group.finish(); } fn prepare_append( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, ) -> ( Arc, storage_bench::ChangelogAppendFixture, ) { let backend = (profile.create)(); let fixture = runtime .block_on(storage_bench::prepare_changelog_append_changes(config)) .expect("prepare physical_layout/changelog append"); (backend, fixture) } fn prepare_read( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, ) -> ( Arc, storage_bench::ChangelogReadFixture, ) { let backend = (profile.create)(); let fixture = runtime .block_on(storage_bench::prepare_changelog_read(&backend, config)) .expect("prepare physical_layout/changelog read"); (backend, fixture) } fn physical_backends() -> [BackendProfile; 2] { [ BackendProfile { name: "sqlite_tempfile", create: 
sqlite_tempfile_backend, }, BackendProfile { name: "rocksdb_tempdir", create: rocksdb_backend, }, ] } fn sqlite_tempfile_backend() -> Arc { Arc::new(SqliteBenchBackend::tempfile().expect("create sqlite tempfile bench backend")) } fn rocksdb_backend() -> Arc { Arc::new(RocksDbBenchBackend::new().expect("create rocksdb bench backend")) } fn label(rows: usize) -> &'static str { match rows { 1_000 => "1k", 10_000 => "10k", 50_000 => "50k", _ => "rows", } } ================================================ FILE: packages/engine/benches/physical_layout/json_store.rs ================================================ use std::sync::Arc; use criterion::{black_box, BatchSize, Criterion}; use lix_engine::storage_bench::{ self, JsonStorePayloadShape, JsonStoreProjectionShape, JsonStoreReadFixture, }; use lix_engine::Backend; use tokio::runtime::Runtime; use crate::{Args, RocksDbBenchBackend, SqliteBenchBackend}; type BackendFactory = fn() -> Arc; #[derive(Clone, Copy)] struct BackendProfile { name: &'static str, create: BackendFactory, } pub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) { for profile in physical_backends() { bench_fast(c, runtime, args, profile); bench_full(c, runtime, args, profile); } } fn bench_fast(c: &mut Criterion, runtime: &Runtime, _args: Args, profile: BackendProfile) { let mut group = c.benchmark_group(format!("physical_layout/json_store/fast/{}", profile.name)); group.bench_function("write_unique_1k/10k", |b| { b.iter_batched( || prepare_write(runtime, JsonStorePayloadShape::SmallRaw1k, 10_000), |fixture| { let backend = (profile.create)(); black_box( runtime .block_on(storage_bench::json_store_write_prepared(&backend, &fixture)) .expect("physical_layout/json_store write_unique_1k succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("write_same_1k/10k", |b| { b.iter_batched( || prepare_write_dedupe(runtime, JsonStorePayloadShape::SmallRaw1k, 10_000), |fixture| { let backend = (profile.create)(); black_box( runtime .block_on(storage_bench::json_store_write_prepared(&backend, &fixture)) .expect("physical_layout/json_store write_same_1k succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("read_bytes_1k/10k", |b| { b.iter_batched( || { prepare_read( runtime, profile, JsonStorePayloadShape::SmallRaw1k, 10_000, JsonStoreProjectionShape::TopLevelTarget, ) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::json_store_read_bytes_prepared( &backend, &fixture, )) .expect("physical_layout/json_store read_bytes_1k succeeds"), ) }, BatchSize::LargeInput, ) }); group.finish(); } fn bench_full(c: &mut Criterion, runtime: &Runtime, _args: Args, profile: BackendProfile) { let mut group = c.benchmark_group(format!("physical_layout/json_store/full/{}", profile.name)); for (name, shape, rows, dedupe) in [ ( "write_unique_1k/10k", JsonStorePayloadShape::SmallRaw1k, 10_000usize, false, ), ( "write_same_1k/10k", JsonStorePayloadShape::SmallRaw1k, 10_000, true, ), ( "write_unique_16k/1k", JsonStorePayloadShape::MediumStructured16k, 1_000, false, ), ( "write_same_16k/1k", JsonStorePayloadShape::MediumStructured16k, 1_000, true, ), ] { group.bench_function(name, |b| { b.iter_batched( || { if dedupe { prepare_write_dedupe(runtime, shape, rows) } else { prepare_write(runtime, shape, rows) } }, |fixture| { let backend = (profile.create)(); black_box( runtime .block_on(storage_bench::json_store_write_prepared(&backend, &fixture)) .expect("physical_layout/json_store full write succeeds"), ) }, BatchSize::LargeInput, ) }); } for (name, 
shape, rows) in [ ( "read_bytes_1k/10k", JsonStorePayloadShape::SmallRaw1k, 10_000usize, ), ( "read_bytes_16k/1k", JsonStorePayloadShape::MediumStructured16k, 1_000, ), ] { group.bench_function(name, |b| { b.iter_batched( || { prepare_read( runtime, profile, shape, rows, JsonStoreProjectionShape::TopLevelTarget, ) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::json_store_read_bytes_prepared( &backend, &fixture, )) .expect("physical_layout/json_store full read_bytes succeeds"), ) }, BatchSize::LargeInput, ) }); } group.bench_function("read_projection_top_level_128k/50", |b| { b.iter_batched( || { prepare_read( runtime, profile, JsonStorePayloadShape::LargeStructured128k, 50, JsonStoreProjectionShape::TopLevelTarget, ) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::json_store_read_projection_prepared( &backend, &fixture, )) .expect("physical_layout/json_store projection succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("write_against_base_object_update_1_of_1000/50", |b| { b.iter_batched( || { let backend = (profile.create)(); let fixture = runtime .block_on(storage_bench::prepare_json_store_base_update_object( &backend, 50, )) .expect("prepare physical_layout/json_store base update object"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_store_write_against_base_object_prepared( &backend, &fixture, ), ) .expect("physical_layout/json_store base update object succeeds"), ) }, BatchSize::LargeInput, ) }); group.finish(); } fn prepare_write( runtime: &Runtime, shape: JsonStorePayloadShape, rows: usize, ) -> storage_bench::JsonStoreWriteFixture { runtime .block_on(storage_bench::prepare_json_store_write(shape, rows)) .expect("prepare physical_layout/json_store write") } fn prepare_write_dedupe( runtime: &Runtime, shape: JsonStorePayloadShape, rows: usize, ) -> storage_bench::JsonStoreWriteFixture { runtime .block_on(storage_bench::prepare_json_store_write_dedupe(shape, rows)) .expect("prepare physical_layout/json_store write dedupe") } fn prepare_read( runtime: &Runtime, profile: BackendProfile, shape: JsonStorePayloadShape, rows: usize, projection: JsonStoreProjectionShape, ) -> (Arc, JsonStoreReadFixture) { let backend = (profile.create)(); let fixture = runtime .block_on(storage_bench::prepare_json_store_projection_read( &backend, shape, rows, projection, )) .expect("prepare physical_layout/json_store read"); (backend, fixture) } fn physical_backends() -> [BackendProfile; 2] { [ BackendProfile { name: "sqlite_tempfile", create: sqlite_tempfile_backend, }, BackendProfile { name: "rocksdb_tempdir", create: rocksdb_backend, }, ] } fn sqlite_tempfile_backend() -> Arc { Arc::new(SqliteBenchBackend::tempfile().expect("create sqlite tempfile bench backend")) } fn rocksdb_backend() -> Arc { Arc::new(RocksDbBenchBackend::new().expect("create rocksdb bench backend")) } ================================================ FILE: packages/engine/benches/physical_layout/main.rs ================================================ use criterion::{criterion_group, criterion_main, Criterion}; use lix_engine::storage_bench::{ StorageBenchConfig, StorageBenchKeyPattern, StorageBenchSelectivity, StorageBenchUpdateFraction, }; #[path = "../storage/rocksdb_backend.rs"] mod rocksdb_backend; #[path = "../storage/sqlite_backend.rs"] mod sqlite_backend; mod backend_kv; mod changelog; mod json_store; mod tracked_state; mod workflow; use rocksdb_backend::RocksDbBenchBackend; use sqlite_backend::SqliteBenchBackend; 
const BENCH_ROWS: usize = 10_000;
const BENCH_BLOB_BYTES: usize = 1024;
const BENCH_STATE_PAYLOAD_BYTES: usize = 256;

#[derive(Debug, Clone, Copy)]
pub(crate) struct Args {
    pub(crate) rows: usize,
    pub(crate) blob_bytes: usize,
    pub(crate) state_payload_bytes: usize,
}

impl Default for Args {
    fn default() -> Self {
        Self {
            rows: BENCH_ROWS,
            blob_bytes: BENCH_BLOB_BYTES,
            state_payload_bytes: BENCH_STATE_PAYLOAD_BYTES,
        }
    }
}

impl Args {
    pub(crate) fn config(self) -> StorageBenchConfig {
        StorageBenchConfig {
            rows: self.rows,
            blob_bytes: self.blob_bytes,
            state_payload_bytes: self.state_payload_bytes,
            key_pattern: StorageBenchKeyPattern::Sequential,
            selectivity: StorageBenchSelectivity::Percent100,
            update_fraction: StorageBenchUpdateFraction::Percent100,
        }
    }
}

fn physical_layout_benches(c: &mut Criterion) {
    let runtime = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .expect("create tokio runtime for physical layout benchmarks");
    let args = Args::default();
    backend_kv::bench(c, &runtime, args);
    changelog::bench(c, &runtime, args);
    tracked_state::bench(c, &runtime, args);
    json_store::bench(c, &runtime, args);
    workflow::bench(c, &runtime, args);
}

criterion_group!(benches, physical_layout_benches);
criterion_main!(benches);

================================================ FILE: packages/engine/benches/physical_layout/tracked_state.rs ================================================

use std::sync::Arc;
use std::time::Duration;

use criterion::{black_box, BatchSize, Criterion};
use lix_engine::storage_bench::{self, StorageBenchConfig, StorageBenchSelectivity};
use lix_engine::Backend;
use tokio::runtime::Runtime;

use crate::{Args, RocksDbBenchBackend, SqliteBenchBackend};

type BackendFactory = fn() -> Arc<dyn Backend>;

#[derive(Clone, Copy)]
struct BackendProfile {
    name: &'static str,
    create: BackendFactory,
}

pub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) {
    for profile in physical_backends() {
        bench_smoke(c, runtime, args, profile);
        bench_fast(c, runtime, args, profile);
        bench_full(c, runtime, args, profile);
    }
}

fn bench_smoke(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) {
    let smoke = args
        .config()
        .with_rows(1_000)
        .with_state_payload_bytes(1024);
    let mut group = c.benchmark_group(format!(
        "physical_layout/tracked_state/smoke/{}",
        profile.name
    ));
    group.sample_size(10);
    group.warm_up_time(Duration::from_millis(250));
    group.measurement_time(Duration::from_secs(1));
    group.bench_function("write_root_payload_1k/1k", |b| {
        b.iter_batched(
            || prepare_write_root(runtime, smoke, profile),
            |(backend, fixture)| {
                black_box(
                    runtime
                        .block_on(storage_bench::tracked_state_write_root_prepared(
                            &backend, &fixture,
                        ))
                        .expect("physical_layout/tracked_state smoke write_root succeeds"),
                )
            },
            BatchSize::LargeInput,
        )
    });
    group.bench_function("scan_headers_only_payload_1k/1k", |b| {
        b.iter_batched(
            || prepare_read(runtime, smoke, profile),
            |(backend, fixture)| {
                black_box(
                    runtime
                        .block_on(storage_bench::tracked_state_scan_headers_only_prepared(
                            &backend, &fixture,
                        ))
                        .expect("physical_layout/tracked_state smoke headers succeeds"),
                )
            },
            BatchSize::LargeInput,
        )
    });
    group.bench_function("scan_full_rows_payload_1k/1k", |b| {
        b.iter_batched(
            || prepare_read(runtime, smoke, profile),
            |(backend, fixture)| {
                black_box(
                    runtime
                        .block_on(storage_bench::tracked_state_scan_full_rows_prepared(
                            &backend, &fixture,
                        ))
                        .expect("physical_layout/tracked_state smoke full rows succeeds"),
                )
            },
            BatchSize::LargeInput,
        )
    });
group.bench_function("scan_file_header_selective_10pct_payload_1k/1k", |b| { b.iter_batched( || { prepare_read_file_selective( runtime, smoke.with_selectivity(StorageBenchSelectivity::Percent10), profile, ) }, |(backend, fixture)| { black_box( runtime .block_on( storage_bench::tracked_state_scan_file_header_selective_prepared( &backend, &fixture, ), ) .expect("physical_layout/tracked_state smoke file headers succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("diff_update_1pct_payload_1k/1k", |b| { b.iter_batched( || prepare_diff_update_rows(runtime, smoke, profile, 10), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_diff_commits_prepared( &backend, &fixture, )) .expect("physical_layout/tracked_state smoke diff succeeds"), ) }, BatchSize::LargeInput, ) }); group.finish(); } fn bench_fast(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) { let mut group = c.benchmark_group(format!( "physical_layout/tracked_state/fast/{}", profile.name )); group.bench_function("write_root/10k", |b| { b.iter_batched( || prepare_write_root(runtime, args.config(), profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_write_root_prepared( &backend, &fixture, )) .expect("physical_layout/tracked_state write_root succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("write_root_payload_1k/10k", |b| { b.iter_batched( || { prepare_write_root( runtime, args.config().with_state_payload_bytes(1024), profile, ) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_write_root_prepared( &backend, &fixture, )) .expect("physical_layout/tracked_state write_root_payload_1k succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("update_existing_1pct/10k", |b| { b.iter_batched( || prepare_update_rows(runtime, args.config(), profile, args.rows / 100), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_update_existing_prepared( &backend, &fixture, )) .expect("physical_layout/tracked_state update_existing_1pct succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("update_existing_10pct/10k", |b| { b.iter_batched( || prepare_update_rows(runtime, args.config(), profile, args.rows / 10), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_update_existing_prepared( &backend, &fixture, )) .expect("physical_layout/tracked_state update_existing_10pct succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("tombstone_10pct/10k", |b| { b.iter_batched( || prepare_tombstone_rows(runtime, args.config(), profile, args.rows / 10), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_update_existing_prepared( &backend, &fixture, )) .expect("physical_layout/tracked_state tombstone_10pct succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("read_point_hit/10k", |b| { b.iter_batched( || prepare_read(runtime, args.config(), profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_read_point_hit_prepared( &backend, &fixture, )) .expect("physical_layout/tracked_state read_point_hit succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("scan_headers_only/10k", |b| { b.iter_batched( || prepare_read(runtime, args.config(), profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_headers_only_prepared( &backend, &fixture, )) .expect("physical_layout/tracked_state scan_headers_only 
succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("scan_full_rows/10k", |b| { b.iter_batched( || prepare_read(runtime, args.config(), profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_full_rows_prepared( &backend, &fixture, )) .expect("physical_layout/tracked_state scan_full_rows succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("scan_file_header_selective_10pct_payload_1k/10k", |b| { b.iter_batched( || { prepare_read_file_selective( runtime, args.config() .with_state_payload_bytes(1024) .with_selectivity(StorageBenchSelectivity::Percent10), profile, ) }, |(backend, fixture)| { black_box( runtime .block_on( storage_bench::tracked_state_scan_file_header_selective_prepared( &backend, &fixture, ), ) .expect("physical_layout/tracked_state file header scan succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("diff_update_1pct/10k", |b| { b.iter_batched( || prepare_diff_update_rows(runtime, args.config(), profile, args.rows / 100), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_diff_commits_prepared( &backend, &fixture, )) .expect("physical_layout/tracked_state diff_update_1pct succeeds"), ) }, BatchSize::LargeInput, ) }); group.finish(); } fn bench_full(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) { let mut group = c.benchmark_group(format!( "physical_layout/tracked_state/full/{}", profile.name )); for rows in [1_000usize, 10_000, 50_000] { let config = args.config().with_rows(rows); group.bench_function(format!("write_root/{}", label(rows)), |b| { b.iter_batched( || prepare_write_root(runtime, config, profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_write_root_prepared( &backend, &fixture, )) .expect("physical_layout/tracked_state full write_root succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function(format!("read_point_hit/{}", label(rows)), |b| { b.iter_batched( || prepare_read(runtime, config, profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_read_point_hit_prepared( &backend, &fixture, )) .expect("physical_layout/tracked_state full point_hit succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function(format!("scan_headers_only/{}", label(rows)), |b| { b.iter_batched( || prepare_read(runtime, config, profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_headers_only_prepared( &backend, &fixture, )) .expect("physical_layout/tracked_state full headers succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function(format!("scan_full_rows/{}", label(rows)), |b| { b.iter_batched( || prepare_read(runtime, config, profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_full_rows_prepared( &backend, &fixture, )) .expect("physical_layout/tracked_state full full_rows succeeds"), ) }, BatchSize::LargeInput, ) }); } for (name, config) in [ ( "write_root_payload_1k/10k", args.config().with_state_payload_bytes(1024), ), ( "write_root_payload_16k/1k", args.config() .with_rows(1_000) .with_state_payload_bytes(16 * 1024), ), ] { group.bench_function(name, |b| { b.iter_batched( || prepare_write_root(runtime, config, profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_write_root_prepared( &backend, &fixture, )) .expect("physical_layout/tracked_state full payload write succeeds"), ) }, BatchSize::LargeInput, ) }); } for (name, 
changed_rows, tombstone) in [ ("diff_equal/10k", 0usize, false), ("diff_update_1pct/10k", args.rows / 100, false), ("diff_update_10pct/10k", args.rows / 10, false), ("diff_tombstone_10pct/10k", args.rows / 10, true), ] { group.bench_function(name, |b| { b.iter_batched( || { if changed_rows == 0 { prepare_diff_equal(runtime, args.config(), profile) } else if tombstone { prepare_diff_tombstone_rows(runtime, args.config(), profile, changed_rows) } else { prepare_diff_update_rows(runtime, args.config(), profile, changed_rows) } }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_diff_commits_prepared( &backend, &fixture, )) .expect("physical_layout/tracked_state full diff succeeds"), ) }, BatchSize::LargeInput, ) }); } group.finish(); } fn prepare_write_root( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, ) -> ( Arc, storage_bench::TrackedStateWriteRootFixture, ) { let backend = (profile.create)(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_write_root(config)) .expect("prepare physical_layout/tracked_state write root"); (backend, fixture) } fn prepare_read( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, ) -> ( Arc, storage_bench::TrackedStateReadFixture, ) { let backend = (profile.create)(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_read(&backend, config)) .expect("prepare physical_layout/tracked_state read"); (backend, fixture) } fn prepare_read_file_selective( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, ) -> ( Arc, storage_bench::TrackedStateReadFixture, ) { let backend = (profile.create)(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_read_file_selective( &backend, config, )) .expect("prepare physical_layout/tracked_state file-selective read"); (backend, fixture) } fn prepare_update_rows( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, rows: usize, ) -> ( Arc, storage_bench::TrackedStateUpdateFixture, ) { let backend = (profile.create)(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_update_rows( &backend, config, rows, )) .expect("prepare physical_layout/tracked_state update rows"); (backend, fixture) } fn prepare_tombstone_rows( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, rows: usize, ) -> ( Arc, storage_bench::TrackedStateUpdateFixture, ) { let backend = (profile.create)(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_tombstone_rows( &backend, config, rows, )) .expect("prepare physical_layout/tracked_state tombstone rows"); (backend, fixture) } fn prepare_diff_equal( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, ) -> ( Arc, storage_bench::TrackedStateDiffFixture, ) { let backend = (profile.create)(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_diff_equal( &backend, config, )) .expect("prepare physical_layout/tracked_state diff equal"); (backend, fixture) } fn prepare_diff_update_rows( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, rows: usize, ) -> ( Arc, storage_bench::TrackedStateDiffFixture, ) { let backend = (profile.create)(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_diff_update_rows( &backend, config, rows, )) .expect("prepare physical_layout/tracked_state diff update"); (backend, fixture) } fn prepare_diff_tombstone_rows( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, rows: 
usize, ) -> ( Arc, storage_bench::TrackedStateDiffFixture, ) { let backend = (profile.create)(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_diff_tombstone_rows( &backend, config, rows, )) .expect("prepare physical_layout/tracked_state diff tombstone"); (backend, fixture) } fn physical_backends() -> [BackendProfile; 2] { [ BackendProfile { name: "sqlite_tempfile", create: sqlite_tempfile_backend, }, BackendProfile { name: "rocksdb_tempdir", create: rocksdb_backend, }, ] } fn sqlite_tempfile_backend() -> Arc { Arc::new(SqliteBenchBackend::tempfile().expect("create sqlite tempfile bench backend")) } fn rocksdb_backend() -> Arc { Arc::new(RocksDbBenchBackend::new().expect("create rocksdb bench backend")) } fn label(rows: usize) -> &'static str { match rows { 1_000 => "1k", 10_000 => "10k", 50_000 => "50k", _ => "rows", } } ================================================ FILE: packages/engine/benches/physical_layout/workflow.rs ================================================ use std::sync::Arc; use std::time::Duration; use criterion::{black_box, BatchSize, Criterion}; use lix_engine::storage_bench::{self, StorageBenchConfig}; use lix_engine::Backend; use tokio::runtime::Runtime; use crate::{Args, RocksDbBenchBackend, SqliteBenchBackend}; type BackendFactory = fn() -> Arc; #[derive(Clone, Copy)] struct BackendProfile { name: &'static str, create: BackendFactory, } pub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) { for profile in physical_backends() { bench_smoke(c, runtime, args, profile); bench_fast(c, runtime, args, profile); bench_full(c, runtime, args, profile); } } fn bench_smoke(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) { let smoke = args .config() .with_rows(1_000) .with_state_payload_bytes(1024); let mut group = c.benchmark_group(format!("physical_layout/workflow/smoke/{}", profile.name)); group.sample_size(10); group.warm_up_time(Duration::from_millis(250)); group.measurement_time(Duration::from_secs(1)); group.bench_function("insert_tracked_commit_payload_1k/1k", |b| { b.iter_batched( || prepare_insert_tracked_commit(runtime, smoke, profile), |fixture| { black_box( runtime .block_on(run_insert_tracked_commit(fixture)) .expect("physical_layout/workflow smoke insert succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("update_tracked_commit_1pct_payload_1k/1k", |b| { b.iter_batched( || prepare_update_tracked_commit(runtime, smoke, profile, 10), |fixture| { black_box( runtime .block_on(run_update_tracked_commit(fixture)) .expect("physical_layout/workflow smoke update succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("diff_update_1pct_payload_1k/1k", |b| { b.iter_batched( || prepare_diff_update(runtime, smoke, profile, 10), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_diff_commits_prepared( &backend, &fixture, )) .expect("physical_layout/workflow smoke diff succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("select_tracked_commit_point_hit_payload_1k/1k", |b| { b.iter_batched( || prepare_select_tracked_commit(runtime, smoke, profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_read_point_hit_prepared( &backend, &fixture, )) .expect("physical_layout/workflow smoke point select succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("select_tracked_commit_headers_only_payload_1k/1k", |b| { b.iter_batched( || prepare_select_tracked_commit(runtime, smoke, profile), |(backend, 
fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_headers_only_prepared( &backend, &fixture, )) .expect("physical_layout/workflow smoke header select succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("select_tracked_commit_full_rows_payload_1k/1k", |b| { b.iter_batched( || prepare_select_tracked_commit(runtime, smoke, profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_full_rows_prepared( &backend, &fixture, )) .expect("physical_layout/workflow smoke full-row select succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function( "select_tracked_commit_file_selective_10pct_payload_1k/1k", |b| { b.iter_batched( || { prepare_select_tracked_commit_file_selective( runtime, smoke.with_selectivity(storage_bench::StorageBenchSelectivity::Percent10), profile, ) }, |(backend, fixture)| { black_box( runtime .block_on( storage_bench::tracked_state_scan_file_header_selective_prepared( &backend, &fixture, ), ) .expect( "physical_layout/workflow smoke file-selective select succeeds", ), ) }, BatchSize::LargeInput, ) }, ); group.bench_function("select_after_1pct_update_payload_1k/1k", |b| { b.iter_batched( || prepare_select_after_update(runtime, smoke, profile, 10), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_full_rows_prepared( &backend, &fixture, )) .expect("physical_layout/workflow smoke select-after-update succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("select_delta_chain_10x1pct_payload_1k/1k", |b| { b.iter_batched( || prepare_select_delta_chain(runtime, smoke, profile, 10, 10), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_full_rows_prepared( &backend, &fixture, )) .expect("physical_layout/workflow smoke select delta chain succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("select_materialized_delta_chain_10x1pct_payload_1k/1k", |b| { b.iter_batched( || prepare_select_materialized_delta_chain(runtime, smoke, profile, 10, 10), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_full_rows_prepared( &backend, &fixture, )) .expect( "physical_layout/workflow smoke select materialized delta chain succeeds", ), ) }, BatchSize::LargeInput, ) }); group.bench_function("diff_delta_chain_10x1pct_payload_1k/1k", |b| { b.iter_batched( || prepare_diff_delta_chain(runtime, smoke, profile, 10, 10), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_diff_commits_prepared( &backend, &fixture, )) .expect("physical_layout/workflow smoke diff delta chain succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("materialize_delta_chain_10x1pct_payload_1k/1k", |b| { b.iter_batched( || prepare_materialize_delta_chain(runtime, smoke, profile, 10, 10), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_materialize_root_prepared( &backend, &fixture, )) .expect("physical_layout/workflow smoke materialize delta chain succeeds"), ) }, BatchSize::LargeInput, ) }); group.finish(); } fn bench_fast(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) { let mut group = c.benchmark_group(format!("physical_layout/workflow/fast/{}", profile.name)); group.bench_function("insert_tracked_commit_payload_1k/10k", |b| { b.iter_batched( || { prepare_insert_tracked_commit( runtime, args.config().with_state_payload_bytes(1024), profile, ) }, |fixture| { black_box( runtime .block_on(run_insert_tracked_commit(fixture)) 
.expect("physical_layout/workflow insert tracked commit succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("update_tracked_commit_1pct/10k", |b| { b.iter_batched( || prepare_update_tracked_commit(runtime, args.config(), profile, args.rows / 100), |fixture| { black_box( runtime .block_on(run_update_tracked_commit(fixture)) .expect("physical_layout/workflow update tracked commit succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("diff_update_1pct/10k", |b| { b.iter_batched( || prepare_diff_update(runtime, args.config(), profile, args.rows / 100), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_diff_commits_prepared( &backend, &fixture, )) .expect("physical_layout/workflow diff update succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("select_tracked_commit_point_hit/10k", |b| { b.iter_batched( || prepare_select_tracked_commit(runtime, args.config(), profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_read_point_hit_prepared( &backend, &fixture, )) .expect("physical_layout/workflow point select succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("select_tracked_commit_headers_only/10k", |b| { b.iter_batched( || prepare_select_tracked_commit(runtime, args.config(), profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_headers_only_prepared( &backend, &fixture, )) .expect("physical_layout/workflow header select succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("select_tracked_commit_full_rows/10k", |b| { b.iter_batched( || prepare_select_tracked_commit(runtime, args.config(), profile), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_full_rows_prepared( &backend, &fixture, )) .expect("physical_layout/workflow full-row select succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("select_after_1pct_update/10k", |b| { b.iter_batched( || prepare_select_after_update(runtime, args.config(), profile, args.rows / 100), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_full_rows_prepared( &backend, &fixture, )) .expect("physical_layout/workflow select-after-update succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("select_delta_chain_10x1pct/10k", |b| { b.iter_batched( || prepare_select_delta_chain(runtime, args.config(), profile, 10, args.rows / 100), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_full_rows_prepared( &backend, &fixture, )) .expect("physical_layout/workflow select delta chain succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("diff_delta_chain_10x1pct/10k", |b| { b.iter_batched( || prepare_diff_delta_chain(runtime, args.config(), profile, 10, args.rows / 100), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_diff_commits_prepared( &backend, &fixture, )) .expect("physical_layout/workflow diff delta chain succeeds"), ) }, BatchSize::LargeInput, ) }); group.finish(); } fn bench_full(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) { let mut group = c.benchmark_group(format!("physical_layout/workflow/full/{}", profile.name)); for (name, config) in [ ("insert_tracked_commit_no_payload/10k", args.config()), ( "insert_tracked_commit_payload_1k/10k", args.config().with_state_payload_bytes(1024), ), ] { group.bench_function(name, |b| { b.iter_batched( || prepare_insert_tracked_commit(runtime, 
config, profile), |fixture| { black_box( runtime .block_on(run_insert_tracked_commit(fixture)) .expect("physical_layout/workflow full insert succeeds"), ) }, BatchSize::LargeInput, ) }); } for (name, changed_rows, tombstone) in [ ("update_tracked_commit_1pct/10k", args.rows / 100, false), ("update_tracked_commit_10pct/10k", args.rows / 10, false), ("delete_tracked_commit_10pct/10k", args.rows / 10, true), ] { group.bench_function(name, |b| { b.iter_batched( || { if tombstone { prepare_delete_tracked_commit(runtime, args.config(), profile, changed_rows) } else { prepare_update_tracked_commit(runtime, args.config(), profile, changed_rows) } }, |fixture| { black_box( runtime .block_on(run_update_tracked_commit(fixture)) .expect("physical_layout/workflow full update/delete succeeds"), ) }, BatchSize::LargeInput, ) }); } group.finish(); } struct InsertTrackedCommitFixture { backend: Arc, changelog: storage_bench::ChangelogAppendFixture, tracked_state: storage_bench::TrackedStateWriteRootFixture, } struct UpdateTrackedCommitFixture { backend: Arc, changelog: storage_bench::ChangelogAppendFixture, tracked_state: storage_bench::TrackedStateUpdateFixture, } async fn run_insert_tracked_commit( fixture: InsertTrackedCommitFixture, ) -> Result< ( storage_bench::StorageBenchReport, storage_bench::StorageBenchReport, ), lix_engine::LixError, > { let changelog = storage_bench::changelog_append_changes_prepared(&fixture.backend, &fixture.changelog) .await?; let tracked_state = storage_bench::tracked_state_write_root_prepared(&fixture.backend, &fixture.tracked_state) .await?; Ok((changelog, tracked_state)) } async fn run_update_tracked_commit( fixture: UpdateTrackedCommitFixture, ) -> Result< ( storage_bench::StorageBenchReport, storage_bench::StorageBenchReport, ), lix_engine::LixError, > { let changelog = storage_bench::changelog_append_changes_prepared(&fixture.backend, &fixture.changelog) .await?; let tracked_state = storage_bench::tracked_state_update_existing_prepared( &fixture.backend, &fixture.tracked_state, ) .await?; Ok((changelog, tracked_state)) } fn prepare_insert_tracked_commit( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, ) -> InsertTrackedCommitFixture { let backend = (profile.create)(); let changelog = runtime .block_on(storage_bench::prepare_changelog_append_changes(config)) .expect("prepare physical_layout/workflow insert changelog"); let tracked_state = runtime .block_on(storage_bench::prepare_tracked_state_write_root(config)) .expect("prepare physical_layout/workflow insert tracked_state"); InsertTrackedCommitFixture { backend, changelog, tracked_state, } } fn prepare_update_tracked_commit( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, changed_rows: usize, ) -> UpdateTrackedCommitFixture { let backend = (profile.create)(); let changelog = runtime .block_on(storage_bench::prepare_changelog_append_changes( config.with_rows(changed_rows), )) .expect("prepare physical_layout/workflow update changelog"); let tracked_state = runtime .block_on(storage_bench::prepare_tracked_state_update_rows( &backend, config, changed_rows, )) .expect("prepare physical_layout/workflow update tracked_state"); UpdateTrackedCommitFixture { backend, changelog, tracked_state, } } fn prepare_delete_tracked_commit( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, changed_rows: usize, ) -> UpdateTrackedCommitFixture { let backend = (profile.create)(); let changelog = runtime .block_on(storage_bench::prepare_changelog_append_tombstones( 
config.with_rows(changed_rows), )) .expect("prepare physical_layout/workflow delete changelog"); let tracked_state = runtime .block_on(storage_bench::prepare_tracked_state_tombstone_rows( &backend, config, changed_rows, )) .expect("prepare physical_layout/workflow delete tracked_state"); UpdateTrackedCommitFixture { backend, changelog, tracked_state, } } fn prepare_diff_update( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, changed_rows: usize, ) -> ( Arc, storage_bench::TrackedStateDiffFixture, ) { let backend = (profile.create)(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_diff_update_rows( &backend, config, changed_rows, )) .expect("prepare physical_layout/workflow diff update"); (backend, fixture) } fn prepare_select_tracked_commit( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, ) -> ( Arc, storage_bench::TrackedStateReadFixture, ) { let backend = (profile.create)(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_read(&backend, config)) .expect("prepare physical_layout/workflow select tracked commit"); (backend, fixture) } fn prepare_select_tracked_commit_file_selective( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, ) -> ( Arc, storage_bench::TrackedStateReadFixture, ) { let backend = (profile.create)(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_read_file_selective( &backend, config, )) .expect("prepare physical_layout/workflow file-selective select"); (backend, fixture) } fn prepare_select_after_update( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, changed_rows: usize, ) -> ( Arc, storage_bench::TrackedStateReadFixture, ) { let backend = (profile.create)(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_read_after_update_rows( &backend, config, changed_rows, )) .expect("prepare physical_layout/workflow select after update"); (backend, fixture) } fn prepare_select_delta_chain( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, delta_commits: usize, updated_rows_per_commit: usize, ) -> ( Arc, storage_bench::TrackedStateReadFixture, ) { let backend = (profile.create)(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_read_delta_chain( &backend, config, delta_commits, updated_rows_per_commit, )) .expect("prepare physical_layout/workflow select delta chain"); (backend, fixture) } fn prepare_select_materialized_delta_chain( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, delta_commits: usize, updated_rows_per_commit: usize, ) -> ( Arc, storage_bench::TrackedStateReadFixture, ) { let backend = (profile.create)(); let fixture = runtime .block_on( storage_bench::prepare_tracked_state_read_materialized_delta_chain( &backend, config, delta_commits, updated_rows_per_commit, ), ) .expect("prepare physical_layout/workflow select materialized delta chain"); (backend, fixture) } fn prepare_diff_delta_chain( runtime: &Runtime, config: StorageBenchConfig, profile: BackendProfile, delta_commits: usize, updated_rows_per_commit: usize, ) -> ( Arc, storage_bench::TrackedStateDiffFixture, ) { let backend = (profile.create)(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_diff_delta_chain( &backend, config, delta_commits, updated_rows_per_commit, )) .expect("prepare physical_layout/workflow diff delta chain"); (backend, fixture) } fn prepare_materialize_delta_chain( runtime: &Runtime, config: StorageBenchConfig, profile: 
BackendProfile, delta_commits: usize, updated_rows_per_commit: usize, ) -> ( Arc<dyn Backend>, storage_bench::TrackedStateMaterializeFixture, ) { let backend = (profile.create)(); let fixture = runtime .block_on( storage_bench::prepare_tracked_state_materialize_delta_chain( &backend, config, delta_commits, updated_rows_per_commit, ), ) .expect("prepare physical_layout/workflow materialize delta chain"); (backend, fixture) } fn physical_backends() -> [BackendProfile; 2] { [ BackendProfile { name: "sqlite_tempfile", create: sqlite_tempfile_backend, }, BackendProfile { name: "rocksdb_tempdir", create: rocksdb_backend, }, ] } fn sqlite_tempfile_backend() -> Arc<dyn Backend> { Arc::new(SqliteBenchBackend::tempfile().expect("create sqlite tempfile bench backend")) } fn rocksdb_backend() -> Arc<dyn Backend> { Arc::new(RocksDbBenchBackend::new().expect("create rocksdb bench backend")) }

================================================
FILE: packages/engine/benches/storage/README.md
================================================

# Engine Storage Benchmarks

These Criterion benchmarks measure engine-owned storage layers directly, without going through SQL or the SDK:

- `tracked_state`
- `untracked_state`
- `changelog`
- `binary_cas`
- `json_store`
- `storage/api`

The benchmark target uses `codspeed-criterion-compat`, so it works with normal `cargo bench` and with CodSpeed.

## Run

```bash
cargo bench -p lix_engine --features storage-benches --bench storage
```

Run one benchmark by filter:

```bash
cargo bench -p lix_engine --features storage-benches --bench storage -- \
  storage/tracked_state/read_point_hit/10k
```

CodSpeed:

```bash
cargo codspeed build -p lix_engine --features storage-benches --bench storage
cargo codspeed run
```

Storage accounting report:

```bash
cargo test -p lix_engine --features storage-benches storage_accounting -- --ignored --nocapture
```

## Benchmarks

The checked-in baseline size is stable: `10k` logical rows or blobs, with `1KiB` binary payloads for Binary CAS and small JSON payloads for state rows. Large-payload variants intentionally use fewer rows so a full benchmark run does not allocate multi-gigabyte fixtures.
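For orientation, the baseline corresponds to the defaults in `benches/storage/main.rs` (`Args::default()` feeding `Args::config()`, behind the `storage-benches` feature). A minimal sketch of that shape, with values copied from the bench harness and shown only for illustration; the authoritative values live in `main.rs`:

```rust
use lix_engine::storage_bench::{
    StorageBenchConfig, StorageBenchKeyPattern, StorageBenchSelectivity,
    StorageBenchUpdateFraction,
};

// Baseline used by most "/10k" cases: 10k rows, 1 KiB blobs, small state payloads.
fn baseline_config() -> StorageBenchConfig {
    StorageBenchConfig {
        rows: 10_000,
        blob_bytes: 1024,
        state_payload_bytes: 256,
        key_pattern: StorageBenchKeyPattern::Sequential,
        selectivity: StorageBenchSelectivity::Percent100,
        update_fraction: StorageBenchUpdateFraction::Percent100,
    }
}
```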
```text
storage/tracked_state/write_root/10k
storage/tracked_state/read_point_hit/10k
storage/tracked_state/read_point_miss/10k
storage/tracked_state/scan_all/10k
storage/tracked_state/scan_schema/10k
storage/tracked_state/scan_file/10k
storage/tracked_state/update_existing/10k
storage/untracked_state/write_rows/10k
storage/untracked_state/read_point_hit/10k
storage/untracked_state/read_point_miss/10k
storage/untracked_state/scan_all/10k
storage/untracked_state/scan_version/10k
storage/untracked_state/scan_schema/10k
storage/untracked_state/overwrite_existing/10k
storage/changelog/append_changes/10k
storage/changelog/load_change_hit/10k
storage/changelog/load_change_miss/10k
storage/changelog/scan_all/10k
storage/changelog/scan_limit_100/10k
storage/changelog/scan_change_set/10k
commit_graph/change_history_from_commit/10k
storage/binary_cas/write_blobs_1k/10k
storage/binary_cas/read_blob_hit_1k/10k
storage/binary_cas/read_blob_miss_1k/10k
storage/binary_cas/write_duplicate_payload_1k/10k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_put/1
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_put/10
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_put/100
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_put/1k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_put/10k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_mixed_put_delete/10k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_multi_namespace/10k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_duplicate_keys/10k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_value_size/64b
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_value_size/1k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_value_size/16k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/write_kv_batch_value_size/128k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/transaction_write_and_commit/1
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/transaction_write_and_commit/100
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/transaction_write_and_commit/10k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/transaction_rollback_after_write/10k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_hit/100
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_hit/1k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_hit/10k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_miss/100
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_miss/1k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_miss/10k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_mixed_hit_miss/100
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_mixed_hit_miss/1k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_mixed_hit_miss/10k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_duplicate_keys/100
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_duplicate_keys/1k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_duplicate_keys/10k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/get_values_multi_namespace/10k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/scan_keys_prefix/100
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/scan_keys_prefix/1k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/scan_keys_prefix/10k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/scan_keys_after_pages/10k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/scan_keys_small_limit_of_large_range/100_of_10k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/scan_keys_empty_range/10k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/scan_keys_prefix_selectivity_1pct/10k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/scan_keys_prefix_selectivity_10pct/10k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/scan_keys_prefix_selectivity_100pct/10k
storage/api/{in_memory,sqlite_tempfile,rocksdb_tempdir}/transaction_commit_empty
```

Additional high-signal variants are registered for:

- batch sizes: `1`, `10`, `100`, `1k`, `10k`
- state payload sizes: `small/10k`, `1k/10k`, `16k/1k`, `128k/100`
- binary payload sizes: `small/10k`, `1k/10k`, `16k/1k`, `128k/100`
- changelog shared JSON payloads: shared snapshot, shared metadata, and shared snapshot+metadata workloads for measuring JsonStore writer dedupe
- key distribution: `sequential_keys`, `random_keys`
- scan selectivity: `1pct`, `10pct`, `100pct`
- projection-aware scans: file-selective header scans that omit `snapshot_content`, including `1KiB` out-of-line snapshot variants
- point-read scaling: `100` point reads over `1k`, `10k`, and `100k` rows
- update shape: update/overwrite `10pct`, update/overwrite all, append or insert new keys
- prolly-style tracked-state cases: single-row update in `10k`/`100k` roots, single-row append in `10k`/`100k` roots, tombstone/delete writes, and root diff traversal for equal/update/delete shapes
- partial snapshot-content update baselines: one logical field changed in a `1KiB` snapshot over `100k` rows and a `16KiB` snapshot over `10k` rows
- Binary CAS dedupe: unique payloads, all duplicate payloads, half duplicate payloads

The ignored `storage_accounting` test prints deterministic byte/chunk tables for the tracked-state physical format: primary tree, header-covering by-file tree, and snapshot CAS.
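The variants listed above are derived from the same baseline config through the builder-style helpers on `StorageBenchConfig` used throughout the bench modules (`with_rows`, `with_blob_bytes`, `with_state_payload_bytes`, `with_key_pattern`, `with_selectivity`). A hedged sketch of that pattern, assuming `StorageBenchConfig` is `Copy` as its use in the benches suggests; `example_variants` is a hypothetical helper, not part of the bench suite:

```rust
use lix_engine::storage_bench::{
    StorageBenchConfig, StorageBenchKeyPattern, StorageBenchSelectivity,
};

// Hypothetical helper: derive a few variant configs from the shared baseline,
// tweaking one dimension per registered case, the way the bench modules do.
fn example_variants(baseline: StorageBenchConfig) -> [StorageBenchConfig; 3] {
    [
        // payload-size variant: 16 KiB state payloads over fewer rows
        baseline.with_state_payload_bytes(16 * 1024).with_rows(1_000),
        // key-distribution variant: random instead of sequential keys
        baseline.with_key_pattern(StorageBenchKeyPattern::Random),
        // scan-selectivity variant: only 10% of rows match the scanned schema
        baseline.with_selectivity(StorageBenchSelectivity::Percent10),
    ]
}
```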
================================================ FILE: packages/engine/benches/storage/backend.rs ================================================ use async_trait::async_trait; use lix_engine::{ Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction, BytePageBuilder, LixError, }; use std::collections::BTreeMap; use std::sync::{Arc, Mutex}; type Store = BTreeMap<(String, Vec), Vec>; #[derive(Clone, Default)] pub(crate) struct BenchBackend { store: Arc>, } pub(crate) struct BenchTransaction { store: Arc>, finalized: bool, } impl BenchBackend { pub(crate) fn new() -> Arc { Arc::new(Self::default()) } } #[async_trait] impl Backend for BenchBackend { async fn begin_read_transaction( &self, ) -> Result, LixError> { Ok(Box::new(BenchTransaction { store: Arc::clone(&self.store), finalized: false, })) } async fn begin_write_transaction( &self, ) -> Result, LixError> { Ok(Box::new(BenchTransaction { store: Arc::clone(&self.store), finalized: false, })) } } #[async_trait] impl BackendReadTransaction for BenchTransaction { async fn get_values( &mut self, request: BackendKvGetRequest, ) -> Result { let store = self.lock_store()?; let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let namespace = group.namespace.clone(); let mut values = BytePageBuilder::with_capacity(group.keys.len(), 0); let mut present = Vec::with_capacity(group.keys.len()); for key in group.keys { if let Some(value) = store.get(&(namespace.clone(), key)) { values.push(value); present.push(true); } else { values.push([]); present.push(false); } } groups.push(BackendKvValueGroup::new( namespace, values.finish(), present, )); } Ok(BackendKvValueBatch { groups }) } async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result { let store = self.lock_store()?; let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let namespace = group.namespace.clone(); let exists = group .keys .into_iter() .map(|key| store.contains_key(&(namespace.clone(), key))) .collect(); groups.push(BackendKvExistsGroup { namespace, exists }); } Ok(BackendKvExistsBatch { groups }) } async fn scan_keys( &mut self, request: BackendKvScanRequest, ) -> Result { let store = self.lock_store()?; Ok(scan_store_keys(&store, request)) } async fn scan_values( &mut self, request: BackendKvScanRequest, ) -> Result { let store = self.lock_store()?; Ok(scan_store_values(&store, request)) } async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> Result { let store = self.lock_store()?; Ok(scan_store_entries(&store, request)) } async fn rollback(mut self: Box) -> Result<(), LixError> { self.finalized = true; Ok(()) } } #[async_trait] impl BackendWriteTransaction for BenchTransaction { async fn write_kv_batch( &mut self, batch: BackendKvWriteBatch, ) -> Result { let mut store = self.lock_store()?; let mut stats = BackendKvWriteStats::default(); for group in batch.groups { let namespace = group.namespace().to_string(); for index in 0..group.put_count() { let key = group.put_key(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put key") })?; let value = group.put_value(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put value") })?; stats.puts += 1; 
stats.bytes_written += key.len() + value.len(); store.insert((namespace.clone(), key.to_vec()), value.to_vec()); } for index in 0..group.delete_count() { let key = group.delete_key(index).ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "backend write batch missing delete key", ) })?; stats.deletes += 1; stats.bytes_written += key.len(); store.remove(&(namespace.clone(), key.to_vec())); } } Ok(stats) } async fn commit(mut self: Box) -> Result<(), LixError> { self.finalized = true; Ok(()) } } impl BenchTransaction { fn lock_store(&self) -> Result, LixError> { self.store .lock() .map_err(|_| LixError::new("LIX_ERROR_UNKNOWN", "bench store mutex poisoned")) } } fn scan_store_keys(store: &Store, request: BackendKvScanRequest) -> BackendKvKeyPage { let start_key = scan_start_key(&request); let lower_bound = (request.namespace.clone(), start_key); let mut keys = BytePageBuilder::new(); let mut count = 0; let mut resume_after_candidate = None; for ((row_namespace, key), _value) in store.range(lower_bound..) { if row_namespace != &request.namespace { break; } if let Some(after) = request.after.as_deref() { if key.as_slice() <= after { continue; } } if !key_matches_range(key, &request.range) { break; } if count < request.limit { resume_after_candidate = Some(key.clone()); keys.push(key); } count += 1; if count > request.limit { break; } } let resume_after = (count > request.limit) .then_some(resume_after_candidate) .flatten(); BackendKvKeyPage { keys: keys.finish(), resume_after, } } fn scan_store_values(store: &Store, request: BackendKvScanRequest) -> BackendKvValuePage { let start_key = scan_start_key(&request); let lower_bound = (request.namespace.clone(), start_key); let mut values = BytePageBuilder::new(); let mut count = 0; let mut resume_after_candidate = None; for ((row_namespace, key), value) in store.range(lower_bound..) { if row_namespace != &request.namespace { break; } if let Some(after) = request.after.as_deref() { if key.as_slice() <= after { continue; } } if !key_matches_range(key, &request.range) { break; } if count < request.limit { resume_after_candidate = Some(key.clone()); values.push(value); } count += 1; if count > request.limit { break; } } let resume_after = (count > request.limit) .then_some(resume_after_candidate) .flatten(); BackendKvValuePage { values: values.finish(), resume_after, } } fn scan_store_entries(store: &Store, request: BackendKvScanRequest) -> BackendKvEntryPage { let start_key = scan_start_key(&request); let lower_bound = (request.namespace.clone(), start_key); let mut keys = BytePageBuilder::new(); let mut values = BytePageBuilder::new(); let mut count = 0; let mut resume_after_candidate = None; for ((row_namespace, key), value) in store.range(lower_bound..) 
{ if row_namespace != &request.namespace { break; } if let Some(after) = request.after.as_deref() { if key.as_slice() <= after { continue; } } if !key_matches_range(key, &request.range) { break; } if count < request.limit { resume_after_candidate = Some(key.clone()); keys.push(key); values.push(value); } count += 1; if count > request.limit { break; } } let resume_after = (count > request.limit) .then_some(resume_after_candidate) .flatten(); BackendKvEntryPage { keys: keys.finish(), values: values.finish(), resume_after, } } fn key_matches_range(key: &[u8], range: &BackendKvScanRange) -> bool { match range { BackendKvScanRange::Prefix(prefix) => key.starts_with(prefix), BackendKvScanRange::Range { start, end } => key >= start.as_slice() && key < end.as_slice(), } } fn scan_start_key(request: &BackendKvScanRequest) -> Vec { let range_start = match &request.range { BackendKvScanRange::Prefix(prefix) => prefix.as_slice(), BackendKvScanRange::Range { start, .. } => start.as_slice(), }; match request.after.as_deref() { Some(after) if after > range_start => after.to_vec(), _ => range_start.to_vec(), } } ================================================ FILE: packages/engine/benches/storage/binary_cas.rs ================================================ use lix_engine::storage_bench::{self, StorageBenchConfig}; use crate::{Args, BenchBackend}; use criterion::{black_box, BatchSize, Criterion}; use tokio::runtime::Runtime; pub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) { let mut group = c.benchmark_group("storage/binary_cas"); group.bench_function("write_blobs_1k/10k", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_binary_cas_write_blobs(config(&args))) .expect("prepare binary_cas/write_blobs"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::binary_cas_write_blobs_prepared( &backend, &fixture, )) .expect("binary_cas/write_blobs succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("read_blob_hit_1k/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::binary_cas_read_blob_hit_prepared( &backend, &fixture, )) .expect("binary_cas/read_blob_hit succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("read_blob_miss_1k/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::binary_cas_read_blob_miss_prepared( &backend, &fixture, )) .expect("binary_cas/read_blob_miss succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("write_duplicate_payload_1k/10k", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_binary_cas_write_duplicate_payload( config(&args), )) .expect("prepare binary_cas/write_duplicate_payload"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::binary_cas_write_blobs_prepared( &backend, &fixture, )) .expect("binary_cas/write_duplicate_payload succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("write_half_duplicate_payload_1k/10k", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on( storage_bench::prepare_binary_cas_write_half_duplicate_payload(config( &args, )), ) .expect("prepare binary_cas/write_half_duplicate_payload"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime 
.block_on(storage_bench::binary_cas_write_blobs_prepared( &backend, &fixture, )) .expect("binary_cas/write_half_duplicate_payload succeeds"), ) }, BatchSize::LargeInput, ) }); for rows in [1, 10, 100, 1_000] { let name = format!("write_blobs_1k/{rows}"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_binary_cas_write_blobs( config(&args).with_rows(rows), )) .expect("prepare binary_cas/write_blobs batch"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::binary_cas_write_blobs_prepared( &backend, &fixture, )) .expect("binary_cas/write_blobs batch succeeds"), ) }, BatchSize::LargeInput, ) }); } for (label, bytes, rows) in [ ("small", 16, 10_000), ("1k", 1024, 10_000), ("16k", 16 * 1024, 1_000), ("128k", 128 * 1024, 100), ] { let name = format!("write_blobs_payload_{label}/{rows}"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_binary_cas_write_blobs( config(&args).with_blob_bytes(bytes).with_rows(rows), )) .expect("prepare binary_cas/write_blobs payload"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::binary_cas_write_blobs_prepared( &backend, &fixture, )) .expect("binary_cas/write_blobs payload succeeds"), ) }, BatchSize::LargeInput, ) }); } group.finish(); } fn prepare_read( runtime: &Runtime, args: Args, ) -> ( std::sync::Arc, lix_engine::storage_bench::BinaryCasReadFixture, ) { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_binary_cas_read( &backend, config(&args), )) .expect("prepare binary_cas/read"); (backend, fixture) } fn config(args: &Args) -> StorageBenchConfig { args.config() } ================================================ FILE: packages/engine/benches/storage/changelog.rs ================================================ use lix_engine::storage_bench::{ self, StorageBenchConfig, StorageBenchKeyPattern, StorageBenchSelectivity, }; use crate::{Args, BenchBackend}; use criterion::{black_box, BatchSize, Criterion}; use tokio::runtime::Runtime; pub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) { let mut group = c.benchmark_group("storage/changelog"); group.bench_function("encode_only/full_row/10k", |b| { b.iter_batched( || { runtime .block_on(storage_bench::prepare_changelog_codec(config(&args))) .expect("prepare changelog/encode_only") }, |fixture| { black_box( runtime .block_on(storage_bench::changelog_encode_only_prepared(&fixture)) .expect("changelog/encode_only succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("decode_only/full_row/10k", |b| { b.iter_batched( || { runtime .block_on(storage_bench::prepare_changelog_codec(config(&args))) .expect("prepare changelog/decode_only") }, |fixture| { black_box( runtime .block_on(storage_bench::changelog_decode_only_prepared(&fixture)) .expect("changelog/decode_only succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("append_changes/10k", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_changelog_append_changes(config( &args, ))) .expect("prepare changelog/append_changes"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_append_changes_prepared( &backend, &fixture, )) .expect("changelog/append_changes succeeds"), ) }, BatchSize::LargeInput, ) }); 
group.bench_function("load_changes_hit/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_load_changes_hit_prepared( &backend, &fixture, )) .expect("changelog/load_changes_hit succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("load_changes_miss/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_load_changes_miss_prepared( &backend, &fixture, )) .expect("changelog/load_changes_miss succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("scan_all/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_scan_all_prepared( &backend, &fixture, )) .expect("changelog/scan_all succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("scan_full_changes/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_scan_full_changes_prepared( &backend, &fixture, )) .expect("changelog/scan_full_changes succeeds"), ) }, BatchSize::LargeInput, ) }); for (label, bytes, rows, row_label) in [("1k", 1024, 10_000, "10k"), ("16k", 16 * 1024, 1_000, "1k")] { let config = config(&args) .with_state_payload_bytes(bytes) .with_rows(rows); let name = format!("scan_full_changes_payload_{label}/{row_label}"); group.bench_function(name, |b| { b.iter_batched( || prepare_read_with(runtime, config), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_scan_full_changes_prepared( &backend, &fixture, )) .expect("changelog/scan_full_changes payload succeeds"), ) }, BatchSize::LargeInput, ) }); } group.bench_function("scan_limit_100/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_scan_limit_100_prepared( &backend, &fixture, )) .expect("changelog/scan_limit_100 succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("scan_change_set/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_scan_change_set_prepared( &backend, &fixture, )) .expect("changelog/scan_change_set succeeds"), ) }, BatchSize::LargeInput, ) }); for rows in [1, 10, 100, 1_000] { let name = format!("append_changes/{rows}"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_changelog_append_changes( config(&args).with_rows(rows), )) .expect("prepare changelog/append_changes batch"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_append_changes_prepared( &backend, &fixture, )) .expect("changelog/append_changes batch succeeds"), ) }, BatchSize::LargeInput, ) }); } for (label, bytes, rows) in [ ("small", 0, 10_000), ("1k", 1024, 10_000), ("16k", 16 * 1024, 1_000), ("128k", 128 * 1024, 100), ] { let name = format!("append_changes_payload_{label}/{rows}"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_changelog_append_changes( config(&args) .with_state_payload_bytes(bytes) .with_rows(rows), )) .expect("prepare changelog/append_changes payload"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime 
.block_on(storage_bench::changelog_append_changes_prepared( &backend, &fixture, )) .expect("changelog/append_changes payload succeeds"), ) }, BatchSize::LargeInput, ) }); } group.bench_function("append_changes_metadata_1k/10k", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_changelog_append_metadata( config(&args).with_state_payload_bytes(1024), )) .expect("prepare changelog/append metadata"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_append_changes_prepared( &backend, &fixture, )) .expect("changelog/append metadata succeeds"), ) }, BatchSize::LargeInput, ) }); for (label, bytes, rows) in [("1k", 1024, 10_000), ("16k", 16 * 1024, 1_000)] { let name = format!("append_changes_shared_payload_{label}/{rows}"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_changelog_append_shared_payload( config(&args) .with_state_payload_bytes(bytes) .with_rows(rows), )) .expect("prepare changelog/append shared payload"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_append_changes_prepared( &backend, &fixture, )) .expect("changelog/append shared payload succeeds"), ) }, BatchSize::LargeInput, ) }); } for (label, bytes, rows) in [("1k", 1024, 10_000), ("16k", 16 * 1024, 1_000)] { let name = format!("append_changes_shared_metadata_{label}/{rows}"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_changelog_append_shared_metadata( config(&args) .with_state_payload_bytes(bytes) .with_rows(rows), )) .expect("prepare changelog/append shared metadata"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_append_changes_prepared( &backend, &fixture, )) .expect("changelog/append shared metadata succeeds"), ) }, BatchSize::LargeInput, ) }); } group.bench_function("append_changes_shared_payload_and_metadata_1k/10k", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on( storage_bench::prepare_changelog_append_shared_payload_and_metadata( config(&args).with_state_payload_bytes(1024), ), ) .expect("prepare changelog/append shared payload and metadata"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_append_changes_prepared( &backend, &fixture, )) .expect("changelog/append shared payload and metadata succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("append_changes_tombstone/10k", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_changelog_append_tombstones(config( &args, ))) .expect("prepare changelog/append tombstones"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_append_changes_prepared( &backend, &fixture, )) .expect("changelog/append tombstones succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("append_changes_composite_entity_id/10k", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on( storage_bench::prepare_changelog_append_composite_entity_ids(config(&args)), ) .expect("prepare changelog/append composite entity ids"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime 
.block_on(storage_bench::changelog_append_changes_prepared( &backend, &fixture, )) .expect("changelog/append composite entity ids succeeds"), ) }, BatchSize::LargeInput, ) }); for (label, selectivity) in [ ("1pct", StorageBenchSelectivity::Percent1), ("10pct", StorageBenchSelectivity::Percent10), ("100pct", StorageBenchSelectivity::Percent100), ] { let name = format!("scan_schema_selectivity_{label}/10k"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_changelog_read_with_selectivity( &backend, config(&args).with_selectivity(selectivity), )) .expect("prepare changelog/scan schema selectivity"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_scan_schema_prepared( &backend, &fixture, selectivity, )) .expect("changelog/scan schema selectivity succeeds"), ) }, BatchSize::LargeInput, ) }); } group.bench_function("scan_entity_history/10k", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_changelog_read_entity_history( &backend, config(&args), )) .expect("prepare changelog/scan entity history"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_scan_entity_history_prepared( &backend, &fixture, )) .expect("changelog/scan entity history succeeds"), ) }, BatchSize::LargeInput, ) }); for (label, key_pattern) in [ ("sequential_keys", StorageBenchKeyPattern::Sequential), ("random_keys", StorageBenchKeyPattern::Random), ] { let name = format!("append_changes_{label}/10k"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_changelog_append_changes( config(&args).with_key_pattern(key_pattern), )) .expect("prepare changelog/append_changes key pattern"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::changelog_append_changes_prepared( &backend, &fixture, )) .expect("changelog/append_changes key pattern succeeds"), ) }, BatchSize::LargeInput, ) }); } group.finish(); } fn prepare_read( runtime: &Runtime, args: Args, ) -> ( std::sync::Arc, lix_engine::storage_bench::ChangelogReadFixture, ) { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_changelog_read( &backend, config(&args), )) .expect("prepare changelog/read"); (backend, fixture) } fn prepare_read_with( runtime: &Runtime, config: StorageBenchConfig, ) -> ( std::sync::Arc, lix_engine::storage_bench::ChangelogReadFixture, ) { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_changelog_read(&backend, config)) .expect("prepare changelog/read variant"); (backend, fixture) } fn config(args: &Args) -> StorageBenchConfig { args.config() } ================================================ FILE: packages/engine/benches/storage/commit_graph.rs ================================================ use lix_engine::storage_bench::{self, StorageBenchConfig}; use crate::{Args, BenchBackend}; use criterion::{black_box, BatchSize, Criterion}; use tokio::runtime::Runtime; pub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) { let mut group = c.benchmark_group("commit_graph"); group.bench_function("change_history_from_commit/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on( 
storage_bench::commit_graph_change_history_from_commit_prepared( &backend, &fixture, ), ) .expect("commit_graph/change_history_from_commit succeeds"), ) }, BatchSize::LargeInput, ) }); group.finish(); } fn prepare_read( runtime: &Runtime, args: Args, ) -> ( std::sync::Arc, lix_engine::storage_bench::CommitGraphReadFixture, ) { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_commit_graph_read( &backend, config(&args), )) .expect("prepare commit_graph/read"); (backend, fixture) } fn config(args: &Args) -> StorageBenchConfig { args.config() } ================================================ FILE: packages/engine/benches/storage/json_store.rs ================================================ use lix_engine::storage_bench::{ self, JsonStorePayloadShape, JsonStoreProjectionShape, JsonStoreReadFixture, }; use crate::{Args, BenchBackend}; use criterion::{black_box, BatchSize, Criterion}; use tokio::runtime::Runtime; pub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, _args: Args) { let mut group = c.benchmark_group("storage/json_store"); for (name, shape, rows) in [ ( "write/small_raw_1k/1000", JsonStorePayloadShape::SmallRaw1k, 1_000, ), ( "write/medium_structured_16k/200", JsonStorePayloadShape::MediumStructured16k, 200, ), ( "write/large_structured_128k/50", JsonStorePayloadShape::LargeStructured128k, 50, ), ( "write/large_array_128k/50", JsonStorePayloadShape::LargeArray128k, 50, ), ] { group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_json_store_write(shape, rows)) .expect("prepare json_store/write"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::json_store_write_prepared(&backend, &fixture)) .expect("json_store/write succeeds"), ) }, BatchSize::LargeInput, ) }); } group.bench_function("write/dedupe_same_16k/1000", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_json_store_write_dedupe( JsonStorePayloadShape::MediumStructured16k, 1_000, )) .expect("prepare json_store/write_dedupe"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::json_store_write_prepared(&backend, &fixture)) .expect("json_store/write_dedupe succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("write/against_base_object_update_1_of_1000/50", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_json_store_base_update_object( &backend, 50, )) .expect("prepare json_store/base_update_object"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on( storage_bench::json_store_write_against_base_object_prepared( &backend, &fixture, ), ) .expect("json_store/base_update_object succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("write/against_base_array_update_1_of_1000/50", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_json_store_base_update_array( &backend, 50, )) .expect("prepare json_store/base_update_array"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::json_store_write_against_base_array_prepared( &backend, &fixture, )) .expect("json_store/base_update_array succeeds"), ) }, BatchSize::LargeInput, ) }); for (name, shape, rows) in [ ( "read_bytes/small_raw_1k/1000", JsonStorePayloadShape::SmallRaw1k, 1_000, ), ( 
"read_bytes/medium_structured_16k/200", JsonStorePayloadShape::MediumStructured16k, 200, ), ( "read_bytes/large_structured_128k/50", JsonStorePayloadShape::LargeStructured128k, 50, ), ( "read_bytes/large_array_128k/50", JsonStorePayloadShape::LargeArray128k, 50, ), ] { group.bench_function(name, |b| { b.iter_batched( || { prepare_read( runtime, shape, rows, JsonStoreProjectionShape::TopLevelTarget, ) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::json_store_read_bytes_prepared( &backend, &fixture, )) .expect("json_store/read_bytes succeeds"), ) }, BatchSize::LargeInput, ) }); } for (name, shape, rows) in [ ( "read_value/small_raw_1k/1000", JsonStorePayloadShape::SmallRaw1k, 1_000, ), ( "read_value/large_structured_128k/50", JsonStorePayloadShape::LargeStructured128k, 50, ), ] { group.bench_function(name, |b| { b.iter_batched( || { prepare_read( runtime, shape, rows, JsonStoreProjectionShape::TopLevelTarget, ) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::json_store_read_value_prepared( &backend, &fixture, )) .expect("json_store/read_value succeeds"), ) }, BatchSize::LargeInput, ) }); } for (name, shape, projection, rows) in [ ( "read_projection/top_level_1_prop_1k/1000", JsonStorePayloadShape::SmallRaw1k, JsonStoreProjectionShape::TopLevelTarget, 1_000, ), ( "read_projection/top_level_1_prop_128k/50", JsonStorePayloadShape::LargeStructured128k, JsonStoreProjectionShape::TopLevelTarget, 50, ), ( "read_projection/top_level_10_props_128k/50", JsonStorePayloadShape::LargeStructured128k, JsonStoreProjectionShape::TopLevelTenProps, 50, ), ( "read_projection/nested_prop_128k/50", JsonStorePayloadShape::LargeStructured128k, JsonStoreProjectionShape::NestedTarget, 50, ), ( "read_projection/array_item_1_of_1000/50", JsonStorePayloadShape::LargeArray128k, JsonStoreProjectionShape::ArrayItem999, 50, ), ( "read_projection/filter_prop_status_128k/50", JsonStorePayloadShape::LargeStructured128k, JsonStoreProjectionShape::Status, 50, ), ] { group.bench_function(name, |b| { b.iter_batched( || prepare_read(runtime, shape, rows, projection), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::json_store_read_projection_prepared( &backend, &fixture, )) .expect("json_store/read_projection succeeds"), ) }, BatchSize::LargeInput, ) }); } group.finish(); } fn prepare_read( runtime: &Runtime, shape: JsonStorePayloadShape, rows: usize, projection: JsonStoreProjectionShape, ) -> ( std::sync::Arc, JsonStoreReadFixture, ) { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_json_store_projection_read( &backend, shape, rows, projection, )) .expect("prepare json_store/read"); (backend, fixture) } ================================================ FILE: packages/engine/benches/storage/main.rs ================================================ use criterion::{criterion_group, criterion_main, Criterion}; use lix_engine::storage_bench::{ StorageBenchConfig, StorageBenchKeyPattern, StorageBenchSelectivity, StorageBenchUpdateFraction, }; mod backend; mod binary_cas; mod changelog; mod commit_graph; mod json_store; mod rocksdb_backend; mod sqlite_backend; mod storage_api; mod tracked_state; mod untracked_state; use backend::BenchBackend; use rocksdb_backend::RocksDbBenchBackend; use sqlite_backend::SqliteBenchBackend; const BENCH_ROWS: usize = 10_000; const BENCH_BLOB_BYTES: usize = 1024; const BENCH_STATE_PAYLOAD_BYTES: usize = 256; #[derive(Debug, Clone, Copy)] pub(crate) struct Args { pub(crate) rows: usize, pub(crate) 
blob_bytes: usize, pub(crate) state_payload_bytes: usize, } impl Default for Args { fn default() -> Self { Self { rows: BENCH_ROWS, blob_bytes: BENCH_BLOB_BYTES, state_payload_bytes: BENCH_STATE_PAYLOAD_BYTES, } } } impl Args { pub(crate) fn config(self) -> StorageBenchConfig { StorageBenchConfig { rows: self.rows, blob_bytes: self.blob_bytes, state_payload_bytes: self.state_payload_bytes, key_pattern: StorageBenchKeyPattern::Sequential, selectivity: StorageBenchSelectivity::Percent100, update_fraction: StorageBenchUpdateFraction::Percent100, } } } fn storage_benches(c: &mut Criterion) { let runtime = tokio::runtime::Builder::new_current_thread() .enable_all() .build() .expect("create tokio runtime for storage benchmarks"); let args = Args::default(); storage_api::bench(c, &runtime, args); tracked_state::bench(c, &runtime, args); tracked_state::bench_fast(c, &runtime, args); untracked_state::bench(c, &runtime, args); changelog::bench(c, &runtime, args); commit_graph::bench(c, &runtime, args); binary_cas::bench(c, &runtime, args); json_store::bench(c, &runtime, args); } criterion_group!(benches, storage_benches); criterion_main!(benches); ================================================ FILE: packages/engine/benches/storage/rocksdb_backend.rs ================================================ use std::collections::{BTreeMap, BTreeSet}; use std::path::Path; use std::sync::Arc; use async_trait::async_trait; use lix_engine::{ Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction, BytePageBuilder, LixError, }; use rocksdb::{Direction, IteratorMode, Options, WriteBatch, DB}; use tempfile::TempDir; #[derive(Clone)] pub(crate) struct RocksDbBenchBackend { inner: Arc, } struct RocksDbBenchInner { db: DB, _dir: TempDir, } pub(crate) struct RocksDbBenchTransaction { inner: Arc, pending: BTreeMap, PendingWrite>, } enum PendingWrite { Put(Vec), Delete, } impl RocksDbBenchBackend { pub(crate) fn new() -> Result { let dir = TempDir::new().map_err(io_error)?; let db = open_rocksdb(dir.path())?; Ok(Self { inner: Arc::new(RocksDbBenchInner { db, _dir: dir }), }) } #[allow(dead_code)] pub(crate) fn path(&self) -> &Path { self.inner._dir.path() } } #[async_trait] impl Backend for RocksDbBenchBackend { async fn begin_read_transaction( &self, ) -> Result, LixError> { Ok(Box::new(RocksDbBenchTransaction { inner: Arc::clone(&self.inner), pending: BTreeMap::new(), })) } async fn begin_write_transaction( &self, ) -> Result, LixError> { Ok(Box::new(RocksDbBenchTransaction { inner: Arc::clone(&self.inner), pending: BTreeMap::new(), })) } } #[async_trait] impl BackendReadTransaction for RocksDbBenchTransaction { async fn get_values( &mut self, request: BackendKvGetRequest, ) -> Result { let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let namespace = group.namespace.clone(); let mut resolved_values = vec![None; group.keys.len()]; let mut committed_keys = Vec::new(); let mut committed_positions = Vec::new(); for (position, key) in group.keys.into_iter().enumerate() { let encoded_key = encode_key(namespace.as_str(), &key); match self.pending.get(&encoded_key) { Some(PendingWrite::Put(value)) => { resolved_values[position] = Some(value.clone()) } Some(PendingWrite::Delete) => {} None => { committed_positions.push(position); 
committed_keys.push(encoded_key); } } } let committed_values = self.inner.db.multi_get(committed_keys); for (position, value) in committed_positions.into_iter().zip(committed_values) { match value.map_err(rocksdb_error)? { Some(value) => resolved_values[position] = Some(value), None => {} } } let mut values = BytePageBuilder::with_capacity(resolved_values.len(), 0); let mut present = Vec::with_capacity(resolved_values.len()); for value in resolved_values { if let Some(value) = value { values.push(value); present.push(true); } else { values.push([]); present.push(false); } } groups.push(BackendKvValueGroup::new( namespace, values.finish(), present, )); } Ok(BackendKvValueBatch { groups }) } async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result { rocksdb_get_exists_many(&self.inner.db, &self.pending, request) } async fn scan_keys( &mut self, request: BackendKvScanRequest, ) -> Result { rocksdb_scan_keys(&self.inner.db, &self.pending, request) } async fn scan_values( &mut self, request: BackendKvScanRequest, ) -> Result { rocksdb_scan_values(&self.inner.db, &self.pending, request) } async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> Result { rocksdb_scan_entries(&self.inner.db, &self.pending, request) } async fn rollback(self: Box) -> Result<(), LixError> { Ok(()) } } #[async_trait] impl BackendWriteTransaction for RocksDbBenchTransaction { async fn write_kv_batch( &mut self, batch: BackendKvWriteBatch, ) -> Result { let mut stats = BackendKvWriteStats::default(); for group in batch.groups { let namespace = group.namespace().to_string(); for index in 0..group.put_count() { let key = group.put_key(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put key") })?; let value = group.put_value(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put value") })?; stats.puts += 1; stats.bytes_written += key.len() + value.len(); self.pending.insert( encode_key(namespace.as_str(), key), PendingWrite::Put(value.to_vec()), ); } for index in 0..group.delete_count() { let key = group.delete_key(index).ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "backend write batch missing delete key", ) })?; stats.deletes += 1; stats.bytes_written += key.len(); self.pending .insert(encode_key(namespace.as_str(), key), PendingWrite::Delete); } } Ok(stats) } async fn commit(self: Box) -> Result<(), LixError> { let mut write_batch = WriteBatch::default(); for (key, write) in self.pending { match write { PendingWrite::Put(value) => write_batch.put(key, value), PendingWrite::Delete => write_batch.delete(key), } } self.inner.db.write(write_batch).map_err(rocksdb_error)?; Ok(()) } } fn open_rocksdb(path: &Path) -> Result { let mut options = Options::default(); options.create_if_missing(true); options.set_use_fsync(false); options.set_write_buffer_size(64 * 1024 * 1024); DB::open(&options, path).map_err(rocksdb_error) } fn rocksdb_get_exists_many( db: &DB, pending: &BTreeMap, PendingWrite>, request: BackendKvGetRequest, ) -> Result { let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let namespace = group.namespace.clone(); let mut exists = vec![false; group.keys.len()]; let mut committed = Vec::new(); for (position, key) in group.keys.into_iter().enumerate() { let encoded_key = encode_key(namespace.as_str(), &key); match pending.get(&encoded_key) { Some(PendingWrite::Put(_)) => exists[position] = true, Some(PendingWrite::Delete) => {} None => { committed.push((encoded_key, 
position)); } } } fill_committed_exists(db, &mut exists, committed)?; groups.push(BackendKvExistsGroup { namespace, exists }); } Ok(BackendKvExistsBatch { groups }) } fn fill_committed_exists( db: &DB, exists: &mut [bool], mut committed: Vec<(Vec, usize)>, ) -> Result<(), LixError> { if committed.is_empty() { return Ok(()); } committed.sort_by(|left, right| left.0.cmp(&right.0)); let mut iter = db.raw_iterator(); iter.seek(&committed[0].0); for (target_key, position) in committed { while iter.valid() { let Some(current_key) = iter.key() else { break; }; if current_key >= target_key.as_slice() { break; } iter.next(); } if !iter.valid() { iter.status().map_err(rocksdb_error)?; break; } if iter .key() .is_some_and(|current_key| current_key == target_key.as_slice()) { exists[position] = true; } } iter.status().map_err(rocksdb_error)?; Ok(()) } fn rocksdb_scan_keys( db: &DB, pending: &BTreeMap, PendingWrite>, request: BackendKvScanRequest, ) -> Result { let bounds = ScanBounds::new(&request); if pending.is_empty() { return rocksdb_scan_committed_keys(db, request, bounds); } let mut merged = BTreeSet::new(); let mut iter = db.raw_iterator(); iter.seek(&bounds.start_encoded); while iter.valid() { let Some(encoded_key) = iter.key() else { break; }; if !bounds.contains_encoded(encoded_key) { break; } let logical_key = decode_key(&request.namespace, encoded_key)?; if !key_after_cursor(&request, &logical_key) { iter.next(); continue; } merged.insert(logical_key); iter.next(); } iter.status().map_err(rocksdb_error)?; for (encoded_key, write) in pending.range(bounds.start_encoded.clone()..bounds.end_encoded.clone()) { if !bounds.contains_encoded(encoded_key) { continue; } let logical_key = decode_key(&request.namespace, encoded_key)?; if !key_in_range(&logical_key, &request.range) || !key_after_cursor(&request, &logical_key) { continue; } match write { PendingWrite::Put(_) => { merged.insert(logical_key); } PendingWrite::Delete => { merged.remove(&logical_key); } } } Ok(key_page_from_iter(merged, request.limit)) } fn rocksdb_scan_values( db: &DB, pending: &BTreeMap, PendingWrite>, request: BackendKvScanRequest, ) -> Result { let bounds = ScanBounds::new(&request); if pending.is_empty() { return rocksdb_scan_committed_values(db, request, bounds); } let mut merged = BTreeMap::new(); for item in db.iterator(IteratorMode::From( &bounds.start_encoded, Direction::Forward, )) { let (encoded_key, value) = item.map_err(rocksdb_error)?; let encoded_key = encoded_key.as_ref(); if !bounds.contains_encoded(encoded_key) { break; } let logical_key = decode_key(&request.namespace, encoded_key)?; if !key_after_cursor(&request, &logical_key) { continue; } merged.insert(logical_key, value.to_vec()); } overlay_pending_values(&mut merged, pending, &request, &bounds)?; Ok(value_page_from_iter(merged, request.limit)) } fn rocksdb_scan_entries( db: &DB, pending: &BTreeMap, PendingWrite>, request: BackendKvScanRequest, ) -> Result { let bounds = ScanBounds::new(&request); if pending.is_empty() { return rocksdb_scan_committed_entries(db, request, bounds); } let mut merged = BTreeMap::new(); for item in db.iterator(IteratorMode::From( &bounds.start_encoded, Direction::Forward, )) { let (key, value) = item.map_err(rocksdb_error)?; let key = key.as_ref(); if !bounds.contains_encoded(key) { break; } let logical_key = decode_key(&request.namespace, key)?; if !key_after_cursor(&request, &logical_key) { continue; } merged.insert(logical_key, value.to_vec()); } overlay_pending_values(&mut merged, pending, &request, &bounds)?; 
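    // Page the merged view (committed RocksDB rows plus this transaction's
    // pending puts and deletes) down to the requested limit.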
Ok(entry_page_from_iter(merged, request.limit)) } struct ScanBounds { start_encoded: Vec, end_encoded: Vec, namespace_prefix: Vec, } impl ScanBounds { fn new(request: &BackendKvScanRequest) -> Self { let start = scan_start_key(request); let start_encoded = encode_key(&request.namespace, &start); let end = scan_end_key(&request.range); let end_encoded = end .as_ref() .map(|end| encode_key(&request.namespace, end)) .unwrap_or_else(|| namespace_end_key(&request.namespace)); let namespace_prefix = namespace_prefix(&request.namespace); Self { start_encoded, end_encoded, namespace_prefix, } } fn contains_encoded(&self, encoded_key: &[u8]) -> bool { encoded_key < self.end_encoded.as_slice() && encoded_key.starts_with(self.namespace_prefix.as_slice()) } } fn rocksdb_scan_committed_keys( db: &DB, request: BackendKvScanRequest, bounds: ScanBounds, ) -> Result { let mut keys = BytePageBuilder::new(); let mut count = 0; let mut resume_after_candidate = None; let mut iter = db.raw_iterator(); iter.seek(&bounds.start_encoded); while iter.valid() { let Some(encoded_key) = iter.key() else { break; }; if !bounds.contains_encoded(encoded_key) { break; } let logical_key = decode_key(&request.namespace, encoded_key)?; if !key_after_cursor(&request, &logical_key) { iter.next(); continue; } if count < request.limit { resume_after_candidate = Some(logical_key.clone()); keys.push(&logical_key); } count += 1; if count > request.limit { break; } iter.next(); } iter.status().map_err(rocksdb_error)?; let resume_after = (count > request.limit) .then_some(resume_after_candidate) .flatten(); Ok(BackendKvKeyPage { keys: keys.finish(), resume_after, }) } fn rocksdb_scan_committed_values( db: &DB, request: BackendKvScanRequest, bounds: ScanBounds, ) -> Result { let mut values = BytePageBuilder::new(); let mut count = 0; let mut resume_after_candidate = None; for item in db.iterator(IteratorMode::From( &bounds.start_encoded, Direction::Forward, )) { let (encoded_key, value) = item.map_err(rocksdb_error)?; let encoded_key = encoded_key.as_ref(); if !bounds.contains_encoded(encoded_key) { break; } let logical_key = decode_key(&request.namespace, encoded_key)?; if !key_after_cursor(&request, &logical_key) { continue; } if count < request.limit { resume_after_candidate = Some(logical_key); values.push(value.as_ref()); } count += 1; if count > request.limit { break; } } let resume_after = (count > request.limit) .then_some(resume_after_candidate) .flatten(); Ok(BackendKvValuePage { values: values.finish(), resume_after, }) } fn rocksdb_scan_committed_entries( db: &DB, request: BackendKvScanRequest, bounds: ScanBounds, ) -> Result { let mut keys = BytePageBuilder::new(); let mut values = BytePageBuilder::new(); let mut count = 0; let mut resume_after_candidate = None; for item in db.iterator(IteratorMode::From( &bounds.start_encoded, Direction::Forward, )) { let (key, value) = item.map_err(rocksdb_error)?; let key = key.as_ref(); if !bounds.contains_encoded(key) { break; } let logical_key = decode_key(&request.namespace, key)?; if !key_after_cursor(&request, &logical_key) { continue; } if count < request.limit { resume_after_candidate = Some(logical_key.clone()); keys.push(&logical_key); values.push(value.as_ref()); } count += 1; if count > request.limit { break; } } let resume_after = (count > request.limit) .then_some(resume_after_candidate) .flatten(); Ok(BackendKvEntryPage { keys: keys.finish(), values: values.finish(), resume_after, }) } fn overlay_pending_values( merged: &mut BTreeMap, Vec>, pending: &BTreeMap, 
PendingWrite>, request: &BackendKvScanRequest, bounds: &ScanBounds, ) -> Result<(), LixError> { for (encoded_key, write) in pending.range(bounds.start_encoded.clone()..bounds.end_encoded.clone()) { if !bounds.contains_encoded(encoded_key) { continue; } let logical_key = decode_key(&request.namespace, encoded_key)?; if !key_in_range(&logical_key, &request.range) || !key_after_cursor(request, &logical_key) { continue; } match write { PendingWrite::Put(value) => { merged.insert(logical_key, value.clone()); } PendingWrite::Delete => { merged.remove(&logical_key); } } } Ok(()) } fn key_page_from_iter( keys_iter: impl IntoIterator>, limit: usize, ) -> BackendKvKeyPage { let mut keys = BytePageBuilder::new(); let mut count = 0; let mut resume_after_candidate = None; for key in keys_iter { if count < limit { resume_after_candidate = Some(key.clone()); keys.push(&key); } count += 1; if count > limit { break; } } let resume_after = (count > limit).then_some(resume_after_candidate).flatten(); BackendKvKeyPage { keys: keys.finish(), resume_after, } } fn value_page_from_iter( values_iter: impl IntoIterator, Vec)>, limit: usize, ) -> BackendKvValuePage { let mut values = BytePageBuilder::new(); let mut count = 0; let mut resume_after_candidate = None; for (key, value) in values_iter { if count < limit { resume_after_candidate = Some(key); values.push(&value); } count += 1; if count > limit { break; } } let resume_after = (count > limit).then_some(resume_after_candidate).flatten(); BackendKvValuePage { values: values.finish(), resume_after, } } fn entry_page_from_iter( entries_iter: impl IntoIterator, Vec)>, limit: usize, ) -> BackendKvEntryPage { let mut keys = BytePageBuilder::new(); let mut values = BytePageBuilder::new(); let mut count = 0; let mut resume_after_candidate = None; for (key, value) in entries_iter { if count < limit { resume_after_candidate = Some(key.clone()); keys.push(&key); values.push(&value); } count += 1; if count > limit { break; } } let resume_after = (count > limit).then_some(resume_after_candidate).flatten(); BackendKvEntryPage { keys: keys.finish(), values: values.finish(), resume_after, } } fn scan_start_key(request: &BackendKvScanRequest) -> Vec { let range_start = match &request.range { BackendKvScanRange::Prefix(prefix) => prefix.as_slice(), BackendKvScanRange::Range { start, .. } => start.as_slice(), }; match request.after.as_deref() { Some(after) if after > range_start => after.to_vec(), _ => range_start.to_vec(), } } fn scan_end_key(range: &BackendKvScanRange) -> Option> { match range { BackendKvScanRange::Prefix(prefix) => prefix_end(prefix), BackendKvScanRange::Range { end, .. 
} => Some(end.clone()), } } fn key_in_range(key: &[u8], range: &BackendKvScanRange) -> bool { match range { BackendKvScanRange::Prefix(prefix) => key.starts_with(prefix), BackendKvScanRange::Range { start, end } => key >= start.as_slice() && key < end.as_slice(), } } fn key_after_cursor(request: &BackendKvScanRequest, key: &[u8]) -> bool { request.after.as_deref().is_none_or(|after| key > after) } fn encode_key(namespace: &str, key: &[u8]) -> Vec { let namespace = namespace.as_bytes(); let len = u32::try_from(namespace.len()).expect("bench namespace fits u32"); let mut encoded = Vec::with_capacity(4 + namespace.len() + key.len()); encoded.extend_from_slice(&len.to_be_bytes()); encoded.extend_from_slice(namespace); encoded.extend_from_slice(key); encoded } fn namespace_prefix(namespace: &str) -> Vec { encode_key(namespace, &[]) } fn namespace_end_key(namespace: &str) -> Vec { let mut end = namespace_prefix(namespace); end.push(0xFF); end } fn decode_key(namespace: &str, encoded: &[u8]) -> Result, LixError> { let prefix = namespace_prefix(namespace); encoded .strip_prefix(prefix.as_slice()) .map(|key| key.to_vec()) .ok_or_else(|| LixError::new("LIX_ERROR_UNKNOWN", "rocksdb bench key prefix mismatch")) } fn prefix_end(prefix: &[u8]) -> Option> { let mut end = prefix.to_vec(); for index in (0..end.len()).rev() { if end[index] != u8::MAX { end[index] += 1; end.truncate(index + 1); return Some(end); } } None } fn rocksdb_error(error: rocksdb::Error) -> LixError { LixError::new( "LIX_ERROR_UNKNOWN", format!("rocksdb bench backend: {error}"), ) } fn io_error(error: std::io::Error) -> LixError { LixError::new( "LIX_ERROR_UNKNOWN", format!("rocksdb bench backend: {error}"), ) } ================================================ FILE: packages/engine/benches/storage/sqlite_backend.rs ================================================ use std::sync::{Arc, Mutex}; use async_trait::async_trait; use lix_engine::{ Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction, BytePageBuilder, LixError, }; use rusqlite::{params, Connection, OptionalExtension}; use std::path::{Path, PathBuf}; use tempfile::TempDir; #[derive(Clone)] pub(crate) struct SqliteBenchBackend { connection: Arc>, #[allow(dead_code)] path: Option>, _temp_dir: Option>, } pub(crate) struct SqliteBenchTransaction { connection: Arc>, finalized: bool, } impl SqliteBenchBackend { pub(crate) fn tempfile() -> Result { let temp_dir = Arc::new(TempDir::new().map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("sqlite bench tempdir: {error}"), ) })?); let path = Arc::new(temp_dir.path().join("bench.sqlite")); let connection = Connection::open(path.as_path()).map_err(sqlite_error)?; configure_connection(&connection)?; Ok(Self { connection: Arc::new(Mutex::new(connection)), path: Some(path), _temp_dir: Some(temp_dir), }) } #[allow(dead_code)] pub(crate) fn path(&self) -> Option<&Path> { self.path.as_deref().map(PathBuf::as_path) } fn lock_connection(&self) -> Result, LixError> { self.connection .lock() .map_err(|_| LixError::new("LIX_ERROR_UNKNOWN", "sqlite bench connection poisoned")) } } fn configure_connection(connection: &Connection) -> Result<(), LixError> { connection .execute_batch( " PRAGMA journal_mode = WAL; PRAGMA synchronous = NORMAL; PRAGMA temp_store = MEMORY; PRAGMA foreign_keys = ON; 
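            -- Single kv table keyed by (namespace, key); WITHOUT ROWID clusters the
            -- rows on that primary key.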
CREATE TABLE kv ( namespace TEXT NOT NULL, key BLOB NOT NULL, value BLOB NOT NULL, PRIMARY KEY (namespace, key) ) WITHOUT ROWID; ", ) .map_err(sqlite_error)?; Ok(()) } #[async_trait] impl Backend for SqliteBenchBackend { async fn begin_read_transaction( &self, ) -> Result, LixError> { let connection = self.lock_connection()?; connection .execute_batch("BEGIN DEFERRED") .map_err(sqlite_error)?; drop(connection); Ok(Box::new(SqliteBenchTransaction { connection: Arc::clone(&self.connection), finalized: false, })) } async fn begin_write_transaction( &self, ) -> Result, LixError> { let connection = self.lock_connection()?; connection .execute_batch("BEGIN IMMEDIATE") .map_err(sqlite_error)?; drop(connection); Ok(Box::new(SqliteBenchTransaction { connection: Arc::clone(&self.connection), finalized: false, })) } } #[async_trait] impl BackendReadTransaction for SqliteBenchTransaction { async fn get_values( &mut self, request: BackendKvGetRequest, ) -> Result { let connection = self.lock_connection()?; let mut statement = connection .prepare_cached("SELECT value FROM kv WHERE namespace = ?1 AND key = ?2") .map_err(sqlite_error)?; let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let namespace = group.namespace.clone(); let mut values = BytePageBuilder::with_capacity(group.keys.len(), 0); let mut present = Vec::with_capacity(group.keys.len()); for key in group.keys { let value = statement .query_row(params![namespace.as_str(), key.as_slice()], |row| { row.get::<_, Vec>(0) }) .optional() .map_err(sqlite_error)?; if let Some(value) = value { values.push(value); present.push(true); } else { values.push([]); present.push(false); } } groups.push(BackendKvValueGroup::new( namespace, values.finish(), present, )); } Ok(BackendKvValueBatch { groups }) } async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result { let connection = self.lock_connection()?; let mut statement = connection .prepare_cached("SELECT 1 FROM kv WHERE namespace = ?1 AND key = ?2") .map_err(sqlite_error)?; let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let namespace = group.namespace.clone(); let mut exists = Vec::with_capacity(group.keys.len()); for key in group.keys { exists.push( statement .query_row(params![namespace.as_str(), key.as_slice()], |_| Ok(())) .optional() .map_err(sqlite_error)? .is_some(), ); } groups.push(BackendKvExistsGroup { namespace, exists }); } Ok(BackendKvExistsBatch { groups }) } async fn scan_keys( &mut self, request: BackendKvScanRequest, ) -> Result { let connection = self.lock_connection()?; sqlite_scan_keys(&connection, request) } async fn scan_values( &mut self, request: BackendKvScanRequest, ) -> Result { let connection = self.lock_connection()?; sqlite_scan_values(&connection, request) } async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> Result { let connection = self.lock_connection()?; sqlite_scan_entries(&connection, request) } async fn rollback(mut self: Box) -> Result<(), LixError> { self.lock_connection()? 
.execute_batch("ROLLBACK") .map_err(sqlite_error)?; self.finalized = true; Ok(()) } } #[async_trait] impl BackendWriteTransaction for SqliteBenchTransaction { async fn write_kv_batch( &mut self, batch: BackendKvWriteBatch, ) -> Result { let connection = self.lock_connection()?; let mut put_statement = connection .prepare_cached( " INSERT INTO kv (namespace, key, value) VALUES (?1, ?2, ?3) ON CONFLICT(namespace, key) DO UPDATE SET value = excluded.value ", ) .map_err(sqlite_error)?; let mut delete_statement = connection .prepare_cached("DELETE FROM kv WHERE namespace = ?1 AND key = ?2") .map_err(sqlite_error)?; let mut stats = BackendKvWriteStats::default(); for group in batch.groups { let namespace = group.namespace().to_string(); for index in 0..group.put_count() { let key = group.put_key(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put key") })?; let value = group.put_value(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put value") })?; put_statement .execute(params![namespace.as_str(), key, value]) .map_err(sqlite_error)?; stats.puts += 1; stats.bytes_written += key.len() + value.len(); } for index in 0..group.delete_count() { let key = group.delete_key(index).ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "backend write batch missing delete key", ) })?; delete_statement .execute(params![namespace.as_str(), key]) .map_err(sqlite_error)?; stats.deletes += 1; stats.bytes_written += key.len(); } } Ok(stats) } async fn commit(mut self: Box) -> Result<(), LixError> { self.lock_connection()? .execute_batch("COMMIT") .map_err(sqlite_error)?; self.finalized = true; Ok(()) } } impl SqliteBenchTransaction { fn lock_connection(&self) -> Result, LixError> { self.connection .lock() .map_err(|_| LixError::new("LIX_ERROR_UNKNOWN", "sqlite bench connection poisoned")) } } impl Drop for SqliteBenchTransaction { fn drop(&mut self) { if !self.finalized { if let Ok(connection) = self.connection.lock() { let _ = connection.execute_batch("ROLLBACK"); } } } } fn sqlite_scan_keys( connection: &Connection, request: BackendKvScanRequest, ) -> Result { let start = scan_start_key(&request); let end = scan_end_key(&request.range); let limit = sqlite_fetch_limit(request.limit)?; let mut statement = connection .prepare_cached( " SELECT key FROM kv WHERE namespace = ?1 AND (?2 IS NULL OR key > ?2) AND key >= ?3 AND (?4 IS NULL OR key < ?4) ORDER BY key LIMIT ?5 ", ) .map_err(sqlite_error)?; let mut cursor = statement .query(params![ request.namespace.as_str(), request.after.as_deref(), start.as_slice(), end.as_deref(), limit, ]) .map_err(sqlite_error)?; let mut keys = BytePageBuilder::new(); let mut count = 0; let mut resume_after_candidate = None; while let Some(row) = cursor.next().map_err(sqlite_error)? 
{ let key = row.get::<_, Vec>(0).map_err(sqlite_error)?; if count < request.limit { resume_after_candidate = Some(key.clone()); keys.push(&key); } count += 1; } let resume_after = (count > request.limit) .then_some(resume_after_candidate) .flatten(); Ok(BackendKvKeyPage { keys: keys.finish(), resume_after, }) } fn sqlite_scan_values( connection: &Connection, request: BackendKvScanRequest, ) -> Result { let start = scan_start_key(&request); let end = scan_end_key(&request.range); let limit = sqlite_fetch_limit(request.limit)?; let mut statement = connection .prepare_cached( " SELECT key, value FROM kv WHERE namespace = ?1 AND (?2 IS NULL OR key > ?2) AND key >= ?3 AND (?4 IS NULL OR key < ?4) ORDER BY key LIMIT ?5 ", ) .map_err(sqlite_error)?; let mut cursor = statement .query(params![ request.namespace.as_str(), request.after.as_deref(), start.as_slice(), end.as_deref(), limit, ]) .map_err(sqlite_error)?; let mut values = BytePageBuilder::new(); let mut count = 0; let mut resume_after_candidate = None; while let Some(row) = cursor.next().map_err(sqlite_error)? { if count < request.limit { resume_after_candidate = Some(row.get::<_, Vec>(0).map_err(sqlite_error)?); let value = row.get::<_, Vec>(1).map_err(sqlite_error)?; values.push(&value); } count += 1; } let resume_after = (count > request.limit) .then_some(resume_after_candidate) .flatten(); Ok(BackendKvValuePage { values: values.finish(), resume_after, }) } fn sqlite_scan_entries( connection: &Connection, request: BackendKvScanRequest, ) -> Result { let start = scan_start_key(&request); let end = scan_end_key(&request.range); let limit = sqlite_fetch_limit(request.limit)?; let mut statement = connection .prepare_cached( " SELECT key, value FROM kv WHERE namespace = ?1 AND (?2 IS NULL OR key > ?2) AND key >= ?3 AND (?4 IS NULL OR key < ?4) ORDER BY key LIMIT ?5 ", ) .map_err(sqlite_error)?; let mut cursor = statement .query(params![ request.namespace.as_str(), request.after.as_deref(), start.as_slice(), end.as_deref(), limit, ]) .map_err(sqlite_error)?; let mut keys = BytePageBuilder::new(); let mut values = BytePageBuilder::new(); let mut count = 0; let mut resume_after_candidate = None; while let Some(row) = cursor.next().map_err(sqlite_error)? { let key = row.get::<_, Vec>(0).map_err(sqlite_error)?; if count < request.limit { let value = row.get::<_, Vec>(1).map_err(sqlite_error)?; resume_after_candidate = Some(key.clone()); keys.push(&key); values.push(&value); } count += 1; } let resume_after = (count > request.limit) .then_some(resume_after_candidate) .flatten(); Ok(BackendKvEntryPage { keys: keys.finish(), values: values.finish(), resume_after, }) } fn sqlite_fetch_limit(limit: usize) -> Result { if limit == usize::MAX { return Ok(i64::MAX); } let fetch_limit = limit.checked_add(1).ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "storage scan limit overflow while checking for next page", ) })?; i64::try_from(fetch_limit).map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "storage scan limit does not fit into sqlite i64", ) }) } fn scan_start_key(request: &BackendKvScanRequest) -> Vec { let range_start = match &request.range { BackendKvScanRange::Prefix(prefix) => prefix.as_slice(), BackendKvScanRange::Range { start, .. } => start.as_slice(), }; match request.after.as_deref() { Some(after) if after > range_start => after.to_vec(), _ => range_start.to_vec(), } } fn scan_end_key(range: &BackendKvScanRange) -> Option> { match range { BackendKvScanRange::Prefix(prefix) => prefix_end(prefix), BackendKvScanRange::Range { end, .. 
} => Some(end.clone()), } } fn prefix_end(prefix: &[u8]) -> Option> { let mut end = prefix.to_vec(); for index in (0..end.len()).rev() { if end[index] != u8::MAX { end[index] += 1; end.truncate(index + 1); return Some(end); } } None } fn sqlite_error(error: rusqlite::Error) -> LixError { LixError::new( "LIX_ERROR_UNKNOWN", format!("sqlite bench backend: {error}"), ) } ================================================ FILE: packages/engine/benches/storage/storage_api.rs ================================================ use std::sync::Arc; use criterion::{black_box, BatchSize, Criterion}; use lix_engine::storage_bench::{self, StorageApiFixture, StorageBenchSelectivity}; use lix_engine::Backend; use tokio::runtime::Runtime; use crate::{Args, BenchBackend, RocksDbBenchBackend, SqliteBenchBackend}; type BackendFactory = fn() -> Arc; #[derive(Clone, Copy)] struct BackendProfile { name: &'static str, create: BackendFactory, } pub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) { for profile in [ BackendProfile { name: "in_memory", create: in_memory_backend, }, BackendProfile { name: "sqlite_tempfile", create: sqlite_tempfile_backend, }, BackendProfile { name: "rocksdb_tempdir", create: rocksdb_backend, }, ] { bench_backend(c, runtime, args, profile); } } fn bench_backend(c: &mut Criterion, runtime: &Runtime, args: Args, profile: BackendProfile) { let mut group = c.benchmark_group(format!("storage/api/{}", profile.name)); for rows in [1usize, 10, 100, 1_000, args.rows] { group.bench_function( format!("write_kv_batch_put/{rows_label}", rows_label = label(rows)), |b| { b.iter_batched( || (profile.create)(), |backend| { black_box( runtime .block_on(storage_bench::storage_api_write_kv_batch_puts( backend, rows, )) .expect("storage/api write_kv_batch_put succeeds"), ) }, BatchSize::LargeInput, ) }, ); } group.bench_function("write_kv_batch_mixed_put_delete/10k", |b| { b.iter_batched( || (profile.create)(), |backend| { black_box( runtime .block_on(storage_bench::storage_api_write_kv_batch_mixed_put_delete( backend, args.rows, )) .expect("storage/api write_kv_batch_mixed_put_delete succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("write_kv_batch_multi_namespace/10k", |b| { b.iter_batched( || (profile.create)(), |backend| { black_box( runtime .block_on(storage_bench::storage_api_write_kv_batch_multi_namespace( backend, args.rows, )) .expect("storage/api write_kv_batch_multi_namespace succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("write_kv_batch_duplicate_keys/10k", |b| { b.iter_batched( || (profile.create)(), |backend| { black_box( runtime .block_on(storage_bench::storage_api_write_kv_batch_duplicate_keys( backend, args.rows, )) .expect("storage/api write_kv_batch_duplicate_keys succeeds"), ) }, BatchSize::LargeInput, ) }); for (label, rows, value_bytes) in [ ("64b", args.rows, 64usize), ("1k", args.rows, 1_024), ("16k", 1_000, 16 * 1024), ("128k", 100, 128 * 1024), ] { group.bench_function(format!("write_kv_batch_value_size/{label}"), |b| { b.iter_batched( || (profile.create)(), |backend| { black_box( runtime .block_on(storage_bench::storage_api_write_kv_batch_value_size( backend, rows, value_bytes, )) .expect("storage/api write_kv_batch_value_size succeeds"), ) }, BatchSize::LargeInput, ) }); } for rows in [1usize, 100, args.rows] { group.bench_function( format!( "transaction_write_and_commit/{rows_label}", rows_label = label(rows) ), |b| { b.iter_batched( || (profile.create)(), |backend| { black_box( runtime 
.block_on(storage_bench::storage_api_write_and_commit( backend, rows, )) .expect("storage/api transaction_write_and_commit succeeds"), ) }, BatchSize::LargeInput, ) }, ); } group.bench_function("transaction_rollback_after_write/10k", |b| { b.iter_batched( || (profile.create)(), |backend| { black_box( runtime .block_on(storage_bench::storage_api_rollback_after_write( backend, args.rows, )) .expect("storage/api transaction_rollback_after_write succeeds"), ) }, BatchSize::LargeInput, ) }); for reads in [100usize, 1_000, args.rows] { group.bench_function( format!("get_values_hit/{reads_label}", reads_label = label(reads)), |b| { b.iter_batched( || prepare_read(runtime, args.rows, profile.create), |fixture| { black_box( runtime .block_on(storage_bench::storage_api_get_values_hits_prepared( &fixture, reads, )) .expect("storage/api get_values_hit succeeds"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function( format!("exists_many/{reads_label}", reads_label = label(reads)), |b| { b.iter_batched( || prepare_read(runtime, args.rows, profile.create), |fixture| { black_box( runtime .block_on(storage_bench::storage_api_exists_many_prepared( &fixture, reads, )) .expect("storage/api exists_many succeeds"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function( format!("get_values_miss/{reads_label}", reads_label = label(reads)), |b| { b.iter_batched( || prepare_read(runtime, args.rows, profile.create), |fixture| { black_box( runtime .block_on(storage_bench::storage_api_get_values_misses_prepared( &fixture, reads, )) .expect("storage/api get_values_miss succeeds"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function( format!( "get_values_mixed_hit_miss/{reads_label}", reads_label = label(reads) ), |b| { b.iter_batched( || prepare_read(runtime, args.rows, profile.create), |fixture| { black_box( runtime .block_on( storage_bench::storage_api_get_values_mixed_hit_miss_prepared( &fixture, reads, ), ) .expect("storage/api get_values_mixed_hit_miss succeeds"), ) }, BatchSize::LargeInput, ) }, ); group.bench_function( format!( "get_values_duplicate_keys/{reads_label}", reads_label = label(reads) ), |b| { b.iter_batched( || prepare_read(runtime, args.rows, profile.create), |fixture| { black_box( runtime .block_on( storage_bench::storage_api_get_values_duplicate_keys_prepared( &fixture, reads, ), ) .expect("storage/api get_values_duplicate_keys succeeds"), ) }, BatchSize::LargeInput, ) }, ); } group.bench_function("get_values_multi_namespace/10k", |b| { b.iter_batched( || (profile.create)(), |backend| { black_box( runtime .block_on(storage_bench::storage_api_get_values_multi_namespace( backend, args.rows, )) .expect("storage/api get_values_multi_namespace succeeds"), ) }, BatchSize::LargeInput, ) }); for limit in [100usize, 1_000, args.rows] { group.bench_function( format!("scan_keys_prefix/{limit_label}", limit_label = label(limit)), |b| { b.iter_batched( || prepare_read(runtime, args.rows, profile.create), |fixture| { black_box( runtime .block_on(storage_bench::storage_api_scan_keys_prefix_prepared( &fixture, limit, )) .expect("storage/api scan_keys_prefix succeeds"), ) }, BatchSize::LargeInput, ) }, ); } group.bench_function("scan_keys_after_pages/10k", |b| { b.iter_batched( || prepare_read(runtime, args.rows, profile.create), |fixture| { black_box( runtime .block_on(storage_bench::storage_api_scan_keys_after_pages_prepared( &fixture, 100, )) .expect("storage/api scan_keys_after_pages succeeds"), ) }, BatchSize::LargeInput, ) }); 
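    // The next case reads only the first 100 keys of the 10k-row prefix fixture to
    // measure how an early pagination cut-off behaves on each backend.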
group.bench_function("scan_keys_small_limit_of_large_range/100_of_10k", |b| { b.iter_batched( || prepare_read(runtime, args.rows, profile.create), |fixture| { black_box( runtime .block_on(storage_bench::storage_api_scan_keys_prefix_prepared( &fixture, 100, )) .expect("storage/api scan_keys_small_limit_of_large_range succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("scan_keys_empty_range/10k", |b| { b.iter_batched( || prepare_read(runtime, args.rows, profile.create), |fixture| { black_box( runtime .block_on(storage_bench::storage_api_scan_keys_empty_range_prepared( &fixture, )) .expect("storage/api scan_keys_empty_range succeeds"), ) }, BatchSize::LargeInput, ) }); for (label, selectivity) in [ ("1pct", StorageBenchSelectivity::Percent1), ("10pct", StorageBenchSelectivity::Percent10), ("100pct", StorageBenchSelectivity::Percent100), ] { group.bench_function(format!("scan_keys_prefix_selectivity_{label}/10k"), |b| { b.iter_batched( || prepare_selective_scan(runtime, args.rows, selectivity, profile.create), |fixture| { black_box( runtime .block_on( storage_bench::storage_api_scan_keys_selective_prefix_prepared( &fixture, selectivity, ), ) .expect("storage/api scan_keys_prefix_selectivity succeeds"), ) }, BatchSize::LargeInput, ) }); } group.bench_function("transaction_commit_empty", |b| { b.iter_batched( || (profile.create)(), |backend| { black_box( runtime .block_on(storage_bench::storage_api_transaction_commit_empty(backend)) .expect("storage/api transaction_commit_empty succeeds"), ) }, BatchSize::SmallInput, ) }); group.finish(); } fn prepare_read( runtime: &Runtime, rows: usize, create_backend: BackendFactory, ) -> StorageApiFixture { let backend = create_backend(); runtime .block_on(storage_bench::prepare_storage_api_read(backend, rows)) .expect("prepare storage/api read fixture") } fn prepare_selective_scan( runtime: &Runtime, rows: usize, selectivity: StorageBenchSelectivity, create_backend: BackendFactory, ) -> StorageApiFixture { let backend = create_backend(); runtime .block_on(storage_bench::prepare_storage_api_selective_scan( backend, rows, selectivity, )) .expect("prepare storage/api selective scan fixture") } fn in_memory_backend() -> Arc { BenchBackend::new() } fn sqlite_tempfile_backend() -> Arc { Arc::new(SqliteBenchBackend::tempfile().expect("create sqlite tempfile bench backend")) } fn rocksdb_backend() -> Arc { Arc::new(RocksDbBenchBackend::new().expect("create rocksdb bench backend")) } fn label(rows: usize) -> String { if rows >= 1_000 { format!("{}k", rows / 1_000) } else { rows.to_string() } } ================================================ FILE: packages/engine/benches/storage/tracked_state.rs ================================================ use lix_engine::storage_bench::{ self, StorageBenchConfig, StorageBenchKeyPattern, StorageBenchSelectivity, StorageBenchUpdateFraction, }; use crate::{Args, BenchBackend}; use criterion::{black_box, BatchSize, Criterion}; use tokio::runtime::Runtime; pub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) { let mut group = c.benchmark_group("storage/tracked_state"); group.bench_function("write_root/10k", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_write_root(config( &args, ))) .expect("prepare tracked_state/write_root"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_write_root_prepared( &backend, &fixture, )) .expect("tracked_state/write_root succeeds"), ) 
}, BatchSize::LargeInput, ) }); group.bench_function("read_point_hit/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_read_point_hit_prepared( &backend, &fixture, )) .expect("tracked_state/read_point_hit succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("read_point_miss/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_read_point_miss_prepared( &backend, &fixture, )) .expect("tracked_state/read_point_miss succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("scan_all/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_all_prepared( &backend, &fixture, )) .expect("tracked_state/scan_all succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("scan_keys_only/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_keys_only_prepared( &backend, &fixture, )) .expect("tracked_state/scan_keys_only succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("scan_headers_only/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_headers_only_prepared( &backend, &fixture, )) .expect("tracked_state/scan_headers_only succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("scan_full_rows/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_full_rows_prepared( &backend, &fixture, )) .expect("tracked_state/scan_full_rows succeeds"), ) }, BatchSize::LargeInput, ) }); for (label, bytes, rows, row_label) in [("1k", 1024, 10_000, "10k"), ("16k", 16 * 1024, 1_000, "1k")] { let config = config(&args) .with_state_payload_bytes(bytes) .with_rows(rows); let name = format!("scan_keys_only_payload_{label}/{row_label}"); group.bench_function(name, |b| { b.iter_batched( || prepare_read_with(runtime, args, config), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_keys_only_prepared( &backend, &fixture, )) .expect("tracked_state/scan_keys_only payload succeeds"), ) }, BatchSize::LargeInput, ) }); let name = format!("scan_headers_only_payload_{label}/{row_label}"); group.bench_function(name, |b| { b.iter_batched( || prepare_read_with(runtime, args, config), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_headers_only_prepared( &backend, &fixture, )) .expect("tracked_state/scan_headers_only payload succeeds"), ) }, BatchSize::LargeInput, ) }); let name = format!("scan_full_rows_payload_{label}/{row_label}"); group.bench_function(name, |b| { b.iter_batched( || prepare_read_with(runtime, args, config), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_full_rows_prepared( &backend, &fixture, )) .expect("tracked_state/scan_full_rows payload succeeds"), ) }, BatchSize::LargeInput, ) }); } group.bench_function("scan_schema/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_schema_prepared( &backend, &fixture, )) .expect("tracked_state/scan_schema succeeds"), ) }, BatchSize::LargeInput, ) }); 
group.bench_function("scan_file/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_file_prepared( &backend, &fixture, )) .expect("tracked_state/scan_file succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("update_existing/10k", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_update( &backend, config(&args), )) .expect("prepare tracked_state/update_existing"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_update_existing_prepared( &backend, &fixture, )) .expect("tracked_state/update_existing succeeds"), ) }, BatchSize::LargeInput, ) }); for rows in [1, 10, 100, 1_000] { let name = format!("write_root/{rows}"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_write_root( config(&args).with_rows(rows), )) .expect("prepare tracked_state/write_root batch"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_write_root_prepared( &backend, &fixture, )) .expect("tracked_state/write_root batch succeeds"), ) }, BatchSize::LargeInput, ) }); } for (label, bytes, rows) in [ ("small", 0, 10_000), ("1k", 1024, 10_000), ("16k", 16 * 1024, 1_000), ("128k", 128 * 1024, 100), ] { let name = format!("write_root_payload_{label}/{rows}"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_write_root( config(&args) .with_state_payload_bytes(bytes) .with_rows(rows), )) .expect("prepare tracked_state/write_root payload"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_write_root_prepared( &backend, &fixture, )) .expect("tracked_state/write_root payload succeeds"), ) }, BatchSize::LargeInput, ) }); } for (label, key_pattern) in [ ("sequential_keys", StorageBenchKeyPattern::Sequential), ("random_keys", StorageBenchKeyPattern::Random), ] { let name = format!("write_root_{label}/10k"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_write_root( config(&args).with_key_pattern(key_pattern), )) .expect("prepare tracked_state/write_root key pattern"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_write_root_prepared( &backend, &fixture, )) .expect("tracked_state/write_root key pattern succeeds"), ) }, BatchSize::LargeInput, ) }); } for (label, selectivity) in [ ("1pct", StorageBenchSelectivity::Percent1), ("10pct", StorageBenchSelectivity::Percent10), ("100pct", StorageBenchSelectivity::Percent100), ] { let name = format!("scan_schema_selectivity_{label}/10k"); group.bench_function(name, |b| { b.iter_batched( || prepare_read_with(runtime, args, config(&args).with_selectivity(selectivity)), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_schema_selective_prepared( &backend, &fixture, )) .expect("tracked_state/scan_schema selectivity succeeds"), ) }, BatchSize::LargeInput, ) }); } for (label, selectivity) in [ ("1pct", StorageBenchSelectivity::Percent1), ("10pct", StorageBenchSelectivity::Percent10), ] { let name = 
format!("scan_file_selectivity_payload_1k_{label}/10k"); group.bench_function(name, |b| { b.iter_batched( || { prepare_read_file_selective_with( runtime, args, config(&args) .with_state_payload_bytes(1024) .with_selectivity(selectivity), ) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_file_selective_prepared( &backend, &fixture, )) .expect("tracked_state/scan_file payload selectivity succeeds"), ) }, BatchSize::LargeInput, ) }); } for (label, selectivity) in [ ("1pct", StorageBenchSelectivity::Percent1), ("10pct", StorageBenchSelectivity::Percent10), ("100pct", StorageBenchSelectivity::Percent100), ] { let name = format!("scan_file_selectivity_{label}/10k"); group.bench_function(name, |b| { b.iter_batched( || { prepare_read_file_selective_with( runtime, args, config(&args).with_selectivity(selectivity), ) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_scan_file_selective_prepared( &backend, &fixture, )) .expect("tracked_state/scan_file selectivity succeeds"), ) }, BatchSize::LargeInput, ) }); } for (label, selectivity) in [ ("1pct", StorageBenchSelectivity::Percent1), ("10pct", StorageBenchSelectivity::Percent10), ("100pct", StorageBenchSelectivity::Percent100), ] { let name = format!("scan_file_header_selectivity_{label}/10k"); group.bench_function(name, |b| { b.iter_batched( || { prepare_read_file_selective_with( runtime, args, config(&args).with_selectivity(selectivity), ) }, |(backend, fixture)| { black_box( runtime .block_on( storage_bench::tracked_state_scan_file_header_selective_prepared( &backend, &fixture, ), ) .expect("tracked_state/scan_file header selectivity succeeds"), ) }, BatchSize::LargeInput, ) }); } for (label, selectivity) in [ ("1pct", StorageBenchSelectivity::Percent1), ("10pct", StorageBenchSelectivity::Percent10), ] { let name = format!("scan_file_header_selectivity_payload_1k_{label}/10k"); group.bench_function(name, |b| { b.iter_batched( || { prepare_read_file_selective_with( runtime, args, config(&args) .with_state_payload_bytes(1024) .with_selectivity(selectivity), ) }, |(backend, fixture)| { black_box( runtime .block_on( storage_bench::tracked_state_scan_file_header_selective_prepared( &backend, &fixture, ), ) .expect("tracked_state/scan_file header payload selectivity succeeds"), ) }, BatchSize::LargeInput, ) }); } for rows in [1_000, 10_000, 100_000] { let name = format!("read_point_hit_100_reads/{rows}"); group.bench_function(name, |b| { b.iter_batched( || prepare_read_with(runtime, args, config(&args).with_rows(rows)), |(backend, fixture)| { black_box( runtime .block_on( storage_bench::tracked_state_read_point_hit_constant_prepared( &backend, &fixture, 100, ), ) .expect("tracked_state/read_point_hit scaling succeeds"), ) }, BatchSize::LargeInput, ) }); } for (label, fraction) in [ ( "update_10pct_existing", StorageBenchUpdateFraction::Percent10, ), ( "update_all_existing", StorageBenchUpdateFraction::Percent100, ), ] { let name = format!("{label}/10k"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_update( &backend, config(&args).with_update_fraction(fraction), )) .expect("prepare tracked_state/update shape"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_update_existing_prepared( &backend, &fixture, )) .expect("tracked_state/update shape succeeds"), ) }, BatchSize::LargeInput, ) }); } for rows in [10_000, 
100_000] { let name = format!("update_1_existing/{rows}"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_update_rows( &backend, config(&args).with_rows(rows), 1, )) .expect("prepare tracked_state/update_1_existing"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_update_existing_prepared( &backend, &fixture, )) .expect("tracked_state/update_1_existing succeeds"), ) }, BatchSize::LargeInput, ) }); } for (label, rows, payload_bytes) in [ ("partial_snapshot_update_1_payload_1k", 100_000, 1024), ("partial_snapshot_update_1_payload_16k", 10_000, 16 * 1024), ] { let name = format!("{label}/{rows}"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on( storage_bench::prepare_tracked_state_partial_snapshot_update_rows( &backend, config(&args) .with_rows(rows) .with_state_payload_bytes(payload_bytes), 1, ), ) .expect("prepare tracked_state/partial_snapshot_update"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_update_existing_prepared( &backend, &fixture, )) .expect("tracked_state/partial_snapshot_update succeeds"), ) }, BatchSize::LargeInput, ) }); } group.bench_function("append_new_child_commit/10k", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_append_child( &backend, config(&args), )) .expect("prepare tracked_state/append_new_child_commit"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_update_existing_prepared( &backend, &fixture, )) .expect("tracked_state/append_new_child_commit succeeds"), ) }, BatchSize::LargeInput, ) }); for rows in [10_000, 100_000] { let name = format!("append_1_new_child_commit/{rows}"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_append_child_rows( &backend, config(&args).with_rows(rows), 1, )) .expect("prepare tracked_state/append_1_new_child_commit"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_update_existing_prepared( &backend, &fixture, )) .expect("tracked_state/append_1_new_child_commit succeeds"), ) }, BatchSize::LargeInput, ) }); } for (label, rows) in [("delete_1", 1), ("delete_10pct", args.rows / 10)] { let name = format!("{label}/10k"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_tombstone_rows( &backend, config(&args), rows, )) .expect("prepare tracked_state/delete tombstones"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_update_existing_prepared( &backend, &fixture, )) .expect("tracked_state/delete tombstones succeeds"), ) }, BatchSize::LargeInput, ) }); } group.bench_function("diff_equal/10k", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_diff_equal( &backend, config(&args), )) .expect("prepare tracked_state/diff_equal"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_diff_commits_prepared( &backend, &fixture, )) .expect("tracked_state/diff_equal 
succeeds"), ) }, BatchSize::LargeInput, ) }); for (label, changed_rows) in [ ("diff_update_1", 1), ("diff_update_10pct", args.rows / 10), ("diff_delete_1", 1), ("diff_delete_10pct", args.rows / 10), ] { let name = format!("{label}/10k"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let config = config(&args); let fixture = if label.starts_with("diff_delete") { runtime .block_on(storage_bench::prepare_tracked_state_diff_tombstone_rows( &backend, config, changed_rows, )) .expect("prepare tracked_state/diff_delete") } else { runtime .block_on(storage_bench::prepare_tracked_state_diff_update_rows( &backend, config, changed_rows, )) .expect("prepare tracked_state/diff_update") }; (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_diff_commits_prepared( &backend, &fixture, )) .expect("tracked_state/diff shape succeeds"), ) }, BatchSize::LargeInput, ) }); } group.finish(); } pub(crate) fn bench_fast(c: &mut Criterion, runtime: &Runtime, args: Args) { let mut group = c.benchmark_group("storage/tracked_state_fast"); group.bench_function("write_root_payload_small/10k", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_write_root( config(&args).with_state_payload_bytes(0), )) .expect("prepare tracked_state_fast/write_root_payload_small"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_write_root_prepared( &backend, &fixture, )) .expect("tracked_state_fast/write_root_payload_small succeeds"), ) }, BatchSize::LargeInput, ) }); for (label, bytes, rows) in [("1k", 1024, 10_000), ("16k", 16 * 1024, 1_000)] { let name = format!("write_root_payload_{label}/{rows}"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_write_root( config(&args) .with_state_payload_bytes(bytes) .with_rows(rows), )) .expect("prepare tracked_state_fast/write_root payload"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_write_root_prepared( &backend, &fixture, )) .expect("tracked_state_fast/write_root payload succeeds"), ) }, BatchSize::LargeInput, ) }); } for name in [ "scan_keys_only_payload_1k/10k", "scan_headers_only_payload_1k/10k", "scan_full_rows_payload_1k/10k", ] { group.bench_function(name, |b| { b.iter_batched( || prepare_read_with(runtime, args, config(&args).with_state_payload_bytes(1024)), |(backend, fixture)| { let result = match name { "scan_keys_only_payload_1k/10k" => { runtime.block_on(storage_bench::tracked_state_scan_keys_only_prepared( &backend, &fixture, )) } "scan_headers_only_payload_1k/10k" => runtime.block_on( storage_bench::tracked_state_scan_headers_only_prepared( &backend, &fixture, ), ), "scan_full_rows_payload_1k/10k" => { runtime.block_on(storage_bench::tracked_state_scan_full_rows_prepared( &backend, &fixture, )) } _ => unreachable!("tracked_state_fast payload scan name is static"), }; black_box(result.expect("tracked_state_fast payload scan succeeds")) }, BatchSize::LargeInput, ) }); } group.bench_function("scan_file_header_selectivity_payload_1k_10pct/10k", |b| { b.iter_batched( || { prepare_read_file_selective_with( runtime, args, config(&args) .with_state_payload_bytes(1024) .with_selectivity(StorageBenchSelectivity::Percent10), ) }, |(backend, fixture)| { black_box( runtime .block_on( 
storage_bench::tracked_state_scan_file_header_selective_prepared( &backend, &fixture, ), ) .expect("tracked_state_fast/file header scan succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("read_point_hit_100_reads/10k", |b| { b.iter_batched( || prepare_read_with(runtime, args, config(&args).with_rows(10_000)), |(backend, fixture)| { black_box( runtime .block_on( storage_bench::tracked_state_read_point_hit_constant_prepared( &backend, &fixture, 100, ), ) .expect("tracked_state_fast/read_point_hit_100_reads succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("update_1_existing/10k", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_update_rows( &backend, config(&args).with_rows(10_000), 1, )) .expect("prepare tracked_state_fast/update_1_existing"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_update_existing_prepared( &backend, &fixture, )) .expect("tracked_state_fast/update_1_existing succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("partial_snapshot_update_1_payload_1k/10k", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on( storage_bench::prepare_tracked_state_partial_snapshot_update_rows( &backend, config(&args) .with_rows(10_000) .with_state_payload_bytes(1024), 1, ), ) .expect("prepare tracked_state_fast/partial_snapshot_update"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::tracked_state_update_existing_prepared( &backend, &fixture, )) .expect("tracked_state_fast/partial_snapshot_update succeeds"), ) }, BatchSize::LargeInput, ) }); group.finish(); } fn prepare_read( runtime: &Runtime, args: Args, ) -> ( std::sync::Arc, lix_engine::storage_bench::TrackedStateReadFixture, ) { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_read( &backend, config(&args), )) .expect("prepare tracked_state/read"); (backend, fixture) } fn prepare_read_with( runtime: &Runtime, args: Args, config: StorageBenchConfig, ) -> ( std::sync::Arc, lix_engine::storage_bench::TrackedStateReadFixture, ) { let _ = args; let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_read(&backend, config)) .expect("prepare tracked_state/read variant"); (backend, fixture) } fn prepare_read_file_selective_with( runtime: &Runtime, args: Args, config: StorageBenchConfig, ) -> ( std::sync::Arc, lix_engine::storage_bench::TrackedStateReadFixture, ) { let _ = args; let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_tracked_state_read_file_selective( &backend, config, )) .expect("prepare tracked_state/read file-selective variant"); (backend, fixture) } fn config(args: &Args) -> StorageBenchConfig { args.config() } ================================================ FILE: packages/engine/benches/storage/untracked_state.rs ================================================ use lix_engine::storage_bench::{ self, StorageBenchConfig, StorageBenchKeyPattern, StorageBenchSelectivity, StorageBenchUpdateFraction, }; use crate::{Args, BenchBackend}; use criterion::{black_box, BatchSize, Criterion}; use tokio::runtime::Runtime; pub(crate) fn bench(c: &mut Criterion, runtime: &Runtime, args: Args) { let mut group = c.benchmark_group("storage/untracked_state"); group.bench_function("write_rows/10k", |b| { b.iter_batched( || { let backend = 
BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_untracked_state_write_rows(config( &args, ))) .expect("prepare untracked_state/write_rows"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::untracked_state_write_rows_prepared( &backend, &fixture, )) .expect("untracked_state/write_rows succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("read_point_hit/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::untracked_state_read_point_hit_prepared( &backend, &fixture, )) .expect("untracked_state/read_point_hit succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("read_point_miss/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::untracked_state_read_point_miss_prepared( &backend, &fixture, )) .expect("untracked_state/read_point_miss succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("scan_all/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::untracked_state_scan_all_prepared( &backend, &fixture, )) .expect("untracked_state/scan_all succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("scan_keys_only/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::untracked_state_scan_keys_only_prepared( &backend, &fixture, )) .expect("untracked_state/scan_keys_only succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("scan_headers_only/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::untracked_state_scan_headers_only_prepared( &backend, &fixture, )) .expect("untracked_state/scan_headers_only succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("scan_full_rows/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::untracked_state_scan_full_rows_prepared( &backend, &fixture, )) .expect("untracked_state/scan_full_rows succeeds"), ) }, BatchSize::LargeInput, ) }); for (label, bytes, rows, row_label) in [("1k", 1024, 10_000, "10k"), ("16k", 16 * 1024, 1_000, "1k")] { let config = config(&args) .with_state_payload_bytes(bytes) .with_rows(rows); let name = format!("scan_keys_only_payload_{label}/{row_label}"); group.bench_function(name, |b| { b.iter_batched( || prepare_read_with(runtime, config), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::untracked_state_scan_keys_only_prepared( &backend, &fixture, )) .expect("untracked_state/scan_keys_only payload succeeds"), ) }, BatchSize::LargeInput, ) }); let name = format!("scan_headers_only_payload_{label}/{row_label}"); group.bench_function(name, |b| { b.iter_batched( || prepare_read_with(runtime, config), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::untracked_state_scan_headers_only_prepared( &backend, &fixture, )) .expect("untracked_state/scan_headers_only payload succeeds"), ) }, BatchSize::LargeInput, ) }); let name = format!("scan_full_rows_payload_{label}/{row_label}"); group.bench_function(name, |b| { b.iter_batched( || prepare_read_with(runtime, config), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::untracked_state_scan_full_rows_prepared( &backend, &fixture, )) .expect("untracked_state/scan_full_rows 
payload succeeds"), ) }, BatchSize::LargeInput, ) }); } group.bench_function("scan_version/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::untracked_state_scan_version_prepared( &backend, &fixture, )) .expect("untracked_state/scan_version succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("scan_schema/10k", |b| { b.iter_batched( || prepare_read(runtime, args), |(backend, fixture)| { black_box( runtime .block_on(storage_bench::untracked_state_scan_schema_prepared( &backend, &fixture, )) .expect("untracked_state/scan_schema succeeds"), ) }, BatchSize::LargeInput, ) }); group.bench_function("overwrite_existing/10k", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_untracked_state_overwrite( &backend, config(&args), )) .expect("prepare untracked_state/overwrite_existing"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::untracked_state_overwrite_existing_prepared( &backend, &fixture, )) .expect("untracked_state/overwrite_existing succeeds"), ) }, BatchSize::LargeInput, ) }); for rows in [1, 10, 100, 1_000] { let name = format!("write_rows/{rows}"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_untracked_state_write_rows( config(&args).with_rows(rows), )) .expect("prepare untracked_state/write_rows batch"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::untracked_state_write_rows_prepared( &backend, &fixture, )) .expect("untracked_state/write_rows batch succeeds"), ) }, BatchSize::LargeInput, ) }); } for (label, bytes, rows) in [ ("small", 0, 10_000), ("1k", 1024, 10_000), ("16k", 16 * 1024, 1_000), ("128k", 128 * 1024, 100), ] { let name = format!("write_rows_payload_{label}/{rows}"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_untracked_state_write_rows( config(&args) .with_state_payload_bytes(bytes) .with_rows(rows), )) .expect("prepare untracked_state/write_rows payload"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::untracked_state_write_rows_prepared( &backend, &fixture, )) .expect("untracked_state/write_rows payload succeeds"), ) }, BatchSize::LargeInput, ) }); } for (label, key_pattern) in [ ("sequential_keys", StorageBenchKeyPattern::Sequential), ("random_keys", StorageBenchKeyPattern::Random), ] { let name = format!("write_rows_{label}/10k"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_untracked_state_write_rows( config(&args).with_key_pattern(key_pattern), )) .expect("prepare untracked_state/write_rows key pattern"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::untracked_state_write_rows_prepared( &backend, &fixture, )) .expect("untracked_state/write_rows key pattern succeeds"), ) }, BatchSize::LargeInput, ) }); } for (label, selectivity) in [ ("1pct", StorageBenchSelectivity::Percent1), ("10pct", StorageBenchSelectivity::Percent10), ("100pct", StorageBenchSelectivity::Percent100), ] { let name = format!("scan_schema_selectivity_{label}/10k"); group.bench_function(name, |b| { b.iter_batched( || prepare_read_with(runtime, 
config(&args).with_selectivity(selectivity)), |(backend, fixture)| { black_box( runtime .block_on( storage_bench::untracked_state_scan_schema_selective_prepared( &backend, &fixture, ), ) .expect("untracked_state/scan_schema selectivity succeeds"), ) }, BatchSize::LargeInput, ) }); } for rows in [1_000, 10_000, 100_000] { let name = format!("read_point_hit_100_reads/{rows}"); group.bench_function(name, |b| { b.iter_batched( || prepare_read_with(runtime, config(&args).with_rows(rows)), |(backend, fixture)| { black_box( runtime .block_on( storage_bench::untracked_state_read_point_hit_constant_prepared( &backend, &fixture, 100, ), ) .expect("untracked_state/read_point_hit scaling succeeds"), ) }, BatchSize::LargeInput, ) }); } for (label, fraction) in [ ("overwrite_10pct", StorageBenchUpdateFraction::Percent10), ("overwrite_all", StorageBenchUpdateFraction::Percent100), ] { let name = format!("{label}/10k"); group.bench_function(name, |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_untracked_state_overwrite( &backend, config(&args).with_update_fraction(fraction), )) .expect("prepare untracked_state/overwrite shape"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::untracked_state_overwrite_existing_prepared( &backend, &fixture, )) .expect("untracked_state/overwrite shape succeeds"), ) }, BatchSize::LargeInput, ) }); } group.bench_function("insert_new_keys/10k", |b| { b.iter_batched( || { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_untracked_state_insert_new_keys( &backend, config(&args), )) .expect("prepare untracked_state/insert_new_keys"); (backend, fixture) }, |(backend, fixture)| { black_box( runtime .block_on(storage_bench::untracked_state_write_rows_prepared( &backend, &fixture, )) .expect("untracked_state/insert_new_keys succeeds"), ) }, BatchSize::LargeInput, ) }); group.finish(); } fn prepare_read( runtime: &Runtime, args: Args, ) -> ( std::sync::Arc, lix_engine::storage_bench::UntrackedStateReadFixture, ) { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_untracked_state_read( &backend, config(&args), )) .expect("prepare untracked_state/read"); (backend, fixture) } fn prepare_read_with( runtime: &Runtime, config: StorageBenchConfig, ) -> ( std::sync::Arc, lix_engine::storage_bench::UntrackedStateReadFixture, ) { let backend = BenchBackend::new(); let fixture = runtime .block_on(storage_bench::prepare_untracked_state_read( &backend, config, )) .expect("prepare untracked_state/read variant"); (backend, fixture) } fn config(args: &Args) -> StorageBenchConfig { args.config() } ================================================ FILE: packages/engine/benches/transaction/main.rs ================================================ use async_trait::async_trait; use criterion::{black_box, criterion_group, criterion_main, BatchSize, Criterion}; use lix_engine::storage_bench::{self, TransactionAccountingReport}; use lix_engine::{ Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRequest, BackendKvValueBatch, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction, LixError, }; use std::collections::{BTreeMap, HashSet}; use std::sync::OnceLock; use std::sync::{Arc, Mutex}; use std::time::Duration; use tokio::runtime::Runtime; #[path = "../storage/backend.rs"] mod backend; use backend::BenchBackend; 
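// Transaction benchmark harness.
//
// The groups below time three slices of a transaction: staging only,
// commit only (against a pre-staged fixture), and stage plus commit, at
// 10k entities (1k for the 16k-payload case) and across SCALING_ROWS.
// Two backend wrappers support the extra groups:
// - `LatencyBackend` delegates to `BenchBackend` but sleeps before reads,
//   writes, and commit (100us / 250us / 500us) to approximate storage I/O
//   for the `transaction_io_100us` groups.
// - `CountingBackend` records write batches, puts, and bytes per namespace
//   so the `accounting_*` benches can report where a commit's writes go
//   (printed once per label when LIX_BENCH_PRINT_ACCOUNTING=1).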
const ENTITY_ROWS: usize = 10_000; const LARGE_ENTITY_ROWS: usize = 1_000; const UPDATE_ROWS_SMALL: usize = 1; const UPDATE_ROWS_BATCH: usize = 100; const SCALING_ROWS: &[usize] = &[1_000, 2_000, 5_000, 10_000, 20_000]; fn transaction_benches(c: &mut Criterion) { let runtime = tokio::runtime::Builder::new_current_thread() .enable_all() .build() .expect("create tokio runtime for transaction benchmarks"); let mut group = c.benchmark_group("transaction"); group.bench_function("open_empty", |b| { b.iter_batched( || { runtime .block_on(storage_bench::prepare_transaction_commit_empty( BenchBackend::new(), )) .expect("prepare transaction/open_empty") }, |fixture| { black_box( runtime .block_on(storage_bench::transaction_open_empty_prepared(&fixture)) .unwrap_or_else(|error| panic!("transaction/open_empty succeeds: {error}")), ) }, BatchSize::LargeInput, ) }); group.bench_function("stage_only_entities_no_payload/10k", |b| { b.iter_batched( || { runtime .block_on( storage_bench::prepare_transaction_commit_entities_no_payload( BenchBackend::new(), ENTITY_ROWS, ), ) .expect("prepare transaction/stage_only_entities_no_payload") }, |fixture| { stage_only( &runtime, fixture, "transaction/stage_only_entities_no_payload", ) }, BatchSize::LargeInput, ) }); group.bench_function("stage_only_entities_payload_1k_unique/10k", |b| { b.iter_batched( || { runtime .block_on( storage_bench::prepare_transaction_commit_entities_payload_1k_unique( BenchBackend::new(), ENTITY_ROWS, ), ) .expect("prepare transaction/stage_only_entities_payload_1k_unique") }, |fixture| { stage_only( &runtime, fixture, "transaction/stage_only_entities_payload_1k_unique", ) }, BatchSize::LargeInput, ) }); group.bench_function("commit_only_entities_no_payload/10k", |b| { b.iter_batched( || { let fixture = runtime .block_on( storage_bench::prepare_transaction_commit_entities_no_payload( BenchBackend::new(), ENTITY_ROWS, ), ) .expect("prepare transaction/commit_only_entities_no_payload fixture"); runtime .block_on(storage_bench::prepare_transaction_commit_only(fixture)) .expect("prepare transaction/commit_only_entities_no_payload") }, |fixture| { commit_only( &runtime, fixture, "transaction/commit_only_entities_no_payload", ) }, BatchSize::LargeInput, ) }); group.bench_function("commit_only_entities_payload_1k_same/10k", |b| { b.iter_batched( || { let fixture = runtime .block_on( storage_bench::prepare_transaction_commit_entities_payload_1k_same( BenchBackend::new(), ENTITY_ROWS, ), ) .expect("prepare transaction/commit_only_entities_payload_1k_same fixture"); runtime .block_on(storage_bench::prepare_transaction_commit_only(fixture)) .expect("prepare transaction/commit_only_entities_payload_1k_same") }, |fixture| { commit_only( &runtime, fixture, "transaction/commit_only_entities_payload_1k_same", ) }, BatchSize::LargeInput, ) }); group.bench_function("commit_only_entities_payload_1k_unique/10k", |b| { b.iter_batched( || { let fixture = runtime .block_on( storage_bench::prepare_transaction_commit_entities_payload_1k_unique( BenchBackend::new(), ENTITY_ROWS, ), ) .expect("prepare transaction/commit_only_entities_payload_1k_unique fixture"); runtime .block_on(storage_bench::prepare_transaction_commit_only(fixture)) .expect("prepare transaction/commit_only_entities_payload_1k_unique") }, |fixture| { commit_only( &runtime, fixture, "transaction/commit_only_entities_payload_1k_unique", ) }, BatchSize::LargeInput, ) }); group.bench_function("accounting_entities_no_payload/10k", |b| { b.iter_batched( || { prepare_accounting(&runtime, |backend| { 
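// Prepare the 10k-entity fixture against a CountingBackend so puts and
// bytes can later be attributed per namespace in the accounting report.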
storage_bench::prepare_transaction_commit_entities_no_payload( backend, ENTITY_ROWS, ) }) }, |fixture| { accounting( &runtime, fixture, "transaction/accounting_entities_no_payload", ) }, BatchSize::LargeInput, ) }); group.bench_function("accounting_entities_payload_1k_unique/10k", |b| { b.iter_batched( || { prepare_accounting(&runtime, |backend| { storage_bench::prepare_transaction_commit_entities_payload_1k_unique( backend, ENTITY_ROWS, ) }) }, |fixture| { accounting( &runtime, fixture, "transaction/accounting_entities_payload_1k_unique", ) }, BatchSize::LargeInput, ) }); group.bench_function("accounting_entities_payload_1k_same/10k", |b| { b.iter_batched( || { prepare_accounting(&runtime, |backend| { storage_bench::prepare_transaction_commit_entities_payload_1k_same( backend, ENTITY_ROWS, ) }) }, |fixture| { accounting( &runtime, fixture, "transaction/accounting_entities_payload_1k_same", ) }, BatchSize::LargeInput, ) }); group.bench_function("accounting_untracked_payload_1k_same/10k", |b| { b.iter_batched( || { prepare_accounting(&runtime, |backend| { storage_bench::prepare_transaction_commit_untracked_payload_1k_same( backend, ENTITY_ROWS, ) }) }, |fixture| { accounting( &runtime, fixture, "transaction/accounting_untracked_payload_1k_same", ) }, BatchSize::LargeInput, ) }); group.bench_function("stage_plus_commit_empty", |b| { b.iter_batched( || { runtime .block_on(storage_bench::prepare_transaction_commit_empty( BenchBackend::new(), )) .expect("prepare transaction/stage_plus_commit_empty") }, |fixture| commit(&runtime, fixture, "transaction/stage_plus_commit_empty"), BatchSize::LargeInput, ) }); group.bench_function("stage_plus_commit_schema_only/1", |b| { b.iter_batched( || { runtime .block_on(storage_bench::prepare_transaction_commit_schema_only( BenchBackend::new(), )) .expect("prepare transaction/stage_plus_commit_schema_only") }, |fixture| { commit( &runtime, fixture, "transaction/stage_plus_commit_schema_only", ) }, BatchSize::LargeInput, ) }); group.bench_function("stage_plus_commit_entities_no_payload/10k", |b| { b.iter_batched( || { runtime .block_on( storage_bench::prepare_transaction_commit_entities_no_payload( BenchBackend::new(), ENTITY_ROWS, ), ) .expect("prepare transaction/stage_plus_commit_entities_no_payload") }, |fixture| { commit( &runtime, fixture, "transaction/stage_plus_commit_entities_no_payload", ) }, BatchSize::LargeInput, ) }); group.bench_function("stage_plus_commit_entities_payload_1k_unique/10k", |b| { b.iter_batched( || { runtime .block_on( storage_bench::prepare_transaction_commit_entities_payload_1k_unique( BenchBackend::new(), ENTITY_ROWS, ), ) .expect("prepare transaction/stage_plus_commit_entities_payload_1k_unique") }, |fixture| { commit( &runtime, fixture, "transaction/stage_plus_commit_entities_payload_1k_unique", ) }, BatchSize::LargeInput, ) }); group.bench_function("stage_plus_commit_entities_payload_1k_same/10k", |b| { b.iter_batched( || { runtime .block_on( storage_bench::prepare_transaction_commit_entities_payload_1k_same( BenchBackend::new(), ENTITY_ROWS, ), ) .expect("prepare transaction/stage_plus_commit_entities_payload_1k_same") }, |fixture| { commit( &runtime, fixture, "transaction/stage_plus_commit_entities_payload_1k_same", ) }, BatchSize::LargeInput, ) }); group.bench_function("stage_plus_commit_entities_payload_1k_half_duplicate/10k", |b| { b.iter_batched( || { runtime .block_on( storage_bench::prepare_transaction_commit_entities_payload_1k_half_duplicate( BenchBackend::new(), ENTITY_ROWS, ), ) .expect( "prepare 
transaction/stage_plus_commit_entities_payload_1k_half_duplicate", ) }, |fixture| { commit( &runtime, fixture, "transaction/stage_plus_commit_entities_payload_1k_half_duplicate", ) }, BatchSize::LargeInput, ) }); group.bench_function("stage_plus_commit_entities_metadata_1k_same/10k", |b| { b.iter_batched( || { runtime .block_on( storage_bench::prepare_transaction_commit_entities_metadata_1k_same( BenchBackend::new(), ENTITY_ROWS, ), ) .expect("prepare transaction/stage_plus_commit_entities_metadata_1k_same") }, |fixture| { commit( &runtime, fixture, "transaction/stage_plus_commit_entities_metadata_1k_same", ) }, BatchSize::LargeInput, ) }); group.bench_function("stage_plus_commit_entities_payload_16k_unique/1k", |b| { b.iter_batched( || { runtime .block_on( storage_bench::prepare_transaction_commit_entities_payload_16k_unique( BenchBackend::new(), LARGE_ENTITY_ROWS, ), ) .expect("prepare transaction/stage_plus_commit_entities_payload_16k_unique") }, |fixture| { commit( &runtime, fixture, "transaction/stage_plus_commit_entities_payload_16k_unique", ) }, BatchSize::LargeInput, ) }); group.bench_function("stage_plus_commit_untracked_payload_1k_same/10k", |b| { b.iter_batched( || { runtime .block_on( storage_bench::prepare_transaction_commit_untracked_payload_1k_same( BenchBackend::new(), ENTITY_ROWS, ), ) .expect("prepare transaction/stage_plus_commit_untracked_payload_1k_same") }, |fixture| { commit( &runtime, fixture, "transaction/stage_plus_commit_untracked_payload_1k_same", ) }, BatchSize::LargeInput, ) }); group.bench_function( "stage_plus_commit_update_1_existing_payload_1k/root_10k", |b| { b.iter_batched( || { runtime .block_on( storage_bench::prepare_transaction_update_existing_payload_1k( BenchBackend::new(), ENTITY_ROWS, UPDATE_ROWS_SMALL, ), ) .expect( "prepare transaction/stage_plus_commit_update_1_existing_payload_1k", ) }, |fixture| { commit( &runtime, fixture, "transaction/stage_plus_commit_update_1_existing_payload_1k", ) }, BatchSize::LargeInput, ) }, ); group.bench_function( "stage_plus_commit_update_100_existing_payload_1k/root_10k", |b| { b.iter_batched( || { runtime .block_on( storage_bench::prepare_transaction_update_existing_payload_1k( BenchBackend::new(), ENTITY_ROWS, UPDATE_ROWS_BATCH, ), ) .expect( "prepare transaction/stage_plus_commit_update_100_existing_payload_1k", ) }, |fixture| { commit( &runtime, fixture, "transaction/stage_plus_commit_update_100_existing_payload_1k", ) }, BatchSize::LargeInput, ) }, ); group.finish(); let mut io_group = c.benchmark_group("transaction_io_100us"); io_group.bench_function("stage_plus_commit_entities_no_payload/10k", |b| { b.iter_batched( || { runtime .block_on( storage_bench::prepare_transaction_commit_entities_no_payload( latency_backend(), ENTITY_ROWS, ), ) .expect("prepare transaction_io_100us/stage_plus_commit_entities_no_payload") }, |fixture| { commit( &runtime, fixture, "transaction_io_100us/stage_plus_commit_entities_no_payload", ) }, BatchSize::LargeInput, ) }); io_group.bench_function("stage_plus_commit_entities_payload_1k_same/10k", |b| { b.iter_batched( || { runtime .block_on( storage_bench::prepare_transaction_commit_entities_payload_1k_same( latency_backend(), ENTITY_ROWS, ), ) .expect( "prepare transaction_io_100us/stage_plus_commit_entities_payload_1k_same", ) }, |fixture| { commit( &runtime, fixture, "transaction_io_100us/stage_plus_commit_entities_payload_1k_same", ) }, BatchSize::LargeInput, ) }); io_group.bench_function("stage_plus_commit_entities_payload_1k_unique/10k", |b| { b.iter_batched( || { runtime 
.block_on( storage_bench::prepare_transaction_commit_entities_payload_1k_unique( latency_backend(), ENTITY_ROWS, ), ) .expect( "prepare transaction_io_100us/stage_plus_commit_entities_payload_1k_unique", ) }, |fixture| { commit( &runtime, fixture, "transaction_io_100us/stage_plus_commit_entities_payload_1k_unique", ) }, BatchSize::LargeInput, ) }); io_group.bench_function("stage_plus_commit_untracked_payload_1k_same/10k", |b| { b.iter_batched( || { runtime .block_on( storage_bench::prepare_transaction_commit_untracked_payload_1k_same( latency_backend(), ENTITY_ROWS, ), ) .expect( "prepare transaction_io_100us/stage_plus_commit_untracked_payload_1k_same", ) }, |fixture| { commit( &runtime, fixture, "transaction_io_100us/stage_plus_commit_untracked_payload_1k_same", ) }, BatchSize::LargeInput, ) }); io_group.bench_function("commit_only_entities_payload_1k_same/10k", |b| { b.iter_batched( || { let fixture = runtime .block_on( storage_bench::prepare_transaction_commit_entities_payload_1k_same( latency_backend(), ENTITY_ROWS, ), ) .expect( "prepare transaction_io_100us/commit_only_entities_payload_1k_same fixture", ); runtime .block_on(storage_bench::prepare_transaction_commit_only(fixture)) .expect("prepare transaction_io_100us/commit_only_entities_payload_1k_same") }, |fixture| { commit_only( &runtime, fixture, "transaction_io_100us/commit_only_entities_payload_1k_same", ) }, BatchSize::LargeInput, ) }); io_group.finish(); let mut scaling_group = c.benchmark_group("transaction_scaling"); for &rows in SCALING_ROWS { let label = row_count_label(rows); scaling_group.bench_function( format!("stage_only_entities_no_payload/{label}"), |b| { b.iter_batched( || { runtime .block_on( storage_bench::prepare_transaction_commit_entities_no_payload( BenchBackend::new(), rows, ), ) .unwrap_or_else(|error| { panic!( "prepare transaction_scaling/stage_only_entities_no_payload/{label}: {error}" ) }) }, |fixture| { stage_only( &runtime, fixture, "transaction_scaling/stage_only_entities_no_payload", ) }, BatchSize::LargeInput, ) }, ); scaling_group.bench_function( format!("commit_only_entities_no_payload/{label}"), |b| { b.iter_batched( || { let fixture = runtime .block_on( storage_bench::prepare_transaction_commit_entities_no_payload( BenchBackend::new(), rows, ), ) .unwrap_or_else(|error| { panic!( "prepare transaction_scaling/commit_only_entities_no_payload/{label} fixture: {error}" ) }); runtime .block_on(storage_bench::prepare_transaction_commit_only(fixture)) .unwrap_or_else(|error| { panic!( "prepare transaction_scaling/commit_only_entities_no_payload/{label}: {error}" ) }) }, |fixture| { commit_only( &runtime, fixture, "transaction_scaling/commit_only_entities_no_payload", ) }, BatchSize::LargeInput, ) }, ); scaling_group.bench_function( format!("stage_plus_commit_entities_payload_1k_same/{label}"), |b| { b.iter_batched( || { runtime .block_on( storage_bench::prepare_transaction_commit_entities_payload_1k_same( BenchBackend::new(), rows, ), ) .unwrap_or_else(|error| { panic!( "prepare transaction_scaling/stage_plus_commit_entities_payload_1k_same/{label}: {error}" ) }) }, |fixture| { commit( &runtime, fixture, "transaction_scaling/stage_plus_commit_entities_payload_1k_same", ) }, BatchSize::LargeInput, ) }, ); scaling_group.bench_function( format!("stage_plus_commit_entities_payload_1k_unique/{label}"), |b| { b.iter_batched( || { runtime .block_on( storage_bench::prepare_transaction_commit_entities_payload_1k_unique( BenchBackend::new(), rows, ), ) .unwrap_or_else(|error| { panic!( "prepare 
transaction_scaling/stage_plus_commit_entities_payload_1k_unique/{label}: {error}" ) }) }, |fixture| { commit( &runtime, fixture, "transaction_scaling/stage_plus_commit_entities_payload_1k_unique", ) }, BatchSize::LargeInput, ) }, ); } scaling_group.finish(); let mut scaling_io_group = c.benchmark_group("transaction_scaling_io_100us"); for &rows in SCALING_ROWS { let label = row_count_label(rows); scaling_io_group.bench_function( format!("stage_plus_commit_entities_payload_1k_same/{label}"), |b| { b.iter_batched( || { runtime .block_on( storage_bench::prepare_transaction_commit_entities_payload_1k_same( latency_backend(), rows, ), ) .unwrap_or_else(|error| { panic!( "prepare transaction_scaling_io_100us/stage_plus_commit_entities_payload_1k_same/{label}: {error}" ) }) }, |fixture| { commit( &runtime, fixture, "transaction_scaling_io_100us/stage_plus_commit_entities_payload_1k_same", ) }, BatchSize::LargeInput, ) }, ); } scaling_io_group.finish(); } fn row_count_label(rows: usize) -> String { if rows % 1_000 == 0 { format!("{}k", rows / 1_000) } else { rows.to_string() } } fn commit( runtime: &Runtime, fixture: storage_bench::TransactionBenchFixture, label: &str, ) -> storage_bench::StorageBenchReport { black_box( runtime .block_on(storage_bench::transaction_commit_prepared(&fixture)) .unwrap_or_else(|error| panic!("{label} succeeds: {error}")), ) } fn stage_only( runtime: &Runtime, fixture: storage_bench::TransactionBenchFixture, label: &str, ) -> storage_bench::StorageBenchReport { black_box( runtime .block_on(storage_bench::transaction_stage_only_prepared(&fixture)) .unwrap_or_else(|error| panic!("{label} succeeds: {error}")), ) } fn commit_only( runtime: &Runtime, fixture: storage_bench::TransactionCommitOnlyFixture, label: &str, ) -> storage_bench::StorageBenchReport { black_box( runtime .block_on(storage_bench::transaction_commit_only_prepared(fixture)) .unwrap_or_else(|error| panic!("{label} succeeds: {error}")), ) } fn latency_backend() -> Arc { Arc::new(LatencyBackend { inner: BenchBackend::new(), read_delay: Duration::from_micros(100), write_delay: Duration::from_micros(250), commit_delay: Duration::from_micros(500), }) } struct AccountingFixture { fixture: storage_bench::TransactionBenchFixture, storage: Arc, } fn prepare_accounting(runtime: &Runtime, prepare: F) -> AccountingFixture where F: FnOnce(Arc) -> Fut, Fut: std::future::Future>, { let (backend, storage) = CountingBackend::new(BenchBackend::new()); let fixture = runtime .block_on(prepare(backend)) .expect("prepare transaction accounting fixture"); storage.reset(); storage_bench::reset_transaction_bench_counters(); AccountingFixture { fixture, storage } } fn accounting( runtime: &Runtime, fixture: AccountingFixture, label: &str, ) -> TransactionAccountingReport { runtime .block_on(storage_bench::transaction_commit_prepared(&fixture.fixture)) .unwrap_or_else(|error| panic!("{label} succeeds: {error}")); let storage = fixture.storage.snapshot(); let report = TransactionAccountingReport { counters: storage_bench::transaction_bench_counters(), storage_write_batches: storage.write_batches, kv_puts_by_namespace: storage.kv_puts_by_namespace, bytes_by_namespace: storage.bytes_by_namespace, }; print_accounting_once(label, &report); black_box(report) } static PRINTED_ACCOUNTING_LABELS: OnceLock>> = OnceLock::new(); fn print_accounting_once(label: &str, report: &TransactionAccountingReport) { if std::env::var("LIX_BENCH_PRINT_ACCOUNTING").ok().as_deref() != Some("1") { return; } let labels = PRINTED_ACCOUNTING_LABELS.get_or_init(|| 
Mutex::new(HashSet::new())); let mut labels = labels .lock() .expect("printed accounting label mutex should lock"); if !labels.insert(label.to_string()) { return; } eprintln!("{label}: {report:#?}"); } #[derive(Default)] struct StorageAccounting { inner: Mutex, } #[derive(Default)] struct StorageAccountingSnapshot { write_batches: usize, kv_puts_by_namespace: BTreeMap, bytes_by_namespace: BTreeMap, } impl StorageAccounting { fn reset(&self) { *self .inner .lock() .expect("storage accounting mutex should lock") = StorageAccountingSnapshot::default(); } fn record_write_batch(&self, batch: &BackendKvWriteBatch) { let mut inner = self .inner .lock() .expect("storage accounting mutex should lock"); inner.write_batches += 1; for group in &batch.groups { let namespace = group.namespace().to_string(); for index in 0..group.put_count() { let Some(key) = group.put_key(index) else { continue; }; let Some(value) = group.put_value(index) else { continue; }; *inner .kv_puts_by_namespace .entry(namespace.clone()) .or_default() += 1; *inner .bytes_by_namespace .entry(namespace.clone()) .or_default() += key.len() + value.len(); } for index in 0..group.delete_count() { let Some(key) = group.delete_key(index) else { continue; }; *inner .bytes_by_namespace .entry(namespace.clone()) .or_default() += key.len(); } } } fn snapshot(&self) -> StorageAccountingSnapshot { let inner = self .inner .lock() .expect("storage accounting mutex should lock"); StorageAccountingSnapshot { write_batches: inner.write_batches, kv_puts_by_namespace: inner.kv_puts_by_namespace.clone(), bytes_by_namespace: inner.bytes_by_namespace.clone(), } } } struct CountingBackend { inner: Arc, accounting: Arc, } impl CountingBackend { fn new( inner: Arc, ) -> (Arc, Arc) { let accounting = Arc::new(StorageAccounting::default()); ( Arc::new(Self { inner, accounting: Arc::clone(&accounting), }), accounting, ) } } struct LatencyBackend { inner: Arc, read_delay: Duration, write_delay: Duration, commit_delay: Duration, } impl LatencyBackend { fn delay(duration: Duration) { if !duration.is_zero() { std::thread::sleep(duration); } } } #[async_trait] impl Backend for LatencyBackend { async fn begin_read_transaction( &self, ) -> Result, LixError> { let transaction = self.inner.begin_read_transaction().await?; Ok(Box::new(LatencyReadTransaction { transaction, read_delay: self.read_delay, })) } async fn begin_write_transaction( &self, ) -> Result, LixError> { let transaction = self.inner.begin_write_transaction().await?; Ok(Box::new(LatencyWriteTransaction { transaction, read_delay: self.read_delay, write_delay: self.write_delay, commit_delay: self.commit_delay, })) } } struct LatencyReadTransaction { transaction: Box, read_delay: Duration, } #[async_trait] impl BackendReadTransaction for LatencyReadTransaction { async fn get_values( &mut self, request: BackendKvGetRequest, ) -> Result { LatencyBackend::delay(self.read_delay); self.transaction.get_values(request).await } async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result { LatencyBackend::delay(self.read_delay); self.transaction.exists_many(request).await } async fn scan_keys( &mut self, request: BackendKvScanRequest, ) -> Result { LatencyBackend::delay(self.read_delay); self.transaction.scan_keys(request).await } async fn scan_values( &mut self, request: BackendKvScanRequest, ) -> Result { LatencyBackend::delay(self.read_delay); self.transaction.scan_values(request).await } async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> Result { 
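// Apply the configured read latency before delegating the scan to the
// wrapped transaction.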
LatencyBackend::delay(self.read_delay); self.transaction.scan_entries(request).await } async fn rollback(self: Box) -> Result<(), LixError> { self.transaction.rollback().await } } struct LatencyWriteTransaction { transaction: Box, read_delay: Duration, write_delay: Duration, commit_delay: Duration, } #[async_trait] impl BackendReadTransaction for LatencyWriteTransaction { async fn get_values( &mut self, request: BackendKvGetRequest, ) -> Result { LatencyBackend::delay(self.read_delay); self.transaction.get_values(request).await } async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result { LatencyBackend::delay(self.read_delay); self.transaction.exists_many(request).await } async fn scan_keys( &mut self, request: BackendKvScanRequest, ) -> Result { LatencyBackend::delay(self.read_delay); self.transaction.scan_keys(request).await } async fn scan_values( &mut self, request: BackendKvScanRequest, ) -> Result { LatencyBackend::delay(self.read_delay); self.transaction.scan_values(request).await } async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> Result { LatencyBackend::delay(self.read_delay); self.transaction.scan_entries(request).await } async fn rollback(self: Box) -> Result<(), LixError> { self.transaction.rollback().await } } #[async_trait] impl BackendWriteTransaction for LatencyWriteTransaction { async fn write_kv_batch( &mut self, batch: BackendKvWriteBatch, ) -> Result { LatencyBackend::delay(self.write_delay); self.transaction.write_kv_batch(batch).await } async fn commit(self: Box) -> Result<(), LixError> { LatencyBackend::delay(self.commit_delay); self.transaction.commit().await } } #[async_trait] impl Backend for CountingBackend { async fn begin_read_transaction( &self, ) -> Result, LixError> { let transaction = self.inner.begin_read_transaction().await?; Ok(Box::new(CountingReadTransaction { transaction })) } async fn begin_write_transaction( &self, ) -> Result, LixError> { let transaction = self.inner.begin_write_transaction().await?; Ok(Box::new(CountingWriteTransaction { transaction, accounting: Arc::clone(&self.accounting), })) } } struct CountingReadTransaction { transaction: Box, } #[async_trait] impl BackendReadTransaction for CountingReadTransaction { async fn get_values( &mut self, request: BackendKvGetRequest, ) -> Result { self.transaction.get_values(request).await } async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result { self.transaction.exists_many(request).await } async fn scan_keys( &mut self, request: BackendKvScanRequest, ) -> Result { self.transaction.scan_keys(request).await } async fn scan_values( &mut self, request: BackendKvScanRequest, ) -> Result { self.transaction.scan_values(request).await } async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> Result { self.transaction.scan_entries(request).await } async fn rollback(self: Box) -> Result<(), LixError> { self.transaction.rollback().await } } struct CountingWriteTransaction { transaction: Box, accounting: Arc, } #[async_trait] impl BackendReadTransaction for CountingWriteTransaction { async fn get_values( &mut self, request: BackendKvGetRequest, ) -> Result { self.transaction.get_values(request).await } async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result { self.transaction.exists_many(request).await } async fn scan_keys( &mut self, request: BackendKvScanRequest, ) -> Result { self.transaction.scan_keys(request).await } async fn scan_values( &mut self, request: BackendKvScanRequest, ) -> Result { 
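// Reads are passed through unchanged; only `write_kv_batch` below records
// accounting.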
self.transaction.scan_values(request).await } async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> Result { self.transaction.scan_entries(request).await } async fn rollback(self: Box) -> Result<(), LixError> { self.transaction.rollback().await } } #[async_trait] impl BackendWriteTransaction for CountingWriteTransaction { async fn write_kv_batch( &mut self, batch: BackendKvWriteBatch, ) -> Result { self.accounting.record_write_batch(&batch); self.transaction.write_kv_batch(batch).await } async fn commit(self: Box) -> Result<(), LixError> { self.transaction.commit().await } } criterion_group!(benches, transaction_benches); criterion_main!(benches); ================================================ FILE: packages/engine/src/backend/kv.rs ================================================ #[derive(Debug, Clone, PartialEq, Eq, Default)] pub struct BytePage { bytes: Vec, offsets: Vec, } impl BytePage { pub fn new() -> Self { Self { bytes: Vec::new(), offsets: vec![0], } } pub fn len(&self) -> usize { self.offsets.len().saturating_sub(1) } pub fn is_empty(&self) -> bool { self.len() == 0 } pub fn get(&self, index: usize) -> Option<&[u8]> { let start = usize::try_from(*self.offsets.get(index)?).ok()?; let end = usize::try_from(*self.offsets.get(index + 1)?).ok()?; self.bytes.get(start..end) } pub fn iter(&self) -> BytePageIter<'_> { BytePageIter { page: self, index: 0, } } } pub struct BytePageIter<'a> { page: &'a BytePage, index: usize, } impl<'a> Iterator for BytePageIter<'a> { type Item = &'a [u8]; fn next(&mut self) -> Option { let value = self.page.get(self.index)?; self.index += 1; Some(value) } } #[derive(Debug, Clone, PartialEq, Eq, Default)] pub struct BytePageBuilder { bytes: Vec, offsets: Vec, } impl BytePageBuilder { pub fn new() -> Self { Self { bytes: Vec::new(), offsets: vec![0], } } pub fn with_capacity(items: usize, bytes: usize) -> Self { let mut offsets = Vec::with_capacity(items.saturating_add(1)); offsets.push(0); Self { bytes: Vec::with_capacity(bytes), offsets, } } pub fn from_page(page: BytePage) -> Self { Self { bytes: page.bytes, offsets: page.offsets, } } pub fn len(&self) -> usize { self.offsets.len().saturating_sub(1) } pub fn is_empty(&self) -> bool { self.len() == 0 } pub fn get(&self, index: usize) -> Option<&[u8]> { let start = usize::try_from(*self.offsets.get(index)?).ok()?; let end = usize::try_from(*self.offsets.get(index + 1)?).ok()?; self.bytes.get(start..end) } pub fn push(&mut self, value: impl AsRef<[u8]>) { let value = value.as_ref(); self.bytes.extend_from_slice(value); let end = u32::try_from(self.bytes.len()).expect("byte page exceeds u32 offset capacity"); self.offsets.push(end); } pub fn finish(self) -> BytePage { BytePage { bytes: self.bytes, offsets: self.offsets, } } } /// Ordered byte range for backend KV scans. /// /// Ranges are half-open: `start <= key < end`. `Prefix` is explicit because it /// is a common access pattern and lets each backend choose the safest /// implementation for its storage engine. 
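///
/// A small illustration (added sketch; only the `prefix` and `range`
/// constructors defined below are assumed):
///
/// ```ignore
/// // Every key that starts with b"file/".
/// let by_prefix = BackendKvScanRange::prefix(b"file/".to_vec());
/// // Every key k with b"a" <= k < b"c"; the end bound is exclusive.
/// let by_range = BackendKvScanRange::range(b"a".to_vec(), b"c".to_vec());
/// ```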
#[derive(Debug, Clone, PartialEq, Eq)] pub enum BackendKvScanRange { Prefix(Vec), Range { start: Vec, end: Vec }, } impl BackendKvScanRange { pub fn prefix(prefix: impl Into>) -> Self { Self::Prefix(prefix.into()) } pub fn range(start: impl Into>, end: impl Into>) -> Self { Self::Range { start: start.into(), end: end.into(), } } } #[derive(Debug, Clone, PartialEq, Eq)] pub struct BackendKvGetRequest { pub groups: Vec, } #[derive(Debug, Clone, PartialEq, Eq)] pub struct BackendKvGetGroup { pub namespace: String, pub keys: Vec>, } impl BackendKvGetGroup { pub fn namespace(&self) -> &str { &self.namespace } } #[derive(Debug, Clone, PartialEq, Eq)] pub struct BackendKvValueBatch { pub groups: Vec, } #[derive(Debug, Clone, PartialEq, Eq)] pub struct BackendKvValueGroup { namespace: String, values: BytePage, present: Vec, } impl BackendKvValueGroup { pub fn new(namespace: impl Into, values: BytePage, present: Vec) -> Self { assert_eq!( values.len(), present.len(), "backend value batch must have one value slot per presence bit" ); Self { namespace: namespace.into(), values, present, } } pub fn namespace(&self) -> &str { &self.namespace } pub fn len(&self) -> usize { self.present.len() } pub fn is_empty(&self) -> bool { self.present.is_empty() } pub fn value(&self, index: usize) -> Option> { let present = *self.present.get(index)?; if present { Some(Some( self.values .get(index) .expect("backend value batch invariant violated"), )) } else { Some(None) } } pub fn values_iter(&self) -> impl Iterator> { (0..self.len()).filter_map(|index| self.value(index)) } pub fn into_parts(self) -> (String, BytePage, Vec) { (self.namespace, self.values, self.present) } } #[derive(Debug, Clone, PartialEq, Eq)] pub struct BackendKvExistsBatch { pub groups: Vec, } #[derive(Debug, Clone, PartialEq, Eq)] pub struct BackendKvExistsGroup { pub namespace: String, pub exists: Vec, } #[derive(Debug, Clone, PartialEq, Eq)] pub struct BackendKvScanRequest { pub namespace: String, pub range: BackendKvScanRange, pub after: Option>, pub limit: usize, } #[derive(Debug, Clone, PartialEq, Eq)] pub struct BackendKvKeyPage { pub keys: BytePage, pub resume_after: Option>, } #[derive(Debug, Clone, PartialEq, Eq)] pub struct BackendKvValuePage { pub values: BytePage, pub resume_after: Option>, } #[derive(Debug, Clone, PartialEq, Eq)] pub struct BackendKvEntryPage { pub keys: BytePage, pub values: BytePage, pub resume_after: Option>, } impl BackendKvEntryPage { pub fn len(&self) -> usize { self.keys.len() } pub fn is_empty(&self) -> bool { self.keys.is_empty() } pub fn key(&self, index: usize) -> Option<&[u8]> { self.keys.get(index) } pub fn value(&self, index: usize) -> Option<&[u8]> { self.values.get(index) } } #[derive(Debug, Clone, PartialEq, Eq, Default)] pub struct BackendKvWriteBatch { pub groups: Vec, } #[derive(Debug, Clone, PartialEq, Eq)] pub struct BackendKvWriteGroup { namespace: String, put_keys: BytePageBuilder, put_values: BytePageBuilder, deletes: BytePageBuilder, } impl BackendKvWriteGroup { pub fn new(namespace: impl Into) -> Self { Self { namespace: namespace.into(), put_keys: BytePageBuilder::new(), put_values: BytePageBuilder::new(), deletes: BytePageBuilder::new(), } } pub fn from_pages( namespace: impl Into, put_keys: BytePage, put_values: BytePage, deletes: BytePage, ) -> Self { assert_eq!( put_keys.len(), put_values.len(), "backend write batch must have one value per put key" ); Self { namespace: namespace.into(), put_keys: BytePageBuilder::from_page(put_keys), put_values: BytePageBuilder::from_page(put_values), 
deletes: BytePageBuilder::from_page(deletes), } } pub fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) { self.put_keys.push(key); self.put_values.push(value); } pub fn delete(&mut self, key: impl AsRef<[u8]>) { self.deletes.push(key); } pub fn namespace(&self) -> &str { &self.namespace } pub fn put_count(&self) -> usize { self.put_keys.len() } pub fn delete_count(&self) -> usize { self.deletes.len() } pub fn put_key(&self, index: usize) -> Option<&[u8]> { self.put_keys.get(index) } pub fn put_value(&self, index: usize) -> Option<&[u8]> { self.put_values.get(index) } pub fn delete_key(&self, index: usize) -> Option<&[u8]> { self.deletes.get(index) } pub fn into_parts(self) -> (String, BytePage, BytePage, BytePage) { ( self.namespace, self.put_keys.finish(), self.put_values.finish(), self.deletes.finish(), ) } } #[derive(Debug, Clone, Copy, PartialEq, Eq, Default)] pub struct BackendKvWriteStats { pub puts: usize, pub deletes: usize, pub bytes_written: usize, } ================================================ FILE: packages/engine/src/backend/mod.rs ================================================ mod kv; #[cfg(test)] pub(crate) mod testing; mod types; pub use kv::{ BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetGroup, BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteGroup, BackendKvWriteStats, BytePage, BytePageBuilder, }; pub use types::{Backend, BackendReadTransaction, BackendWriteTransaction}; ================================================ FILE: packages/engine/src/backend/testing.rs ================================================ use std::collections::BTreeMap; use std::sync::{Arc, Mutex}; use async_trait::async_trait; use crate::backend::{ Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction, BytePageBuilder, }; use crate::LixError; type KvMap = BTreeMap<(String, Vec), Vec>; /// In-memory backend for unit tests that need backend KV semantics without SQL. /// /// SQL execution intentionally returns an error so new tests do not accidentally /// couple to raw SQL while exercising storage-facing APIs. #[derive(Debug, Clone, Default)] pub(crate) struct UnitTestBackend { kv: Arc>, } impl UnitTestBackend { pub(crate) fn new() -> Self { Self::default() } } #[async_trait] impl Backend for UnitTestBackend { async fn begin_read_transaction( &self, ) -> Result, LixError> { let snapshot = self .kv .lock() .map_err(|_| lock_error("unit test backend kv"))? .clone(); Ok(Box::new(UnitTestTransaction { parent: Arc::clone(&self.kv), kv: snapshot, })) } async fn begin_write_transaction( &self, ) -> Result, LixError> { let snapshot = self .kv .lock() .map_err(|_| lock_error("unit test backend kv"))? 
.clone(); Ok(Box::new(UnitTestTransaction { parent: Arc::clone(&self.kv), kv: snapshot, })) } } struct UnitTestTransaction { parent: Arc>, kv: KvMap, } #[async_trait] impl BackendReadTransaction for UnitTestTransaction { async fn get_values( &mut self, request: BackendKvGetRequest, ) -> Result { let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let namespace = group.namespace.clone(); let mut values = BytePageBuilder::with_capacity(group.keys.len(), 0); let mut present = Vec::with_capacity(group.keys.len()); for key in group.keys { if let Some(value) = self.kv.get(&(namespace.clone(), key)) { values.push(value); present.push(true); } else { values.push([]); present.push(false); } } groups.push(BackendKvValueGroup::new( namespace, values.finish(), present, )); } Ok(BackendKvValueBatch { groups }) } async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result { let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let namespace = group.namespace.clone(); let exists = group .keys .into_iter() .map(|key| self.kv.contains_key(&(namespace.clone(), key))) .collect(); groups.push(BackendKvExistsGroup { namespace, exists }); } Ok(BackendKvExistsBatch { groups }) } async fn scan_keys( &mut self, request: BackendKvScanRequest, ) -> Result { Ok(scan_map_keys(&self.kv, request)) } async fn scan_values( &mut self, request: BackendKvScanRequest, ) -> Result { Ok(scan_map_values(&self.kv, request)) } async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> Result { Ok(scan_map_entries(&self.kv, request)) } async fn rollback(self: Box) -> Result<(), LixError> { Ok(()) } } #[async_trait] impl BackendWriteTransaction for UnitTestTransaction { async fn write_kv_batch( &mut self, batch: BackendKvWriteBatch, ) -> Result { let mut stats = BackendKvWriteStats::default(); for group in batch.groups { let namespace = group.namespace().to_string(); for index in 0..group.put_count() { let key = group.put_key(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put key") })?; let value = group.put_value(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put value") })?; stats.puts += 1; stats.bytes_written += key.len() + value.len(); self.kv .insert((namespace.clone(), key.to_vec()), value.to_vec()); } for index in 0..group.delete_count() { let key = group.delete_key(index).ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "backend write batch missing delete key", ) })?; stats.deletes += 1; stats.bytes_written += key.len(); self.kv.remove(&(namespace.clone(), key.to_vec())); } } Ok(stats) } async fn commit(self: Box) -> Result<(), LixError> { *self .parent .lock() .map_err(|_| lock_error("unit test backend kv"))? 
= self.kv; Ok(()) } } #[async_trait] impl Backend for Arc { async fn begin_read_transaction( &self, ) -> Result, LixError> { self.as_ref().begin_read_transaction().await } async fn begin_write_transaction( &self, ) -> Result, LixError> { self.as_ref().begin_write_transaction().await } } fn scan_pairs<'a>( kv: &'a KvMap, namespace: &str, range: &BackendKvScanRange, limit: Option, ) -> Vec<(&'a Vec, &'a Vec)> { let pairs = kv .iter() .filter(|((candidate_namespace, key), _)| { candidate_namespace == namespace && key_matches_range(key, range) }) .collect::>(); let mut pairs = pairs; pairs.sort_by(|left, right| left.0 .1.cmp(&right.0 .1)); if let Some(limit) = limit { pairs.truncate(limit); } pairs .into_iter() .map(|((_, key), value)| (key, value)) .collect() } pub(crate) fn scan_map_keys(kv: &KvMap, request: BackendKvScanRequest) -> BackendKvKeyPage { let pairs = scan_filtered_pairs(kv, &request); let has_more = pairs.len() > request.limit; let mut keys = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0); let mut resume_after = None; for (index, (key, _)) in pairs.into_iter().enumerate() { if index >= request.limit { break; } resume_after = Some(key.clone()); keys.push(key); } let resume_after = has_more.then_some(resume_after).flatten(); BackendKvKeyPage { keys: keys.finish(), resume_after, } } pub(crate) fn scan_map_values(kv: &KvMap, request: BackendKvScanRequest) -> BackendKvValuePage { let pairs = scan_filtered_pairs(kv, &request); let has_more = pairs.len() > request.limit; let mut values = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0); let mut resume_after = None; for (index, (key, value)) in pairs.into_iter().enumerate() { if index >= request.limit { break; } resume_after = Some(key.clone()); values.push(value); } let resume_after = has_more.then_some(resume_after).flatten(); BackendKvValuePage { values: values.finish(), resume_after, } } pub(crate) fn scan_map_entries(kv: &KvMap, request: BackendKvScanRequest) -> BackendKvEntryPage { let pairs = scan_filtered_pairs(kv, &request); let has_more = pairs.len() > request.limit; let mut keys = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0); let mut values = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0); let mut resume_after = None; for (index, (key, value)) in pairs.into_iter().enumerate() { if index >= request.limit { break; } resume_after = Some(key.clone()); keys.push(key); values.push(value); } let resume_after = has_more.then_some(resume_after).flatten(); BackendKvEntryPage { keys: keys.finish(), values: values.finish(), resume_after, } } fn scan_filtered_pairs<'a>( kv: &'a KvMap, request: &BackendKvScanRequest, ) -> Vec<(&'a Vec, &'a Vec)> { let scan_limit = request .limit .checked_add(1 + usize::from(request.after.is_some())) .unwrap_or(request.limit); scan_pairs(kv, &request.namespace, &request.range, Some(scan_limit)) .into_iter() .filter(|(key, _)| { request .after .as_deref() .is_none_or(|after| key.as_slice() > after) }) .collect() } fn key_matches_range(key: &[u8], range: &BackendKvScanRange) -> bool { match range { BackendKvScanRange::Prefix(prefix) => key.starts_with(prefix), BackendKvScanRange::Range { start, end } => start.as_slice() <= key && key < end.as_slice(), } } fn lock_error(name: &str) -> LixError { LixError::new("LIX_ERROR_UNKNOWN", format!("{name} lock poisoned")) } #[cfg(test)] mod tests { use super::*; use crate::backend::{ BackendKvGetGroup, BackendKvGetRequest, BackendKvScanRequest, BackendKvWriteBatch, BackendKvWriteGroup, }; async 
fn put( transaction: &mut (dyn BackendWriteTransaction + Send + Sync), namespace: &str, key: &[u8], value: &[u8], ) { transaction .write_kv_batch(BackendKvWriteBatch { groups: { let mut group = BackendKvWriteGroup::new(namespace); group.put(key, value); vec![group] }, }) .await .expect("put should succeed"); } async fn delete( transaction: &mut (dyn BackendWriteTransaction + Send + Sync), namespace: &str, key: &[u8], ) { transaction .write_kv_batch(BackendKvWriteBatch { groups: { let mut group = BackendKvWriteGroup::new(namespace); group.delete(key); vec![group] }, }) .await .expect("delete should succeed"); } async fn get(backend: &UnitTestBackend, namespace: &str, key: &[u8]) -> Option> { let mut transaction = backend .begin_read_transaction() .await .expect("read transaction should open"); let result = transaction .get_values(BackendKvGetRequest { groups: vec![BackendKvGetGroup { namespace: namespace.to_string(), keys: vec![key.to_vec()], }], }) .await .expect("get should succeed"); transaction .rollback() .await .expect("rollback should succeed"); result .groups .into_iter() .next() .and_then(|group| group.value(0).flatten().map(<[u8]>::to_vec)) } async fn scan( backend: &UnitTestBackend, namespace: &str, range: BackendKvScanRange, limit: usize, ) -> BackendKvEntryPage { let mut transaction = backend .begin_read_transaction() .await .expect("read transaction should open"); let result = transaction .scan_entries(BackendKvScanRequest { namespace: namespace.to_string(), range, after: None, limit, }) .await .expect("scan should succeed"); transaction .rollback() .await .expect("rollback should succeed"); result } fn assert_entries(page: &BackendKvEntryPage, expected: &[(&[u8], &[u8])]) { assert_eq!(page.len(), expected.len()); for (index, (key, value)) in expected.iter().enumerate() { assert_eq!(page.key(index).expect("key exists"), *key); assert_eq!(page.value(index).expect("value exists"), *value); } } async fn scan_entries_request( backend: &UnitTestBackend, after: Option<&[u8]>, limit: usize, ) -> BackendKvEntryPage { let mut transaction = backend .begin_read_transaction() .await .expect("read transaction should open"); let result = transaction .scan_entries(BackendKvScanRequest { namespace: "ns".to_string(), range: BackendKvScanRange::prefix(Vec::new()), after: after.map(Vec::from), limit, }) .await .expect("scan should succeed"); transaction .rollback() .await .expect("rollback should succeed"); result } async fn scan_keys_request( backend: &UnitTestBackend, after: Option<&[u8]>, limit: usize, ) -> BackendKvKeyPage { let mut transaction = backend .begin_read_transaction() .await .expect("read transaction should open"); let result = transaction .scan_keys(BackendKvScanRequest { namespace: "ns".to_string(), range: BackendKvScanRange::prefix(Vec::new()), after: after.map(Vec::from), limit, }) .await .expect("scan should succeed"); transaction .rollback() .await .expect("rollback should succeed"); result } #[tokio::test] async fn committed_put_is_visible_to_backend_reads() { let backend = UnitTestBackend::new(); let mut transaction = backend .begin_write_transaction() .await .expect("transaction should open"); put(transaction.as_mut(), "live_state", b"key", b"value").await; transaction.commit().await.expect("commit should succeed"); assert_eq!( get(&backend, "live_state", b"key").await, Some(b"value".to_vec()) ); } #[tokio::test] async fn rollback_discards_puts() { let backend = UnitTestBackend::new(); let mut transaction = backend .begin_write_transaction() .await .expect("transaction 
should open"); put(transaction.as_mut(), "live_state", b"key", b"value").await; transaction .rollback() .await .expect("rollback should succeed"); assert_eq!(get(&backend, "live_state", b"key").await, None); } #[tokio::test] async fn close_is_idempotent_and_does_not_destroy_data() { let backend = UnitTestBackend::new(); let mut transaction = backend .begin_write_transaction() .await .expect("transaction should open"); put(transaction.as_mut(), "live_state", b"key", b"value").await; transaction.commit().await.expect("commit should succeed"); backend.close().await.expect("first close should succeed"); backend.close().await.expect("second close should succeed"); assert_eq!( get(&backend, "live_state", b"key").await, Some(b"value".to_vec()) ); } #[tokio::test] async fn delete_removes_key_on_commit() { let backend = UnitTestBackend::new(); let mut seed = backend .begin_write_transaction() .await .expect("seed transaction should open"); put(seed.as_mut(), "live_state", b"key", b"value").await; seed.commit().await.expect("seed commit should succeed"); let mut transaction = backend .begin_write_transaction() .await .expect("delete transaction should open"); delete(transaction.as_mut(), "live_state", b"key").await; transaction.commit().await.expect("commit should succeed"); assert_eq!(get(&backend, "live_state", b"key").await, None); } #[tokio::test] async fn prefix_scan_returns_lexicographic_order_with_limit() { let backend = UnitTestBackend::new(); let mut transaction = backend .begin_write_transaction() .await .expect("transaction should open"); put(transaction.as_mut(), "ns", b"b/2", b"2").await; put(transaction.as_mut(), "ns", b"a/2", b"2").await; put(transaction.as_mut(), "ns", b"a/1", b"1").await; put(transaction.as_mut(), "other", b"a/0", b"0").await; transaction.commit().await.unwrap(); let pairs = scan(&backend, "ns", BackendKvScanRange::prefix(b"a/"), 1).await; assert_entries(&pairs, &[(b"a/1", b"1")]); } #[tokio::test] async fn scan_sets_resume_after_only_when_more_rows_exist() { let backend = UnitTestBackend::new(); let mut transaction = backend .begin_write_transaction() .await .expect("transaction should open"); put(transaction.as_mut(), "ns", b"a", b"1").await; put(transaction.as_mut(), "ns", b"b", b"2").await; put(transaction.as_mut(), "ns", b"c", b"3").await; transaction.commit().await.unwrap(); let first_page = scan_entries_request(&backend, None, 2).await; assert_entries(&first_page, &[(b"a", b"1"), (b"b", b"2")]); assert_eq!(first_page.resume_after, Some(b"b".to_vec())); let second_page = scan_entries_request(&backend, first_page.resume_after.as_deref(), 2).await; assert_entries(&second_page, &[(b"c", b"3")]); assert_eq!(second_page.resume_after, None); } #[tokio::test] async fn scan_exact_page_size_has_no_resume_after() { let backend = UnitTestBackend::new(); let mut transaction = backend .begin_write_transaction() .await .expect("transaction should open"); put(transaction.as_mut(), "ns", b"a", b"1").await; put(transaction.as_mut(), "ns", b"b", b"2").await; transaction.commit().await.unwrap(); let page = scan_entries_request(&backend, None, 2).await; assert_entries(&page, &[(b"a", b"1"), (b"b", b"2")]); assert_eq!(page.resume_after, None); } #[tokio::test] async fn key_only_scan_omits_values() { let backend = UnitTestBackend::new(); let mut transaction = backend .begin_write_transaction() .await .expect("transaction should open"); put(transaction.as_mut(), "ns", b"a", b"1").await; put(transaction.as_mut(), "ns", b"b", b"2").await; transaction.commit().await.unwrap(); let page 
= scan_keys_request(&backend, None, 2).await; assert_eq!(page.keys.iter().collect::>(), vec![b"a", b"b"]); assert_eq!(page.resume_after, None); } #[tokio::test] async fn existence_get_omits_values() { let backend = UnitTestBackend::new(); let mut transaction = backend .begin_write_transaction() .await .expect("transaction should open"); put(transaction.as_mut(), "ns", b"a", b"1").await; transaction.commit().await.unwrap(); let mut transaction = backend .begin_read_transaction() .await .expect("read transaction should open"); let result = transaction .exists_many(BackendKvGetRequest { groups: vec![BackendKvGetGroup { namespace: "ns".to_string(), keys: vec![b"a".to_vec(), b"missing".to_vec()], }], }) .await .expect("existence get should succeed"); transaction .rollback() .await .expect("rollback should succeed"); assert_eq!(result.groups[0].exists, vec![true, false]); } #[tokio::test] async fn range_scan_is_half_open() { let backend = UnitTestBackend::new(); let mut transaction = backend .begin_write_transaction() .await .expect("transaction should open"); put(transaction.as_mut(), "ns", b"a", b"a").await; put(transaction.as_mut(), "ns", b"b", b"b").await; put(transaction.as_mut(), "ns", b"c", b"c").await; transaction.commit().await.unwrap(); let pairs = scan( &backend, "ns", BackendKvScanRange::range(b"a", b"c"), usize::MAX, ) .await; assert_entries(&pairs, &[(b"a", b"a"), (b"b", b"b")]); } } ================================================ FILE: packages/engine/src/backend/types.rs ================================================ use async_trait::async_trait; use crate::backend::{ BackendKvEntryPage, BackendKvExistsBatch, BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRequest, BackendKvValueBatch, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats, }; use crate::LixError; #[async_trait] pub trait Backend: Send + Sync { async fn begin_read_transaction( &self, ) -> Result, LixError>; async fn begin_write_transaction( &self, ) -> Result, LixError>; /// Releases physical resources held by this backend handle. /// /// This is a resource lifecycle operation, not a durability boundary and /// not a destructive operation. Successful write transactions are durable /// when their commit returns; callers should not rely on `close` to save /// data. Implementations that do not own external resources may keep the /// default no-op behavior. async fn close(&self) -> Result<(), LixError> { Ok(()) } /// Destroys the physical storage target represented by this backend. /// /// This is a persistence lifecycle operation, not a logical SQL operation. /// /// Callers should treat the backend as the authority for what constitutes /// the full storage target. For example: /// /// - native SQLite may delete the main database file plus WAL/SHM sidecars /// - wasm/opfs SQLite may clear the persisted OPFS target /// - Postgres may drop or clear the configured schema/database target /// /// Callers must not attempt to infer or delete backend-owned physical /// artifacts themselves. /// /// Implementations may choose not to support destroy if the backend /// instance does not have enough information or authority to remove its /// target. 
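    ///
    /// # Example (sketch)
    ///
    /// A minimal caller-side sketch, assuming an `Arc<dyn Backend>` handle named
    /// `backend` that the caller is finished with. `close` only releases
    /// resources; `destroy` wipes the storage target and may return an error on
    /// backends that do not support it.
    ///
    /// ```ignore
    /// backend.close().await?;   // normal teardown: committed data stays durable
    /// // or, to irreversibly remove the physical storage target:
    /// backend.destroy().await?;
    /// ```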
async fn destroy(&self) -> Result<(), LixError> { Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "destroy is not supported by this backend".to_string(), hint: None, details: None, }) } } #[async_trait] pub trait BackendReadTransaction: Send + Sync { async fn get_values( &mut self, request: BackendKvGetRequest, ) -> Result; async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result; async fn scan_keys( &mut self, request: BackendKvScanRequest, ) -> Result; async fn scan_values( &mut self, request: BackendKvScanRequest, ) -> Result; async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> Result; async fn rollback(self: Box) -> Result<(), LixError>; } #[async_trait] pub trait BackendWriteTransaction: BackendReadTransaction { async fn write_kv_batch( &mut self, batch: BackendKvWriteBatch, ) -> Result; async fn commit(self: Box) -> Result<(), LixError>; } ================================================ FILE: packages/engine/src/binary_cas/chunking.rs ================================================ const FASTCDC_MIN_CHUNK_BYTES: usize = 16 * 1024; const FASTCDC_AVG_CHUNK_BYTES: usize = 64 * 1024; const FASTCDC_MAX_CHUNK_BYTES: usize = 256 * 1024; const SINGLE_CHUNK_FAST_PATH_MAX_BYTES: usize = 64 * 1024; #[allow(dead_code)] pub(crate) fn should_materialize_chunk_cas(data: &[u8]) -> bool { data.len() > SINGLE_CHUNK_FAST_PATH_MAX_BYTES } pub(crate) fn fastcdc_chunk_ranges(data: &[u8]) -> Vec<(usize, usize)> { if data.is_empty() { return Vec::new(); } if data.len() <= SINGLE_CHUNK_FAST_PATH_MAX_BYTES { return vec![(0, data.len())]; } fastcdc::v2020::FastCDC::new( data, FASTCDC_MIN_CHUNK_BYTES as u32, FASTCDC_AVG_CHUNK_BYTES as u32, FASTCDC_MAX_CHUNK_BYTES as u32, ) .map(|chunk| { let start = chunk.offset as usize; let end = start + (chunk.length as usize); (start, end) }) .collect() } ================================================ FILE: packages/engine/src/binary_cas/codec.rs ================================================ use crate::LixError; // Binary CAS physical rows: // - manifest: BCM2 | kind:u8 | blob_size:u64 | kind payload // - empty payload: [] // - single payload: chunk_hash:[u8;32] // - chunked payload: chunk_count:u32 // - manifest chunk: BCC1 | chunk_hash:[u8;32] | uncompressed_len:u64 // - chunk: BCK1 | codec:u8 | uncompressed_len:u64 | payload:[u8] const MANIFEST_MAGIC: &[u8; 4] = b"BCM2"; const MANIFEST_CHUNK_MAGIC: &[u8; 4] = b"BCC1"; const CHUNK_MAGIC: &[u8; 4] = b"BCK1"; const MANIFEST_KIND_EMPTY: u8 = 0; const MANIFEST_KIND_SINGLE_CHUNK: u8 = 1; const MANIFEST_KIND_CHUNKED: u8 = 2; const CHUNK_CODEC_RAW_TAG: u8 = 0; const HASH_BYTES: usize = 32; const MANIFEST_HEADER_BYTES: usize = 4 + 1 + 8; const EMPTY_MANIFEST_BYTES: usize = MANIFEST_HEADER_BYTES; const SINGLE_CHUNK_MANIFEST_BYTES: usize = MANIFEST_HEADER_BYTES + HASH_BYTES; const CHUNKED_MANIFEST_BYTES: usize = MANIFEST_HEADER_BYTES + 4; const MANIFEST_CHUNK_BYTES: usize = 4 + HASH_BYTES + 8; const CHUNK_HEADER_BYTES: usize = 4 + 1 + 8; #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(crate) enum BinaryChunkCodec { Raw, } impl BinaryChunkCodec { fn tag(self) -> u8 { match self { Self::Raw => CHUNK_CODEC_RAW_TAG, } } fn from_tag(tag: u8) -> Result { match tag { CHUNK_CODEC_RAW_TAG => Ok(Self::Raw), other => Err(codec_error(format!( "unsupported binary CAS chunk codec tag {other}" ))), } } } #[derive(Debug, Clone)] pub(crate) struct EncodedBinaryChunkPayload { pub(crate) codec: BinaryChunkCodec, pub(crate) data: Vec, } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) enum 
BinaryCasManifest { Empty { size_bytes: u64, }, SingleChunk { size_bytes: u64, chunk_hash: [u8; HASH_BYTES], }, Chunked { size_bytes: u64, chunk_count: u32, }, } impl BinaryCasManifest { pub(crate) fn size_bytes(&self) -> u64 { match self { Self::Empty { size_bytes } | Self::SingleChunk { size_bytes, .. } | Self::Chunked { size_bytes, .. } => *size_bytes, } } } #[cfg(test)] pub(crate) fn binary_blob_hash_hex(data: &[u8]) -> String { crate::common::stable_content_fingerprint_hex(data) } pub(crate) fn binary_blob_hash_bytes(data: &[u8]) -> [u8; HASH_BYTES] { *blake3::hash(data).as_bytes() } pub(crate) fn hash_hex_to_bytes(hash_hex: &str, label: &str) -> Result<[u8; HASH_BYTES], LixError> { if hash_hex.len() != HASH_BYTES * 2 { return Err(codec_error(format!( "{label} hash must be {} hex characters, got {}", HASH_BYTES * 2, hash_hex.len() ))); } let mut out = [0u8; HASH_BYTES]; let bytes = hash_hex.as_bytes(); for index in 0..HASH_BYTES { out[index] = (hex_value(bytes[index * 2], label)? << 4) | hex_value(bytes[index * 2 + 1], label)?; } Ok(out) } pub(crate) fn hash_bytes_to_hex(bytes: &[u8; HASH_BYTES]) -> String { blake3::Hash::from_bytes(*bytes).to_hex().to_string() } pub(crate) fn encode_binary_cas_manifest(manifest: &BinaryCasManifest) -> Vec { let capacity = match manifest { BinaryCasManifest::Empty { .. } => EMPTY_MANIFEST_BYTES, BinaryCasManifest::SingleChunk { .. } => SINGLE_CHUNK_MANIFEST_BYTES, BinaryCasManifest::Chunked { .. } => CHUNKED_MANIFEST_BYTES, }; let mut out = Vec::with_capacity(capacity); out.extend_from_slice(MANIFEST_MAGIC); match manifest { BinaryCasManifest::Empty { size_bytes } => { out.push(MANIFEST_KIND_EMPTY); out.extend_from_slice(&size_bytes.to_be_bytes()); } BinaryCasManifest::SingleChunk { size_bytes, chunk_hash, } => { out.push(MANIFEST_KIND_SINGLE_CHUNK); out.extend_from_slice(&size_bytes.to_be_bytes()); out.extend_from_slice(chunk_hash); } BinaryCasManifest::Chunked { size_bytes, chunk_count, } => { out.push(MANIFEST_KIND_CHUNKED); out.extend_from_slice(&size_bytes.to_be_bytes()); out.extend_from_slice(&chunk_count.to_be_bytes()); } } out } pub(crate) fn decode_binary_cas_manifest(bytes: &[u8]) -> Result { if bytes.len() < MANIFEST_HEADER_BYTES { return Err(codec_error(format!( "binary CAS manifest must be at least {MANIFEST_HEADER_BYTES} bytes, got {}", bytes.len() ))); } require_magic(bytes, MANIFEST_MAGIC, "binary CAS manifest")?; let size_bytes = u64::from_be_bytes(bytes[5..13].try_into().expect("fixed slice")); match bytes[4] { MANIFEST_KIND_EMPTY => { require_len(bytes, EMPTY_MANIFEST_BYTES, "binary CAS empty manifest")?; Ok(BinaryCasManifest::Empty { size_bytes }) } MANIFEST_KIND_SINGLE_CHUNK => { require_len( bytes, SINGLE_CHUNK_MANIFEST_BYTES, "binary CAS single-chunk manifest", )?; let chunk_hash = bytes[13..45].try_into().expect("fixed slice"); Ok(BinaryCasManifest::SingleChunk { size_bytes, chunk_hash, }) } MANIFEST_KIND_CHUNKED => { require_len(bytes, CHUNKED_MANIFEST_BYTES, "binary CAS chunked manifest")?; let chunk_count = u32::from_be_bytes(bytes[13..17].try_into().expect("fixed slice")); Ok(BinaryCasManifest::Chunked { size_bytes, chunk_count, }) } other => Err(codec_error(format!( "unsupported binary CAS manifest kind {other}" ))), } } pub(crate) fn encode_binary_cas_manifest_chunk( chunk_hash: &[u8; HASH_BYTES], chunk_size: u64, ) -> Vec { let mut out = Vec::with_capacity(MANIFEST_CHUNK_BYTES); out.extend_from_slice(MANIFEST_CHUNK_MAGIC); out.extend_from_slice(chunk_hash); out.extend_from_slice(&chunk_size.to_be_bytes()); out } 
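/// Decodes one fixed-size manifest chunk row (`BCC1` magic, 32-byte chunk hash,
/// big-endian u64 chunk size; 44 bytes total) back into `(chunk_hash, chunk_size)`.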
pub(crate) fn decode_binary_cas_manifest_chunk( bytes: &[u8], ) -> Result<([u8; HASH_BYTES], u64), LixError> { if bytes.len() != MANIFEST_CHUNK_BYTES { return Err(codec_error(format!( "binary CAS manifest chunk must be {MANIFEST_CHUNK_BYTES} bytes, got {}", bytes.len() ))); } require_magic(bytes, MANIFEST_CHUNK_MAGIC, "binary CAS manifest chunk")?; let chunk_hash = bytes[4..36].try_into().expect("fixed slice"); let chunk_size = u64::from_be_bytes(bytes[36..44].try_into().expect("fixed slice")); Ok((chunk_hash, chunk_size)) } pub(crate) fn encode_binary_cas_chunk( codec: BinaryChunkCodec, uncompressed_len: u64, payload: &[u8], ) -> Vec { let mut out = Vec::with_capacity(CHUNK_HEADER_BYTES + payload.len()); out.extend_from_slice(CHUNK_MAGIC); out.push(codec.tag()); out.extend_from_slice(&uncompressed_len.to_be_bytes()); out.extend_from_slice(payload); out } pub(crate) fn decode_binary_cas_chunk( bytes: &[u8], ) -> Result<(BinaryChunkCodec, u64, &[u8]), LixError> { if bytes.len() < CHUNK_HEADER_BYTES { return Err(codec_error(format!( "binary CAS chunk must be at least {CHUNK_HEADER_BYTES} bytes, got {}", bytes.len() ))); } require_magic(bytes, CHUNK_MAGIC, "binary CAS chunk")?; let codec = BinaryChunkCodec::from_tag(bytes[4])?; let uncompressed_len = u64::from_be_bytes(bytes[5..13].try_into().expect("fixed slice")); Ok((codec, uncompressed_len, &bytes[CHUNK_HEADER_BYTES..])) } fn require_magic(bytes: &[u8], expected: &[u8; 4], label: &str) -> Result<(), LixError> { if &bytes[..4] == expected { return Ok(()); } Err(codec_error(format!( "{label} has unsupported binary format" ))) } fn require_len(bytes: &[u8], expected: usize, label: &str) -> Result<(), LixError> { if bytes.len() == expected { return Ok(()); } Err(codec_error(format!( "{label} must be {expected} bytes, got {}", bytes.len() ))) } fn hex_value(byte: u8, label: &str) -> Result { match byte { b'0'..=b'9' => Ok(byte - b'0'), b'a'..=b'f' => Ok(byte - b'a' + 10), b'A'..=b'F' => Ok(byte - b'A' + 10), _ => Err(codec_error(format!("{label} hash contains non-hex bytes"))), } } fn codec_error(message: String) -> LixError { LixError::new("LIX_ERROR_UNKNOWN", message) } pub(crate) fn encode_binary_chunk_payload(chunk_data: &[u8]) -> EncodedBinaryChunkPayload { EncodedBinaryChunkPayload { codec: BinaryChunkCodec::Raw, data: chunk_data.to_vec(), } } #[cfg(test)] mod tests { use super::*; #[test] fn manifests_roundtrip_fixed_binary_rows() { let chunk_hash = binary_blob_hash_bytes(b"chunk"); let cases = vec![ ( BinaryCasManifest::Empty { size_bytes: 0 }, EMPTY_MANIFEST_BYTES, ), ( BinaryCasManifest::SingleChunk { size_bytes: 42, chunk_hash, }, SINGLE_CHUNK_MANIFEST_BYTES, ), ( BinaryCasManifest::Chunked { size_bytes: 42, chunk_count: 7, }, CHUNKED_MANIFEST_BYTES, ), ]; for (manifest, expected_len) in cases { let encoded = encode_binary_cas_manifest(&manifest); assert_eq!(encoded.len(), expected_len); assert_eq!(decode_binary_cas_manifest(&encoded).unwrap(), manifest); } } #[test] fn manifest_chunk_roundtrips_fixed_binary_row() { let hash = binary_blob_hash_bytes(b"chunk"); let encoded = encode_binary_cas_manifest_chunk(&hash, 1024); assert_eq!(encoded.len(), MANIFEST_CHUNK_BYTES); assert_eq!( decode_binary_cas_manifest_chunk(&encoded).unwrap(), (hash, 1024) ); } #[test] fn chunk_roundtrips_payload_as_remaining_bytes() { let payload = b"hello payload"; let encoded = encode_binary_cas_chunk(BinaryChunkCodec::Raw, payload.len() as u64, payload); assert_eq!(&encoded[..4], CHUNK_MAGIC); let (codec, uncompressed_len, decoded_payload) = 
decode_binary_cas_chunk(&encoded).unwrap(); assert_eq!(codec, BinaryChunkCodec::Raw); assert_eq!(uncompressed_len, payload.len() as u64); assert_eq!(decoded_payload, payload); } #[test] fn wrong_magic_is_rejected() { let mut encoded = encode_binary_cas_manifest(&BinaryCasManifest::Empty { size_bytes: 0 }); encoded[0] = b'X'; let error = decode_binary_cas_manifest(&encoded).unwrap_err(); assert!(error.message.contains("unsupported binary format")); } #[test] fn hex_hashes_roundtrip_to_32_byte_keys() { let hash_hex = binary_blob_hash_hex(b"blob"); let hash_bytes = hash_hex_to_bytes(&hash_hex, "test").unwrap(); assert_eq!(hash_bytes.len(), 32); assert_eq!(hash_bytes_to_hex(&hash_bytes), hash_hex); } } ================================================ FILE: packages/engine/src/binary_cas/context.rs ================================================ use async_trait::async_trait; use crate::binary_cas::{ BlobBytesBatch, BlobExistsBatch, BlobHash, BlobMetadataBatch, BlobWrite, BlobWriteReceipt, }; use crate::storage::{StorageReader, StorageWriteSet}; use crate::LixError; use std::collections::HashSet; #[async_trait] pub(crate) trait BlobDataReader: Send + Sync { async fn load_bytes_many(&self, hashes: &[BlobHash]) -> Result; } /// Long-lived Binary CAS context factory. /// /// The context does not own storage. Callers explicitly provide a KV store via /// `reader(...)` or `writer(...)`, keeping backend and transaction ownership at /// the execution layer. pub(crate) struct BinaryCasContext; impl BinaryCasContext { pub(crate) fn new() -> Self { Self } /// Creates a Binary CAS reader over any storage reader. /// /// The reader can be a read transaction or the active write transaction /// when reads must participate in transaction-local visibility. pub(crate) fn reader(&self, store: S) -> BinaryCasStoreReader where S: StorageReader, { BinaryCasStoreReader { store } } pub(crate) fn writer<'a>(&self, writes: &'a mut StorageWriteSet) -> BinaryCasWriter<'a> { BinaryCasWriter::new(writes) } } #[async_trait] impl BlobDataReader for BinaryCasStoreReader where S: StorageReader + Clone + Send + Sync, { async fn load_bytes_many(&self, hashes: &[BlobHash]) -> Result { let mut reader = BinaryCasStoreReader { store: self.store.clone(), }; BinaryCasStoreReader::load_bytes_many(&mut reader, hashes).await } } /// Binary CAS reader over a caller-supplied KV store. pub(crate) struct BinaryCasStoreReader { store: S, } impl BinaryCasStoreReader where S: StorageReader, { #[allow(dead_code)] pub(crate) async fn exists_many( &mut self, hashes: &[BlobHash], ) -> Result { crate::binary_cas::kv::exists_many(&mut self.store, hashes).await } #[allow(dead_code)] pub(crate) async fn load_metadata_many( &mut self, hashes: &[BlobHash], ) -> Result { crate::binary_cas::kv::load_metadata_many(&mut self.store, hashes).await } pub(crate) async fn load_bytes_many( &mut self, hashes: &[BlobHash], ) -> Result { crate::binary_cas::kv::load_bytes_many(&mut self.store, hashes).await } #[cfg(feature = "storage-benches")] pub(crate) async fn count_blob_manifests(&mut self) -> Result { crate::binary_cas::kv::count_manifests(&mut self.store).await } } /// Transaction-scoped Binary CAS writer. /// /// This type does not begin, commit, or roll back transactions. It only writes /// CAS data into the transaction supplied by the caller. 
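///
/// # Example (sketch)
///
/// A usage sketch mirroring the tests in this module; `data` and `transaction`
/// are assumed to be in scope (a byte slice and an open write transaction).
///
/// ```ignore
/// let mut writes = StorageWriteSet::new();
/// let receipt = BinaryCasContext::new().writer(&mut writes).stage_bytes(data)?;
/// writes.apply(&mut transaction.as_mut()).await?;
/// transaction.commit().await?;
/// ```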
pub(crate) struct BinaryCasWriter<'a> { writes: &'a mut StorageWriteSet, blob_hashes: HashSet<[u8; 32]>, chunk_keys: HashSet>, } impl<'a> BinaryCasWriter<'a> { fn new(writes: &'a mut StorageWriteSet) -> Self { Self { writes, blob_hashes: HashSet::new(), chunk_keys: HashSet::new(), } } pub(crate) fn stage_bytes(&mut self, bytes: &[u8]) -> Result { crate::binary_cas::kv::stage_blob_write( self.writes, &mut self.blob_hashes, &mut self.chunk_keys, &BlobWrite { bytes }, ) } #[allow(dead_code)] pub(crate) fn stage_many( &mut self, writes: &[BlobWrite<'_>], ) -> Result, LixError> { writes .iter() .map(|write| { crate::binary_cas::kv::stage_blob_write( self.writes, &mut self.blob_hashes, &mut self.chunk_keys, write, ) }) .collect() } } ================================================ FILE: packages/engine/src/binary_cas/kv.rs ================================================ #![allow(dead_code)] use crate::binary_cas::chunking::fastcdc_chunk_ranges; use crate::binary_cas::codec::{ decode_binary_cas_chunk, decode_binary_cas_manifest, decode_binary_cas_manifest_chunk, encode_binary_cas_chunk, encode_binary_cas_manifest, encode_binary_cas_manifest_chunk, encode_binary_chunk_payload, BinaryCasManifest, BinaryChunkCodec, }; use crate::binary_cas::{ BlobBytesBatch, BlobExistsBatch, BlobHash, BlobLayout, BlobMetadata, BlobMetadataBatch, BlobWrite, BlobWriteReceipt, }; use crate::storage::{ KvGetGroup, KvGetRequest, KvScanRange, KvScanRequest, StorageReader, StorageWriteSet, }; use crate::LixError; use std::collections::{HashMap, HashSet}; pub(crate) const BINARY_CAS_MANIFEST_NAMESPACE: &str = "binary_cas.manifest"; pub(crate) const BINARY_CAS_MANIFEST_CHUNK_NAMESPACE: &str = "binary_cas.manifest_chunk"; pub(crate) const BINARY_CAS_CHUNK_NAMESPACE: &str = "binary_cas.chunk"; #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct KvBlobManifestChunk { pub(crate) chunk_hash: [u8; 32], pub(crate) chunk_size: u64, } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct KvChunk { pub(crate) codec: BinaryChunkCodec, pub(crate) uncompressed_len: u64, pub(crate) data: Vec, } pub(crate) async fn load_manifest( store: &mut impl StorageReader, blob_hash: BlobHash, ) -> Result, LixError> { let Some(bytes) = get_one( store, BINARY_CAS_MANIFEST_NAMESPACE, manifest_key(blob_hash), ) .await? else { return Ok(None); }; decode_binary_cas_manifest(&bytes).map(Some) } #[cfg(feature = "storage-benches")] pub(crate) async fn count_manifests(store: &mut impl StorageReader) -> Result { Ok(scan_all_values( store, BINARY_CAS_MANIFEST_NAMESPACE, KvScanRange::Prefix(Vec::new()), ) .await? .len()) } pub(crate) fn stage_manifest( writes: &mut StorageWriteSet, blob_hash: BlobHash, manifest: &BinaryCasManifest, ) { writes.put( BINARY_CAS_MANIFEST_NAMESPACE, manifest_key(blob_hash), encode_binary_cas_manifest(manifest), ); } pub(crate) async fn scan_manifest_chunks( store: &mut impl StorageReader, blob_hash: BlobHash, ) -> Result, LixError> { scan_all_values( store, BINARY_CAS_MANIFEST_CHUNK_NAMESPACE, KvScanRange::Prefix(manifest_chunk_prefix(blob_hash)), ) .await? 
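        // Manifest chunk keys are the blob hash followed by a big-endian chunk
        // index, so this prefix scan yields rows already ordered by chunk index.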
.into_iter() .map(|value| { let (chunk_hash, chunk_size) = decode_binary_cas_manifest_chunk(&value)?; Ok(KvBlobManifestChunk { chunk_hash, chunk_size, }) }) .collect() } pub(crate) fn stage_manifest_chunk( writes: &mut StorageWriteSet, blob_hash: BlobHash, chunk_index: u64, chunk: &KvBlobManifestChunk, ) { writes.put( BINARY_CAS_MANIFEST_CHUNK_NAMESPACE, manifest_chunk_key(blob_hash, chunk_index), encode_binary_cas_manifest_chunk(&chunk.chunk_hash, chunk.chunk_size), ); } pub(crate) async fn load_chunk( store: &mut impl StorageReader, chunk_hash: BlobHash, ) -> Result, LixError> { let Some(bytes) = get_one(store, BINARY_CAS_CHUNK_NAMESPACE, chunk_key(chunk_hash)).await? else { return Ok(None); }; let (codec, uncompressed_len, payload) = decode_binary_cas_chunk(&bytes)?; Ok(Some(KvChunk { codec, uncompressed_len, data: payload.to_vec(), })) } pub(crate) fn stage_chunk(writes: &mut StorageWriteSet, chunk_hash: BlobHash, chunk: &KvChunk) { writes.put( BINARY_CAS_CHUNK_NAMESPACE, chunk_key(chunk_hash), encode_binary_cas_chunk(chunk.codec, chunk.uncompressed_len, &chunk.data), ); } async fn get_one( store: &mut impl StorageReader, namespace: &str, key: Vec, ) -> Result>, LixError> { Ok(store .get_values(KvGetRequest { groups: vec![KvGetGroup { namespace: namespace.to_string(), keys: vec![key], }], }) .await? .groups .into_iter() .next() .and_then(|group| group.single_value_owned())) } async fn scan_all_values( store: &mut impl StorageReader, namespace: &str, range: KvScanRange, ) -> Result>, LixError> { let page = store .scan_values(KvScanRequest { namespace: namespace.to_string(), range, after: None, limit: usize::MAX, }) .await? .values; Ok(page.iter().map(<[u8]>::to_vec).collect()) } pub(crate) async fn load_metadata_many( store: &mut impl StorageReader, hashes: &[BlobHash], ) -> Result { if hashes.is_empty() { return Ok(BlobMetadataBatch::new(Vec::new())); } let rows = store .get_values(KvGetRequest { groups: vec![KvGetGroup { namespace: BINARY_CAS_MANIFEST_NAMESPACE.to_string(), keys: hashes.iter().map(|hash| manifest_key(*hash)).collect(), }], }) .await? .groups .into_iter() .next() .map(|group| { group .values_iter() .map(|value| value.map(<[u8]>::to_vec)) .collect::>() }) .unwrap_or_default(); if rows.len() != hashes.len() { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "binary CAS metadata read expected {} rows, got {}", hashes.len(), rows.len() ), )); } let entries = rows .into_iter() .zip(hashes.iter().copied()) .map(|(row, hash)| { row.map(|bytes| { let manifest = decode_binary_cas_manifest(&bytes)?; metadata_from_manifest(hash, manifest) }) .transpose() }) .collect::, _>>()?; Ok(BlobMetadataBatch::new(entries)) } pub(crate) async fn exists_many( store: &mut impl StorageReader, hashes: &[BlobHash], ) -> Result { Ok(BlobExistsBatch::new( load_metadata_many(store, hashes) .await? 
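            // Existence is derived from manifest metadata: a blob exists exactly
            // when its manifest row is present and decodes.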
.into_vec() .into_iter() .map(|metadata| metadata.is_some()) .collect(), )) } pub(crate) async fn load_bytes_many( store: &mut impl StorageReader, hashes: &[BlobHash], ) -> Result { let metadata = load_metadata_many(store, hashes).await?.into_vec(); let mut chunked_manifests = Vec::new(); let mut requested_chunks = Vec::new(); let mut seen_chunks = HashSet::new(); for (index, metadata) in metadata.iter().enumerate() { let Some(metadata) = metadata else { continue; }; match &metadata.layout { BlobLayout::Empty => {} BlobLayout::SingleChunk { chunk_hash } => { if seen_chunks.insert(*chunk_hash) { requested_chunks.push(*chunk_hash); } } BlobLayout::Chunked { chunk_count } => { let manifest_chunks = scan_manifest_chunks(store, metadata.hash).await?; if manifest_chunks.len() != *chunk_count as usize { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "binary CAS blob '{}' expected {} chunks, found {}", metadata.hash.to_hex(), chunk_count, manifest_chunks.len() ), )); } for manifest_chunk in &manifest_chunks { let chunk_hash = BlobHash::from_bytes(manifest_chunk.chunk_hash); if seen_chunks.insert(chunk_hash) { requested_chunks.push(chunk_hash); } } chunked_manifests.push((index, manifest_chunks)); } } } let chunk_rows = load_chunk_rows(store, &requested_chunks).await?; let chunk_rows_by_hash = requested_chunks .into_iter() .zip(chunk_rows.into_iter()) .collect::>(); let chunked_manifests_by_index = chunked_manifests .into_iter() .collect::>>(); let entries = metadata .into_iter() .enumerate() .map(|(index, metadata)| { metadata .map(|metadata| { assemble_blob_bytes( &metadata, &chunk_rows_by_hash, chunked_manifests_by_index.get(&index), ) }) .transpose() }) .collect::, _>>()?; Ok(BlobBytesBatch::new(entries)) } async fn load_chunk_rows( store: &mut impl StorageReader, hashes: &[BlobHash], ) -> Result>>, LixError> { if hashes.is_empty() { return Ok(Vec::new()); } Ok(store .get_values(KvGetRequest { groups: vec![KvGetGroup { namespace: BINARY_CAS_CHUNK_NAMESPACE.to_string(), keys: hashes.iter().map(|hash| chunk_key(*hash)).collect(), }], }) .await? 
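        // The returned values are positionally aligned with the requested chunk
        // hashes; missing chunk rows stay `None` and are reported by the caller.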
.groups .into_iter() .next() .map(|group| { group .values_iter() .map(|value| value.map(<[u8]>::to_vec)) .collect::>() }) .unwrap_or_default()) } fn assemble_blob_bytes( metadata: &BlobMetadata, chunk_rows_by_hash: &HashMap>>, chunked_manifest: Option<&Vec>, ) -> Result, LixError> { let expected_blob_size = persisted_size_to_usize(metadata.size_bytes, "binary CAS blob")?; let bytes = match &metadata.layout { BlobLayout::Empty => { if metadata.hash != BlobHash::from_content(&[]) { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "binary CAS blob '{}' failed content-address verification", metadata.hash.to_hex() ), )); } Vec::new() } BlobLayout::SingleChunk { chunk_hash } => { let chunk = decode_chunk_from_map( chunk_rows_by_hash, metadata.hash, *chunk_hash, expected_blob_size, )?; if *chunk_hash != metadata.hash && BlobHash::from_content(&chunk) != metadata.hash { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "binary CAS blob '{}' failed content-address verification", metadata.hash.to_hex() ), )); } chunk } BlobLayout::Chunked { chunk_count } => { let Some(manifest_chunks) = chunked_manifest else { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "binary CAS blob '{}' missing chunk manifest", metadata.hash.to_hex() ), )); }; if manifest_chunks.len() != *chunk_count as usize { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "binary CAS blob '{}' expected {} chunks, found {}", metadata.hash.to_hex(), chunk_count, manifest_chunks.len() ), )); } let mut out = Vec::with_capacity(expected_blob_size); for manifest_chunk in manifest_chunks { let chunk_hash = BlobHash::from_bytes(manifest_chunk.chunk_hash); let expected_chunk_size = persisted_size_to_usize(manifest_chunk.chunk_size, "binary CAS chunk")?; let chunk = decode_chunk_from_map( chunk_rows_by_hash, metadata.hash, chunk_hash, expected_chunk_size, )?; out.extend_from_slice(&chunk); } if out.len() != expected_blob_size { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "binary CAS blob '{}' expected {} bytes, decoded {} bytes", metadata.hash.to_hex(), expected_blob_size, out.len() ), )); } if BlobHash::from_content(&out) != metadata.hash { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "binary CAS blob '{}' failed content-address verification", metadata.hash.to_hex() ), )); } out } }; Ok(bytes) } fn decode_chunk_from_map( chunk_rows_by_hash: &HashMap>>, blob_hash: BlobHash, chunk_hash: BlobHash, expected_chunk_size: usize, ) -> Result, LixError> { let Some(Some(chunk_bytes)) = chunk_rows_by_hash.get(&chunk_hash) else { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "binary CAS chunk '{}' is missing for blob '{}'", chunk_hash.to_hex(), blob_hash.to_hex() ), )); }; decode_and_verify_chunk(chunk_bytes, expected_chunk_size, blob_hash, chunk_hash) } fn decode_and_verify_chunk( chunk_bytes: &[u8], expected_chunk_size: usize, blob_hash: BlobHash, chunk_hash: BlobHash, ) -> Result, LixError> { let (codec, uncompressed_len, chunk_payload) = decode_binary_cas_chunk(chunk_bytes)?; if uncompressed_len != expected_chunk_size as u64 { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "binary CAS chunk '{}' for blob '{}' expected {} uncompressed bytes, row says {}", chunk_hash.to_hex(), blob_hash.to_hex(), expected_chunk_size, uncompressed_len ), )); } let BinaryChunkCodec::Raw = codec; if chunk_payload.len() != expected_chunk_size { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "binary CAS chunk '{}' for blob '{}' expected {} decoded bytes, got {}", chunk_hash.to_hex(), 
blob_hash.to_hex(), expected_chunk_size, chunk_payload.len() ), )); } if BlobHash::from_content(chunk_payload) != chunk_hash { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "binary CAS chunk '{}' for blob '{}' failed content-address verification", chunk_hash.to_hex(), blob_hash.to_hex() ), )); } Ok(chunk_payload.to_vec()) } pub(crate) fn stage_blob_write( writes: &mut StorageWriteSet, blob_hashes: &mut HashSet<[u8; 32]>, chunk_keys: &mut HashSet>, write: &BlobWrite<'_>, ) -> Result { let blob_hash = BlobHash::from_content(write.bytes); let chunk_ranges = fastcdc_chunk_ranges(write.bytes); let layout = match chunk_ranges.as_slice() { [] => BlobLayout::Empty, [(start, end)] => BlobLayout::SingleChunk { chunk_hash: BlobHash::from_content(&write.bytes[*start..*end]), }, _ => BlobLayout::Chunked { chunk_count: u32::try_from(chunk_ranges.len()).map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "binary CAS blob has too many chunks for manifest".to_string(), ) })?, }, }; let receipt = BlobWriteReceipt { hash: blob_hash, size_bytes: write.bytes.len() as u64, layout: layout.clone(), }; if !blob_hashes.insert(blob_hash.into_bytes()) { return Ok(receipt); } match &layout { BlobLayout::Empty => { stage_manifest( writes, blob_hash, &BinaryCasManifest::Empty { size_bytes: 0 }, ); } BlobLayout::SingleChunk { chunk_hash } => { let chunk_hash = *chunk_hash; stage_manifest( writes, blob_hash, &BinaryCasManifest::SingleChunk { size_bytes: write.bytes.len() as u64, chunk_hash: chunk_hash.into_bytes(), }, ); if chunk_keys.insert(chunk_key(chunk_hash)) { let encoded_chunk = encode_binary_chunk_payload(write.bytes); stage_chunk( writes, chunk_hash, &KvChunk { codec: encoded_chunk.codec, uncompressed_len: write.bytes.len() as u64, data: encoded_chunk.data, }, ); } } BlobLayout::Chunked { chunk_count } => { stage_manifest( writes, blob_hash, &BinaryCasManifest::Chunked { size_bytes: write.bytes.len() as u64, chunk_count: *chunk_count, }, ); for (chunk_index, (start, end)) in chunk_ranges.into_iter().enumerate() { let chunk_data = &write.bytes[start..end]; let chunk_hash = BlobHash::from_content(chunk_data); let chunk_key = chunk_key(chunk_hash); if chunk_keys.insert(chunk_key.clone()) { let encoded_chunk = encode_binary_chunk_payload(chunk_data); stage_chunk( writes, chunk_hash, &KvChunk { codec: encoded_chunk.codec, uncompressed_len: chunk_data.len() as u64, data: encoded_chunk.data, }, ); } stage_manifest_chunk( writes, blob_hash, chunk_index as u64, &KvBlobManifestChunk { chunk_hash: *chunk_hash.as_bytes(), chunk_size: chunk_data.len() as u64, }, ); } } } Ok(receipt) } fn metadata_from_manifest( hash: BlobHash, manifest: BinaryCasManifest, ) -> Result { let size_bytes = manifest.size_bytes(); let layout = match manifest { BinaryCasManifest::Empty { size_bytes } => { if size_bytes != 0 { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "binary CAS empty blob '{}' has nonzero size {size_bytes}", hash.to_hex() ), )); } BlobLayout::Empty } BinaryCasManifest::SingleChunk { chunk_hash, .. } => BlobLayout::SingleChunk { chunk_hash: BlobHash::from_bytes(chunk_hash), }, BinaryCasManifest::Chunked { chunk_count, .. 
} => BlobLayout::Chunked { chunk_count }, }; Ok(BlobMetadata { hash, size_bytes, layout, }) } fn manifest_key(blob_hash: BlobHash) -> Vec { blob_hash.as_bytes().to_vec() } fn manifest_chunk_prefix(blob_hash: BlobHash) -> Vec { blob_hash.as_bytes().to_vec() } fn manifest_chunk_key(blob_hash: BlobHash, chunk_index: u64) -> Vec { let mut out = Vec::with_capacity(40); out.extend_from_slice(blob_hash.as_bytes()); out.extend_from_slice(&chunk_index.to_be_bytes()); out } fn chunk_key(chunk_hash: BlobHash) -> Vec { chunk_hash.as_bytes().to_vec() } fn persisted_size_to_usize(size: u64, label: &str) -> Result { usize::try_from(size).map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", format!("{label} size {size} does not fit in this runtime"), ) }) } #[cfg(test)] mod tests { use super::*; use crate::backend::testing::UnitTestBackend; use crate::binary_cas::BinaryCasContext; use crate::storage::{StorageContext, StorageWriteSet}; fn stage_blob_to_writes(writes: &mut StorageWriteSet, data: &[u8]) { let mut writer = BinaryCasContext::new().writer(writes); writer.stage_bytes(data).expect("blob write should persist"); } #[tokio::test] async fn stores_manifest_chunks_in_scan_order() { let storage = StorageContext::new(std::sync::Arc::new(UnitTestBackend::new())); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let blob_hash = BlobHash::from_content(b"blob-a"); let chunk_a_hash = BlobHash::from_content(b"chunk-a").into_bytes(); let chunk_b_hash = BlobHash::from_content(b"chunk-b").into_bytes(); { let mut writes = StorageWriteSet::new(); stage_manifest( &mut writes, blob_hash, &BinaryCasManifest::Chunked { size_bytes: 12, chunk_count: 2, }, ); stage_manifest_chunk( &mut writes, blob_hash, 1, &KvBlobManifestChunk { chunk_hash: chunk_b_hash, chunk_size: 6, }, ); stage_manifest_chunk( &mut writes, blob_hash, 0, &KvBlobManifestChunk { chunk_hash: chunk_a_hash, chunk_size: 6, }, ); writes .apply(&mut transaction.as_mut()) .await .expect("manifest writes should apply"); } transaction.commit().await.expect("commit should succeed"); let mut store = storage .begin_read_transaction() .await .expect("read transaction should open"); assert_eq!( load_manifest(&mut store, blob_hash) .await .expect("manifest should load"), Some(BinaryCasManifest::Chunked { size_bytes: 12, chunk_count: 2, }) ); let mut store = storage .begin_read_transaction() .await .expect("read transaction should open"); assert_eq!( scan_manifest_chunks(&mut store, blob_hash) .await .expect("manifest chunks should scan"), vec![ KvBlobManifestChunk { chunk_hash: chunk_a_hash, chunk_size: 6, }, KvBlobManifestChunk { chunk_hash: chunk_b_hash, chunk_size: 6, }, ] ); } #[tokio::test] async fn stores_encoded_chunks_by_chunk_hash() { let storage = StorageContext::new(std::sync::Arc::new(UnitTestBackend::new())); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let chunk = KvChunk { codec: BinaryChunkCodec::Raw, uncompressed_len: 5, data: b"hello".to_vec(), }; let chunk_hash = BlobHash::from_content(b"chunk-a"); { let mut writes = StorageWriteSet::new(); stage_chunk(&mut writes, chunk_hash, &chunk); writes .apply(&mut transaction.as_mut()) .await .expect("chunk should apply"); } transaction.commit().await.expect("commit should succeed"); let mut store = storage .begin_read_transaction() .await .expect("read transaction should open"); assert_eq!( load_chunk(&mut store, chunk_hash) .await .expect("chunk should load"), Some(chunk) ); } #[test] fn 
binary_hash_keys_are_compact_and_manifest_chunks_sort_by_index() { let blob_hash = BlobHash::from_content(b"blob"); let manifest_key = manifest_key(blob_hash); let chunk_key = chunk_key(BlobHash::from_content(b"chunk")); let first = manifest_chunk_key(blob_hash, 1); let second = manifest_chunk_key(blob_hash, 2); let later = manifest_chunk_key(blob_hash, 10); assert_eq!(manifest_key.len(), 32); assert_eq!(chunk_key.len(), 32); assert_eq!(first.len(), 40); assert!(first < second); assert!(second < later); } #[tokio::test] async fn public_kv_api_roundtrips_blob_bytes() { let storage = StorageContext::new(std::sync::Arc::new(UnitTestBackend::new())); let data = b"hello chunked kv cas"; let blob_hash = BlobHash::from_content(data); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let mut writes = StorageWriteSet::new(); stage_blob_to_writes(&mut writes, data); writes .apply(&mut transaction.as_mut()) .await .expect("blob write should apply"); } transaction.commit().await.expect("commit should succeed"); let mut store = storage .begin_read_transaction() .await .expect("read transaction should open"); assert_eq!( load_bytes_many(&mut store, &[blob_hash]) .await .expect("blob should load") .into_vec(), vec![Some(data.to_vec())] ); let mut store = storage .begin_read_transaction() .await .expect("read transaction should open"); assert_eq!( load_manifest(&mut store, blob_hash) .await .expect("manifest should load"), Some(BinaryCasManifest::SingleChunk { size_bytes: data.len() as u64, chunk_hash: BlobHash::from_content(data).into_bytes(), }) ); let mut store = storage .begin_read_transaction() .await .expect("read transaction should open"); assert_eq!( scan_manifest_chunks(&mut store, blob_hash) .await .expect("single-chunk blob should not spill manifest chunks"), Vec::::new() ); let mut store = storage .begin_read_transaction() .await .expect("read transaction should open"); assert_eq!( exists_many(&mut store, &[blob_hash]) .await .expect("blob exists should succeed") .into_vec(), vec![true] ); } #[tokio::test] async fn read_rejects_chunk_bytes_that_do_not_match_manifest_hash() { let storage = StorageContext::new(std::sync::Arc::new(UnitTestBackend::new())); let data = b"same length"; let corrupted = b"SAME length"; let blob_hash = BlobHash::from_content(data); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let mut writes = StorageWriteSet::new(); stage_blob_to_writes(&mut writes, data); writes .apply(&mut transaction.as_mut()) .await .expect("blob write should apply"); } transaction.commit().await.expect("commit should succeed"); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let mut writes = StorageWriteSet::new(); writes.put( BINARY_CAS_CHUNK_NAMESPACE, chunk_key(blob_hash), encode_binary_cas_chunk(BinaryChunkCodec::Raw, corrupted.len() as u64, corrupted), ); writes .apply(&mut transaction.as_mut()) .await .expect("corrupt chunk should overwrite"); } transaction.commit().await.expect("commit should succeed"); let mut store = storage .begin_read_transaction() .await .expect("read transaction should open"); let error = load_bytes_many(&mut store, &[blob_hash]) .await .expect_err("corrupt chunk should be rejected"); assert!(error .message .contains("failed content-address verification")); } #[tokio::test] async fn read_rejects_manifest_that_assembles_wrong_blob_hash() { let storage = 
StorageContext::new(std::sync::Arc::new(UnitTestBackend::new())); let expected = b"expected bytes"; let substituted = b"different byte"; assert_eq!(expected.len(), substituted.len()); let expected_blob_hash = BlobHash::from_content(expected); let substituted_chunk_hash = BlobHash::from_content(substituted); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let mut writes = StorageWriteSet::new(); stage_manifest( &mut writes, expected_blob_hash, &BinaryCasManifest::Chunked { size_bytes: expected.len() as u64, chunk_count: 1, }, ); stage_manifest_chunk( &mut writes, expected_blob_hash, 0, &KvBlobManifestChunk { chunk_hash: BlobHash::from_content(substituted).into_bytes(), chunk_size: substituted.len() as u64, }, ); stage_chunk( &mut writes, substituted_chunk_hash, &KvChunk { codec: BinaryChunkCodec::Raw, uncompressed_len: substituted.len() as u64, data: substituted.to_vec(), }, ); writes .apply(&mut transaction.as_mut()) .await .expect("wrong manifest fixture should apply"); } transaction.commit().await.expect("commit should succeed"); let mut store = storage .begin_read_transaction() .await .expect("read transaction should open"); let error = load_bytes_many(&mut store, &[expected_blob_hash]) .await .expect_err("wrong assembled blob should be rejected"); assert!(error .message .contains("failed content-address verification")); } #[tokio::test] async fn public_kv_api_roundtrips_empty_blob() { let storage = StorageContext::new(std::sync::Arc::new(UnitTestBackend::new())); let data = b""; let blob_hash = BlobHash::from_content(data); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let mut writes = StorageWriteSet::new(); stage_blob_to_writes(&mut writes, data); writes .apply(&mut transaction.as_mut()) .await .expect("blob write should apply"); } transaction.commit().await.expect("commit should succeed"); let mut store = storage .begin_read_transaction() .await .expect("read transaction should open"); assert_eq!( load_bytes_many(&mut store, &[blob_hash]) .await .expect("empty blob should load") .into_vec(), vec![Some(Vec::new())] ); let mut store = storage .begin_read_transaction() .await .expect("read transaction should open"); assert_eq!( scan_manifest_chunks(&mut store, blob_hash) .await .expect("empty blob chunks should scan"), Vec::::new() ); } #[tokio::test] async fn public_kv_api_roundtrips_multi_chunk_blob() { let storage = StorageContext::new(std::sync::Arc::new(UnitTestBackend::new())); let data = (0..600_000) .map(|index| (index % 251) as u8) .collect::>(); let blob_hash = BlobHash::from_content(&data); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let mut writes = StorageWriteSet::new(); stage_blob_to_writes(&mut writes, &data); writes .apply(&mut transaction.as_mut()) .await .expect("blob write should apply"); } transaction.commit().await.expect("commit should succeed"); let mut store = storage .begin_read_transaction() .await .expect("read transaction should open"); assert_eq!( load_bytes_many(&mut store, &[blob_hash]) .await .expect("large blob should load") .into_vec(), vec![Some(data.clone())] ); let mut store = storage .begin_read_transaction() .await .expect("read transaction should open"); assert!( scan_manifest_chunks(&mut store, blob_hash) .await .expect("large blob chunks should scan") .len() > 1 ); } } ================================================ FILE: packages/engine/src/binary_cas/mod.rs 
================================================ mod chunking; mod codec; mod context; pub(crate) mod kv; mod types; pub(crate) use context::{BinaryCasContext, BlobDataReader}; pub(crate) use types::{ BlobBytesBatch, BlobExistsBatch, BlobHash, BlobLayout, BlobMetadata, BlobMetadataBatch, BlobWrite, BlobWriteReceipt, }; ================================================ FILE: packages/engine/src/binary_cas/types.rs ================================================ use crate::binary_cas::codec::{binary_blob_hash_bytes, hash_bytes_to_hex, hash_hex_to_bytes}; use crate::LixError; #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, PartialOrd, Ord)] pub(crate) struct BlobHash([u8; 32]); impl BlobHash { pub(crate) fn from_bytes(bytes: [u8; 32]) -> Self { Self(bytes) } pub(crate) fn from_content(content: &[u8]) -> Self { Self(binary_blob_hash_bytes(content)) } pub(crate) fn from_hex(hash_hex: &str) -> Result { Ok(Self(hash_hex_to_bytes(hash_hex, "binary CAS blob")?)) } pub(crate) fn to_hex(self) -> String { hash_bytes_to_hex(&self.0) } pub(crate) fn as_bytes(&self) -> &[u8; 32] { &self.0 } pub(crate) fn into_bytes(self) -> [u8; 32] { self.0 } } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) enum BlobLayout { Empty, SingleChunk { chunk_hash: BlobHash }, Chunked { chunk_count: u32 }, } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct BlobMetadata { pub(crate) hash: BlobHash, pub(crate) size_bytes: u64, pub(crate) layout: BlobLayout, } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct BlobExistsBatch { entries: Vec, } impl BlobExistsBatch { pub(crate) fn new(entries: Vec) -> Self { Self { entries } } #[allow(dead_code)] pub(crate) fn get(&self, index: usize) -> bool { self.entries.get(index).copied().unwrap_or(false) } #[allow(dead_code)] pub(crate) fn into_vec(self) -> Vec { self.entries } } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct BlobMetadataBatch { entries: Vec>, } impl BlobMetadataBatch { pub(crate) fn new(entries: Vec>) -> Self { Self { entries } } #[allow(dead_code)] pub(crate) fn get(&self, index: usize) -> Option<&BlobMetadata> { self.entries.get(index).and_then(Option::as_ref) } pub(crate) fn into_vec(self) -> Vec> { self.entries } } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct BlobBytesBatch { entries: Vec>>, } impl BlobBytesBatch { pub(crate) fn new(entries: Vec>>) -> Self { Self { entries } } #[allow(dead_code)] pub(crate) fn get(&self, index: usize) -> Option<&[u8]> { self.entries .get(index) .and_then(Option::as_ref) .map(Vec::as_slice) } pub(crate) fn into_vec(self) -> Vec>> { self.entries } } #[derive(Debug, Clone, Copy)] pub(crate) struct BlobWrite<'a> { pub(crate) bytes: &'a [u8], } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct BlobWriteReceipt { pub(crate) hash: BlobHash, pub(crate) size_bytes: u64, pub(crate) layout: BlobLayout, } ================================================ FILE: packages/engine/src/catalog/context.rs ================================================ use std::collections::BTreeMap; use serde_json::Value as JsonValue; use crate::catalog::SchemaCatalogFact; use crate::domain::{committed_row_is_exact_version_scoped, Domain}; use crate::live_state::MaterializedLiveStateRow; use crate::live_state::{LiveStateFilter, LiveStateReader, LiveStateScanRequest}; use crate::schema::schema_key_from_definition; use crate::{LixError, NullableKeyFilter}; const REGISTERED_SCHEMA_KEY: &str = "lix_registered_schema"; /// Engine schema visibility boundary. /// /// SQL planning receives a schema snapshot from live state. 
System schemas are /// seeded as ordinary `lix_registered_schema` rows during initialization, so /// runtime schema visibility has one source of truth. pub(crate) struct CatalogContext; impl CatalogContext { pub(crate) fn new() -> Self { Self } /// Loads schema definitions for SQL surface planning at `version_id`. /// /// SQL surfaces are a read-planning projection over the active untracked /// schema catalog. Validation must use `schema_facts_for_domain` instead so /// schema durability remains explicit. pub(crate) async fn schema_jsons_for_sql_read_planning( &self, live_state: &R, version_id: &str, ) -> Result, LixError> where R: LiveStateReader + ?Sized, { let facts = self .schema_facts_for_domain(live_state, &Domain::schema_catalog(version_id, true)) .await?; let mut schemas = BTreeMap::::new(); for fact in facts { let schema_key = fact.catalog_key().schema_key.clone(); if schemas .insert(schema_key.clone(), fact.schema().clone()) .is_some() { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "SQL surface schema '{}' is visible from more than one schema catalog fact", schema_key ), ) .with_hint("SQL entity surfaces are named by schema_key. Keep exactly one visible schema per schema_key for SQL planning.")); } } Ok(schemas.into_values().collect()) } /// Loads schema facts reachable from a row domain. pub(crate) async fn schema_facts_for_domain( &self, live_state: &R, domain: &Domain, ) -> Result, LixError> where R: LiveStateReader + ?Sized, { let mut facts = Vec::new(); for schema_domain in domain.schema_catalog_domains() { let rows = live_state .scan_rows(&LiveStateScanRequest { filter: LiveStateFilter { schema_keys: vec![REGISTERED_SCHEMA_KEY.to_string()], version_ids: vec![schema_domain.version_id().to_string()], file_ids: vec![NullableKeyFilter::Null], untracked: Some(schema_domain.untracked()), include_tombstones: false, ..LiveStateFilter::default() }, ..LiveStateScanRequest::default() }) .await?; for row in rows .into_iter() .filter(|row| row_belongs_to_schema_catalog_domain(row, &schema_domain)) { let Some((key, schema)) = decode_registered_schema_row(&row)? 
else { continue; }; facts.push(SchemaCatalogFact::new(schema_domain.clone(), key, schema)); } } Ok(facts) } } fn row_belongs_to_schema_catalog_domain(row: &MaterializedLiveStateRow, domain: &Domain) -> bool { row.schema_key == REGISTERED_SCHEMA_KEY && row.file_id.is_none() && row.snapshot_content.is_some() && row.version_id == domain.version_id() && row.untracked == domain.untracked() && committed_row_is_exact_version_scoped(row, domain.version_id()) } fn decode_registered_schema_row( row: &MaterializedLiveStateRow, ) -> Result, LixError> { if row.schema_key != REGISTERED_SCHEMA_KEY { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "expected lix_registered_schema row, got schema_key={}", row.schema_key ), )); } let Some(snapshot_content) = row.snapshot_content.as_deref() else { return Ok(None); }; let snapshot: JsonValue = serde_json::from_str(snapshot_content).map_err(|err| { LixError::new( "LIX_ERROR_UNKNOWN", format!("invalid registered schema snapshot JSON: {err}"), ) })?; let schema = snapshot.get("value").cloned().ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "registered schema snapshot missing value", ) })?; let key = schema_key_from_definition(&schema)?; Ok(Some((key, schema))) } #[cfg(test)] mod tests { use async_trait::async_trait; use serde_json::json; use super::*; use crate::live_state::LiveStateRowRequest; use crate::GLOBAL_VERSION_ID; #[tokio::test] async fn visible_schemas_are_loaded_from_registered_schema_rows() { let context = CatalogContext::new(); let schemas = context .schema_jsons_for_sql_read_planning( &RowsLiveStateReader::new(vec![ registered_schema_row("lix_registered_schema"), registered_schema_row("lix_key_value"), ]), "global", ) .await .expect("schema visibility should load"); assert!(schemas.iter().any(|schema| { schema.get("x-lix-key").and_then(JsonValue::as_str) == Some("lix_registered_schema") })); assert!(schemas.iter().any(|schema| { schema.get("x-lix-key").and_then(JsonValue::as_str) == Some("lix_key_value") })); } #[tokio::test] async fn visible_schemas_include_registered_schema_rows() { let context = CatalogContext::new(); let schemas = context .schema_jsons_for_sql_read_planning( &RowsLiveStateReader::new(vec![registered_schema_row("engine_dynamic_schema")]), "global", ) .await .expect("schema visibility should load"); assert!(schemas.iter().any(|schema| { schema.get("x-lix-key").and_then(JsonValue::as_str) == Some("engine_dynamic_schema") })); } #[tokio::test] async fn sql_read_planning_rejects_multiple_visible_schemas_for_same_surface() { let context = CatalogContext::new(); let error = context .schema_jsons_for_sql_read_planning( &RowsLiveStateReader::new(vec![ registered_schema_row("engine_dynamic_schema"), registered_schema_row("engine_dynamic_schema"), ]), "global", ) .await .expect_err("SQL surfaces must not choose a schema identity implicitly"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); assert!(error.message.contains("SQL surface schema")); } #[tokio::test] async fn tracked_domain_sees_tracked_seed_schemas_but_not_user_untracked_schemas() { let context = CatalogContext::new(); let mut seed_schema = registered_schema_row("lix_key_value"); seed_schema.untracked = false; let facts = context .schema_facts_for_domain( &RowsLiveStateReader::new(vec![ seed_schema, registered_schema_row("engine_dynamic_schema"), ]), &Domain::schema_catalog("global", false), ) .await .expect("schema visibility should load"); let schemas = facts .iter() .map(SchemaCatalogFact::schema) .collect::>(); assert!(schemas.iter().any(|schema| { 
schema.get("x-lix-key").and_then(JsonValue::as_str) == Some("lix_key_value") })); assert!(!schemas.iter().any(|schema| { schema.get("x-lix-key").and_then(JsonValue::as_str) == Some("engine_dynamic_schema") })); } #[tokio::test] async fn tracked_domain_does_not_see_untracked_seed_schemas() { let context = CatalogContext::new(); let facts = context .schema_facts_for_domain( &RowsLiveStateReader::new(vec![registered_schema_row("lix_key_value")]), &Domain::schema_catalog("global", false), ) .await .expect("schema visibility should load"); let schemas = facts .iter() .map(SchemaCatalogFact::schema) .collect::>(); assert!(!schemas.iter().any(|schema| { schema.get("x-lix-key").and_then(JsonValue::as_str) == Some("lix_key_value") })); } #[tokio::test] async fn visible_schemas_ignore_projected_global_schema_rows_for_version_scope() { let context = CatalogContext::new(); let mut global_only = registered_schema_row("global_only_schema"); global_only.global = true; global_only.version_id = "main".to_string(); let schemas = context .schema_jsons_for_sql_read_planning( &RowsLiveStateReader::new(vec![global_only]), "main", ) .await .expect("schema visibility should load"); assert!(schemas.is_empty()); } #[tokio::test] async fn schema_facts_post_filter_non_catalog_rows_even_if_reader_returns_them() { let context = CatalogContext::new(); let valid_schema = registered_schema_row("valid_schema"); let mut file_scoped_schema = registered_schema_row("file_scoped_schema"); file_scoped_schema.file_id = Some("file-a".to_string()); let mut tombstoned_schema = registered_schema_row("tombstoned_schema"); tombstoned_schema.snapshot_content = None; let facts = context .schema_facts_for_domain( &RowsLiveStateReader::new(vec![ valid_schema, file_scoped_schema, tombstoned_schema, ]), &Domain::schema_catalog("global", true), ) .await .expect("schema facts should load"); let schema_keys = facts .iter() .filter_map(|fact| fact.schema().get("x-lix-key").and_then(JsonValue::as_str)) .collect::>(); assert_eq!(schema_keys, vec!["valid_schema"]); } #[tokio::test] async fn visible_schemas_are_empty_when_no_schema_rows_are_visible() { let context = CatalogContext::new(); let schemas = context .schema_jsons_for_sql_read_planning(&RowsLiveStateReader::new(Vec::new()), "global") .await .expect("schema visibility should load"); assert!(schemas.is_empty()); } struct RowsLiveStateReader { rows: Vec, } impl RowsLiveStateReader { fn new(rows: Vec) -> Self { Self { rows } } } #[async_trait] impl LiveStateReader for RowsLiveStateReader { async fn scan_rows( &self, request: &LiveStateScanRequest, ) -> Result, LixError> { Ok(self .rows .iter() .filter(|row| { request.filter.schema_keys.is_empty() || request.filter.schema_keys.contains(&row.schema_key) }) .filter(|row| { request.filter.version_ids.is_empty() || request.filter.version_ids.contains(&row.version_id) }) .filter(|row| { request .filter .untracked .is_none_or(|untracked| row.untracked == untracked) }) .cloned() .collect()) } async fn load_row( &self, request: &LiveStateRowRequest, ) -> Result, LixError> { Ok(self .rows .iter() .find(|row| { row.schema_key == request.schema_key && row.version_id == request.version_id && row.entity_id == request.entity_id }) .cloned()) } } fn registered_schema_row(schema_key: &str) -> MaterializedLiveStateRow { MaterializedLiveStateRow { entity_id: registered_schema_entity_id(schema_key), file_id: None, schema_key: REGISTERED_SCHEMA_KEY.to_string(), version_id: GLOBAL_VERSION_ID.to_string(), metadata: None, deleted: false, change_id: 
Some("change-registered-schema".to_string()), commit_id: None, global: true, untracked: true, created_at: "2026-04-23T00:00:00Z".to_string(), updated_at: "2026-04-23T01:00:00Z".to_string(), snapshot_content: Some( json!({ "value": { "x-lix-key": schema_key, "type": "object", "properties": { "id": { "type": "string" } }, "required": ["id"], "additionalProperties": false } }) .to_string(), ), } } fn registered_schema_entity_id(schema_key: &str) -> crate::entity_identity::EntityIdentity { crate::entity_identity::EntityIdentity::from_primary_key_paths( &json!({ "value": { "x-lix-key": schema_key, } }), &[vec!["value".to_string(), "x-lix-key".to_string()]], ) .expect("registered schema identity should derive") } } ================================================ FILE: packages/engine/src/catalog/mod.rs ================================================ mod context; mod schema; mod snapshot; pub(crate) use context::CatalogContext; pub(crate) use schema::{ ForeignKeyPlan, SchemaCatalogFact, SchemaCatalogKey, SchemaPlan, SchemaPlanId, StateForeignKeyPlan, }; pub(crate) use snapshot::{CatalogSnapshot, StateDeleteReferencePlan}; ================================================ FILE: packages/engine/src/catalog/schema.rs ================================================ pub(crate) use super::snapshot::{ ForeignKeyPlan, SchemaCatalogFact, SchemaCatalogKey, SchemaPlan, SchemaPlanId, StateForeignKeyPlan, }; ================================================ FILE: packages/engine/src/catalog/snapshot.rs ================================================ use std::{collections::BTreeMap, sync::Arc}; use jsonschema::JSONSchema; use serde_json::{Map as JsonMap, Value as JsonValue}; use crate::common::{format_json_pointer, parse_json_pointer}; use crate::domain::{Domain, DomainSchemaIdentity}; use crate::entity_identity::canonical_json_text; use crate::functions::FunctionProviderHandle; use crate::schema::{compile_lix_schema, validate_schema_amendment, SchemaKey}; use crate::LixError; #[derive(Default)] pub(crate) struct CatalogSnapshot { entries: Vec, plans: Vec, by_key: BTreeMap, by_identity: BTreeMap, delete_references_by_target: BTreeMap>, state_delete_references: Vec, fingerprint: CatalogFingerprint, } #[derive(Debug, Clone, PartialEq, Eq)] struct CatalogEntry { identity: DomainSchemaIdentity, key: SchemaCatalogKey, schema: JsonValue, } #[derive(Debug, Clone, Default, PartialEq, Eq, PartialOrd, Ord, Hash)] pub(crate) struct CatalogFingerprint(String); impl std::fmt::Debug for CatalogSnapshot { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("CatalogSnapshot") .field("plan_count", &self.plans.len()) .field("keys", &self.by_key.keys().collect::>()) .finish() } } impl CatalogSnapshot { #[cfg(test)] pub(crate) fn from_visible_schemas(visible_schemas: &[JsonValue]) -> Result { let mut catalog = Self::default(); for schema in visible_schemas { let key = crate::schema::schema_key_from_definition(schema)?; let catalog_key = SchemaCatalogKey::from_schema_key(key); let identity = DomainSchemaIdentity::new( Domain::schema_catalog(crate::GLOBAL_VERSION_ID, true), catalog_key.schema_key.clone(), ); catalog.remember_schema_identity(identity, catalog_key, schema.clone())?; } catalog.rebuild_plans()?; Ok(catalog) } pub(crate) fn from_schema_facts(facts: &[SchemaCatalogFact]) -> Result { let entries = facts .iter() .map(|fact| CatalogEntry { identity: fact.identity.clone(), key: fact.catalog_key.clone(), schema: fact.schema.clone(), }) .collect::>(); Self::from_entries(entries) } #[cfg(test)] 
pub(crate) fn fingerprint(&self) -> &CatalogFingerprint { &self.fingerprint } pub(crate) fn schema(&self, schema_key: &str) -> Option<&JsonValue> { self.plan_for_key(schema_key) .map(|(_, plan)| plan.schema.as_ref()) } pub(crate) fn insert_schema_for_domain( &mut self, domain: Domain, key: SchemaKey, schema: JsonValue, ) -> Result { let key = SchemaCatalogKey::from_schema_key(key); let identity = DomainSchemaIdentity::new(domain, key.schema_key.clone()); let mut entries = self.entries.clone(); let mut candidate = Self::from_entries(entries.clone())?; let plan_id = candidate.remember_schema_identity(identity.clone(), key, schema)?; entries = candidate.entries.clone(); let candidate = Self::from_entries(entries)?; *self = candidate; Ok(self.by_identity.get(&identity).copied().unwrap_or(plan_id)) } fn from_entries(entries: Vec) -> Result { let mut catalog = Self::default(); for entry in entries { catalog.remember_schema_identity(entry.identity, entry.key, entry.schema)?; } catalog.rebuild_plans()?; Ok(catalog) } fn remember_schema_identity( &mut self, identity: DomainSchemaIdentity, key: SchemaCatalogKey, schema: JsonValue, ) -> Result { if let Some(existing) = self.by_identity.get(&identity).copied() { let existing_entry = &self.entries[existing.index()]; if existing_entry.key == key && existing_entry.schema == schema { return Ok(existing); } if existing_entry.key == key { validate_schema_amendment(&existing_entry.schema, &schema)?; self.entries[existing.index()].schema = schema; return Ok(existing); } return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!("schema '{}' is already registered with a different definition in the same schema domain", key.schema_key), )); } if let Some(existing) = self.by_key.get(&key).copied() { let existing_entry = &self.entries[existing.index()]; if existing_entry.identity == identity { return Ok(existing); } return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!("schema '{}' is visible from more than one schema domain", existing_entry.key.schema_key), ) .with_hint("Schema references store schema_key, but not the schema domain. 
Remove the duplicate tracked/untracked schema registration or use a distinct schema key.")); } let plan_id = SchemaPlanId(self.entries.len() as u32); self.by_key.insert(key.clone(), plan_id); self.by_identity.insert(identity.clone(), plan_id); self.entries.push(CatalogEntry { identity, key, schema, }); Ok(plan_id) } fn rebuild_plans(&mut self) -> Result<(), LixError> { let schema_index = self .entries .iter() .map(|entry| (entry.key.clone(), &entry.schema)) .collect::>(); let plans = self .entries .iter() .map(|entry| { SchemaPlan::compile( entry.key.clone(), entry.schema.clone(), &self.by_key, &schema_index, ) }) .collect::, _>>()?; self.plans = plans; self.rebuild_delete_plans(); self.fingerprint = self.compute_fingerprint()?; Ok(()) } fn rebuild_delete_plans(&mut self) { let mut delete_references_by_target = BTreeMap::>::new(); let mut state_delete_references = Vec::::new(); for source_plan in &self.plans { for foreign_key in &source_plan.foreign_keys { delete_references_by_target .entry(foreign_key.referenced_schema.clone()) .or_default() .push(DeleteReferencePlan { source_key: source_plan.key.clone(), foreign_key: foreign_key.clone(), }); } for foreign_key in &source_plan.state_foreign_keys { state_delete_references.push(StateDeleteReferencePlan { source_key: source_plan.key.clone(), foreign_key: foreign_key.clone(), }); } } self.delete_references_by_target = delete_references_by_target; self.state_delete_references = state_delete_references; } fn compute_fingerprint(&self) -> Result { let mut hasher = blake3::Hasher::new(); let mut entries = self.entries.iter().collect::>(); entries.sort_by(|left, right| left.identity.cmp(&right.identity)); for entry in entries { hash_fingerprint_part(&mut hasher, &entry.identity.fingerprint_component()); hash_fingerprint_part(&mut hasher, &entry.key.schema_key); let canonical_schema = canonical_json_text(&entry.schema).map_err(|error| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!("failed to canonicalize schema for catalog fingerprint: {error}"), ) })?; hash_fingerprint_part(&mut hasher, &canonical_schema); } Ok(CatalogFingerprint(hasher.finalize().to_hex().to_string())) } #[cfg(test)] pub(crate) fn contains(&self, schema_key: &str) -> bool { self.plan_for_key(schema_key).is_some() } #[cfg(test)] pub(crate) fn len(&self) -> usize { self.plans.len() } pub(crate) fn plans(&self) -> impl Iterator { self.plans.iter() } pub(crate) fn plan(&self, plan_id: SchemaPlanId) -> Option<&SchemaPlan> { self.plans.get(plan_id.index()) } pub(crate) fn plan_for_key(&self, schema_key: &str) -> Option<(SchemaPlanId, &SchemaPlan)> { let key = SchemaCatalogKey { schema_key: schema_key.to_string(), }; let plan_id = *self.by_key.get(&key)?; let plan = self.plan(plan_id)?; Some((plan_id, plan)) } pub(crate) fn delete_plan_for_key(&self, schema_key: &str) -> DeleteValidationPlan<'_> { let key = SchemaCatalogKey { schema_key: schema_key.to_string(), }; DeleteValidationPlan { foreign_key_references: self .delete_references_by_target .get(&key) .map(Vec::as_slice) .unwrap_or(&[]), state_foreign_key_references: self.state_delete_references.as_slice(), } } } fn hash_fingerprint_part(hasher: &mut blake3::Hasher, value: &str) { hasher.update(&(value.len() as u64).to_le_bytes()); hasher.update(value.as_bytes()); } #[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)] pub(crate) struct SchemaPlanId(u32); impl SchemaPlanId { fn index(self) -> usize { self.0 as usize } #[cfg(test)] pub(crate) fn for_test(index: u32) -> Self { Self(index) } } pub(crate) type 
PointerGroup = Vec>; pub(crate) struct SchemaPlan { pub(crate) key: SchemaCatalogKey, pub(crate) schema: Arc, pub(crate) compiled_schema: JSONSchema, pub(crate) defaults: DefaultPlan, pub(crate) primary_key: Option, pub(crate) uniques: Vec, pub(crate) foreign_keys: Vec, pub(crate) state_foreign_keys: Vec, } impl SchemaPlan { fn compile( key: SchemaCatalogKey, schema: JsonValue, key_index: &BTreeMap, schema_index: &BTreeMap, ) -> Result { let compiled_schema = compile_lix_schema(&schema)?; let defaults = DefaultPlan::from_schema(&schema); let primary_key = primary_key_paths(&schema)?; let uniques = pointer_groups(&schema, "x-lix-unique")?; let foreign_keys = bind_foreign_key_plans( &key, &schema, foreign_key_plans(&schema)?, key_index, schema_index, )?; let state_foreign_keys = state_foreign_key_plans(&schema)?; Ok(Self { key, schema: Arc::new(schema), compiled_schema, defaults, primary_key, uniques, foreign_keys, state_foreign_keys, }) } } #[derive(Debug, Clone, Default, PartialEq, Eq)] pub(crate) struct DefaultPlan { properties: Vec, } #[derive(Debug, Clone, PartialEq, Eq)] struct DefaultPropertyPlan { field_name: String, default: DefaultValuePlan, } #[derive(Debug, Clone, PartialEq, Eq)] enum DefaultValuePlan { Json(JsonValue), Cel(String), } impl DefaultPlan { fn from_schema(schema: &JsonValue) -> Self { let Some(properties) = schema.get("properties").and_then(JsonValue::as_object) else { return Self::default(); }; let mut ordered_properties = properties.iter().collect::>(); ordered_properties.sort_by(|(left_name, _), (right_name, _)| left_name.cmp(right_name)); let properties = ordered_properties .into_iter() .filter_map(|(field_name, field_schema)| { if let Some(expression) = field_schema .get("x-lix-default") .and_then(JsonValue::as_str) { return Some(DefaultPropertyPlan { field_name: field_name.clone(), default: DefaultValuePlan::Cel(expression.to_string()), }); } field_schema .get("default") .map(|value| DefaultPropertyPlan { field_name: field_name.clone(), default: DefaultValuePlan::Json(value.clone()), }) }) .collect(); Self { properties } } pub(crate) fn apply( &self, snapshot: &mut JsonMap, functions: FunctionProviderHandle, schema_key: &str, ) -> Result { let mut changed = false; let mut cel_context = None::>; for property in &self.properties { if snapshot.contains_key(&property.field_name) { continue; } let value = match &property.default { DefaultValuePlan::Json(value) => value.clone(), DefaultValuePlan::Cel(expression) => { let context = cel_context.get_or_insert_with(|| snapshot.clone()); crate::cel::shared_runtime() .evaluate_with_functions(expression, context, functions.clone()) .map_err(|err| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "failed to evaluate x-lix-default for '{}.{}': {}", schema_key, property.field_name, err.message ), hint: None, details: None, })? 
} }; snapshot.insert(property.field_name.clone(), value); changed = true; } Ok(changed) } } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct ForeignKeyPlan { pub(crate) local_properties: PointerGroup, pub(crate) referenced_schema: SchemaCatalogKey, pub(crate) referenced_plan_id: SchemaPlanId, pub(crate) referenced_properties: PointerGroup, } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct DeleteReferencePlan { pub(crate) source_key: SchemaCatalogKey, pub(crate) foreign_key: ForeignKeyPlan, } #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)] pub(crate) struct StateDeleteReferencePlan { pub(crate) source_key: SchemaCatalogKey, pub(crate) foreign_key: StateForeignKeyPlan, } #[derive(Debug, Clone, Copy)] pub(crate) struct DeleteValidationPlan<'a> { pub(crate) foreign_key_references: &'a [DeleteReferencePlan], pub(crate) state_foreign_key_references: &'a [StateDeleteReferencePlan], } impl DeleteValidationPlan<'_> { pub(crate) fn has_committed_checks(self) -> bool { !self.foreign_key_references.is_empty() || !self.state_foreign_key_references.is_empty() } } #[derive(Debug, Clone, PartialEq, Eq)] struct UnboundForeignKeyPlan { local_properties: PointerGroup, referenced_schema: SchemaCatalogKey, referenced_properties: PointerGroup, } #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)] pub(crate) struct StateForeignKeyPlan { /// Slot [0] in `x-lix-state-foreign-keys`: local pointer to the target entity_id. pub(crate) entity_id_property: Vec, /// Slot [1] in `x-lix-state-foreign-keys`: local pointer to the target schema_key. pub(crate) schema_key_property: Vec, /// Slot [2] in `x-lix-state-foreign-keys`: local pointer to the target file_id. pub(crate) file_id_property: Vec, } impl StateForeignKeyPlan { pub(crate) fn local_properties(&self) -> PointerGroup { vec![ self.entity_id_property.clone(), self.schema_key_property.clone(), self.file_id_property.clone(), ] } } #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)] pub(crate) struct SchemaCatalogKey { pub(crate) schema_key: String, } impl SchemaCatalogKey { pub(crate) fn from_schema_key(key: SchemaKey) -> Self { Self { schema_key: key.schema_key, } } } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct SchemaCatalogFact { identity: DomainSchemaIdentity, catalog_key: SchemaCatalogKey, schema: JsonValue, } impl SchemaCatalogFact { pub(crate) fn new(domain: Domain, key: SchemaKey, schema: JsonValue) -> Self { let catalog_key = SchemaCatalogKey::from_schema_key(key); let identity = DomainSchemaIdentity::new(domain, catalog_key.schema_key.clone()); Self { identity, catalog_key, schema, } } pub(crate) fn schema(&self) -> &JsonValue { &self.schema } pub(crate) fn catalog_key(&self) -> &SchemaCatalogKey { &self.catalog_key } } fn primary_key_paths(schema: &JsonValue) -> Result>>, LixError> { let Some(primary_key) = schema.get("x-lix-primary-key") else { return Ok(None); }; let primary_key = primary_key.as_array().ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, "schema x-lix-primary-key must be an array of JSON Pointers", ) })?; primary_key .iter() .enumerate() .map(|(index, pointer)| { let pointer = pointer.as_str().ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!("schema x-lix-primary-key entry at index {index} must be a string"), ) })?; parse_json_pointer(pointer) }) .collect::, _>>() .map(Some) } fn pointer_groups(schema: &JsonValue, field: &str) -> Result, LixError> { let Some(value) = schema.get(field) else { return Ok(Vec::new()); }; let groups = value .as_array() 
.map(|groups| groups.iter().collect::>()) .unwrap_or_default(); groups .into_iter() .map(|group| { let group = group.as_array().ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!("schema {field} must contain arrays of JSON Pointers"), ) })?; group .iter() .enumerate() .map(|(index, pointer)| { let pointer = pointer.as_str().ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!("schema {field} entry at index {index} must be a string"), ) })?; parse_json_pointer(pointer) }) .collect::, _>>() }) .collect() } fn foreign_key_plans(schema: &JsonValue) -> Result, LixError> { let Some(value) = schema.get("x-lix-foreign-keys") else { return Ok(Vec::new()); }; let Some(foreign_keys) = value.as_array() else { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, "schema x-lix-foreign-keys must be an array", )); }; foreign_keys .iter() .enumerate() .map(|(index, foreign_key)| { let object = foreign_key.as_object().ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!("x-lix-foreign-keys[{index}] must be an object"), ) })?; let references = object .get("references") .and_then(JsonValue::as_object) .ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!("x-lix-foreign-keys[{index}].references must be an object"), ) })?; let referenced_schema_key = references .get("schemaKey") .and_then(JsonValue::as_str) .ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "x-lix-foreign-keys[{index}].references.schemaKey must be a string" ), ) })? .to_string(); let local_properties = pointer_array( object.get("properties"), &format!("x-lix-foreign-keys[{index}].properties"), )?; let referenced_properties = pointer_array( references.get("properties"), &format!("x-lix-foreign-keys[{index}].references.properties"), )?; if local_properties.len() != referenced_properties.len() { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "x-lix-foreign-keys[{index}] properties and references.properties must have the same length" ), )); } Ok(UnboundForeignKeyPlan { local_properties, referenced_schema: SchemaCatalogKey { schema_key: referenced_schema_key, }, referenced_properties, }) }) .collect() } fn bind_foreign_key_plans( source_key: &SchemaCatalogKey, source_schema: &JsonValue, unbound_foreign_keys: Vec, key_index: &BTreeMap, schema_index: &BTreeMap, ) -> Result, LixError> { unbound_foreign_keys .into_iter() .map(|foreign_key| { if foreign_key.referenced_schema.schema_key == "lix_state" { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "foreign key on schema '{}' must not reference schemaKey 'lix_state'; use x-lix-state-foreign-keys with pointers ordered as [entity_id, schema_key, file_id]", source_key.schema_key ), )); } let referenced_plan_id = *key_index.get(&foreign_key.referenced_schema).ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "foreign key on schema '{}' references missing schema '{}'", source_key.schema_key, foreign_key.referenced_schema.schema_key, ), ) })?; let target_schema = schema_index .get(&foreign_key.referenced_schema) .copied() .ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "foreign key on schema '{}' references missing schema '{}'", source_key.schema_key, foreign_key.referenced_schema.schema_key, ), ) })?; for (local_pointer, referenced_pointer) in foreign_key .local_properties .iter() .zip(foreign_key.referenced_properties.iter()) { let local_field = schema_field_at_pointer(source_schema, 
local_pointer).map_err(|detail| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "foreign key on schema '{}' references missing local property '{}': {detail}", source_key.schema_key, format_json_pointer(local_pointer) ), ) })?; let referenced_field = schema_field_at_pointer(target_schema, referenced_pointer).map_err( |detail| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "foreign key on schema '{}' references missing target property '{}.{}': {detail}", source_key.schema_key, foreign_key.referenced_schema.schema_key, format_json_pointer(referenced_pointer) ), ) }, )?; validate_foreign_key_field_types( source_key, &foreign_key.referenced_schema, local_pointer, local_field, referenced_pointer, referenced_field, )?; } if !schema_properties_are_keyed(target_schema, &foreign_key.referenced_properties)? { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "foreign key on schema '{}' references '{}.{}', but referenced properties must match the target primary key or a unique constraint", source_key.schema_key, foreign_key.referenced_schema.schema_key, format_pointer_group(&foreign_key.referenced_properties) ), )); } Ok(ForeignKeyPlan { local_properties: foreign_key.local_properties, referenced_schema: foreign_key.referenced_schema, referenced_plan_id, referenced_properties: foreign_key.referenced_properties, }) }) .collect() } fn schema_field_at_pointer<'a>( schema: &'a JsonValue, pointer: &[String], ) -> Result<&'a JsonValue, String> { if pointer.is_empty() { return Err("empty pointer does not name a field".to_string()); } let mut current = schema; for segment in pointer { let properties = current .get("properties") .and_then(JsonValue::as_object) .ok_or_else(|| { format!( "schema segment before '{}' has no object properties", segment ) })?; current = properties .get(segment) .ok_or_else(|| format!("property '{}' does not exist", segment))?; } Ok(current) } fn validate_foreign_key_field_types( source_key: &SchemaCatalogKey, referenced_key: &SchemaCatalogKey, local_pointer: &[String], local_field: &JsonValue, referenced_pointer: &[String], referenced_field: &JsonValue, ) -> Result<(), LixError> { let local_type = compatible_json_schema_type(local_field).ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "foreign key on schema '{}' local property '{}' must declare an explicit JSON Schema type", source_key.schema_key, format_json_pointer(local_pointer) ), ) })?; let referenced_type = compatible_json_schema_type(referenced_field).ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "foreign key on schema '{}' target property '{}.{}' must declare an explicit JSON Schema type", source_key.schema_key, referenced_key.schema_key, format_json_pointer(referenced_pointer) ), ) })?; if local_type != referenced_type { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "foreign key on schema '{}' has incompatible field types: local '{}' is {}, but target '{}.{}' is {}", source_key.schema_key, format_json_pointer(local_pointer), local_type, referenced_key.schema_key, format_json_pointer(referenced_pointer), referenced_type ), )); } Ok(()) } fn compatible_json_schema_type(field_schema: &JsonValue) -> Option { match field_schema.get("type")? 
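// A nullable declaration such as {"type": ["string", "null"]} collapses to
// plain "string" below, so a nullable local column stays compatible with a
// non-null "string" target property.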
{ JsonValue::Array(types) => { let non_null_types = types .iter() .filter(|value| value.as_str() != Some("null")) .cloned() .collect::>(); match non_null_types.as_slice() { [] => None, [single] => Some(single.clone()), _ => Some(JsonValue::Array(non_null_types)), } } value => Some(value.clone()), } } fn schema_properties_are_keyed( target_schema: &JsonValue, referenced_properties: &[Vec], ) -> Result { if let Some(primary_key) = primary_key_paths(target_schema)? { if primary_key == referenced_properties { return Ok(true); } } Ok(pointer_groups(target_schema, "x-lix-unique")? .iter() .any(|unique_group| unique_group == referenced_properties)) } fn state_foreign_key_plans(schema: &JsonValue) -> Result, LixError> { let Some(value) = schema.get("x-lix-state-foreign-keys") else { return Ok(Vec::new()); }; let Some(foreign_keys) = value.as_array() else { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, "schema x-lix-state-foreign-keys must be an array", )); }; foreign_keys .iter() .enumerate() .map(|(index, foreign_key)| { let local_properties = pointer_array( Some(foreign_key), &format!("x-lix-state-foreign-keys[{index}]"), )?; if local_properties.len() != 3 { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "x-lix-state-foreign-keys[{index}] must contain exactly three JSON Pointers ordered as [entity_id, schema_key, file_id]" ), )); } Ok(StateForeignKeyPlan { entity_id_property: local_properties[0].clone(), schema_key_property: local_properties[1].clone(), file_id_property: local_properties[2].clone(), }) }) .collect() } fn pointer_array(value: Option<&JsonValue>, context: &str) -> Result { let Some(value) = value else { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!("{context} must be an array of JSON Pointers"), )); }; let Some(array) = value.as_array() else { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!("{context} must be an array of JSON Pointers"), )); }; array .iter() .enumerate() .map(|(index, pointer)| { let pointer = pointer.as_str().ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!("{context}[{index}] must be a string"), ) })?; parse_json_pointer(pointer) }) .collect() } fn format_pointer_group(paths: &[Vec]) -> String { paths .iter() .map(|path| format_json_pointer(path)) .collect::>() .join(",") } #[cfg(test)] mod tests { use serde_json::json; use super::*; #[test] fn catalog_rejects_same_schema_key_from_multiple_domains() { let tracked = SchemaCatalogFact::new( Domain::schema_catalog("main", false), SchemaKey::new("example_schema"), schema_json("example_schema"), ); let untracked = SchemaCatalogFact::new( Domain::schema_catalog("main", true), SchemaKey::new("example_schema"), schema_json("example_schema"), ); let error = CatalogSnapshot::from_schema_facts(&[tracked, untracked]) .expect_err("same schema key in two reachable domains is ambiguous"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); assert!(error.message.contains("more than one schema domain")); } #[test] fn insert_schema_for_domain_is_atomic_when_binding_fails() { let mut catalog = CatalogSnapshot::from_schema_facts(&[SchemaCatalogFact::new( Domain::schema_catalog("main", false), SchemaKey::new("base_schema"), schema_json("base_schema"), )]) .expect("base catalog should bind"); let error = catalog .insert_schema_for_domain( Domain::schema_catalog("main", false), SchemaKey::new("bad_child_schema"), child_schema_json("bad_child_schema", "missing_parent_schema"), ) .expect_err("schema with missing FK target should 
fail"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); assert!(catalog.contains("base_schema")); assert!( !catalog.contains("bad_child_schema"), "failed catalog insert must not publish a partially bound schema" ); } #[test] fn catalog_fingerprint_is_independent_of_fact_order() { let parent = SchemaCatalogFact::new( Domain::schema_catalog("main", false), SchemaKey::new("parent_schema"), schema_json("parent_schema"), ); let child = SchemaCatalogFact::new( Domain::schema_catalog("main", false), SchemaKey::new("child_schema"), child_schema_json("child_schema", "parent_schema"), ); let parent_first = CatalogSnapshot::from_schema_facts(&[parent.clone(), child.clone()]) .expect("parent-first facts should bind"); let child_first = CatalogSnapshot::from_schema_facts(&[child, parent]) .expect("child-first facts should bind as the same domain snapshot"); assert_eq!(parent_first.fingerprint(), child_first.fingerprint()); } #[test] fn delete_plan_has_no_committed_checks_for_unreferenced_schema() { let catalog = CatalogSnapshot::from_schema_facts(&[SchemaCatalogFact::new( Domain::schema_catalog("main", false), SchemaKey::new("standalone_schema"), schema_json("standalone_schema"), )]) .expect("catalog should bind"); let delete_plan = catalog.delete_plan_for_key("standalone_schema"); assert!(!delete_plan.has_committed_checks()); assert!(delete_plan.foreign_key_references.is_empty()); assert!(delete_plan.state_foreign_key_references.is_empty()); } #[test] fn delete_plan_indexes_foreign_keys_by_referenced_schema() { let parent = SchemaCatalogFact::new( Domain::schema_catalog("main", false), SchemaKey::new("parent_schema"), schema_json("parent_schema"), ); let child = SchemaCatalogFact::new( Domain::schema_catalog("main", false), SchemaKey::new("child_schema"), child_schema_json("child_schema", "parent_schema"), ); let catalog = CatalogSnapshot::from_schema_facts(&[parent, child]).expect("catalog should bind"); let parent_delete_plan = catalog.delete_plan_for_key("parent_schema"); let child_delete_plan = catalog.delete_plan_for_key("child_schema"); assert!(parent_delete_plan.has_committed_checks()); assert_eq!(parent_delete_plan.foreign_key_references.len(), 1); assert_eq!( parent_delete_plan.foreign_key_references[0] .source_key .schema_key, "child_schema" ); assert!(!child_delete_plan.has_committed_checks()); } #[test] fn delete_plan_conservatively_applies_state_foreign_keys_to_every_schema() { let target = SchemaCatalogFact::new( Domain::schema_catalog("main", false), SchemaKey::new("target_schema"), schema_json("target_schema"), ); let source = SchemaCatalogFact::new( Domain::schema_catalog("main", false), SchemaKey::new("state_fk_schema"), state_fk_schema_json("state_fk_schema"), ); let catalog = CatalogSnapshot::from_schema_facts(&[target, source]).expect("catalog should bind"); let target_delete_plan = catalog.delete_plan_for_key("target_schema"); assert!(target_delete_plan.has_committed_checks()); assert_eq!(target_delete_plan.state_foreign_key_references.len(), 1); assert_eq!( target_delete_plan.state_foreign_key_references[0] .source_key .schema_key, "state_fk_schema" ); } fn schema_json(schema_key: &str) -> JsonValue { json!({ "x-lix-key": schema_key, "x-lix-primary-key": ["/id"], "type": "object", "properties": { "id": { "type": "string" } }, "required": ["id"], "additionalProperties": false }) } fn child_schema_json(schema_key: &str, parent_schema_key: &str) -> JsonValue { json!({ "x-lix-key": schema_key, "x-lix-primary-key": ["/id"], "x-lix-foreign-keys": [{ "properties": 
["/parent_id"], "references": { "schemaKey": parent_schema_key, "properties": ["/id"] } }], "type": "object", "properties": { "id": { "type": "string" }, "parent_id": { "type": "string" } }, "required": ["id", "parent_id"], "additionalProperties": false }) } fn state_fk_schema_json(schema_key: &str) -> JsonValue { json!({ "x-lix-key": schema_key, "x-lix-primary-key": ["/id"], "x-lix-state-foreign-keys": [["/target_id", "/target_schema", "/target_file"]], "type": "object", "properties": { "id": { "type": "string" }, "target_id": { "type": "string" }, "target_schema": { "type": "string" }, "target_file": { "type": ["string", "null"] } }, "required": ["id", "target_id", "target_schema", "target_file"], "additionalProperties": false }) } } ================================================ FILE: packages/engine/src/cel/context.rs ================================================ use cel::Context; use serde_json::{Map as JsonMap, Value as JsonValue}; use crate::LixError; use super::provider::CelFunctionProvider; use super::value::json_to_cel; pub(crate) fn build_context_with_functions
<P>
( variables: &JsonMap, functions: P, ) -> Result, LixError> where P: CelFunctionProvider, { let mut context = Context::default(); let uuid_functions = functions.clone(); context.add_function("lix_uuid_v7", move || uuid_functions.call_uuid_v7()); let timestamp_functions = functions.clone(); context.add_function("lix_timestamp", move || { timestamp_functions.call_timestamp() }); for (name, value) in variables { let cel_value = json_to_cel(value)?; context.add_variable_from_value(name.clone(), cel_value); } Ok(context) } #[cfg(test)] mod tests { use super::build_context_with_functions; use crate::cel::CelFunctionProvider; use cel::Program; use serde_json::Map as JsonMap; #[test] fn registers_lix_uuid_v7_function() { let context = build_context_with_functions(&JsonMap::new(), fixed_functions()) .expect("build context"); let program = Program::compile("lix_uuid_v7()").expect("compile CEL"); let value = program.execute(&context).expect("execute CEL"); let as_json = value.json().expect("to json"); assert!(as_json.as_str().is_some()); } #[test] fn errors_on_unknown_variables() { let context = build_context_with_functions(&JsonMap::new(), fixed_functions()) .expect("build context"); let program = Program::compile("missing_var == null").expect("compile CEL"); let err = program .execute(&context) .expect_err("execute CEL should fail"); assert!(err.to_string().contains("Undeclared reference")); } #[derive(Clone)] struct FixedFunctions; impl CelFunctionProvider for FixedFunctions { fn call_uuid_v7(&self) -> String { "uuid-fixed".to_string() } fn call_timestamp(&self) -> String { "1970-01-01T00:00:00.000Z".to_string() } } fn fixed_functions() -> FixedFunctions { FixedFunctions } #[test] fn uses_supplied_function_provider() { let context = build_context_with_functions(&JsonMap::new(), fixed_functions()) .expect("build context"); let program = Program::compile("lix_uuid_v7()").expect("compile CEL"); let value = program.execute(&context).expect("execute CEL"); assert_eq!(value.json().expect("to json").as_str(), Some("uuid-fixed")); } } ================================================ FILE: packages/engine/src/cel/error.rs ================================================ use crate::LixError; pub(crate) fn cel_parse_error(expression: &str, error: impl std::fmt::Display) -> LixError { LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("failed to parse CEL expression '{expression}': {error}"), hint: None, details: None, } } pub(crate) fn cel_runtime_error(expression: &str, error: impl std::fmt::Display) -> LixError { LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("failed to evaluate CEL expression '{expression}': {error}"), hint: None, details: None, } } ================================================ FILE: packages/engine/src/cel/mod.rs ================================================ mod context; mod error; mod provider; mod runtime; mod value; pub(crate) use provider::CelFunctionProvider; pub(crate) use runtime::shared_runtime; ================================================ FILE: packages/engine/src/cel/provider.rs ================================================ /// Function source available to CEL expressions. /// /// CEL is shared infrastructure for schema expressions. It should not depend /// on engine1 or engine runtime traits directly; callers adapt their own /// execution-scoped function provider to this small boundary. 
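///
/// A minimal sketch of an adapter (it mirrors the `FixedFunctions` fixture
/// used by the cel context and runtime tests; it is not a provider shipped by
/// the engine):
///
/// ```ignore
/// #[derive(Clone)]
/// struct FixedFunctions;
///
/// impl CelFunctionProvider for FixedFunctions {
///     fn call_uuid_v7(&self) -> String {
///         "uuid-fixed".to_string()
///     }
///     fn call_timestamp(&self) -> String {
///         "1970-01-01T00:00:00.000Z".to_string()
///     }
/// }
/// ```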
pub(crate) trait CelFunctionProvider: Clone + Send + Sync + 'static { fn call_uuid_v7(&self) -> String; fn call_timestamp(&self) -> String; } ================================================ FILE: packages/engine/src/cel/runtime.rs ================================================ use std::collections::HashMap; use std::sync::{Arc, OnceLock, RwLock}; use cel::Program; use serde_json::{Map as JsonMap, Value as JsonValue}; use crate::LixError; use super::context::build_context_with_functions; use super::error::{cel_parse_error, cel_runtime_error}; use super::provider::CelFunctionProvider; use super::value::cel_to_json; #[derive(Debug)] struct CompiledProgram { program: Program, } #[derive(Default)] pub struct CelEvaluator { programs: RwLock>>, } impl CelEvaluator { pub fn new() -> Self { Self::default() } pub fn evaluate_with_functions
<P>
( &self, expression: &str, variables: &JsonMap, functions: P, ) -> Result where P: CelFunctionProvider, { let compiled = self.compile(expression)?; let context = build_context_with_functions(variables, functions)?; let value = compiled .program .execute(&context) .map_err(|error| cel_runtime_error(expression, error))?; cel_to_json(&value) } fn compile(&self, expression: &str) -> Result, LixError> { if let Some(existing) = self.programs.read().unwrap().get(expression).cloned() { return Ok(existing); } let program = Program::compile(expression).map_err(|error| cel_parse_error(expression, error))?; let compiled = Arc::new(CompiledProgram { program }); self.programs .write() .unwrap() .insert(expression.to_string(), compiled.clone()); Ok(compiled) } } pub(crate) fn shared_runtime() -> &'static CelEvaluator { static SHARED_RUNTIME: OnceLock = OnceLock::new(); SHARED_RUNTIME.get_or_init(CelEvaluator::new) } #[cfg(test)] mod tests { use super::CelEvaluator; use crate::cel::CelFunctionProvider; use serde_json::{json, Map as JsonMap, Value as JsonValue}; #[derive(Clone)] struct FixedFunctions; impl CelFunctionProvider for FixedFunctions { fn call_uuid_v7(&self) -> String { "uuid-fixed".to_string() } fn call_timestamp(&self) -> String { "1970-01-01T00:00:00.000Z".to_string() } } fn fixed_functions() -> FixedFunctions { FixedFunctions } #[test] fn evaluates_basic_expressions() { let evaluator = CelEvaluator::new(); let value = evaluator .evaluate_with_functions("'open'", &JsonMap::new(), fixed_functions()) .expect("evaluate CEL"); assert_eq!(value, JsonValue::String("open".to_string())); } #[test] fn evaluates_with_variables() { let evaluator = CelEvaluator::new(); let mut context = JsonMap::new(); context.insert("name".to_string(), JsonValue::String("sample".to_string())); let value = evaluator .evaluate_with_functions("name + '-slug'", &context, fixed_functions()) .expect("evaluate CEL"); assert_eq!(value, JsonValue::String("sample-slug".to_string())); } #[test] fn reports_parse_errors() { let evaluator = CelEvaluator::new(); let err = evaluator .evaluate_with_functions("lix_uuid_v7(", &JsonMap::new(), fixed_functions()) .expect_err("expected parse error"); assert!(err.to_string().contains("failed to parse CEL expression")); } #[test] fn reports_runtime_errors() { let evaluator = CelEvaluator::new(); let err = evaluator .evaluate_with_functions("1 / 0", &JsonMap::new(), fixed_functions()) .expect_err("expected runtime error"); assert!(err .to_string() .contains("failed to evaluate CEL expression")); } #[test] fn supports_function_calls() { let evaluator = CelEvaluator::new(); let value = evaluator .evaluate_with_functions("lix_timestamp()", &JsonMap::new(), fixed_functions()) .expect("evaluate CEL"); assert_eq!(value.as_str(), Some("1970-01-01T00:00:00.000Z")); } #[test] fn caches_compiled_programs() { let evaluator = CelEvaluator::new(); let mut context = JsonMap::new(); context.insert("name".to_string(), json!("x")); let _ = evaluator .evaluate_with_functions("name + '-slug'", &context, fixed_functions()) .expect("first evaluation"); let _ = evaluator .evaluate_with_functions("name + '-slug'", &context, fixed_functions()) .expect("second evaluation"); let size = evaluator.programs.read().unwrap().len(); assert_eq!(size, 1); } #[test] fn errors_on_unknown_variable() { let evaluator = CelEvaluator::new(); let err = evaluator .evaluate_with_functions("missing_var + '-slug'", &JsonMap::new(), fixed_functions()) .expect_err("expected unknown variable error"); 
assert!(err.to_string().contains("Undeclared reference")); } } ================================================ FILE: packages/engine/src/cel/value.rs ================================================ use cel::Value as CelValue; use serde_json::Value as JsonValue; use crate::LixError; pub fn json_to_cel(value: &JsonValue) -> Result { cel::to_value(value).map_err(|err| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("failed to convert JSON value to CEL value: {err}"), hint: None, details: None, }) } pub fn cel_to_json(value: &CelValue) -> Result { value.json().map_err(|err| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("failed to convert CEL value to JSON value: {err}"), hint: None, details: None, }) } #[cfg(test)] mod tests { use super::{cel_to_json, json_to_cel}; use serde_json::json; #[test] fn converts_json_scalars() { let value = json!("hello"); let cel = json_to_cel(&value).expect("convert to CEL"); let roundtrip = cel_to_json(&cel).expect("convert to JSON"); assert_eq!(roundtrip, value); } #[test] fn converts_json_objects_and_arrays() { let value = json!({ "name": "Ada", "flags": [true, false], "meta": { "count": 1 } }); let cel = json_to_cel(&value).expect("convert to CEL"); let roundtrip = cel_to_json(&cel).expect("convert to JSON"); assert_eq!(roundtrip, value); } } ================================================ FILE: packages/engine/src/commit_graph/context.rs ================================================ use std::collections::BTreeSet; use crate::commit_graph::walker::{best_common_ancestors, walk_reachable_commits}; use crate::commit_graph::{ CommitGraphChangeHistoryEntry, CommitGraphChangeHistoryRequest, CommitGraphCommit, CommitGraphEdge, CommitGraphReader, ReachableCommitGraphCommit, }; use crate::commit_store::{Change, Commit, CommitStoreContext, CommitStoreReader, LocatedChange}; use crate::entity_identity::EntityIdentity; use crate::storage::StorageReader; use crate::storage::{ScopedStorageReader, StorageReadScope}; use crate::LixError; const COMMIT_SCHEMA_KEY: &str = "lix_commit"; /// Read model for resolving commit-store commits into entity state at a head. /// /// This module does not own durable storage. It reads immutable commit-store /// facts through a caller-provided KV store and applies commit graph rules on /// top. #[derive(Clone)] pub(crate) struct CommitGraphContext { commit_store: CommitStoreContext, } impl CommitGraphContext { pub(crate) fn new() -> Self { Self { commit_store: CommitStoreContext::new(), } } /// Creates a graph reader over a caller-provided KV store. pub(crate) fn reader(&self, store: S) -> CommitGraphStoreReader where S: StorageReader, { let read_scope = StorageReadScope::new(store); CommitGraphStoreReader { commit_store_reader: self.commit_store.reader(read_scope.store()), } } } /// Commit-graph reader that resolves commit-store entities at a commit head. pub(crate) struct CommitGraphStoreReader where S: StorageReader, { commit_store_reader: CommitStoreReader>, } impl CommitGraphStoreReader where S: StorageReader, { /// Loads and parses a `lix_commit` canonical change by commit id. pub(crate) async fn load_commit( &mut self, commit_id: &str, ) -> Result, LixError> { let Some(commit) = self.commit_store_reader.load_commit(commit_id).await? else { return Ok(None); }; self.graph_commit_from_store_commit(commit).await.map(Some) } /// Loads every commit fact from the commit store. 
/// /// This is used by global commit surfaces where the caller wants the durable /// graph facts themselves, not reachability from a particular version head. pub(crate) async fn all_commits(&mut self) -> Result, LixError> { let stored_commits = self.commit_store_reader.scan_commits().await?; let mut commits = Vec::new(); for commit in stored_commits { commits.push(self.graph_commit_from_store_commit(commit).await?); } commits.sort_by(|left, right| left.commit_id.cmp(&right.commit_id)); Ok(commits) } /// Walks from `head_commit_id` through parent commits and records nearest depth. pub(crate) async fn reachable_commits( &mut self, head_commit_id: &str, ) -> Result, LixError> { walk_reachable_commits(self, head_commit_id).await } /// Returns the best common ancestors shared by two commit heads. /// /// This is the commit-DAG primitive. It can return more than one commit in /// criss-cross histories. Merge code should layer an explicit merge-base /// policy on top when it needs exactly one base for a three-way merge. pub(crate) async fn best_common_ancestors( &mut self, left_commit_id: &str, right_commit_id: &str, ) -> Result, LixError> { best_common_ancestors(self, left_commit_id, right_commit_id).await } /// Resolves the single commit base to use for a three-way merge. /// /// This is merge policy layered over `best_common_ancestors(...)`. Histories /// with no shared base or multiple equally good bases are rejected for now /// so merge code cannot accidentally hide unsupported graph semantics. pub(crate) async fn merge_base( &mut self, left_commit_id: &str, right_commit_id: &str, ) -> Result { let ancestors = self .best_common_ancestors(left_commit_id, right_commit_id) .await?; match ancestors.as_slice() { [] => Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "commit_graph found no common history between '{left_commit_id}' and '{right_commit_id}'" ), )), [base] => Ok(base.clone()), _ => Err(LixError::ambiguous_merge_base( left_commit_id, right_commit_id, ancestors .iter() .map(|ancestor| ancestor.commit_id.clone()) .collect(), )), } } /// Derives parent/child edges from parsed commits. pub(crate) fn commit_edges(&self, commits: &[CommitGraphCommit]) -> Vec { commits .iter() .flat_map(|commit| { commit.parent_commit_ids.iter().enumerate().map( |(parent_order, parent_commit_id)| CommitGraphEdge { parent_commit_id: parent_commit_id.clone(), child_commit_id: commit.commit_id.clone(), parent_order: parent_order as u32, }, ) }) .collect() } /// Returns canonical changes reachable from `start_commit_id`. /// /// This is the primitive history API. It reports the commit/depth where /// each matching canonical change was introduced or adopted during graph /// traversal and leaves row shaping to callers such as SQL providers. 
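///
/// A sketch of a filtered call (the commit id and schema key are illustrative):
///
/// ```ignore
/// let request = CommitGraphChangeHistoryRequest {
///     schema_keys: vec!["test_schema".to_string()],
///     include_tombstones: true,
///     ..CommitGraphChangeHistoryRequest::default()
/// };
/// let entries = reader
///     .change_history_from_commit("commit-head", &request)
///     .await?;
/// ```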
pub(crate) async fn change_history_from_commit( &mut self, start_commit_id: &str, request: &CommitGraphChangeHistoryRequest, ) -> Result, LixError> { let commits = self.reachable_commits(start_commit_id).await?; let mut entries = Vec::new(); let mut seen_change_ids = BTreeSet::new(); for reachable in commits { if !depth_matches(reachable.depth, request) { continue; } let commit_id = reachable.commit.commit_id; for change_id in reachable.commit.change_ids { if !seen_change_ids.insert(change_id.clone()) { continue; } let change = self .load_member_canonical_change(&change_id, &commit_id) .await?; if change_matches_history_request(&change.record, request) { entries.push(CommitGraphChangeHistoryEntry { located_change: change, observed_commit_id: commit_id.clone(), start_commit_id: start_commit_id.to_string(), depth: reachable.depth, }); } } } Ok(entries) } async fn load_member_canonical_change( &mut self, change_id: &str, source_commit_id: &str, ) -> Result { let change_ids = vec![change_id.to_string()]; self.load_canonical_changes(&change_ids) .await? .into_iter() .next() .flatten() .ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", format!( "commit_graph commit '{source_commit_id}' references missing change '{change_id}'" ), ) }) } async fn graph_commit_from_store_commit( &mut self, commit: Commit, ) -> Result { let change_ids = self.load_commit_change_ids(&commit).await?; Ok(commit_graph_commit_from_store_commit(commit, change_ids)?) } async fn load_commit_change_ids(&self, commit: &Commit) -> Result, LixError> { let mut change_ids = Vec::new(); for pack_id in 0..commit.change_pack_count { let Some(changes) = self .commit_store_reader .load_change_pack(&commit.id, pack_id) .await? else { return Err(missing_pack_error("change", &commit.id, pack_id)); }; change_ids.extend(changes.into_iter().map(|change| change.id)); } for pack_id in 0..commit.membership_pack_count { let Some(members) = self .commit_store_reader .load_membership_pack(&commit.id, pack_id) .await? 
else { return Err(missing_pack_error("membership", &commit.id, pack_id)); }; change_ids.extend(members.into_iter().map(|locator| locator.change_id)); } Ok(change_ids) } async fn load_canonical_changes( &self, change_ids: &[String], ) -> Result>, LixError> { self.commit_store_reader .load_located_changes(change_ids) .await .map(|changes| { changes .into_iter() .map(|located| { located.map(|located| LocatedChange { record: canonical_change_from_store_change(located.record), source_commit_id: located.source_commit_id, source_pack_id: located.source_pack_id, }) }) .collect() }) } } #[async_trait::async_trait] impl CommitGraphReader for CommitGraphStoreReader where S: StorageReader, { async fn load_commit( &mut self, commit_id: &str, ) -> Result, LixError> { CommitGraphStoreReader::load_commit(self, commit_id).await } async fn all_commits(&mut self) -> Result, LixError> { CommitGraphStoreReader::all_commits(self).await } async fn reachable_commits( &mut self, head_commit_id: &str, ) -> Result, LixError> { CommitGraphStoreReader::reachable_commits(self, head_commit_id).await } async fn best_common_ancestors( &mut self, left_commit_id: &str, right_commit_id: &str, ) -> Result, LixError> { CommitGraphStoreReader::best_common_ancestors(self, left_commit_id, right_commit_id).await } async fn merge_base( &mut self, left_commit_id: &str, right_commit_id: &str, ) -> Result { CommitGraphStoreReader::merge_base(self, left_commit_id, right_commit_id).await } fn commit_edges(&self, commits: &[CommitGraphCommit]) -> Vec { CommitGraphStoreReader::commit_edges(self, commits) } async fn change_history_from_commit( &mut self, start_commit_id: &str, request: &CommitGraphChangeHistoryRequest, ) -> Result, LixError> { CommitGraphStoreReader::change_history_from_commit(self, start_commit_id, request).await } } fn depth_matches(depth: u32, request: &CommitGraphChangeHistoryRequest) -> bool { request.min_depth.map_or(true, |min| depth >= min) && request.max_depth.map_or(true, |max| depth <= max) } fn change_matches_history_request( change: &Change, request: &CommitGraphChangeHistoryRequest, ) -> bool { (request.include_tombstones || change.snapshot_ref.is_some()) && (request.entity_ids.is_empty() || request.entity_ids.contains(&change.entity_id)) && (request.schema_keys.is_empty() || request.schema_keys.contains(&change.schema_key)) && (request.file_ids.is_empty() || change .file_id .as_ref() .is_some_and(|file_id| request.file_ids.contains(file_id))) } fn commit_graph_commit_from_store_commit( commit: Commit, change_ids: Vec, ) -> Result { let change = commit_header_canonical_change(commit.clone()); Ok(CommitGraphCommit { canonical_change: change.clone(), change, commit_id: commit.id, change_ids, author_account_ids: commit.author_account_ids, parent_commit_ids: commit.parent_ids, }) } fn commit_header_canonical_change(commit: Commit) -> Change { Change { id: commit.change_id, entity_id: EntityIdentity::single(&commit.id), schema_key: COMMIT_SCHEMA_KEY.to_string(), file_id: None, snapshot_ref: None, metadata_ref: None, created_at: commit.created_at, } } fn canonical_change_from_store_change(change: Change) -> Change { Change { id: change.id, entity_id: change.entity_id, schema_key: change.schema_key, file_id: change.file_id, snapshot_ref: change.snapshot_ref, metadata_ref: change.metadata_ref, created_at: change.created_at, } } fn missing_pack_error(label: &str, commit_id: &str, pack_id: u32) -> LixError { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("commit_graph missing {label} pack ({commit_id}, 
{pack_id})"), ) } #[cfg(test)] mod tests { use std::collections::{BTreeMap, BTreeSet}; use std::sync::Arc; use crate::backend::testing::UnitTestBackend; use crate::commit_graph::{CommitGraphChangeHistoryRequest, CommitGraphContext}; use crate::commit_store::{ Change, ChangeLocator, ChangeRef, CommitDraftRef, CommitStoreContext, }; use crate::storage::{StorageContext, StorageWriteSet}; #[tokio::test] async fn load_commit_parses_commit_snapshot() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); append_changes( storage.clone(), &[commit_change( "commit-1-change", "commit-1", &["change-1", "change-2"], &["parent-1"], )], ) .await; let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let commit = reader .load_commit("commit-1") .await .expect("commit load should succeed") .expect("commit should exist"); assert_eq!(commit.commit_id, "commit-1"); assert_eq!(commit.change_ids, vec!["change-1", "change-2"]); assert_eq!(commit.parent_commit_ids, vec!["parent-1"]); assert_eq!(commit.change.id, "commit-1-change"); } #[tokio::test] async fn load_commit_returns_none_for_missing_commit() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let commit = reader .load_commit("missing") .await .expect("commit load should succeed"); assert_eq!(commit, None); } #[tokio::test] async fn all_commits_returns_parsed_commits_sorted_by_id() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); append_changes( storage.clone(), &[ commit_change("commit-b-change", "commit-b", &[], &[]), entity_change("change-1", "entity-1", "example", "{}"), commit_change("commit-a-change", "commit-a", &[], &[]), ], ) .await; let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let commits = reader .all_commits() .await .expect("commit scan should succeed"); assert_eq!( commits .iter() .map(|commit| commit.commit_id.as_str()) .collect::>(), vec!["commit-a", "commit-b"] ); } #[tokio::test] async fn commit_edges_are_derived_from_parent_commit_ids() { let graph = CommitGraphContext::new(); let reader = graph.reader(StorageContext::new(Arc::new(UnitTestBackend::new()))); let commits = vec![parsed_commit( "commit-head", &[], &["commit-left", "commit-right"], )]; let edges = reader.commit_edges(&commits); assert_eq!( edges .iter() .map(|edge| ( edge.parent_commit_id.as_str(), edge.child_commit_id.as_str(), edge.parent_order, )) .collect::>(), vec![ ("commit-left", "commit-head", 0), ("commit-right", "commit-head", 1) ] ); } #[tokio::test] async fn change_history_from_commit_reports_matching_canonical_changes_with_depth() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); append_changes( storage.clone(), &[ entity_change("change-root", "entity-root", "test_schema", "{}"), entity_change("change-head", "entity-head", "test_schema", "{}"), commit_change("commit-root-change", "commit-root", &["change-root"], &[]), commit_change( "commit-head-change", "commit-head", &["change-head"], &["commit-root"], ), ], ) .await; let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let history = reader .change_history_from_commit( "commit-head", &CommitGraphChangeHistoryRequest { schema_keys: vec!["test_schema".to_string()], include_tombstones: true, ..CommitGraphChangeHistoryRequest::default() }, ) .await 
.expect("history should resolve"); assert_eq!( history .iter() .map(|entry| ( entry.located_change.record.id.as_str(), entry.observed_commit_id.as_str(), entry.start_commit_id.as_str(), entry.depth )) .collect::>(), vec![ ("change-head", "commit-head", "commit-head", 0), ("change-root", "commit-root", "commit-head", 1), ] ); } #[tokio::test] async fn change_history_from_commit_filters_depth_entity_file_and_tombstones() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); append_changes( storage.clone(), &[ entity_change_with_file( "change-file-a", "entity-1", "test_schema", Some("file-a"), "{}", ), entity_tombstone("change-tombstone", "entity-1", "test_schema"), entity_change_with_file( "change-file-b", "entity-2", "test_schema", Some("file-b"), "{}", ), commit_change("commit-root-change", "commit-root", &["change-file-a"], &[]), commit_change( "commit-head-change", "commit-head", &["change-tombstone", "change-file-b"], &["commit-root"], ), ], ) .await; let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let history = reader .change_history_from_commit( "commit-head", &CommitGraphChangeHistoryRequest { entity_ids: vec![crate::entity_identity::EntityIdentity::single("entity-1")], file_ids: vec!["file-a".to_string()], min_depth: Some(1), max_depth: Some(1), include_tombstones: false, ..CommitGraphChangeHistoryRequest::default() }, ) .await .expect("history should resolve"); assert_eq!(history.len(), 1); assert_eq!(history[0].located_change.record.id, "change-file-a"); assert_eq!(history[0].depth, 1); } #[tokio::test] async fn change_history_from_commit_includes_tombstones_when_requested() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); append_changes( storage.clone(), &[ entity_tombstone("change-deleted", "entity-1", "test_schema"), commit_change( "commit-head-change", "commit-head", &["change-deleted"], &[], ), ], ) .await; let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let hidden = reader .change_history_from_commit("commit-head", &CommitGraphChangeHistoryRequest::default()) .await .expect("history should resolve"); let visible = reader .change_history_from_commit( "commit-head", &CommitGraphChangeHistoryRequest { include_tombstones: true, ..CommitGraphChangeHistoryRequest::default() }, ) .await .expect("history should resolve"); assert!(hidden.is_empty()); assert_eq!(visible.len(), 1); assert_eq!(visible[0].located_change.record.id, "change-deleted"); } #[derive(Clone)] struct TestChange { change: Change, commit_change_ids: Vec, parent_commit_ids: Vec, author_account_ids: Vec, } impl TestChange { fn commit( change_id: &str, commit_id: &str, change_ids: &[&str], parent_commit_ids: &[&str], ) -> Self { Self { change: Change { id: change_id.to_string(), entity_id: crate::entity_identity::EntityIdentity::single(commit_id), schema_key: super::COMMIT_SCHEMA_KEY.to_string(), file_id: None, snapshot_ref: None, metadata_ref: None, created_at: "2026-01-01T00:00:00Z".to_string(), }, commit_change_ids: change_ids.iter().map(|id| id.to_string()).collect(), parent_commit_ids: parent_commit_ids.iter().map(|id| id.to_string()).collect(), author_account_ids: Vec::new(), } } fn entity( change_id: &str, entity_id: &str, schema_key: &str, file_id: Option<&str>, snapshot_content: Option<&str>, created_at: &str, ) -> Self { Self { change: Change { id: change_id.to_string(), entity_id: crate::entity_identity::EntityIdentity::single(entity_id), 
schema_key: schema_key.to_string(), file_id: file_id.map(str::to_string), snapshot_ref: snapshot_content.map(|content| { crate::json_store::JsonRef::from_hash(blake3::hash(content.as_bytes())) }), metadata_ref: None, created_at: created_at.to_string(), }, commit_change_ids: Vec::new(), parent_commit_ids: Vec::new(), author_account_ids: Vec::new(), } } fn is_commit(&self) -> bool { self.change.schema_key == super::COMMIT_SCHEMA_KEY } } async fn append_changes(storage: StorageContext, changes: &[TestChange]) { let mut tx = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); let canonical_changes = changes .iter() .filter(|change| !change.is_commit()) .map(|change| change.change.clone()) .collect::>(); let changes_by_id: BTreeMap<&str, &Change> = canonical_changes .iter() .map(|change| (change.id.as_str(), change)) .collect::>(); let mut authored_change_ids = BTreeSet::new(); let commit_store = CommitStoreContext::new(); for change in changes.iter().filter(|change| change.is_commit()) { let commit = crate::commit_graph::CommitGraphCommit { canonical_change: change.change.clone(), change: change.change.clone(), commit_id: change .change .entity_id .as_single_string() .expect("commit fixture should use single entity id") .to_string(), change_ids: change.commit_change_ids.clone(), author_account_ids: change.author_account_ids.clone(), parent_commit_ids: change.parent_commit_ids.clone(), }; let parent_commit_ids = commit.parent_commit_ids.clone(); let author_account_ids = commit.author_account_ids.clone(); let commit_draft = CommitDraftRef { id: &commit.commit_id, change_id: &commit.canonical_change.id, parent_ids: &parent_commit_ids, author_account_ids: &author_account_ids, created_at: &commit.canonical_change.created_at, }; let mut authored_changes = Vec::new(); let mut adopted_changes = Vec::new(); let mut corrupt_missing_members = Vec::new(); for change_id in &commit.change_ids { if let Some(change) = changes_by_id.get(change_id.as_str()) { if authored_change_ids.insert(change_id.clone()) { authored_changes.push(change_ref_from_canonical(change.as_ref())); } else { adopted_changes.push(change_ref_from_canonical(change.as_ref())); } } else { corrupt_missing_members.push(change_id.clone()); } } if corrupt_missing_members.is_empty() { commit_store .writer(tx.as_mut(), &mut writes) .stage_commit_draft(commit_draft, authored_changes, adopted_changes) .await .expect("commit-store append should succeed"); } else { crate::commit_store::storage::stage_commit( &mut writes, commit_draft, authored_changes, corrupt_missing_members .into_iter() .map(|change_id| ChangeLocator { source_commit_id: "missing-source-commit".to_string(), source_pack_id: 0, source_ordinal: 0, change_id, }) .collect(), ) .expect("corrupt commit-store fixture should stage"); } } writes .apply(&mut tx.as_mut()) .await .expect("writes should apply"); tx.commit().await.expect("commit should succeed"); } fn change_ref_from_canonical<'a>(change: crate::commit_store::ChangeRef<'a>) -> ChangeRef<'a> { ChangeRef { id: change.id, entity_id: change.entity_id, schema_key: change.schema_key, file_id: change.file_id, snapshot_ref: change.snapshot_ref, metadata_ref: change.metadata_ref, created_at: change.created_at, } } fn commit_change( change_id: &str, commit_id: &str, change_ids: &[&str], parent_commit_ids: &[&str], ) -> TestChange { TestChange::commit(change_id, commit_id, change_ids, parent_commit_ids) } fn parsed_commit( commit_id: &str, change_ids: &[&str], 
parent_commit_ids: &[&str], ) -> crate::commit_graph::CommitGraphCommit { let fixture = commit_change( &format!("{commit_id}-change"), commit_id, change_ids, parent_commit_ids, ); crate::commit_graph::CommitGraphCommit { canonical_change: fixture.change.clone(), change: fixture.change, commit_id: commit_id.to_string(), change_ids: change_ids .iter() .map(|change_id| change_id.to_string()) .collect(), author_account_ids: Vec::new(), parent_commit_ids: parent_commit_ids .iter() .map(|parent_id| parent_id.to_string()) .collect(), } } fn entity_change( change_id: &str, entity_id: &str, schema_key: &str, snapshot_content: &str, ) -> TestChange { entity_change_at( change_id, entity_id, schema_key, snapshot_content, "2026-01-01T00:00:00Z", ) } fn entity_change_at( change_id: &str, entity_id: &str, schema_key: &str, snapshot_content: &str, created_at: &str, ) -> TestChange { TestChange::entity( change_id, entity_id, schema_key, None, Some(snapshot_content), created_at, ) } fn entity_change_with_file( change_id: &str, entity_id: &str, schema_key: &str, file_id: Option<&str>, snapshot_content: &str, ) -> TestChange { TestChange::entity( change_id, entity_id, schema_key, file_id, Some(snapshot_content), "2026-01-01T00:00:00Z", ) } fn entity_tombstone(change_id: &str, entity_id: &str, schema_key: &str) -> TestChange { TestChange::entity( change_id, entity_id, schema_key, None, None, "2026-01-02T00:00:00Z", ) } } ================================================ FILE: packages/engine/src/commit_graph/mod.rs ================================================ mod context; mod types; mod walker; #[allow(unused_imports)] pub(crate) use context::{CommitGraphContext, CommitGraphStoreReader}; #[allow(unused_imports)] pub(crate) use types::{ CommitGraphChangeHistoryEntry, CommitGraphChangeHistoryRequest, CommitGraphCommit, CommitGraphEdge, CommitGraphReader, ReachableCommitGraphCommit, }; ================================================ FILE: packages/engine/src/commit_graph/types.rs ================================================ use crate::commit_store::{Change, LocatedChange}; use crate::entity_identity::EntityIdentity; use crate::LixError; /// Parsed `lix_commit` entity from the changelog. /// /// Commits are stored as ordinary canonical changes. The graph reader parses /// their snapshot so traversal code can work with explicit parent ids and the /// ordered canonical changes introduced relative to the first parent. A merge /// commit may reference existing changes from another parent instead of owning /// newly minted copies. #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct CommitGraphCommit { pub(crate) canonical_change: Change, pub(crate) change: Change, pub(crate) commit_id: String, pub(crate) change_ids: Vec, pub(crate) author_account_ids: Vec, pub(crate) parent_commit_ids: Vec, } /// Commit reachable from a requested graph head. #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct ReachableCommitGraphCommit { pub(crate) commit: CommitGraphCommit, pub(crate) depth: u32, } /// Derived parent/child edge between two commit entities. #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct CommitGraphEdge { pub(crate) parent_commit_id: String, pub(crate) child_commit_id: String, pub(crate) parent_order: u32, } /// Filter for canonical change history from a chosen traversal start commit. 
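/// The reader tests in this crate treat an empty filter vector as matching
/// everything on that dimension, so callers usually start from `Default` and
/// override only the dimensions they care about. A sketch of that shape
/// (field names are the struct's own, the concrete ids and depths are made up):
///
/// ```ignore
/// // Only changes to `entity-1` inside `file-a`, exactly one commit above the
/// // start commit, with tombstones hidden (the default).
/// let request = CommitGraphChangeHistoryRequest {
///     entity_ids: vec![EntityIdentity::single("entity-1")],
///     file_ids: vec!["file-a".to_string()],
///     min_depth: Some(1),
///     max_depth: Some(1),
///     ..CommitGraphChangeHistoryRequest::default()
/// };
/// ```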
#[derive(Debug, Clone, Default, PartialEq, Eq)] pub(crate) struct CommitGraphChangeHistoryRequest { pub(crate) entity_ids: Vec, pub(crate) schema_keys: Vec, pub(crate) file_ids: Vec, pub(crate) min_depth: Option, pub(crate) max_depth: Option, pub(crate) include_tombstones: bool, } /// Canonical change observed while walking commit history from a start commit. /// /// `start_commit_id` is the traversal anchor requested by the caller. It is not /// necessarily a graph root or a version head. #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct CommitGraphChangeHistoryEntry { pub(crate) located_change: LocatedChange, pub(crate) observed_commit_id: String, pub(crate) start_commit_id: String, pub(crate) depth: u32, } /// Execution-scoped reader for commit graph facts. /// /// SQL surfaces consume this trait so they depend on graph semantics, not on /// changelog storage or traversal details. #[allow(dead_code)] #[async_trait::async_trait] pub(crate) trait CommitGraphReader: Send + Sync { #[allow(dead_code)] async fn load_commit(&mut self, commit_id: &str) -> Result, LixError>; async fn all_commits(&mut self) -> Result, LixError>; async fn reachable_commits( &mut self, head_commit_id: &str, ) -> Result, LixError>; /// Returns the best common ancestors shared by two commit heads. /// /// This is intentionally not called "lowest common ancestor": commit /// history is a DAG, not a tree, and some histories have multiple equally /// good common ancestors. Merge policy can require exactly one base later. #[allow(dead_code)] async fn best_common_ancestors( &mut self, left_commit_id: &str, right_commit_id: &str, ) -> Result, LixError>; /// Resolves the single commit base to use for a three-way merge. /// /// This is merge policy, not raw graph math: no common history and multiple /// best common ancestors are both errors until merge has explicit support /// for those cases. #[allow(dead_code)] async fn merge_base( &mut self, left_commit_id: &str, right_commit_id: &str, ) -> Result; fn commit_edges(&self, commits: &[CommitGraphCommit]) -> Vec; async fn change_history_from_commit( &mut self, start_commit_id: &str, request: &CommitGraphChangeHistoryRequest, ) -> Result, LixError>; } ================================================ FILE: packages/engine/src/commit_graph/walker.rs ================================================ use std::collections::{BTreeMap, BTreeSet}; use crate::commit_graph::{CommitGraphCommit, CommitGraphStoreReader, ReachableCommitGraphCommit}; use crate::storage::StorageReader; use crate::LixError; /// Walks parent links from `head_commit_id` and returns reachable commits /// nearest-first. /// /// The walker is intentionally storage-free. It asks `CommitGraphReader` to /// load parsed commit facts and owns only traversal concerns: caching, cycle /// detection, and nearest-depth selection. 
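/// The ordering contract (nearest depth first, ties broken by commit id) can
/// be sketched against a plain in-memory parent map, without this crate's
/// reader or storage types. This is a standalone illustration only; the ids
/// below are made up and the breadth-first loop is not the walker's actual
/// stack-based traversal:
///
/// ```ignore
/// use std::collections::{BTreeMap, VecDeque};
///
/// // Parent links for a diamond: head -> left/right -> root.
/// let parents: BTreeMap<&str, Vec<&str>> = BTreeMap::from([
///     ("head", vec!["left", "right"]),
///     ("left", vec!["root"]),
///     ("right", vec!["root"]),
///     ("root", vec![]),
/// ]);
///
/// // Record the nearest depth at which each commit is first reachable.
/// let mut nearest: BTreeMap<&str, u32> = BTreeMap::new();
/// let mut queue = VecDeque::from([("head", 0u32)]);
/// while let Some((id, depth)) = queue.pop_front() {
///     if nearest.get(id).is_some_and(|d| *d <= depth) {
///         continue;
///     }
///     nearest.insert(id, depth);
///     for &parent in &parents[id] {
///         queue.push_back((parent, depth + 1));
///     }
/// }
///
/// // Sort by (depth, id): shared ancestors appear once, at their nearest depth.
/// let mut order: Vec<_> = nearest.into_iter().collect();
/// order.sort_by_key(|(id, depth)| (*depth, *id));
/// assert_eq!(order, vec![("head", 0), ("left", 1), ("right", 1), ("root", 2)]);
/// ```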
pub(crate) async fn walk_reachable_commits( reader: &mut CommitGraphStoreReader, head_commit_id: &str, ) -> Result, LixError> where S: StorageReader, { let mut loader = CommitTraversalLoader::new(reader); let mut visiting = BTreeSet::new(); let mut nearest_depths = BTreeMap::new(); loader .walk_commit(head_commit_id, 0, &mut visiting, &mut nearest_depths) .await?; let mut commits = nearest_depths .into_iter() .map(|(commit_id, depth)| { let commit = loader .loaded .remove(&commit_id) .expect("visited commit should be cached"); ReachableCommitGraphCommit { commit, depth } }) .collect::>(); commits.sort_by(|left, right| { left.depth .cmp(&right.depth) .then_with(|| left.commit.commit_id.cmp(&right.commit.commit_id)) }); Ok(commits) } /// Returns the best common ancestors shared by two commit heads. /// /// This is graph math, not merge policy. A commit is "best" when it is a /// common ancestor and no descendant of it is also a common ancestor. /// /// Simple history has one best common ancestor: /// /// ```text /// A -- B -- C left /// \ /// D right /// ``` /// /// `best_common_ancestors(C, D)` returns `[B]`. /// /// Commit history is a DAG, not a tree, so criss-cross histories can have /// multiple equally good answers. Callers that need one merge base should wrap /// this API with an explicit policy instead of pretending the graph always has /// a single lowest common ancestor. pub(crate) async fn best_common_ancestors( reader: &mut CommitGraphStoreReader, left_commit_id: &str, right_commit_id: &str, ) -> Result, LixError> where S: StorageReader, { let left_reachable = walk_reachable_commits(reader, left_commit_id).await?; let right_reachable = walk_reachable_commits(reader, right_commit_id).await?; let right_ids = right_reachable .iter() .map(|reachable| reachable.commit.commit_id.clone()) .collect::>(); let common_ids = left_reachable .iter() .filter(|reachable| right_ids.contains(&reachable.commit.commit_id)) .map(|reachable| reachable.commit.commit_id.clone()) .collect::>(); let mut best = Vec::new(); for reachable in left_reachable { let commit_id = &reachable.commit.commit_id; if !common_ids.contains(commit_id) { continue; } if has_descendant_in_set(reader, commit_id, &common_ids).await? 
{ continue; } best.push(reachable.commit); } best.sort_by(|left, right| left.commit_id.cmp(&right.commit_id)); Ok(best) } async fn has_descendant_in_set( reader: &mut CommitGraphStoreReader, commit_id: &str, candidate_descendant_ids: &BTreeSet, ) -> Result where S: StorageReader, { for candidate_descendant_id in candidate_descendant_ids { if candidate_descendant_id == commit_id { continue; } let reachable = walk_reachable_commits(reader, candidate_descendant_id).await?; if reachable .iter() .any(|reachable| reachable.commit.commit_id == commit_id) { return Ok(true); } } Ok(false) } struct CommitTraversalLoader<'a, S> where S: StorageReader, { reader: &'a mut CommitGraphStoreReader, loaded: BTreeMap, } impl<'a, S> CommitTraversalLoader<'a, S> where S: StorageReader, { fn new(reader: &'a mut CommitGraphStoreReader) -> Self { Self { reader, loaded: BTreeMap::new(), } } async fn walk_commit( &mut self, commit_id: &str, depth: u32, visiting: &mut BTreeSet, nearest_depths: &mut BTreeMap, ) -> Result<(), LixError> { let mut stack = vec![TraversalFrame { commit_id: commit_id.to_string(), depth, expanded: false, }]; while let Some(frame) = stack.pop() { if frame.expanded { visiting.remove(&frame.commit_id); continue; } if visiting.contains(&frame.commit_id) { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "commit_graph cycle detected at commit '{}'", frame.commit_id ), )); } if let Some(previous_depth) = nearest_depths.get(&frame.commit_id) { if *previous_depth <= frame.depth { continue; } } let commit = self.load_commit(&frame.commit_id).await?; nearest_depths.insert(frame.commit_id.clone(), frame.depth); visiting.insert(frame.commit_id.clone()); stack.push(TraversalFrame { commit_id: frame.commit_id, depth: frame.depth, expanded: true, }); for parent_commit_id in commit.parent_commit_ids.iter().rev() { stack.push(TraversalFrame { commit_id: parent_commit_id.clone(), depth: frame.depth + 1, expanded: false, }); } } Ok(()) } async fn load_commit(&mut self, commit_id: &str) -> Result { if let Some(commit) = self.loaded.get(commit_id) { return Ok(commit.clone()); } let Some(commit) = self.reader.load_commit(commit_id).await? 
else { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("commit_graph missing commit '{commit_id}'"), )); }; self.loaded.insert(commit_id.to_string(), commit.clone()); Ok(commit) } } struct TraversalFrame { commit_id: String, depth: u32, expanded: bool, } #[cfg(test)] mod tests { use std::sync::Arc; use serde_json::json; use crate::backend::testing::UnitTestBackend; use crate::commit_graph::CommitGraphContext; use crate::commit_store::{Change, CommitDraftRef, CommitStoreContext}; use crate::storage::{StorageContext, StorageWriteSet}; use crate::LixError; #[tokio::test] async fn reachable_commits_returns_commits_nearest_first() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); append_changes( storage.clone(), &[ commit_change("commit-root-change", "commit-root", &[], &[]), commit_change( "commit-parent-change", "commit-parent", &[], &["commit-root"], ), commit_change("commit-head-change", "commit-head", &[], &["commit-parent"]), ], ) .await; let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let commits = reader .reachable_commits("commit-head") .await .expect("reachable commits should load"); assert_eq!( commits .iter() .map(|reachable| (reachable.commit.commit_id.as_str(), reachable.depth)) .collect::>(), vec![("commit-head", 0), ("commit-parent", 1), ("commit-root", 2)] ); } #[tokio::test] async fn reachable_commits_errors_on_missing_parent_commit() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); append_changes( storage.clone(), &[commit_change( "commit-head-change", "commit-head", &[], &["missing-parent"], )], ) .await; let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let error = reader .reachable_commits("commit-head") .await .expect_err("missing parent should fail"); assert!(error.message.contains("missing-parent")); } #[tokio::test] async fn reachable_commits_errors_on_cycle() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); append_changes( storage.clone(), &[ commit_change("commit-a-change", "commit-a", &[], &["commit-b"]), commit_change("commit-b-change", "commit-b", &[], &["commit-a"]), ], ) .await; let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let error = reader .reachable_commits("commit-a") .await .expect_err("cycle should fail"); assert!(error.message.contains("cycle")); } #[tokio::test] async fn reachable_commits_dedupes_shared_ancestors_in_diamond() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); append_changes( storage.clone(), &[ commit_change("commit-root-change", "commit-root", &[], &[]), commit_change("commit-left-change", "commit-left", &[], &["commit-root"]), commit_change("commit-right-change", "commit-right", &[], &["commit-root"]), commit_change( "commit-head-change", "commit-head", &[], &["commit-left", "commit-right"], ), ], ) .await; let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let commits = reader .reachable_commits("commit-head") .await .expect("reachable commits should load"); assert_eq!( commits .iter() .map(|reachable| (reachable.commit.commit_id.as_str(), reachable.depth)) .collect::>(), vec![ ("commit-head", 0), ("commit-left", 1), ("commit-right", 1), ("commit-root", 2), ] ); } #[tokio::test] async fn reachable_commits_keeps_nearest_depth_for_multiple_paths() { let backend = Arc::new(UnitTestBackend::new()); let 
storage = StorageContext::new(backend.clone()); append_changes( storage.clone(), &[ commit_change("commit-root-change", "commit-root", &[], &[]), commit_change( "commit-parent-change", "commit-parent", &[], &["commit-root"], ), commit_change( "commit-head-change", "commit-head", &[], &["commit-root", "commit-parent"], ), ], ) .await; let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let commits = reader .reachable_commits("commit-head") .await .expect("reachable commits should load"); assert_eq!( commits .iter() .map(|reachable| (reachable.commit.commit_id.as_str(), reachable.depth)) .collect::>(), vec![("commit-head", 0), ("commit-parent", 1), ("commit-root", 1)] ); } #[tokio::test] async fn reachable_commits_orders_same_depth_commits_by_id() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); append_changes( storage.clone(), &[ commit_change("commit-z-change", "commit-z", &[], &[]), commit_change("commit-a-change", "commit-a", &[], &[]), commit_change( "commit-head-change", "commit-head", &[], &["commit-z", "commit-a"], ), ], ) .await; let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let commits = reader .reachable_commits("commit-head") .await .expect("reachable commits should load"); assert_eq!( commits .iter() .map(|reachable| (reachable.commit.commit_id.as_str(), reachable.depth)) .collect::>(), vec![("commit-head", 0), ("commit-a", 1), ("commit-z", 1)] ); } #[tokio::test] async fn reachable_commits_errors_on_missing_head_commit() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let error = reader .reachable_commits("missing-head") .await .expect_err("missing head should fail"); assert!(error.message.contains("missing-head")); } #[tokio::test] async fn best_common_ancestors_returns_nearest_common_commit_in_simple_graph() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); append_changes( storage.clone(), &[ commit_change("commit-a-change", "commit-a", &[], &[]), commit_change("commit-b-change", "commit-b", &[], &["commit-a"]), commit_change("commit-c-change", "commit-c", &[], &["commit-b"]), commit_change("commit-d-change", "commit-d", &[], &["commit-b"]), ], ) .await; let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let ancestors = reader .best_common_ancestors("commit-c", "commit-d") .await .expect("best common ancestors should load"); assert_eq!( ancestors .iter() .map(|commit| commit.commit_id.as_str()) .collect::>(), vec!["commit-b"] ); } #[tokio::test] async fn best_common_ancestors_returns_shared_fork_in_diamond_graph() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); append_changes( storage.clone(), &[ commit_change("commit-root-change", "commit-root", &[], &[]), commit_change("commit-left-change", "commit-left", &[], &["commit-root"]), commit_change("commit-right-change", "commit-right", &[], &["commit-root"]), commit_change( "commit-left-head-change", "commit-left-head", &[], &["commit-left"], ), commit_change( "commit-right-head-change", "commit-right-head", &[], &["commit-right"], ), ], ) .await; let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let ancestors = reader .best_common_ancestors("commit-left-head", "commit-right-head") .await .expect("best common ancestors should 
load"); assert_eq!( ancestors .iter() .map(|commit| commit.commit_id.as_str()) .collect::>(), vec!["commit-root"] ); } #[tokio::test] async fn best_common_ancestors_returns_parent_when_one_side_is_ancestor() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); append_changes( storage.clone(), &[ commit_change("commit-a-change", "commit-a", &[], &[]), commit_change("commit-b-change", "commit-b", &[], &["commit-a"]), commit_change("commit-c-change", "commit-c", &[], &["commit-b"]), ], ) .await; let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let ancestors = reader .best_common_ancestors("commit-b", "commit-c") .await .expect("best common ancestors should load"); assert_eq!( ancestors .iter() .map(|commit| commit.commit_id.as_str()) .collect::>(), vec!["commit-b"] ); } #[tokio::test] async fn best_common_ancestors_returns_multiple_bases_for_criss_cross_graph() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); append_changes( storage.clone(), &[ commit_change("commit-root-change", "commit-root", &[], &[]), commit_change("commit-left-change", "commit-left", &[], &["commit-root"]), commit_change("commit-right-change", "commit-right", &[], &["commit-root"]), commit_change( "commit-head-left-change", "commit-head-left", &[], &["commit-left", "commit-right"], ), commit_change( "commit-head-right-change", "commit-head-right", &[], &["commit-right", "commit-left"], ), ], ) .await; let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let ancestors = reader .best_common_ancestors("commit-head-left", "commit-head-right") .await .expect("best common ancestors should load"); assert_eq!( ancestors .iter() .map(|commit| commit.commit_id.as_str()) .collect::>(), vec!["commit-left", "commit-right"] ); } #[tokio::test] async fn merge_base_returns_single_best_common_ancestor() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); append_changes( storage.clone(), &[ commit_change("commit-a-change", "commit-a", &[], &[]), commit_change("commit-b-change", "commit-b", &[], &["commit-a"]), commit_change("commit-c-change", "commit-c", &[], &["commit-b"]), commit_change("commit-d-change", "commit-d", &[], &["commit-b"]), ], ) .await; let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let base = reader .merge_base("commit-c", "commit-d") .await .expect("single merge base should resolve"); assert_eq!(base.commit_id, "commit-b"); } #[tokio::test] async fn merge_base_errors_when_histories_have_no_common_commit() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); append_changes( storage.clone(), &[ commit_change("commit-left-change", "commit-left", &[], &[]), commit_change("commit-right-change", "commit-right", &[], &[]), ], ) .await; let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let error = reader .merge_base("commit-left", "commit-right") .await .expect_err("unrelated histories should not have a merge base"); assert!(error.message.contains("no common history")); } #[tokio::test] async fn merge_base_errors_when_best_common_ancestor_is_ambiguous() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); append_changes( storage.clone(), &[ commit_change("commit-root-change", "commit-root", &[], &[]), commit_change("commit-left-change", "commit-left", &[], 
&["commit-root"]), commit_change("commit-right-change", "commit-right", &[], &["commit-root"]), commit_change( "commit-head-left-change", "commit-head-left", &[], &["commit-left", "commit-right"], ), commit_change( "commit-head-right-change", "commit-head-right", &[], &["commit-right", "commit-left"], ), ], ) .await; let graph = CommitGraphContext::new(); let mut reader = graph.reader(storage); let error = reader .merge_base("commit-head-left", "commit-head-right") .await .expect_err("ambiguous best common ancestors should fail"); assert_eq!(error.code, LixError::CODE_AMBIGUOUS_MERGE_BASE); assert_eq!( error .details .as_ref() .and_then(|details| details.get("left_commit_id")), Some(&json!("commit-head-left")) ); assert_eq!( error .details .as_ref() .and_then(|details| details.get("right_commit_id")), Some(&json!("commit-head-right")) ); assert_eq!( error .details .as_ref() .and_then(|details| details.get("candidates")), Some(&json!(["commit-left", "commit-right"])) ); } #[derive(Clone)] struct TestCommitChange { change: Change, parent_commit_ids: Vec, } async fn append_changes(storage: StorageContext, changes: &[TestCommitChange]) { let mut tx = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); let commit_store = CommitStoreContext::new(); for change in changes { let commit_id = change .change .entity_id .as_single_string() .expect("commit fixture should have single id") .to_string(); let author_account_ids = Vec::new(); let commit = CommitDraftRef { id: &commit_id, change_id: &change.change.id, parent_ids: &change.parent_commit_ids, author_account_ids: &author_account_ids, created_at: &change.change.created_at, }; commit_store .writer(tx.as_mut(), &mut writes) .stage_commit_draft(commit, Vec::new(), Vec::new()) .await .expect("commit-store fixture should append"); } writes .apply(&mut tx.as_mut()) .await .expect("writes should apply"); tx.commit().await.expect("commit should succeed"); } fn commit_change( change_id: &str, commit_id: &str, change_ids: &[&str], parent_commit_ids: &[&str], ) -> TestCommitChange { let _ = change_ids; TestCommitChange { change: Change { id: change_id.to_string(), entity_id: crate::entity_identity::EntityIdentity::single(commit_id), schema_key: "lix_commit".to_string(), file_id: None, snapshot_ref: None, metadata_ref: None, created_at: "2026-01-01T00:00:00Z".to_string(), }, parent_commit_ids: parent_commit_ids.iter().map(|id| id.to_string()).collect(), } } } ================================================ FILE: packages/engine/src/commit_store/codec.rs ================================================ use crate::commit_store::{ Change, ChangeLocator, ChangeLocatorRef, ChangeRef, Commit, StoredCommitRef, }; use crate::entity_identity::EntityIdentity; use crate::json_store::JsonRef; use crate::LixError; const COMMIT_MAGIC: &[u8; 5] = b"LXCM1"; const CHANGE_MAGIC: &[u8; 5] = b"LXCH2"; const CHANGE_PACK_MAGIC: &[u8; 5] = b"LXCP3"; const MEMBERSHIP_PACK_MAGIC: &[u8; 5] = b"LXMP1"; const CHANGE_ID_FULL: u8 = 0; const CHANGE_ID_COMMIT_SUFFIX: u8 = 1; pub(crate) fn encode_commit_ref(commit: StoredCommitRef<'_>) -> Result, LixError> { let mut bytes = Vec::new(); bytes.extend_from_slice(COMMIT_MAGIC); write_str(&mut bytes, commit.id)?; write_str(&mut bytes, commit.change_id)?; write_strs(&mut bytes, commit.parent_ids.iter().map(String::as_str))?; write_strs( &mut bytes, commit.author_account_ids.iter().map(String::as_str), )?; write_str(&mut bytes, commit.created_at)?; 
bytes.extend_from_slice(&commit.change_pack_count.to_le_bytes()); bytes.extend_from_slice(&commit.membership_pack_count.to_le_bytes()); Ok(bytes) } pub(crate) fn decode_commit(bytes: &[u8]) -> Result { let mut cursor = ByteCursor::new(bytes); cursor.expect_magic(COMMIT_MAGIC, "commit")?; let id = cursor.read_string("id")?; let change_id = cursor.read_string("change_id")?; let parent_ids = cursor.read_strings("parent_ids")?; let author_account_ids = cursor.read_strings("author_account_ids")?; let created_at = cursor.read_string("created_at")?; let change_pack_count = cursor.read_u32("change_pack_count")?; let membership_pack_count = cursor.read_u32("membership_pack_count")?; cursor.expect_end("commit")?; Ok(Commit { id, change_id, parent_ids, author_account_ids, created_at, change_pack_count, membership_pack_count, }) } pub(crate) fn encode_change_ref(change: ChangeRef<'_>) -> Result, LixError> { let mut bytes = Vec::new(); write_change_ref(&mut bytes, change)?; Ok(bytes) } fn write_change_ref(bytes: &mut Vec, change: ChangeRef<'_>) -> Result<(), LixError> { let entity_id = change.entity_id.as_json_array_text().map_err(|error| { LixError::unknown(format!( "failed to encode commit-store change entity identity: {error}" )) })?; bytes.extend_from_slice(CHANGE_MAGIC); write_str(bytes, change.id)?; write_str(bytes, &entity_id)?; write_str(bytes, change.schema_key)?; write_optional_str(bytes, change.file_id)?; write_optional_json_ref(bytes, change.snapshot_ref); write_optional_json_ref(bytes, change.metadata_ref); write_str(bytes, change.created_at) } pub(crate) fn decode_change(bytes: &[u8]) -> Result { let mut cursor = ByteCursor::new(bytes); cursor.expect_magic(CHANGE_MAGIC, "change")?; let id = cursor.read_string("id")?; let entity_id = cursor.read_string("entity_id")?; let entity_id = EntityIdentity::from_json_array_text(&entity_id).map_err(|error| { LixError::unknown(format!( "failed to decode commit-store change entity identity: {error}" )) })?; let schema_key = cursor.read_string("schema_key")?; let file_id = cursor.read_optional_string("file_id")?; let snapshot_ref = cursor.read_optional_json_ref("snapshot_ref")?; let metadata_ref = cursor.read_optional_json_ref("metadata_ref")?; let created_at = cursor.read_string("created_at")?; cursor.expect_end("change")?; Ok(Change { id, entity_id, schema_key, file_id, snapshot_ref, metadata_ref, created_at, }) } pub(crate) fn encode_change_pack( commit_id: &str, pack_id: u32, changes: &[ChangeRef<'_>], ) -> Result, LixError> { let mut bytes = Vec::new(); bytes.extend_from_slice(CHANGE_PACK_MAGIC); write_var_str(&mut bytes, commit_id, "change pack commit_id")?; bytes.extend_from_slice(&pack_id.to_le_bytes()); let (shapes, change_shape_indexes) = change_shapes(changes); write_var_len(&mut bytes, shapes.len(), "change pack shapes")?; for shape in &shapes { write_var_str(&mut bytes, shape.schema_key, "schema_key")?; write_optional_var_str(&mut bytes, shape.file_id, "file_id")?; } write_var_len(&mut bytes, changes.len(), "change pack changes")?; for (change, shape_index) in changes.iter().copied().zip(change_shape_indexes) { write_var_change_id(&mut bytes, commit_id, change.id)?; write_var_entity_identity(&mut bytes, change.entity_id)?; write_var_len(&mut bytes, shape_index, "change shape index")?; write_optional_json_ref(&mut bytes, change.snapshot_ref); write_optional_json_ref(&mut bytes, change.metadata_ref); write_var_str(&mut bytes, change.created_at, "created_at")?; } Ok(bytes) } pub(crate) fn decode_change_pack(bytes: &[u8]) -> Result<(String, 
u32, Vec), LixError> { let mut cursor = ByteCursor::new(bytes); cursor.expect_magic(CHANGE_PACK_MAGIC, "change pack")?; let commit_id = cursor.read_var_string("commit_id")?; let pack_id = cursor.read_u32("pack_id")?; let shape_count = cursor.read_var_usize("shape_count")?; let mut shapes = Vec::with_capacity(shape_count); for _ in 0..shape_count { shapes.push(ChangeShape { schema_key: cursor.read_var_string("schema_key")?, file_id: cursor.read_optional_var_string("file_id")?, }); } let change_count = cursor.read_var_usize("change_count")?; let mut changes = Vec::with_capacity(change_count); for _ in 0..change_count { let id = cursor.read_var_change_id(&commit_id)?; let entity_id = cursor.read_var_entity_identity()?; let shape_index = cursor.read_var_usize("shape_index")?; let shape = shapes.get(shape_index).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store change pack: shape index {shape_index} is out of bounds"), ) })?; let snapshot_ref = cursor.read_optional_json_ref("snapshot_ref")?; let metadata_ref = cursor.read_optional_json_ref("metadata_ref")?; let created_at = cursor.read_var_string("created_at")?; changes.push(Change { id, entity_id, schema_key: shape.schema_key.clone(), file_id: shape.file_id.clone(), snapshot_ref, metadata_ref, created_at, }); } cursor.expect_end("change pack")?; Ok((commit_id, pack_id, changes)) } #[derive(Debug, Clone, Copy, PartialEq, Eq)] struct ChangeShapeRef<'a> { schema_key: &'a str, file_id: Option<&'a str>, } #[derive(Debug, Clone, PartialEq, Eq)] struct ChangeShape { schema_key: String, file_id: Option, } fn change_shapes<'a>(changes: &'a [ChangeRef<'a>]) -> (Vec>, Vec) { let mut shapes = Vec::new(); let mut shape_indexes = Vec::with_capacity(changes.len()); for change in changes { let shape = ChangeShapeRef { schema_key: change.schema_key, file_id: change.file_id, }; let shape_index = match shapes.iter().position(|candidate| *candidate == shape) { Some(shape_index) => shape_index, None => { let shape_index = shapes.len(); shapes.push(shape); shape_index } }; shape_indexes.push(shape_index); } (shapes, shape_indexes) } pub(crate) fn encode_membership_pack<'a>( commit_id: &str, pack_id: u32, members: impl IntoIterator>, ) -> Result, LixError> { let members = members.into_iter().collect::>(); let mut bytes = Vec::new(); bytes.extend_from_slice(MEMBERSHIP_PACK_MAGIC); write_str(&mut bytes, commit_id)?; bytes.extend_from_slice(&pack_id.to_le_bytes()); write_len(&mut bytes, members.len(), "membership pack members")?; for member in members { encode_locator(&mut bytes, member)?; } Ok(bytes) } pub(crate) fn decode_membership_pack( bytes: &[u8], ) -> Result<(String, u32, Vec), LixError> { let mut cursor = ByteCursor::new(bytes); cursor.expect_magic(MEMBERSHIP_PACK_MAGIC, "membership pack")?; let commit_id = cursor.read_string("commit_id")?; let pack_id = cursor.read_u32("pack_id")?; let member_count = cursor.read_u32("member_count")? 
as usize; let mut members = Vec::with_capacity(member_count); for _ in 0..member_count { members.push(decode_locator(&mut cursor)?); } cursor.expect_end("membership pack")?; Ok((commit_id, pack_id, members)) } fn encode_locator(bytes: &mut Vec, locator: ChangeLocatorRef<'_>) -> Result<(), LixError> { write_str(bytes, locator.source_commit_id)?; bytes.extend_from_slice(&locator.source_pack_id.to_le_bytes()); bytes.extend_from_slice(&locator.source_ordinal.to_le_bytes()); write_str(bytes, locator.change_id) } fn decode_locator(cursor: &mut ByteCursor<'_>) -> Result { Ok(ChangeLocator { source_commit_id: cursor.read_string("source_commit_id")?, source_pack_id: cursor.read_u32("source_pack_id")?, source_ordinal: cursor.read_u32("source_ordinal")?, change_id: cursor.read_string("change_id")?, }) } fn write_str(bytes: &mut Vec, value: &str) -> Result<(), LixError> { let len = u32::try_from(value.len()).map_err(|_| { LixError::new( LixError::CODE_INTERNAL_ERROR, "commit-store string field exceeds u32 length", ) })?; bytes.extend_from_slice(&len.to_le_bytes()); bytes.extend_from_slice(value.as_bytes()); Ok(()) } fn write_optional_str(bytes: &mut Vec, value: Option<&str>) -> Result<(), LixError> { match value { Some(value) => { bytes.push(1); write_str(bytes, value)?; } None => bytes.push(0), } Ok(()) } fn write_optional_json_ref(bytes: &mut Vec, value: Option<&JsonRef>) { match value { Some(value) => { bytes.push(1); bytes.extend_from_slice(value.as_hash_bytes()); } None => bytes.push(0), } } fn write_len(bytes: &mut Vec, len: usize, field: &str) -> Result<(), LixError> { let len = u32::try_from(len).map_err(|_| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("commit-store {field} exceeds u32 length"), ) })?; bytes.extend_from_slice(&len.to_le_bytes()); Ok(()) } fn write_var_len(bytes: &mut Vec, len: usize, field: &str) -> Result<(), LixError> { let mut value = u32::try_from(len).map_err(|_| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("commit-store {field} exceeds u32 length"), ) })?; while value >= 0x80 { bytes.push((value as u8 & 0x7f) | 0x80); value >>= 7; } bytes.push(value as u8); Ok(()) } fn write_var_str(bytes: &mut Vec, value: &str, field: &str) -> Result<(), LixError> { write_var_len(bytes, value.len(), field)?; bytes.extend_from_slice(value.as_bytes()); Ok(()) } fn write_optional_var_str( bytes: &mut Vec, value: Option<&str>, field: &str, ) -> Result<(), LixError> { match value { Some(value) => { bytes.push(1); write_var_str(bytes, value, field)?; } None => bytes.push(0), } Ok(()) } fn write_change_id(bytes: &mut Vec, commit_id: &str, change_id: &str) -> Result<(), LixError> { if let Some(suffix) = change_id.strip_prefix(commit_id) { bytes.push(CHANGE_ID_COMMIT_SUFFIX); write_str(bytes, suffix) } else { bytes.push(CHANGE_ID_FULL); write_str(bytes, change_id) } } fn write_var_change_id( bytes: &mut Vec, commit_id: &str, change_id: &str, ) -> Result<(), LixError> { if let Some(suffix) = change_id.strip_prefix(commit_id) { bytes.push(CHANGE_ID_COMMIT_SUFFIX); write_var_str(bytes, suffix, "change_id") } else { bytes.push(CHANGE_ID_FULL); write_var_str(bytes, change_id, "change_id") } } fn write_entity_identity(bytes: &mut Vec, identity: &EntityIdentity) -> Result<(), LixError> { write_len( bytes, identity.parts.len(), "commit-store entity identity parts", )?; for part in &identity.parts { write_str(bytes, part)?; } Ok(()) } fn write_var_entity_identity( bytes: &mut Vec, identity: &EntityIdentity, ) -> Result<(), LixError> { write_var_len( bytes, identity.parts.len(), 
"commit-store entity identity parts", )?; for part in &identity.parts { write_var_str(bytes, part, "entity identity part")?; } Ok(()) } fn write_strs<'a>( bytes: &mut Vec, values: impl IntoIterator, ) -> Result<(), LixError> { let values = values.into_iter().collect::>(); let len = u32::try_from(values.len()).map_err(|_| { LixError::new( LixError::CODE_INTERNAL_ERROR, "commit-store string vector field exceeds u32 length", ) })?; bytes.extend_from_slice(&len.to_le_bytes()); for value in values { write_str(bytes, value)?; } Ok(()) } struct ByteCursor<'a> { bytes: &'a [u8], offset: usize, } impl<'a> ByteCursor<'a> { fn new(bytes: &'a [u8]) -> Self { Self { bytes, offset: 0 } } fn expect_magic(&mut self, magic: &[u8], label: &str) -> Result<(), LixError> { if self.bytes.len() < magic.len() || &self.bytes[..magic.len()] != magic { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store {label}: invalid magic"), )); } self.offset = magic.len(); Ok(()) } fn read_string(&mut self, field: &str) -> Result { let len = self.read_u32(field)? as usize; let end = self.offset.checked_add(len).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store field `{field}`: length overflow"), ) })?; let bytes = self.bytes.get(self.offset..end).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store field `{field}`: truncated string"), ) })?; self.offset = end; String::from_utf8(bytes.to_vec()).map_err(|error| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store field `{field}` as UTF-8: {error}"), ) }) } fn read_strings(&mut self, field: &str) -> Result, LixError> { let count = self.read_u32(field)? as usize; let mut values = Vec::with_capacity(count); for _ in 0..count { values.push(self.read_string(field)?); } Ok(values) } fn read_optional_string(&mut self, field: &str) -> Result, LixError> { match self.read_u8(field)? { 0 => Ok(None), 1 => self.read_string(field).map(Some), tag => Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store field `{field}`: invalid option tag {tag}"), )), } } fn read_optional_json_ref(&mut self, field: &str) -> Result, LixError> { match self.read_u8(field)? 
{ 0 => Ok(None), 1 => { let end = self.offset.checked_add(32).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store field `{field}`: offset overflow"), ) })?; let bytes = self.bytes.get(self.offset..end).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store field `{field}`: truncated ref"), ) })?; self.offset = end; let hash = <[u8; 32]>::try_from(bytes).expect("json ref length was checked"); Ok(Some(JsonRef::from_hash_bytes(hash))) } tag => Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store field `{field}`: invalid option tag {tag}"), )), } } fn read_u8(&mut self, field: &str) -> Result { let byte = self.bytes.get(self.offset).copied().ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store field `{field}`: truncated u8"), ) })?; self.offset += 1; Ok(byte) } fn read_u32(&mut self, field: &str) -> Result { let end = self.offset.checked_add(4).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store field `{field}`: offset overflow"), ) })?; let bytes = self.bytes.get(self.offset..end).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store field `{field}`: truncated u32"), ) })?; self.offset = end; Ok(u32::from_le_bytes( bytes .try_into() .expect("slice length was checked before u32 decode"), )) } fn read_var_usize(&mut self, field: &str) -> Result { let mut value = 0u32; let mut shift = 0u32; for byte_index in 0..5 { let byte = self.read_u8(field)?; if shift == 28 && (byte & 0x80 != 0 || byte & 0x70 != 0) { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store field `{field}`: varint exceeds u32"), )); } if byte_index > 0 && byte & 0x80 == 0 && byte == 0 { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store field `{field}`: non-canonical varint"), )); } value |= ((byte & 0x7f) as u32) << shift; if byte & 0x80 == 0 { return Ok(value as usize); } shift += 7; } Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store field `{field}`: varint exceeds u32"), )) } fn read_var_string(&mut self, field: &str) -> Result { let len = self.read_var_usize(field)?; let end = self.offset.checked_add(len).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store field `{field}`: length overflow"), ) })?; let bytes = self.bytes.get(self.offset..end).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store field `{field}`: truncated string"), ) })?; self.offset = end; String::from_utf8(bytes.to_vec()).map_err(|error| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store field `{field}` as UTF-8: {error}"), ) }) } fn read_optional_var_string(&mut self, field: &str) -> Result, LixError> { match self.read_u8(field)? 
{ 0 => Ok(None), 1 => self.read_var_string(field).map(Some), tag => Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store field `{field}`: invalid option tag {tag}"), )), } } fn read_change_id(&mut self, commit_id: &str) -> Result { let tag = self.read_u8("change_id tag")?; let value = self.read_string("change_id")?; match tag { CHANGE_ID_FULL => Ok(value), CHANGE_ID_COMMIT_SUFFIX => Ok(format!("{commit_id}{value}")), tag => Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store field `change_id`: invalid tag {tag}"), )), } } fn read_var_change_id(&mut self, commit_id: &str) -> Result { let tag = self.read_u8("change_id tag")?; let value = self.read_var_string("change_id")?; match tag { CHANGE_ID_FULL => Ok(value), CHANGE_ID_COMMIT_SUFFIX => Ok(format!("{commit_id}{value}")), tag => Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store field `change_id`: invalid tag {tag}"), )), } } fn read_entity_identity(&mut self) -> Result { let count = self.read_u32("entity identity part count")? as usize; let mut parts = Vec::with_capacity(count); for _ in 0..count { parts.push(self.read_string("entity identity part")?); } if parts.is_empty() { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, "failed to decode commit-store entity identity: empty identity", )); } Ok(EntityIdentity { parts }) } fn read_var_entity_identity(&mut self) -> Result { let count = self.read_var_usize("entity identity part count")?; let mut parts = Vec::with_capacity(count); for _ in 0..count { parts.push(self.read_var_string("entity identity part")?); } if parts.is_empty() { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, "failed to decode commit-store entity identity: empty identity", )); } Ok(EntityIdentity { parts }) } fn expect_end(&self, label: &str) -> Result<(), LixError> { if self.offset != self.bytes.len() { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to decode commit-store {label}: trailing bytes"), )); } Ok(()) } } #[cfg(test)] mod tests { use super::*; #[test] fn commit_codec_roundtrips() { let commit = Commit { id: "commit-1".to_string(), change_id: "commit-change-1".to_string(), parent_ids: vec!["parent-1".to_string(), "parent-2".to_string()], author_account_ids: vec!["author-1".to_string()], created_at: "2026-01-01T00:00:00Z".to_string(), change_pack_count: 2, membership_pack_count: 1, }; let encoded = encode_commit_ref(commit.as_ref()).expect("commit should encode"); let decoded = decode_commit(&encoded).expect("commit should decode"); assert_eq!(decoded, commit); } #[test] fn change_codec_roundtrips() { let change = Change { id: "change-1".to_string(), entity_id: EntityIdentity::single("entity-1"), schema_key: "test_schema".to_string(), file_id: Some("file-1".to_string()), snapshot_ref: Some(JsonRef::from_hash_bytes([1; 32])), metadata_ref: Some(JsonRef::from_hash_bytes([2; 32])), created_at: "2026-01-01T00:00:00Z".to_string(), }; let encoded = encode_change_ref(change.as_ref()).expect("change should encode"); let decoded = decode_change(&encoded).expect("change should decode"); assert_eq!(decoded, change); } #[test] fn change_codec_roundtrips_empty_optionals() { let change = Change { id: "change-1".to_string(), entity_id: EntityIdentity::single("entity-1"), schema_key: "test_schema".to_string(), file_id: None, snapshot_ref: None, metadata_ref: None, created_at: "2026-01-01T00:00:00Z".to_string(), }; let encoded = encode_change_ref(change.as_ref()).expect("change 
should encode"); let decoded = decode_change(&encoded).expect("change should decode"); assert_eq!(decoded, change); } #[test] fn change_pack_compacts_shared_shape_and_commit_id_prefix() { let changes = [ Change { id: "commit-1:change-1".to_string(), entity_id: EntityIdentity::single("entity-1"), schema_key: "test_schema".to_string(), file_id: Some("file-1".to_string()), snapshot_ref: Some(JsonRef::from_hash_bytes([1; 32])), metadata_ref: None, created_at: "2026-01-01T00:00:00Z".to_string(), }, Change { id: "external-change".to_string(), entity_id: EntityIdentity::single("entity-2"), schema_key: "test_schema".to_string(), file_id: Some("file-1".to_string()), snapshot_ref: None, metadata_ref: Some(JsonRef::from_hash_bytes([2; 32])), created_at: "2026-01-02T00:00:00Z".to_string(), }, ]; let encoded = encode_change_pack( "commit-1", 7, &changes.iter().map(Change::as_ref).collect::>(), ) .expect("pack should encode"); let (commit_id, pack_id, decoded) = decode_change_pack(&encoded).expect("pack should decode"); assert_eq!(commit_id, "commit-1"); assert_eq!(pack_id, 7); assert_eq!(decoded, changes); let mut cursor = ByteCursor::new(&encoded); cursor .expect_magic(CHANGE_PACK_MAGIC, "change pack") .unwrap(); assert_eq!(cursor.read_var_string("commit_id").unwrap(), "commit-1"); assert_eq!(cursor.read_u32("pack_id").unwrap(), 7); assert_eq!(cursor.read_var_usize("shape_count").unwrap(), 1); assert_eq!(cursor.read_var_string("schema_key").unwrap(), "test_schema"); assert_eq!( cursor .read_optional_var_string("file_id") .unwrap() .as_deref(), Some("file-1") ); assert_eq!(cursor.read_var_usize("change_count").unwrap(), 2); assert_eq!( cursor.read_u8("change_id tag").unwrap(), CHANGE_ID_COMMIT_SUFFIX ); assert_eq!(cursor.read_var_string("change_id").unwrap(), ":change-1"); } #[test] fn change_pack_rejects_overlong_varint() { let mut encoded = Vec::new(); encoded.extend_from_slice(CHANGE_PACK_MAGIC); encoded.extend_from_slice(&[0x80, 0x80, 0x80, 0x80, 0x80]); let error = decode_change_pack(&encoded).expect_err("overlong varint should reject"); assert!( error.to_string().contains("varint exceeds u32"), "error should mention overlong varint: {error}" ); } #[test] fn change_pack_rejects_varint_above_u32() { let mut encoded = Vec::new(); encoded.extend_from_slice(CHANGE_PACK_MAGIC); encoded.extend_from_slice(&[0xff, 0xff, 0xff, 0xff, 0x1f]); let error = decode_change_pack(&encoded).expect_err("too-large varint should reject"); assert!( error.to_string().contains("varint exceeds u32"), "error should mention oversized varint: {error}" ); } #[test] fn change_pack_rejects_non_canonical_varint() { let mut encoded = Vec::new(); encoded.extend_from_slice(CHANGE_PACK_MAGIC); encoded.extend_from_slice(&[0x80, 0x00]); let error = decode_change_pack(&encoded).expect_err("non-canonical varint should reject"); assert!( error.to_string().contains("non-canonical varint"), "error should mention non-canonical varint: {error}" ); } #[test] fn change_codec_rejects_invalid_optional_tag() { let change = Change { id: "change-1".to_string(), entity_id: EntityIdentity::single("entity-1"), schema_key: "test_schema".to_string(), file_id: None, snapshot_ref: None, metadata_ref: None, created_at: "2026-01-01T00:00:00Z".to_string(), }; let mut encoded = encode_change_ref(change.as_ref()).expect("change should encode"); let mut cursor = ByteCursor::new(&encoded); cursor.expect_magic(CHANGE_MAGIC, "change").unwrap(); cursor.read_string("id").unwrap(); cursor.read_string("entity_id").unwrap(); cursor.read_string("schema_key").unwrap(); 
let file_tag_offset = cursor.offset; encoded[file_tag_offset] = 2; let error = decode_change(&encoded).expect_err("invalid optional tag should fail"); assert!( error.to_string().contains("invalid option tag"), "error should mention invalid tag: {error}" ); } #[test] fn change_codec_rejects_truncated_json_ref() { let change = Change { id: "change-1".to_string(), entity_id: EntityIdentity::single("entity-1"), schema_key: "test_schema".to_string(), file_id: None, snapshot_ref: Some(JsonRef::from_hash_bytes([1; 32])), metadata_ref: None, created_at: "2026-01-01T00:00:00Z".to_string(), }; let mut encoded = encode_change_ref(change.as_ref()).expect("change should encode"); let mut cursor = ByteCursor::new(&encoded); cursor.expect_magic(CHANGE_MAGIC, "change").unwrap(); cursor.read_string("id").unwrap(); cursor.read_string("entity_id").unwrap(); cursor.read_string("schema_key").unwrap(); cursor.read_optional_string("file_id").unwrap(); cursor.read_u8("snapshot_ref").unwrap(); encoded.truncate(cursor.offset + 16); let error = decode_change(&encoded).expect_err("truncated ref should fail"); assert!( error.to_string().contains("truncated ref"), "error should mention truncation: {error}" ); } #[test] fn change_codec_rejects_trailing_bytes() { let change = Change { id: "change-1".to_string(), entity_id: EntityIdentity::single("entity-1"), schema_key: "test_schema".to_string(), file_id: None, snapshot_ref: None, metadata_ref: None, created_at: "2026-01-01T00:00:00Z".to_string(), }; let mut encoded = encode_change_ref(change.as_ref()).expect("change should encode"); encoded.push(0); let error = decode_change(&encoded).expect_err("trailing bytes should fail"); assert!( error.to_string().contains("trailing bytes"), "error should mention trailing bytes: {error}" ); } } ================================================ FILE: packages/engine/src/commit_store/context.rs ================================================ use crate::commit_store::{ Change, ChangeIndexEntry, ChangeLocator, ChangeRef, ChangeScanRequest, Commit, CommitDraftRef, LocatedChange, StagedCommitStoreCommit, }; use crate::storage::{StorageReader, StorageWriteSet}; use crate::LixError; use std::collections::{BTreeMap, BTreeSet}; use tokio::sync::Mutex; /// Canonical physical storage boundary for commits and their changes. #[derive(Clone, Copy, Debug, Default)] pub(crate) struct CommitStoreContext; impl CommitStoreContext { pub(crate) fn new() -> Self { Self } /// Creates a commit-store writer over read visibility and a pending write set. pub(crate) fn writer<'a, S>( &self, store: &'a mut S, writes: &'a mut StorageWriteSet, ) -> CommitStoreWriter<'a, S> where S: StorageReader + ?Sized, { CommitStoreWriter { store, writes } } /// Creates a commit-store reader over a storage snapshot or transaction. 
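/// The reader takes ownership of the store and keeps it behind an async
/// mutex so its read methods can take `&self` while still getting `&mut`
/// access to the underlying `StorageReader`. A standalone sketch of that
/// wrapping pattern (the `Counter` store below is made up for illustration):
///
/// ```ignore
/// use tokio::sync::Mutex;
///
/// struct Counter { next: u32 }
/// impl Counter {
///     fn read_next(&mut self) -> u32 { self.next += 1; self.next }
/// }
///
/// // The wrapper exposes `&self` methods and locks for mutable access inside.
/// struct CounterReader { store: Mutex<Counter> }
/// impl CounterReader {
///     async fn next(&self) -> u32 { self.store.lock().await.read_next() }
/// }
/// ```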
pub(crate) fn reader(&self, store: S) -> CommitStoreReader where S: StorageReader, { CommitStoreReader { store: Mutex::new(store), } } pub(crate) async fn load_commit_from( &self, store: &mut (impl StorageReader + ?Sized), commit_id: &str, ) -> Result, LixError> { crate::commit_store::storage::load_commit(store, commit_id).await } pub(crate) async fn load_change_pack_from( &self, store: &mut (impl StorageReader + ?Sized), commit_id: &str, pack_id: u32, ) -> Result>, LixError> { crate::commit_store::storage::load_change_pack(store, commit_id, pack_id).await } pub(crate) async fn load_membership_pack_from( &self, store: &mut (impl StorageReader + ?Sized), commit_id: &str, pack_id: u32, ) -> Result>, LixError> { crate::commit_store::storage::load_membership_pack(store, commit_id, pack_id).await } } /// Commit-store reader over a storage snapshot or transaction. pub(crate) struct CommitStoreReader { store: Mutex, } impl CommitStoreReader where S: StorageReader, { pub(crate) async fn load_change_index_entries( &self, change_ids: &[String], ) -> Result>, LixError> { crate::commit_store::storage::load_change_index_entries( &mut *self.store.lock().await, change_ids, ) .await } pub(crate) async fn load_commit( &self, commit_id: &str, ) -> Result, LixError> { crate::commit_store::storage::load_commit(&mut *self.store.lock().await, commit_id).await } pub(crate) async fn scan_commits(&self) -> Result, LixError> { crate::commit_store::storage::scan_commits(&mut *self.store.lock().await).await } pub(crate) async fn load_change_pack( &self, commit_id: &str, pack_id: u32, ) -> Result>, LixError> { crate::commit_store::storage::load_change_pack( &mut *self.store.lock().await, commit_id, pack_id, ) .await } pub(crate) async fn load_membership_pack( &self, commit_id: &str, pack_id: u32, ) -> Result>, LixError> { crate::commit_store::storage::load_membership_pack( &mut *self.store.lock().await, commit_id, pack_id, ) .await } pub(crate) async fn load_changes( &self, change_ids: &[String], ) -> Result>, LixError> { if change_ids.is_empty() { return Ok(Vec::new()); } let mut store = self.store.lock().await; let entries = crate::commit_store::storage::load_change_index_entries(&mut *store, change_ids) .await?; let mut changes = Vec::with_capacity(entries.len()); let mut commits_by_id = BTreeMap::new(); let mut packs_by_locator = BTreeMap::new(); for (change_id, entry) in change_ids.iter().zip(entries) { changes.push(match entry { Some(ChangeIndexEntry::CommitHeader { commit_id, .. }) => { if !commits_by_id.contains_key(&commit_id) { let commit = crate::commit_store::storage::load_commit(&mut *store, &commit_id) .await?; commits_by_id.insert(commit_id.clone(), commit); } commits_by_id .get(&commit_id) .cloned() .flatten() .map(commit_header_change) } Some(ChangeIndexEntry::PackedChange { locator }) => Some( load_change_by_locator_cached( &mut *store, &mut packs_by_locator, &locator, change_id, ) .await?, ), None => None, }); } Ok(changes) } pub(crate) async fn load_located_changes( &self, change_ids: &[String], ) -> Result>, LixError> { if change_ids.is_empty() { return Ok(Vec::new()); } let mut store = self.store.lock().await; let entries = crate::commit_store::storage::load_change_index_entries(&mut *store, change_ids) .await?; let mut changes = Vec::with_capacity(entries.len()); let mut commits_by_id = BTreeMap::new(); let mut packs_by_locator = BTreeMap::new(); for (change_id, entry) in change_ids.iter().zip(entries) { changes.push(match entry { Some(ChangeIndexEntry::CommitHeader { commit_id, .. 
}) => { if !commits_by_id.contains_key(&commit_id) { let commit = crate::commit_store::storage::load_commit(&mut *store, &commit_id) .await?; commits_by_id.insert(commit_id.clone(), commit); } commits_by_id .get(&commit_id) .cloned() .flatten() .map(|commit| located_commit_header_change(commit, 0)) } Some(ChangeIndexEntry::PackedChange { locator }) => Some(LocatedChange { record: load_change_by_locator_cached( &mut *store, &mut packs_by_locator, &locator, change_id, ) .await?, source_commit_id: locator.source_commit_id, source_pack_id: locator.source_pack_id, }), None => None, }); } Ok(changes) } pub(crate) async fn load_commit_changes( &self, commit_id: &str, ) -> Result, LixError> { let mut store = self.store.lock().await; let Some(commit) = crate::commit_store::storage::load_commit(&mut *store, commit_id).await? else { return Ok(Vec::new()); }; let mut changes = Vec::new(); for pack_id in 0..commit.change_pack_count { let Some(mut pack_changes) = crate::commit_store::storage::load_change_pack(&mut *store, commit_id, pack_id) .await? else { return Err(missing_pack_error("change", commit_id, pack_id)); }; changes.append(&mut pack_changes); } for pack_id in 0..commit.membership_pack_count { let Some(locators) = crate::commit_store::storage::load_membership_pack(&mut *store, commit_id, pack_id) .await? else { return Err(missing_pack_error("membership", commit_id, pack_id)); }; for locator in locators { let change = load_change_by_locator(&mut *store, &locator, &locator.change_id).await?; changes.push(change); } } Ok(changes) } pub(crate) async fn scan_changes( &self, request: &ChangeScanRequest, ) -> Result, LixError> { scan_changes_from_commit_store(&mut *self.store.lock().await, request).await } } /// Commit-store writer over read visibility and a transaction-local write set. pub(crate) struct CommitStoreWriter<'a, S: ?Sized> { store: &'a mut S, writes: &'a mut StorageWriteSet, } struct PendingCommitDraft<'a> { commit: CommitDraftRef<'a>, authored_changes: Vec>, adopted_changes: Vec>, } impl CommitStoreWriter<'_, S> where S: StorageReader + ?Sized, { /// Validates and stages canonical commit-store writes for complete commits. /// /// Callers provide logical commit facts and borrowed change facts. The /// commit store owns change-id uniqueness, adoption resolution, pack /// locators, and physical namespace writes. pub(crate) async fn stage_commit_draft<'a>( &mut self, commit: CommitDraftRef<'a>, authored_changes: Vec>, adopted_changes: Vec>, ) -> Result { let mut staged = self .stage_commit_drafts([(commit, authored_changes, adopted_changes)]) .await?; staged.pop().ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "commit-store staged no result for one commit draft", ) }) } /// Validates and stages a tracked commit whose authored rows will be stored /// in the tracked-state delta pack instead of a duplicate commit-store pack. pub(crate) async fn stage_tracked_commit_draft<'a>( &mut self, commit: CommitDraftRef<'a>, authored_changes: Vec>, adopted_changes: Vec>, ) -> Result { let mut staged = self .stage_tracked_commit_drafts([(commit, authored_changes, adopted_changes)]) .await?; staged.pop().ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "commit-store staged no result for one tracked commit draft", ) }) } /// Validates and stages multiple commit drafts as one commit-store batch. 
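    /// Each element is the same `(commit, authored_changes, adopted_changes)`
    /// triple accepted by `stage_commit_draft`; within one batch a change id
    /// may be authored by at most one draft. A standalone sketch of how a
    /// caller might split change ids into authored vs. adopted (the ids are
    /// illustrative, not fixtures from this crate):
    ///
    /// ```ignore
    /// use std::collections::BTreeSet;
    ///
    /// // Ids of changes that already live in an earlier commit's packs.
    /// let already_stored: BTreeSet<&str> = BTreeSet::from(["change-root"]);
    ///
    /// let mut authored = Vec::new();
    /// let mut adopted = Vec::new();
    /// for change_id in ["change-root", "change-head"] {
    ///     if already_stored.contains(change_id) {
    ///         adopted.push(change_id); // referenced through a locator, not re-packed
    ///     } else {
    ///         authored.push(change_id); // newly minted, packed under this commit
    ///     }
    /// }
    /// assert_eq!(authored, vec!["change-head"]);
    /// assert_eq!(adopted, vec!["change-root"]);
    /// ```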
pub(crate) async fn stage_commit_drafts<'a>( &mut self, commits: impl IntoIterator, Vec>, Vec>)>, ) -> Result, LixError> { self.stage_commit_drafts_with_authored_pack(commits, true) .await } /// Validates and stages multiple tracked commit drafts whose authored rows /// will be stored in tracked-state delta packs. pub(crate) async fn stage_tracked_commit_drafts<'a>( &mut self, commits: impl IntoIterator, Vec>, Vec>)>, ) -> Result, LixError> { self.stage_commit_drafts_with_authored_pack(commits, false) .await } async fn stage_commit_drafts_with_authored_pack<'a>( &mut self, commits: impl IntoIterator, Vec>, Vec>)>, write_authored_change_pack: bool, ) -> Result, LixError> { let commits = commits .into_iter() .map( |(commit, authored_changes, adopted_changes)| PendingCommitDraft { commit, authored_changes, adopted_changes, }, ) .collect::>(); let adopted_locators = validate_stage_commits(self.store, &commits).await?; let mut staged = Vec::with_capacity(commits.len()); for commit in commits { let mut adopted_changes = Vec::with_capacity(commit.adopted_changes.len()); for change in &commit.adopted_changes { let Some(locator) = adopted_locators.get(change.id) else { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "validated adopted commit-store change id '{}' has no locator", change.id ), )); }; adopted_changes.push(locator.clone()); } staged.push(if write_authored_change_pack { crate::commit_store::storage::stage_commit( self.writes, commit.commit, commit.authored_changes, adopted_changes, )? } else { crate::commit_store::storage::stage_commit_with_external_authored_pack( self.writes, commit.commit, commit.authored_changes, adopted_changes, )? }); } Ok(staged) } } async fn validate_stage_commits<'a>( store: &mut (impl StorageReader + ?Sized), commits: &[PendingCommitDraft<'a>], ) -> Result, LixError> { validate_new_changes_absent(store, commits).await?; validate_adopted_changes_present(store, commits).await } async fn scan_changes_from_commit_store( store: &mut (impl StorageReader + ?Sized), request: &ChangeScanRequest, ) -> Result, LixError> { let limit = request.limit.unwrap_or(usize::MAX); let commits = crate::commit_store::storage::scan_commits(store).await?; let mut changes = Vec::new(); for commit in commits { if changes.len() >= limit { break; } for pack_id in 0..commit.change_pack_count { if changes.len() >= limit { break; } let Some(mut pack_changes) = crate::commit_store::storage::load_change_pack(store, &commit.id, pack_id).await? else { return Err(missing_pack_error("change", &commit.id, pack_id)); }; let remaining = limit - changes.len(); if pack_changes.len() > remaining { pack_changes.truncate(remaining); } changes.extend(pack_changes.into_iter().map(|record| LocatedChange { record, source_commit_id: commit.id.clone(), source_pack_id: pack_id, })); } if changes.len() < limit { changes.push(located_commit_header_change(commit, 0)); } } Ok(changes) } async fn load_change_by_locator( store: &mut (impl StorageReader + ?Sized), locator: &ChangeLocator, expected_change_id: &str, ) -> Result { let Some(changes) = crate::commit_store::storage::load_change_pack( store, &locator.source_commit_id, locator.source_pack_id, ) .await? else { return Err(missing_pack_error( "change", &locator.source_commit_id, locator.source_pack_id, )); }; let change = changes .get(usize::try_from(locator.source_ordinal).map_err(|_| { LixError::new( LixError::CODE_INTERNAL_ERROR, "commit-store change locator ordinal does not fit usize", ) })?) 
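        // Descriptive note on the step below: `changes.get(ordinal)` returns
        // None when the locator's ordinal points past the decoded pack, i.e.
        // the change index and the stored pack bytes disagree. That case is
        // surfaced as an internal error rather than as a missing change.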
.ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "commit-store change locator for '{}' points past pack '{}' in commit '{}'", expected_change_id, locator.source_pack_id, locator.source_commit_id ), ) })?; if change.id != expected_change_id || change.id != locator.change_id { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "commit-store change locator expected '{}' but found '{}'", expected_change_id, change.id ), )); } Ok(change.clone()) } async fn load_change_by_locator_cached( store: &mut (impl StorageReader + ?Sized), packs_by_locator: &mut BTreeMap<(String, u32), Vec>, locator: &ChangeLocator, expected_change_id: &str, ) -> Result { let key = (locator.source_commit_id.clone(), locator.source_pack_id); if !packs_by_locator.contains_key(&key) { let Some(changes) = crate::commit_store::storage::load_change_pack( store, &locator.source_commit_id, locator.source_pack_id, ) .await? else { return Err(missing_pack_error( "change", &locator.source_commit_id, locator.source_pack_id, )); }; packs_by_locator.insert(key.clone(), changes); } let changes = packs_by_locator.get(&key).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "commit-store change pack cache lost a loaded pack", ) })?; let change = changes .get(usize::try_from(locator.source_ordinal).map_err(|_| { LixError::new( LixError::CODE_INTERNAL_ERROR, "commit-store change locator ordinal does not fit usize", ) })?) .ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "commit-store change locator for '{}' points past pack '{}' in commit '{}'", expected_change_id, locator.source_pack_id, locator.source_commit_id ), ) })?; if change.id != expected_change_id || change.id != locator.change_id { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "commit-store change locator expected '{}' but found '{}'", expected_change_id, change.id ), )); } Ok(change.clone()) } fn commit_header_change(commit: Commit) -> Change { Change { id: commit.change_id, entity_id: crate::entity_identity::EntityIdentity::single(commit.id), schema_key: "lix_commit".to_string(), file_id: None, snapshot_ref: None, metadata_ref: None, created_at: commit.created_at, } } fn located_commit_header_change(commit: Commit, source_pack_id: u32) -> LocatedChange { let source_commit_id = commit.id.clone(); LocatedChange { record: commit_header_change(commit), source_commit_id, source_pack_id, } } fn missing_pack_error(label: &str, commit_id: &str, pack_id: u32) -> LixError { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("commit-store missing {label} pack ({commit_id}, {pack_id})"), ) } async fn validate_new_changes_absent<'a>( store: &mut (impl StorageReader + ?Sized), commits: &[PendingCommitDraft<'a>], ) -> Result<(), LixError> { let mut change_ids = Vec::new(); let mut seen_change_ids = BTreeSet::new(); for commit in commits { if !seen_change_ids.insert(commit.commit.change_id) { return Err(duplicate_change_id_error(commit.commit.change_id)); } change_ids.push(commit.commit.change_id.to_string()); for change in &commit.authored_changes { if !seen_change_ids.insert(change.id) { return Err(duplicate_change_id_error(change.id)); } change_ids.push(change.id.to_string()); } } let reader = CommitStoreContext::new().reader(&mut *store); let existing_changes = reader.load_change_index_entries(&change_ids).await?; for (change_id, existing) in change_ids.iter().zip(existing_changes) { if existing.is_some() { return Err(LixError::new( LixError::CODE_UNIQUE, format!("commit-store change id '{}' 
already exists", change_id), )); } } Ok(()) } async fn validate_adopted_changes_present<'a>( store: &mut (impl StorageReader + ?Sized), commits: &[PendingCommitDraft<'a>], ) -> Result, LixError> { let mut expected_changes = Vec::new(); let mut seen_change_ids = BTreeSet::new(); for commit in commits { for change in &commit.adopted_changes { if !seen_change_ids.insert(change.id) { return Err(LixError::new( LixError::CODE_UNIQUE, format!( "adopted commit-store change id '{}' appears more than once in the same transaction", change.id ), )); } expected_changes.push(*change); } } if expected_changes.is_empty() { return Ok(BTreeMap::new()); } let change_ids = expected_changes .iter() .map(|change| change.id.to_string()) .collect::>(); let reader = CommitStoreContext::new().reader(&mut *store); let existing_entries = reader.load_change_index_entries(&change_ids).await?; let mut locators_by_change_id = BTreeMap::new(); for (expected, existing) in expected_changes.into_iter().zip(existing_entries) { match existing { Some(ChangeIndexEntry::PackedChange { locator }) => { let existing_change = load_packed_change(&reader, &locator, expected.id).await?; if !change_matches_ref(&existing_change, expected) { let entity_id = existing_change .entity_id .as_json_array_text() .unwrap_or_else(|_| "".to_string()); return Err(LixError::new( LixError::CODE_UNIQUE, format!( "adopted commit-store change id '{}' exists with different content for schema '{}' entity '{}'", expected.id, existing_change.schema_key, entity_id ), )); } locators_by_change_id.insert(expected.id, locator); } Some(ChangeIndexEntry::CommitHeader { .. }) => { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "adopted commit-store change id '{}' resolves to a commit header, not a packed state change", expected.id ), )); } None => { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "adopted commit-store change id '{}' does not exist", expected.id ), )); } } } Ok(locators_by_change_id) } async fn load_packed_change( reader: &CommitStoreReader, locator: &ChangeLocator, expected_change_id: &str, ) -> Result where S: StorageReader, { let pack = reader .load_change_pack(&locator.source_commit_id, locator.source_pack_id) .await? .ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "commit-store change pack '{}:{}' for change '{}' is missing", locator.source_commit_id, locator.source_pack_id, expected_change_id ), ) })?; let change = pack .get(usize::try_from(locator.source_ordinal).map_err(|_| { LixError::new( LixError::CODE_INTERNAL_ERROR, "commit-store change locator ordinal exceeds usize", ) })?) .ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "commit-store change locator '{}' points past pack length", expected_change_id ), ) })? 
.clone(); if change.id != expected_change_id { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "commit-store change locator expected '{}' but loaded '{}'", expected_change_id, change.id ), )); } Ok(change) } fn change_matches_ref(change: &Change, expected: ChangeRef<'_>) -> bool { change.id == expected.id && &change.entity_id == expected.entity_id && change.schema_key == expected.schema_key && change.file_id.as_deref() == expected.file_id && change.snapshot_ref.as_ref() == expected.snapshot_ref && change.metadata_ref.as_ref() == expected.metadata_ref && change.created_at == expected.created_at } fn duplicate_change_id_error(change_id: &str) -> LixError { LixError::new( LixError::CODE_UNIQUE, format!( "commit-store change id '{}' appears more than once in the same transaction", change_id ), ) } #[cfg(test)] mod tests { use std::sync::Arc; use crate::backend::testing::UnitTestBackend; use crate::commit_store::{ ChangeIndexEntry, ChangeLocator, CommitDraftRef, CommitStoreContext, }; use crate::entity_identity::EntityIdentity; use crate::json_store::JsonRef; use crate::storage::{StorageContext, StorageWriteSet, StorageWriteTransaction}; use super::*; #[tokio::test] async fn load_changes_materializes_commit_header_and_packed_change() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); let parent_ids = vec!["parent-1".to_string()]; let author_account_ids = vec!["author-1".to_string()]; let commit_id = "commit-1".to_string(); let commit_change_id = "commit-change-1".to_string(); let authored_change = test_change("change-1"); CommitStoreContext::new() .writer(transaction.as_mut(), &mut writes) .stage_commit_draft( CommitDraftRef { id: &commit_id, change_id: &commit_change_id, parent_ids: &parent_ids, author_account_ids: &author_account_ids, created_at: "2026-01-01T00:00:00Z", }, vec![authored_change.as_ref()], Vec::new(), ) .await .expect("commit should stage"); writes .apply(&mut transaction.as_mut()) .await .expect("writes should apply"); transaction.commit().await.expect("commit should persist"); let reader = CommitStoreContext::new().reader(storage.clone()); let index_entries = reader .load_change_index_entries(&[ commit_change_id.clone(), authored_change.id.clone(), "missing-change".to_string(), ]) .await .expect("index entries should load"); assert_eq!( index_entries, vec![ Some(ChangeIndexEntry::CommitHeader { commit_id: commit_id.clone(), change_id: commit_change_id.clone(), }), Some(ChangeIndexEntry::PackedChange { locator: ChangeLocator { source_commit_id: commit_id.clone(), source_pack_id: 0, source_ordinal: 0, change_id: authored_change.id.clone(), }, }), None, ] ); let changes = reader .load_changes(&[ commit_change_id.clone(), authored_change.id.clone(), "missing-change".to_string(), ]) .await .expect("changes should load"); assert_eq!(changes.len(), 3); let header_change = changes[0] .as_ref() .expect("commit-header change should materialize"); assert_eq!(header_change.id, commit_change_id); assert_eq!(header_change.entity_id, EntityIdentity::single(&commit_id)); assert_eq!(header_change.schema_key, "lix_commit"); assert_eq!(header_change.file_id, None); assert_eq!(header_change.snapshot_ref, None); assert_eq!(header_change.metadata_ref, None); assert_eq!(header_change.created_at, "2026-01-01T00:00:00Z"); assert_eq!( changes[1] .as_ref() .expect("packed change should decode from change pack"), 
&authored_change ); assert_eq!(changes[2], None); } #[tokio::test] async fn load_commit_changes_returns_equivalent_authored_and_adopted_changes() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let authored_change = test_change("shared-change-1"); stage_test_commit( storage.clone(), "source-commit", "source-commit-change", vec![authored_change.as_ref()], Vec::new(), ) .await; stage_test_commit( storage.clone(), "adopting-commit", "adopting-commit-change", Vec::new(), vec![authored_change.as_ref()], ) .await; let reader = CommitStoreContext::new().reader(storage.clone()); let source_changes = reader .load_commit_changes("source-commit") .await .expect("source commit changes should load"); let adopting_changes = reader .load_commit_changes("adopting-commit") .await .expect("adopting commit changes should load"); assert_eq!(source_changes, vec![authored_change.clone()]); assert_eq!(adopting_changes, source_changes); assert_eq!( reader .load_membership_pack("adopting-commit", 0) .await .expect("membership pack should load"), Some(vec![ChangeLocator { source_commit_id: "source-commit".to_string(), source_pack_id: 0, source_ordinal: 0, change_id: authored_change.id.clone(), }]) ); } async fn stage_test_commit( storage: StorageContext, commit_id: &str, commit_change_id: &str, authored_changes: Vec>, adopted_changes: Vec>, ) { let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); let parent_ids = Vec::new(); let author_account_ids = Vec::new(); CommitStoreContext::new() .writer(transaction.as_mut(), &mut writes) .stage_commit_draft( CommitDraftRef { id: commit_id, change_id: commit_change_id, parent_ids: &parent_ids, author_account_ids: &author_account_ids, created_at: "2026-01-01T00:00:00Z", }, authored_changes, adopted_changes, ) .await .expect("commit should stage"); writes .apply(&mut transaction.as_mut()) .await .expect("writes should apply"); transaction.commit().await.expect("commit should persist"); } fn test_change(id: &str) -> Change { Change { id: id.to_string(), entity_id: EntityIdentity::single("entity-1"), schema_key: "test_schema".to_string(), file_id: Some("file-1".to_string()), snapshot_ref: Some(JsonRef::from_hash_bytes([1; 32])), metadata_ref: Some(JsonRef::from_hash_bytes([2; 32])), created_at: "2026-01-02T00:00:00Z".to_string(), } } } ================================================ FILE: packages/engine/src/commit_store/materialization.rs ================================================ use crate::commit_store::{LocatedChange, MaterializedChange}; use crate::json_store::{JsonLoadRequestRef, JsonReadScopeRef, JsonRef, JsonStoreReader}; use crate::storage::StorageReader; use crate::{parse_row_metadata, LixError}; pub(crate) async fn materialize_change( json_reader: &mut JsonStoreReader, located: LocatedChange, ) -> Result where S: StorageReader, { let change = located.record; let pack_ids = [located.source_pack_id]; let scope = JsonReadScopeRef::CommitPacks { commit_id: &located.source_commit_id, pack_ids: &pack_ids, }; let snapshot_content = load_optional_json_text( json_reader, change.snapshot_ref.as_ref(), scope, "snapshot_ref", ) .await?; let metadata = match load_optional_json_text( json_reader, change.metadata_ref.as_ref(), scope, "metadata_ref", ) .await? 
{ Some(value) => Some(parse_row_metadata( &value, "commit_store change metadata_ref", )?), None => None, }; Ok(MaterializedChange { id: change.id, entity_id: change.entity_id, schema_key: change.schema_key, file_id: change.file_id, snapshot_content, metadata, created_at: change.created_at, }) } async fn load_optional_json_text( json_reader: &mut JsonStoreReader, json_ref: Option<&JsonRef>, scope: JsonReadScopeRef<'_>, field: &str, ) -> Result, LixError> where S: StorageReader, { let Some(json_ref) = json_ref else { return Ok(None); }; let batch = json_reader .load_bytes_many(JsonLoadRequestRef { refs: std::slice::from_ref(json_ref), scope, }) .await?; let Some(bytes) = batch.into_values().into_iter().next().flatten() else { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "commit_store change {field} '{}' is missing", json_ref.to_hex() ), )); }; String::from_utf8(bytes).map(Some).map_err(|error| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("commit_store change {field} is not UTF-8 JSON: {error}"), ) }) } ================================================ FILE: packages/engine/src/commit_store/mod.rs ================================================ pub(crate) mod codec; mod context; mod materialization; pub(crate) mod storage; mod types; #[allow(unused_imports)] pub(crate) use context::{CommitStoreContext, CommitStoreReader, CommitStoreWriter}; #[allow(unused_imports)] pub(crate) use materialization::materialize_change; #[allow(unused_imports)] pub(crate) use types::{ Change, ChangeIndexEntry, ChangeLocator, ChangeLocatorRef, ChangePack, ChangePackView, ChangeRef, ChangeScanRequest, Commit, CommitDraftRef, LocatedChange, MaterializedChange, MembershipPack, MembershipPackView, StagedCommitStoreCommit, StoredCommitRef, }; ================================================ FILE: packages/engine/src/commit_store/storage.rs ================================================ use crate::commit_store::{ Change, ChangeIndexEntry, ChangeLocator, ChangeRef, Commit, CommitDraftRef, StagedCommitStoreCommit, StoredCommitRef, }; use crate::storage::{ KvGetGroup, KvGetRequest, KvScanRange, KvScanRequest, StorageReader, StorageWriteSet, }; use crate::LixError; use std::collections::{BTreeMap, BTreeSet}; pub(crate) const COMMIT_NAMESPACE: &str = "commit_store.commit"; pub(crate) const CHANGE_PACK_NAMESPACE: &str = "commit_store.change_pack"; pub(crate) const MEMBERSHIP_PACK_NAMESPACE: &str = "commit_store.membership_pack"; const SINGLE_PACK_ID: u32 = 0; pub(crate) fn stage_commit( writes: &mut StorageWriteSet, commit: CommitDraftRef<'_>, authored_changes: Vec>, adopted_changes: Vec, ) -> Result { stage_commit_with_authored_pack(writes, commit, authored_changes, adopted_changes, true) } pub(crate) fn stage_commit_with_external_authored_pack( writes: &mut StorageWriteSet, commit: CommitDraftRef<'_>, authored_changes: Vec>, adopted_changes: Vec, ) -> Result { stage_commit_with_authored_pack(writes, commit, authored_changes, adopted_changes, false) } fn stage_commit_with_authored_pack( writes: &mut StorageWriteSet, commit: CommitDraftRef<'_>, authored_changes: Vec>, adopted_changes: Vec, write_authored_change_pack: bool, ) -> Result { let stored_commit = StoredCommitRef { id: commit.id, change_id: commit.change_id, parent_ids: commit.parent_ids, author_account_ids: commit.author_account_ids, created_at: commit.created_at, change_pack_count: if authored_changes.is_empty() { 0 } else { 1 }, membership_pack_count: if adopted_changes.is_empty() { 0 } else { 1 }, }; writes.put( COMMIT_NAMESPACE, 
commit_key(commit.id), crate::commit_store::codec::encode_commit_ref(stored_commit)?, ); let mut authored_locators = Vec::with_capacity(authored_changes.len()); if !authored_changes.is_empty() { if write_authored_change_pack { writes.put( CHANGE_PACK_NAMESPACE, pack_key(commit.id, SINGLE_PACK_ID)?, crate::commit_store::codec::encode_change_pack( commit.id, SINGLE_PACK_ID, &authored_changes, )?, ); } for (source_ordinal, change) in authored_changes.iter().enumerate() { authored_locators.push(ChangeLocator { source_commit_id: commit.id.to_string(), source_pack_id: SINGLE_PACK_ID, source_ordinal: u32::try_from(source_ordinal).map_err(|_| { LixError::new( LixError::CODE_INTERNAL_ERROR, "commit-store change pack ordinal exceeds u32", ) })?, change_id: change.id.to_string(), }); } } if !adopted_changes.is_empty() { writes.put( MEMBERSHIP_PACK_NAMESPACE, pack_key(commit.id, SINGLE_PACK_ID)?, crate::commit_store::codec::encode_membership_pack( commit.id, SINGLE_PACK_ID, adopted_changes.iter().map(ChangeLocator::as_ref), )?, ); } Ok(StagedCommitStoreCommit { authored_locators, adopted_locators: adopted_changes, }) } pub(crate) async fn load_commit( store: &mut (impl StorageReader + ?Sized), commit_id: &str, ) -> Result, LixError> { let Some(bytes) = get_one(store, COMMIT_NAMESPACE, commit_key(commit_id)).await? else { return Ok(None); }; crate::commit_store::codec::decode_commit(&bytes).map(Some) } pub(crate) async fn scan_commits( store: &mut (impl StorageReader + ?Sized), ) -> Result, LixError> { let page = store .scan_values(KvScanRequest { namespace: COMMIT_NAMESPACE.to_string(), range: KvScanRange::prefix(Vec::new()), after: None, limit: usize::MAX, }) .await?; page.values .iter() .map(|bytes| crate::commit_store::codec::decode_commit(bytes)) .collect() } pub(crate) async fn load_change_pack( store: &mut (impl StorageReader + ?Sized), commit_id: &str, pack_id: u32, ) -> Result>, LixError> { let Some(bytes) = get_one(store, CHANGE_PACK_NAMESPACE, pack_key(commit_id, pack_id)?).await? else { return load_tracked_authored_change_pack(store, commit_id, pack_id).await; }; let (stored_commit_id, stored_pack_id, changes) = crate::commit_store::codec::decode_change_pack(&bytes)?; ensure_pack_identity( "change pack", commit_id, pack_id, &stored_commit_id, stored_pack_id, )?; Ok(Some(changes)) } pub(crate) async fn load_tracked_authored_change_pack( store: &mut (impl StorageReader + ?Sized), commit_id: &str, pack_id: u32, ) -> Result>, LixError> { let Some(delta_entries) = crate::tracked_state::load_delta_pack(store, commit_id).await? 
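    // Descriptive note on the else-branch below: a commit without a
    // tracked-state delta pack has no externally stored authored rows, so the
    // authored change pack is reported as absent (Ok(None)) instead of erroring.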
else { return Ok(None); }; let mut changes_by_ordinal = BTreeMap::::new(); for delta in delta_entries { let locator = &delta.value.change_locator; if locator.source_commit_id != commit_id || locator.source_pack_id != pack_id { continue; } let ordinal = locator.source_ordinal; let change = Change { id: locator.change_id.clone(), entity_id: delta.key.entity_id, schema_key: delta.key.schema_key, file_id: delta.key.file_id, snapshot_ref: delta.value.snapshot_ref, metadata_ref: delta.value.metadata_ref, created_at: delta.value.updated_at, }; if changes_by_ordinal.insert(ordinal, change).is_some() { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "tracked authored change pack ({commit_id}, {pack_id}) has duplicate ordinal {ordinal}" ), )); } } if changes_by_ordinal.is_empty() { return Ok(None); } let mut changes = Vec::with_capacity(changes_by_ordinal.len()); for (expected_ordinal, (ordinal, change)) in (0u32..).zip(changes_by_ordinal) { if ordinal != expected_ordinal { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "tracked authored change pack ({commit_id}, {pack_id}) is missing ordinal {expected_ordinal}" ), )); } changes.push(change); } Ok(Some(changes)) } pub(crate) async fn load_membership_pack( store: &mut (impl StorageReader + ?Sized), commit_id: &str, pack_id: u32, ) -> Result>, LixError> { let Some(bytes) = get_one( store, MEMBERSHIP_PACK_NAMESPACE, pack_key(commit_id, pack_id)?, ) .await? else { return Ok(None); }; let (stored_commit_id, stored_pack_id, members) = crate::commit_store::codec::decode_membership_pack(&bytes)?; ensure_pack_identity( "membership pack", commit_id, pack_id, &stored_commit_id, stored_pack_id, )?; Ok(Some(members)) } pub(crate) async fn load_change_index_entries( store: &mut (impl StorageReader + ?Sized), change_ids: &[String], ) -> Result>, LixError> { if change_ids.is_empty() { return Ok(Vec::new()); } let mut unresolved = change_ids.iter().cloned().collect::>(); let mut entries_by_change_id = BTreeMap::new(); let commits = scan_commits(store).await?; for commit in commits { if unresolved.remove(&commit.change_id) { entries_by_change_id.insert( commit.change_id.clone(), ChangeIndexEntry::CommitHeader { commit_id: commit.id.clone(), change_id: commit.change_id.clone(), }, ); } if unresolved.is_empty() { break; } for pack_id in 0..commit.change_pack_count { let Some(changes) = load_change_pack(store, &commit.id, pack_id).await? else { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "commit-store missing change pack ({}, {pack_id})", commit.id ), )); }; for (source_ordinal, change) in changes.iter().enumerate() { if !unresolved.remove(&change.id) { continue; } entries_by_change_id.insert( change.id.clone(), ChangeIndexEntry::PackedChange { locator: ChangeLocator { source_commit_id: commit.id.clone(), source_pack_id: pack_id, source_ordinal: u32::try_from(source_ordinal).map_err(|_| { LixError::new( LixError::CODE_INTERNAL_ERROR, "commit-store change pack ordinal exceeds u32", ) })?, change_id: change.id.clone(), }, }, ); if unresolved.is_empty() { break; } } if unresolved.is_empty() { break; } } if unresolved.is_empty() { break; } } Ok(change_ids .iter() .map(|change_id| entries_by_change_id.get(change_id).cloned()) .collect()) } async fn get_one( store: &mut (impl StorageReader + ?Sized), namespace: &str, key: Vec, ) -> Result>, LixError> { Ok(store .get_values(KvGetRequest { groups: vec![KvGetGroup { namespace: namespace.to_string(), keys: vec![key], }], }) .await? 
.groups .into_iter() .next() .and_then(|group| group.single_value_owned())) } fn ensure_pack_identity( label: &str, expected_commit_id: &str, expected_pack_id: u32, actual_commit_id: &str, actual_pack_id: u32, ) -> Result<(), LixError> { if actual_commit_id != expected_commit_id || actual_pack_id != expected_pack_id { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "commit-store {label} identity mismatch: expected ({expected_commit_id}, {expected_pack_id}), got ({actual_commit_id}, {actual_pack_id})" ), )); } Ok(()) } fn commit_key(commit_id: &str) -> Vec { commit_id.as_bytes().to_vec() } fn pack_key(commit_id: &str, pack_id: u32) -> Result, LixError> { let commit_id_len = u32::try_from(commit_id.len()).map_err(|_| { LixError::new( LixError::CODE_INTERNAL_ERROR, "commit-store pack key commit id exceeds u32 length", ) })?; let mut key = Vec::with_capacity(8 + commit_id.len()); key.extend_from_slice(&commit_id_len.to_be_bytes()); key.extend_from_slice(commit_id.as_bytes()); key.extend_from_slice(&pack_id.to_be_bytes()); Ok(key) } #[cfg(test)] mod tests { use std::sync::Arc; use crate::backend::testing::UnitTestBackend; use crate::commit_store::CommitDraftRef; use crate::entity_identity::EntityIdentity; use crate::json_store::JsonRef; use crate::storage::{StorageContext, StorageWriteTransaction}; use crate::tracked_state::{TrackedStateContext, TrackedStateDeltaRef}; use super::*; #[tokio::test] async fn stage_commit_writes_all_commit_store_namespaces() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let mut tx = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); let commit = test_commit(); let change = test_change("change-1"); let adopted = ChangeLocator { source_commit_id: "source-commit".to_string(), source_pack_id: 3, source_ordinal: 7, change_id: "adopted-change".to_string(), }; let staged = stage_commit( &mut writes, CommitDraftRef { id: &commit.id, change_id: &commit.change_id, parent_ids: &commit.parent_ids, author_account_ids: &commit.author_account_ids, created_at: &commit.created_at, }, vec![change.as_ref()], vec![adopted.clone()], ) .expect("commit should stage"); writes .apply(&mut tx.as_mut()) .await .expect("writes should apply"); tx.commit().await.expect("commit should succeed"); assert_eq!( staged.authored_locators, vec![ChangeLocator { source_commit_id: "commit-1".to_string(), source_pack_id: 0, source_ordinal: 0, change_id: "change-1".to_string(), }] ); assert_eq!(staged.adopted_locators, vec![adopted.clone()]); let mut reader = storage.clone(); assert_eq!( load_commit(&mut reader, "commit-1") .await .expect("commit should load"), Some(commit) ); assert_eq!( load_change_pack(&mut reader, "commit-1", 0) .await .expect("change pack should load"), Some(vec![change]) ); assert_eq!( load_membership_pack(&mut reader, "commit-1", 0) .await .expect("membership pack should load"), Some(vec![adopted]) ); let index_entries = load_change_index_entries( &mut reader, &["commit-change-1".to_string(), "change-1".to_string()], ) .await .expect("index entries should load"); assert_eq!( index_entries, vec![ Some(ChangeIndexEntry::CommitHeader { commit_id: "commit-1".to_string(), change_id: "commit-change-1".to_string(), }), Some(ChangeIndexEntry::PackedChange { locator: ChangeLocator { source_commit_id: "commit-1".to_string(), source_pack_id: 0, source_ordinal: 0, change_id: "change-1".to_string(), }, }), ] ); } #[tokio::test] async fn tracked_commit_change_pack_loads_from_delta_pack() 
{ let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let mut tx = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); let commit = test_commit(); let change = test_change("change-1"); let staged = stage_commit_with_external_authored_pack( &mut writes, CommitDraftRef { id: &commit.id, change_id: &commit.change_id, parent_ids: &commit.parent_ids, author_account_ids: &commit.author_account_ids, created_at: &commit.created_at, }, vec![change.as_ref()], Vec::new(), ) .expect("tracked commit should stage"); let deltas = [TrackedStateDeltaRef { change: change.as_ref(), locator: staged.authored_locators[0].as_ref(), created_at: "2026-01-01T00:00:00Z", updated_at: "2026-01-02T00:00:00Z", }]; TrackedStateContext::new() .writer(&mut tx.as_mut(), &mut writes) .stage_delta(&commit.id, None, &deltas) .await .expect("tracked delta should stage"); writes .apply(&mut tx.as_mut()) .await .expect("writes should apply"); tx.commit().await.expect("commit should succeed"); let mut reader = storage.clone(); assert_eq!( get_one( &mut reader, CHANGE_PACK_NAMESPACE, pack_key("commit-1", 0).unwrap() ) .await .expect("direct change pack lookup should succeed"), None ); assert_eq!( load_change_pack(&mut reader, "commit-1", 0) .await .expect("tracked change pack should load"), Some(vec![Change { created_at: "2026-01-02T00:00:00Z".to_string(), ..change.clone() }]) ); assert_eq!( load_change_index_entries(&mut reader, &["change-1".to_string()]) .await .expect("index entries should load"), vec![Some(ChangeIndexEntry::PackedChange { locator: staged.authored_locators[0].clone(), })] ); } #[tokio::test] async fn tracked_commit_change_pack_rejects_sparse_delta_ordinals() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let mut tx = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); let commit = test_commit(); let change = test_change("change-1"); let sparse_locator = ChangeLocator { source_commit_id: commit.id.clone(), source_pack_id: 0, source_ordinal: 1, change_id: change.id.clone(), }; let deltas = [TrackedStateDeltaRef { change: change.as_ref(), locator: sparse_locator.as_ref(), created_at: "2026-01-01T00:00:00Z", updated_at: "2026-01-02T00:00:00Z", }]; TrackedStateContext::new() .writer(&mut tx.as_mut(), &mut writes) .stage_delta(&commit.id, None, &deltas) .await .expect("tracked delta should stage"); writes .apply(&mut tx.as_mut()) .await .expect("writes should apply"); tx.commit().await.expect("commit should succeed"); let mut reader = storage.clone(); let error = load_change_pack(&mut reader, "commit-1", 0) .await .expect_err("sparse tracked authored ordinals should reject"); assert!( error.to_string().contains("missing ordinal 0"), "error should mention missing ordinal: {error}" ); } fn test_commit() -> Commit { Commit { id: "commit-1".to_string(), change_id: "commit-change-1".to_string(), parent_ids: vec!["parent-1".to_string()], author_account_ids: Vec::new(), created_at: "2026-01-01T00:00:00Z".to_string(), change_pack_count: 1, membership_pack_count: 1, } } fn test_change(id: &str) -> Change { Change { id: id.to_string(), entity_id: EntityIdentity::single("entity-1"), schema_key: "test_schema".to_string(), file_id: None, snapshot_ref: Some(JsonRef::from_hash_bytes([1; 32])), metadata_ref: None, created_at: "2026-01-01T00:00:00Z".to_string(), } } } ================================================ FILE: packages/engine/src/commit_store/types.rs 
================================================ use crate::entity_identity::EntityIdentity; use crate::json_store::JsonRef; /// Physical append/locality unit for commit metadata and derived commit SQL /// surfaces. #[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)] pub(crate) struct Commit { pub(crate) id: String, pub(crate) change_id: String, pub(crate) parent_ids: Vec, pub(crate) author_account_ids: Vec, pub(crate) created_at: String, pub(crate) change_pack_count: u32, pub(crate) membership_pack_count: u32, } impl Commit { pub(crate) fn as_ref(&self) -> StoredCommitRef<'_> { StoredCommitRef { id: &self.id, change_id: &self.change_id, parent_ids: &self.parent_ids, author_account_ids: &self.author_account_ids, created_at: &self.created_at, change_pack_count: self.change_pack_count, membership_pack_count: self.membership_pack_count, } } } /// Zero-copy view of stored [`Commit`] bytes. #[derive(Debug, Clone, Copy)] pub(crate) struct StoredCommitRef<'a> { pub(crate) id: &'a str, pub(crate) change_id: &'a str, pub(crate) parent_ids: &'a [String], pub(crate) author_account_ids: &'a [String], pub(crate) created_at: &'a str, pub(crate) change_pack_count: u32, pub(crate) membership_pack_count: u32, } /// Zero-copy view of a logical commit supplied before physical packing. #[derive(Debug, Clone, Copy)] pub(crate) struct CommitDraftRef<'a> { pub(crate) id: &'a str, pub(crate) change_id: &'a str, pub(crate) parent_ids: &'a [String], pub(crate) author_account_ids: &'a [String], pub(crate) created_at: &'a str, } /// Logical entity mutation fact stored in a commit change pack. #[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)] pub(crate) struct Change { pub(crate) id: String, pub(crate) entity_id: EntityIdentity, pub(crate) schema_key: String, pub(crate) file_id: Option, pub(crate) snapshot_ref: Option, pub(crate) metadata_ref: Option, pub(crate) created_at: String, } /// Read-boundary view of a commit-store change with JSON refs resolved. #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct MaterializedChange { pub(crate) id: String, pub(crate) entity_id: EntityIdentity, pub(crate) schema_key: String, pub(crate) file_id: Option, pub(crate) snapshot_content: Option, pub(crate) metadata: Option, pub(crate) created_at: String, } /// Commit-store change plus the physical pack that owns its JSON payloads. #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct LocatedChange { pub(crate) record: Change, pub(crate) source_commit_id: String, pub(crate) source_pack_id: u32, } impl Change { pub(crate) fn as_ref(&self) -> ChangeRef<'_> { ChangeRef { id: &self.id, entity_id: &self.entity_id, schema_key: &self.schema_key, file_id: self.file_id.as_deref(), snapshot_ref: self.snapshot_ref.as_ref(), metadata_ref: self.metadata_ref.as_ref(), created_at: &self.created_at, } } } /// Zero-copy view of [`Change`]. #[derive(Debug, Clone, Copy)] pub(crate) struct ChangeRef<'a> { pub(crate) id: &'a str, pub(crate) entity_id: &'a EntityIdentity, pub(crate) schema_key: &'a str, pub(crate) file_id: Option<&'a str>, pub(crate) snapshot_ref: Option<&'a JsonRef>, pub(crate) metadata_ref: Option<&'a JsonRef>, pub(crate) created_at: &'a str, } /// Logical scan request for the `lix_change` SQL surface over commit_store. #[derive(Debug, Clone, Default)] pub(crate) struct ChangeScanRequest { pub(crate) limit: Option, } /// Commit-local physical pack of newly authored change payloads. 
#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)] pub(crate) struct ChangePack { pub(crate) commit_id: String, pub(crate) pack_id: u32, pub(crate) changes: Vec, } impl ChangePack { pub(crate) fn as_view(&self) -> ChangePackView<'_> { ChangePackView { commit_id: &self.commit_id, pack_id: self.pack_id, changes: &self.changes, } } } /// Zero-copy view for a decoded [`ChangePack`]. #[derive(Debug, Clone, Copy)] pub(crate) struct ChangePackView<'a> { pub(crate) commit_id: &'a str, pub(crate) pack_id: u32, pub(crate) changes: &'a [Change], } /// Storage location of an existing change payload. #[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)] pub(crate) struct ChangeLocator { pub(crate) source_commit_id: String, pub(crate) source_pack_id: u32, pub(crate) source_ordinal: u32, pub(crate) change_id: String, } impl ChangeLocator { pub(crate) fn as_ref(&self) -> ChangeLocatorRef<'_> { ChangeLocatorRef { source_commit_id: &self.source_commit_id, source_pack_id: self.source_pack_id, source_ordinal: self.source_ordinal, change_id: &self.change_id, } } } /// Zero-copy view of [`ChangeLocator`]. #[derive(Debug, Clone, Copy)] pub(crate) struct ChangeLocatorRef<'a> { pub(crate) source_commit_id: &'a str, pub(crate) source_pack_id: u32, pub(crate) source_ordinal: u32, pub(crate) change_id: &'a str, } /// Exact lookup entry for a derived-surface-visible change id. #[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)] pub(crate) enum ChangeIndexEntry { CommitHeader { commit_id: String, change_id: String, }, PackedChange { locator: ChangeLocator, }, } /// Commit-local physical pack of adopted/shared membership locators. #[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)] pub(crate) struct MembershipPack { pub(crate) commit_id: String, pub(crate) pack_id: u32, pub(crate) members: Vec, } impl MembershipPack { pub(crate) fn as_view(&self) -> MembershipPackView<'_> { MembershipPackView { commit_id: &self.commit_id, pack_id: self.pack_id, members: &self.members, } } } /// Zero-copy view for a decoded [`MembershipPack`]. #[derive(Debug, Clone, Copy)] pub(crate) struct MembershipPackView<'a> { pub(crate) commit_id: &'a str, pub(crate) pack_id: u32, pub(crate) members: &'a [ChangeLocator], } /// Locators produced while staging a commit. #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct StagedCommitStoreCommit { pub(crate) authored_locators: Vec, pub(crate) adopted_locators: Vec, } ================================================ FILE: packages/engine/src/common/error.rs ================================================ use serde_json::{json, Value as JsonValue}; /// Structured error type surfaced by Lix to every SDK binding. /// /// Carries a machine-readable [`code`](Self::code), a human-readable /// [`message`](Self::message), and an optional [`hint`](Self::hint) /// suggesting how to recover. Hints follow the Postgres/rustc convention: /// `message` states what went wrong in factual terms, and `hint` offers a /// possible fix when one is known. /// /// ``` /// use lix_engine::LixError; /// /// let err = LixError::new( /// "LIX_ERROR_UNSUPPORTED_WRITE_EXPRESSION", /// "json(...) 
is not supported", /// ) /// .with_hint("use lix_json('...') instead"); /// /// assert_eq!(err.hint(), Some("use lix_json('...') instead")); /// ``` #[derive(Debug, Clone, PartialEq, Eq)] pub struct LixError { pub code: String, pub message: String, pub hint: Option, pub details: Option, } impl LixError { /// True fallback — use when no more specific category fits. Producing /// sites should prefer the categorized codes below whenever possible; /// the SDK contract is that `LIX_ERROR_UNKNOWN` is the *last* resort, /// never the default. pub const CODE_UNKNOWN: &'static str = "LIX_ERROR_UNKNOWN"; /// SQL text could not be parsed. pub const CODE_PARSE_ERROR: &'static str = "LIX_PARSE_ERROR"; /// A SQL function name could not be resolved. pub const CODE_UDF_NOT_FOUND: &'static str = "LIX_UDF_NOT_FOUND"; /// A SQL expression or function argument had an incompatible type. pub const CODE_TYPE_MISMATCH: &'static str = "LIX_TYPE_MISMATCH"; /// A Lix JSON path argument used another dialect's path language instead /// of Lix's canonical variadic key/index segments. pub const CODE_INVALID_JSON_PATH: &'static str = "LIX_INVALID_JSON_PATH"; /// SQL syntax belongs to another dialect and is outside the Lix SQL /// surface. pub const CODE_DIALECT_UNSUPPORTED: &'static str = "LIX_DIALECT_UNSUPPORTED"; /// SQL parameters could not be bound to placeholders. pub const CODE_BINDING_ERROR: &'static str = "LIX_BINDING_ERROR"; /// A caller supplied an invalid SQL parameter value or parameter list. pub const CODE_INVALID_PARAM: &'static str = "LIX_INVALID_PARAM"; /// A SQL table or view name could not be resolved. pub const CODE_TABLE_NOT_FOUND: &'static str = "LIX_TABLE_NOT_FOUND"; /// A SQL column name could not be resolved in the available projection. pub const CODE_COLUMN_NOT_FOUND: &'static str = "LIX_COLUMN_NOT_FOUND"; /// A SQL write violated a primary-key, unique, NOT NULL, or other /// relational constraint. pub const CODE_CONSTRAINT_VIOLATION: &'static str = "LIX_CONSTRAINT_VIOLATION"; /// A SQL write targeted a read-only internal/component surface. pub const CODE_READ_ONLY: &'static str = "LIX_ERROR_READ_ONLY"; /// A history table was queried without an explicit commit/version range. pub const CODE_HISTORY_FILTER_REQUIRED: &'static str = "LIX_HISTORY_FILTER_REQUIRED"; /// SQL syntax is valid, but the feature is intentionally outside the Lix /// SQL surface. pub const CODE_UNSUPPORTED_SQL: &'static str = "LIX_UNSUPPORTED_SQL"; /// SQL planning succeeded far enough to produce a physical runtime shape /// that the current engine target cannot execute safely. pub const CODE_UNSUPPORTED_SQL_RUNTIME_PLAN: &'static str = "LIX_UNSUPPORTED_SQL_RUNTIME_PLAN"; /// Storage/backend IO failed while executing an operation. pub const CODE_STORAGE_ERROR: &'static str = "LIX_STORAGE_ERROR"; /// An internal engine invariant failed. pub const CODE_INTERNAL_ERROR: &'static str = "LIX_INTERNAL_ERROR"; /// Write-time failure where user data did not conform to a registered /// schema (type mismatch, missing required field, pattern violation, /// additionalProperties, etc.). Raised from the JSON-Schema validator /// run over a candidate row's snapshot. pub const CODE_SCHEMA_VALIDATION: &'static str = "LIX_ERROR_SCHEMA_VALIDATION"; /// A foreign-key constraint could not be satisfied. Covers both the /// insert-side "no matching target row" failure and the delete-side /// "still referenced" (restrict) failure. 
pub const CODE_FOREIGN_KEY: &'static str = "LIX_ERROR_FOREIGN_KEY"; /// A row references a non-null `file_id` that has no matching `lix_file` /// descriptor in the same effective version scope. pub const CODE_FILE_NOT_FOUND: &'static str = "LIX_ERROR_FILE_NOT_FOUND"; /// A primary-key or `x-lix-unique` constraint was violated — another /// row already owns the value(s) for the declared pointer group. pub const CODE_UNIQUE: &'static str = "LIX_ERROR_UNIQUE"; /// An `INSERT ... VALUES (...)` expression is not supported by the /// public write surface (e.g. `json(...)`, subqueries, arbitrary SQL /// expressions). Users should wrap inline JSON with `lix_json(...)`. pub const CODE_UNSUPPORTED_WRITE_EXPRESSION: &'static str = "LIX_ERROR_UNSUPPORTED_WRITE_EXPRESSION"; /// The schema JSON itself (the *definition*, not a row against it) is /// malformed — a missing `x-lix-key`, a JSON-Pointer without the /// leading slash, a reserved-namespace collision, or any other /// meta-schema validation failure. pub const CODE_SCHEMA_DEFINITION: &'static str = "LIX_ERROR_SCHEMA_DEFINITION"; /// The logical Lix handle/session has been closed and cannot run further /// operations. Close is a resource-release lifecycle boundary, not a /// durability boundary. pub const CODE_CLOSED: &'static str = "LIX_ERROR_CLOSED"; /// A merge found incompatible changes to the same tracked-state identity. pub const CODE_MERGE_CONFLICT: &'static str = "LIX_MERGE_CONFLICT"; /// A caller referenced a version id that has no matching version ref. pub const CODE_VERSION_NOT_FOUND: &'static str = "LIX_VERSION_NOT_FOUND"; /// A staged row's storage scope flags disagree, such as a global row not /// using the reserved global version id. pub const CODE_INVALID_STORAGE_SCOPE: &'static str = "LIX_ERROR_INVALID_STORAGE_SCOPE"; /// Merge graph analysis found multiple equally valid merge bases. pub const CODE_AMBIGUOUS_MERGE_BASE: &'static str = "LIX_AMBIGUOUS_MERGE_BASE"; /// A merge request is well-formed but nonsensical for the commit graph, /// such as merging a version into itself. pub const CODE_INVALID_MERGE: &'static str = "LIX_INVALID_MERGE"; pub fn new(code: impl Into, message: impl Into) -> Self { Self { code: code.into(), message: message.into(), hint: None, details: None, } } pub fn unknown(message: impl Into) -> Self { Self::new("LIX_ERROR_UNKNOWN", message) } pub fn version_not_found( version_id: impl Into, operation: impl Into, role: impl Into, ) -> Self { let version_id = version_id.into(); let operation = operation.into(); let role = role.into(); Self::new( Self::CODE_VERSION_NOT_FOUND, format!("version '{version_id}' was not found"), ) .with_details(json!({ "version_id": version_id, "operation": operation, "role": role, })) } pub fn ambiguous_merge_base( left_commit_id: impl Into, right_commit_id: impl Into, candidates: Vec, ) -> Self { let left_commit_id = left_commit_id.into(); let right_commit_id = right_commit_id.into(); Self::new( Self::CODE_AMBIGUOUS_MERGE_BASE, format!("ambiguous merge base between '{left_commit_id}' and '{right_commit_id}'"), ) .with_details(json!({ "left_commit_id": left_commit_id, "right_commit_id": right_commit_id, "candidates": candidates, })) } pub fn invalid_self_merge(version_id: impl Into) -> Self { let version_id = version_id.into(); Self::new( Self::CODE_INVALID_MERGE, format!("cannot merge version '{version_id}' into itself"), ) .with_details(json!({ "operation": "merge_version", "target_version_id": version_id, "source_version_id": version_id, })) } /// Attach a hint to this error. 
Consumers render hints alongside the /// primary message (e.g. a CLI prints them as `hint: `). /// /// ``` /// use lix_engine::LixError; /// /// let err = LixError::new("CODE", "boom").with_hint("try this"); /// assert_eq!(err.hint(), Some("try this")); /// ``` pub fn with_hint(mut self, hint: impl Into) -> Self { self.hint = Some(hint.into()); self } /// Attach machine-readable details to this error. pub fn with_details(mut self, details: JsonValue) -> Self { self.details = Some(details); self } /// Return the attached hint, if any. /// /// Returns `None` when no hint was attached at the error's producer /// site. This is the accessor SDK consumers should prefer over /// reading the `hint` field directly — it returns `Option<&str>`, /// avoiding the need for `.as_deref()` at the call site. /// /// ``` /// use lix_engine::LixError; /// /// let without_hint = LixError::new("CODE", "boom"); /// assert_eq!(without_hint.hint(), None); /// /// let with_hint = LixError::new("CODE", "boom").with_hint("fix it"); /// assert_eq!(with_hint.hint(), Some("fix it")); /// ``` pub fn hint(&self) -> Option<&str> { self.hint.as_deref() } pub fn message_with_hint(&self) -> String { match self.hint() { Some(hint) => format!("{}\nhint: {hint}", self.message), None => self.message.clone(), } } pub fn format(&self) -> String { let mut s = format!("code: {}\nmessage: {}", self.code, self.message); if let Some(hint) = &self.hint { s.push_str(&format!("\nhint: {hint}")); } s } } impl std::fmt::Display for LixError { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { write!(f, "{}", self.format()) } } impl std::error::Error for LixError {} #[cfg(test)] mod tests { use super::*; #[test] fn format_without_hint_omits_hint_line() { let err = LixError::new("LIX_ERROR_FOO", "something went wrong"); assert_eq!( err.format(), "code: LIX_ERROR_FOO\nmessage: something went wrong" ); assert!(err.hint.is_none()); } #[test] fn format_with_hint_appends_hint_line() { let err = LixError::new("LIX_ERROR_FOO", "something went wrong").with_hint("try the fix"); assert_eq!( err.format(), "code: LIX_ERROR_FOO\nmessage: something went wrong\nhint: try the fix" ); } #[test] fn with_hint_is_chainable_and_replaces_prior_hint() { let err = LixError::new("LIX_ERROR_FOO", "desc") .with_hint("first") .with_hint("second"); assert_eq!(err.hint.as_deref(), Some("second")); } #[test] fn new_defaults_hint_to_none() { let err = LixError::new("CODE", "desc"); assert_eq!(err.hint, None); } #[test] fn unknown_defaults_hint_to_none() { let err = LixError::unknown("desc"); assert_eq!(err.code, "LIX_ERROR_UNKNOWN"); assert_eq!(err.hint, None); } } ================================================ FILE: packages/engine/src/common/fingerprint.rs ================================================ pub(crate) fn stable_content_fingerprint_hex(data: &[u8]) -> String { blake3::hash(data).to_hex().to_string() } ================================================ FILE: packages/engine/src/common/fs_path.rs ================================================ //! Canonical Lix filesystem paths live in this module. //! //! Contract: //! //! - Canonical internal form is an absolute slash-separated Lix filesystem //! path, structurally aligned with RFC 3986 `path-absolute` / RFC 8089 file //! URI paths. //! - RFC 3986/8089 URI spelling is a boundary serialization, not the internal //! identity form. //! - Each non-empty segment is enforced with an RFC 8264 PRECIS //! `IdentifierClass` profile, case-preserved and NFC-normalized. //! 
//! - Percent encoding is accepted only as boundary input. Canonical internal
//!   paths store decoded Unicode segments, never percent triplets.
//! - Dot segments are rejected rather than rewritten because Lix paths are
//!   stable logical identities, not URI references being resolved against a
//!   base path.
//!
//! Canonicalization order:
//!
//! 1. Validate and decode RFC 3986 percent triplets in each segment.
//! 2. Normalize decoded segment text to NFC.
//! 3. Apply PRECIS IdentifierClass enforcement.
//! 4. Reject Lix structural sentinels and separators.
//!
//! Fixed standard-derived rules:
//!
//! - Path shape follows the absolute-path grammar used by RFC 3986/RFC 8089.
//! - Segment text follows RFC 8264 PRECIS IdentifierClass semantics.
//! - Comparison is exact-string and case-sensitive after canonicalization.
//!
//! Lix profile rules:
//!
//! - File paths never end with `/`.
//! - Directory paths always end with `/`.
//! - `NUL` is rejected in all segments.
//! - `/`, `\`, empty segments, `.`, and `..` are rejected in all non-root
//!   segments.
//! - `%`, `?`, and `#` are reserved for URI boundary syntax and are rejected
//!   in canonical internal segments.
//! - Segments cannot begin with a combining mark.
//! - Root is represented as the normalized directory path `/`.
//! - Git/CLI import and ASCII-only URI serialization are boundary adapters,
//!   not part of the core `fs_path` contract.
//!
//! Length policy:
//!
//! - Each canonical segment is capped at 255 bytes, matching common
//!   filesystem component limits.
//! - Each full canonical path is capped at 4096 bytes.
//! - Raw boundary input is separately capped before normalization so oversized
//!   URI spellings cannot reach Unicode processing.
//!
//! Runtime strategy:
//!
//! - This module keeps Lix structural checks local and delegates Unicode
//!   segment validity to the PRECIS implementation.
//! - `iref` is an RFC 3987 / RFC 3986 shape oracle in tests, not the runtime
//!   segment authority.
//!
//! Glossary:
//!
//! - Raw input path: caller-provided path before normalization.
//! - Normalized path: path after NFC normalization.
//! - Canonical path: stored path after full normalization/canonicalization.
//! - File path: canonical path naming a file, without a trailing slash.
//! - Directory path: canonical path naming a directory, with a trailing slash.
//! - Internal path form: the canonical Unicode-bearing representation used by
//!   the engine.
//! - Boundary URI form: an ASCII-only serialization used when interoperating
//!   with URI-only systems.
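// ---------------------------------------------------------------------------
// Editor's sketch (not part of the original module): a minimal, hedged example
// of the contract documented above. It assumes the `fs_path` items defined
// later in this file (`normalize_directory_path`, `parse_file_path`,
// `parent_directory_path`, `directory_ancestor_paths`) behave as written; the
// module name, test name, and sample paths are illustrative only, and the
// rejection cases assert `is_err()` rather than specific error codes.
// ---------------------------------------------------------------------------
#[cfg(test)]
mod fs_path_contract_sketch {
    use super::*;

    #[test]
    fn illustrates_canonical_path_contract() {
        // Root is only valid as a directory path and canonicalizes to "/".
        assert_eq!(normalize_directory_path("/").unwrap(), "/");

        // Directory paths always end with '/'; file paths never do.
        assert_eq!(
            normalize_directory_path("/docs/guides/").unwrap(),
            "/docs/guides/"
        );
        assert!(normalize_directory_path("/docs/guides").is_err());
        assert!(parse_file_path("/docs/").is_err());

        // A canonical file path splits into its parent directory and name.
        let parsed = parse_file_path("/docs/getting-started.md").unwrap();
        assert_eq!(parsed.normalized_path.as_str(), "/docs/getting-started.md");
        assert_eq!(
            parsed.directory_path.as_ref().map(|dir| dir.as_str()),
            Some("/docs/")
        );
        assert_eq!(parsed.name, "getting-started.md");

        // Dot segments and empty segments are rejected, never rewritten.
        assert!(parse_file_path("/docs/../secret.md").is_err());
        assert!(parse_file_path("/docs//guide.md").is_err());

        // Ancestor helpers walk canonical directory prefixes.
        assert_eq!(parent_directory_path("/a/b/c/"), Some("/a/b/".to_string()));
        assert_eq!(
            directory_ancestor_paths("/a/b/c/"),
            vec!["/a/".to_string(), "/a/b/".to_string()]
        );
    }
}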
use precis_profiles::precis_core::profile::Profile; use precis_profiles::UsernameCasePreserved; use unicode_normalization::{char::is_combining_mark, UnicodeNormalization}; use crate::LixError; use std::fmt; use std::ops::Deref; const MAX_CANONICAL_PATH_BYTES: usize = 4096; const MAX_CANONICAL_PATH_SEGMENT_BYTES: usize = 255; const MAX_RAW_PATH_INPUT_BYTES: usize = 16 * 1024; #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct NormalizedDirectoryPath(String); impl NormalizedDirectoryPath { #[cfg(test)] pub(crate) fn try_from_path(path: &str) -> Result { normalize_directory_path(path).map(Self) } pub(crate) fn from_normalized(path: String) -> Self { Self(path) } pub(crate) fn as_str(&self) -> &str { self.0.as_str() } } impl Deref for NormalizedDirectoryPath { type Target = str; fn deref(&self) -> &Self::Target { self.as_str() } } impl fmt::Display for NormalizedDirectoryPath { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.write_str(self.as_str()) } } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct NormalizedFilePath(String); impl NormalizedFilePath { pub(crate) fn from_normalized(path: String) -> Self { Self(path) } pub(crate) fn as_str(&self) -> &str { self.0.as_str() } } impl Deref for NormalizedFilePath { type Target = str; fn deref(&self) -> &Self::Target { self.as_str() } } impl fmt::Display for NormalizedFilePath { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.write_str(self.as_str()) } } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct ParsedFilePath { pub(crate) normalized_path: NormalizedFilePath, pub(crate) directory_path: Option, pub(crate) name: String, } impl ParsedFilePath { pub(crate) fn try_from_path(path: &str) -> Result { parse_file_path(path) } } type PathResult = Result; #[derive(Debug, Clone, Copy, PartialEq, Eq)] enum PathError { MissingLeadingSlash, UnexpectedTrailingSlashOnFilePath, MissingTrailingSlashOnDirectoryPath, EmptySegment, DotSegment, SlashInSegment, Backslash, InvalidPercentEncoding, InvalidPathSegmentCodePoint, PathTooLong, RawPathInputTooLong, SegmentTooLong, NulByte, InvalidRootUsage, #[cfg(test)] InvalidDirectoryParentPath, } impl PathError { fn into_lix_error(self) -> LixError { let (code, message, hint) = match self { Self::MissingLeadingSlash => ( "LIX_ERROR_PATH_MISSING_LEADING_SLASH", "path must start with '/'", Some("prefix the path with '/'"), ), Self::UnexpectedTrailingSlashOnFilePath => ( "LIX_ERROR_PATH_UNEXPECTED_TRAILING_SLASH_ON_FILE", "file path must not end with '/'", Some("remove the trailing slash or use a directory path instead"), ), Self::MissingTrailingSlashOnDirectoryPath => ( "LIX_ERROR_PATH_MISSING_TRAILING_SLASH_ON_DIRECTORY", "directory path must end with '/'", Some("append a trailing slash or use a file path instead"), ), Self::EmptySegment => ( "LIX_ERROR_PATH_EMPTY_SEGMENT", "path must not contain empty segments", Some("remove duplicate slashes like '//'"), ), Self::DotSegment => ( "LIX_ERROR_PATH_DOT_SEGMENT", "path segment cannot be '.' or '..'", Some("use a real segment name instead of '.' 
or '..'"), ), Self::SlashInSegment => ( "LIX_ERROR_PATH_SLASH_IN_SEGMENT", "path segment must not contain '/'", Some("pass a single segment name, not a full path"), ), Self::Backslash => ( "LIX_ERROR_PATH_BACKSLASH", "path must not contain '\\'", Some("use '/' separators instead of '\\'"), ), Self::InvalidPercentEncoding => ( "LIX_ERROR_PATH_INVALID_PERCENT_ENCODING", "path contains invalid percent encoding", Some("use valid percent triplets only for URI boundary input; '%' is not allowed in canonical path segments"), ), Self::InvalidPathSegmentCodePoint => ( "LIX_ERROR_PATH_INVALID_SEGMENT_CODE_POINT", "path segment contains a character that is not allowed in canonical Lix paths", Some("canonical paths use RFC 8264 PRECIS IdentifierClass segments; use URI percent encoding only at boundaries"), ), Self::PathTooLong => ( "LIX_ERROR_PATH_TOO_LONG", "path is too long", Some("keep canonical paths at or below 4096 bytes"), ), Self::RawPathInputTooLong => ( "LIX_ERROR_PATH_INPUT_TOO_LONG", "path input is too long", Some("keep raw path input at or below 16384 bytes"), ), Self::SegmentTooLong => ( "LIX_ERROR_PATH_SEGMENT_TOO_LONG", "path segment is too long", Some("keep each canonical path segment at or below 255 bytes"), ), Self::NulByte => ( "LIX_ERROR_PATH_NUL_BYTE", "path must not contain a NUL byte", Some("remove the NUL byte from the path"), ), Self::InvalidRootUsage => ( "LIX_ERROR_PATH_INVALID_ROOT_USAGE", "root '/' is only valid as a directory path", Some("use '/' as a directory path, never as a file path"), ), #[cfg(test)] Self::InvalidDirectoryParentPath => ( "LIX_ERROR_PATH_INVALID_DIRECTORY_PARENT", "directory parent path must be a normalized directory path", Some("pass '/' or a path ending with '/' as the parent directory"), ), }; let err = LixError::new(code, message); match hint { Some(hint) => err.with_hint(hint), None => err, } } } pub(crate) fn normalize_path_segment(raw: &str) -> Result { normalize_path_segment_impl(raw).map_err(PathError::into_lix_error) } fn normalize_path_segment_impl(raw: &str) -> PathResult { ensure_raw_path_input_len(raw)?; let normalized = raw.nfc().collect::(); let canonical = normalize_validated_path_segment(&normalized)?; if canonical == "." || canonical == ".." { return Err(PathError::DotSegment); } Ok(canonical) } fn validate_path_segment_chars(normalized: &str) -> PathResult { if normalized.is_empty() { return Err(PathError::EmptySegment); } if normalized.contains('\0') { return Err(PathError::NulByte); } if normalized.contains('/') { return Err(PathError::SlashInSegment); } if normalized.contains('\\') { return Err(PathError::Backslash); } if !segment_has_valid_percent_encoding(&normalized) { return Err(PathError::InvalidPercentEncoding); } let decoded = decode_percent_encoded_segment(normalized)?; validate_decoded_path_segment_structure(&decoded)?; Ok(decoded) } fn normalize_validated_path_segment(normalized: &str) -> PathResult { let decoded = validate_path_segment_chars(normalized)?; ensure_canonical_segment_len(&decoded)?; let canonical = enforce_precis_segment(&decoded)?; ensure_canonical_segment_len(&canonical)?; Ok(canonical) } fn decode_percent_encoded_segment(segment: &str) -> PathResult { let bytes = segment.as_bytes(); let mut decoded = Vec::with_capacity(segment.len()); let mut index = 0usize; while index < bytes.len() { if bytes[index] == b'%' { decoded.push((hex_value(bytes[index + 1]) << 4) | hex_value(bytes[index + 2])); index += 3; continue; } let ch = segment[index..] 
.chars() .next() .expect("slice at char boundary should yield a char"); let mut utf8 = [0u8; 4]; decoded.extend_from_slice(ch.encode_utf8(&mut utf8).as_bytes()); index += ch.len_utf8(); } String::from_utf8(decoded).map_err(|_| PathError::InvalidPathSegmentCodePoint) } fn hex_value(byte: u8) -> u8 { match byte { b'0'..=b'9' => byte - b'0', b'a'..=b'f' => 10 + (byte - b'a'), b'A'..=b'F' => 10 + (byte - b'A'), _ => unreachable!("hex_value only called after percent validation"), } } fn segment_has_valid_percent_encoding(segment: &str) -> bool { let bytes = segment.as_bytes(); let mut index = 0usize; while index < bytes.len() { if bytes[index] == b'%' { if index + 2 >= bytes.len() { return false; } let hi = bytes[index + 1]; let lo = bytes[index + 2]; if !hi.is_ascii_hexdigit() || !lo.is_ascii_hexdigit() { return false; } index += 3; continue; } index += 1; } true } fn validate_decoded_path_segment_structure(segment: &str) -> PathResult<()> { if segment.contains('\0') { return Err(PathError::NulByte); } if segment.contains('/') { return Err(PathError::SlashInSegment); } if segment.contains('\\') { return Err(PathError::Backslash); } if segment.contains('%') || segment.contains('?') || segment.contains('#') { return Err(PathError::InvalidPathSegmentCodePoint); } if segment.chars().next().is_some_and(is_combining_mark) { return Err(PathError::InvalidPathSegmentCodePoint); } Ok(()) } fn enforce_precis_segment(segment: &str) -> PathResult { UsernameCasePreserved::new() .enforce(segment) .map(|segment| segment.into_owned()) .map_err(|_| PathError::InvalidPathSegmentCodePoint) } fn normalize_file_path_impl(path: &str) -> PathResult { ensure_raw_path_input_len(path)?; let normalized = path.nfc().collect::(); if !normalized.starts_with('/') { return Err(PathError::MissingLeadingSlash); } if normalized == "/" { return Err(PathError::InvalidRootUsage); } if normalized.ends_with('/') { return Err(PathError::UnexpectedTrailingSlashOnFilePath); } if normalized.contains('\\') { return Err(PathError::Backslash); } if normalized.contains("//") { return Err(PathError::EmptySegment); } let segments = normalized .split('/') .filter(|segment| !segment.is_empty()) .collect::>(); if segments.is_empty() { return Err(PathError::EmptySegment); } let canonical_segments = canonicalize_path_segments(&segments)?; if canonical_segments.is_empty() { return Err(PathError::InvalidRootUsage); } let canonical = format!("/{}", canonical_segments.join("/")); ensure_canonical_path_len(&canonical)?; Ok(canonical) } pub(crate) fn normalize_directory_path(path: &str) -> Result { normalize_directory_path_impl(path).map_err(PathError::into_lix_error) } fn normalize_directory_path_impl(path: &str) -> PathResult { ensure_raw_path_input_len(path)?; let normalized = path.nfc().collect::(); if !normalized.starts_with('/') { return Err(PathError::MissingLeadingSlash); } if normalized.contains('\\') { return Err(PathError::Backslash); } if normalized.contains("//") { return Err(PathError::EmptySegment); } if normalized == "/" { return Ok("/".to_string()); } if !normalized.ends_with('/') { return Err(PathError::MissingTrailingSlashOnDirectoryPath); } let segments = normalized .split('/') .filter(|segment| !segment.is_empty()) .collect::>(); let normalized_segments = canonicalize_path_segments(&segments)?; if normalized_segments.is_empty() { return Ok("/".to_string()); } let canonical = format!("/{}/", normalized_segments.join("/")); ensure_canonical_path_len(&canonical)?; Ok(canonical) } fn canonicalize_path_segments(segments: &[&str]) -> 
PathResult> { let mut canonical_segments = Vec::with_capacity(segments.len()); for segment in segments { let normalized_segment = normalize_validated_path_segment(segment)?; match normalized_segment.as_str() { "." | ".." => return Err(PathError::DotSegment), _ => canonical_segments.push(normalized_segment), } } Ok(canonical_segments) } fn ensure_canonical_path_len(path: &str) -> PathResult<()> { if path.len() > MAX_CANONICAL_PATH_BYTES { Err(PathError::PathTooLong) } else { Ok(()) } } fn ensure_raw_path_input_len(path: &str) -> PathResult<()> { if path.len() > MAX_RAW_PATH_INPUT_BYTES { Err(PathError::RawPathInputTooLong) } else { Ok(()) } } fn ensure_canonical_segment_len(segment: &str) -> PathResult<()> { if segment.len() > MAX_CANONICAL_PATH_SEGMENT_BYTES { Err(PathError::SegmentTooLong) } else { Ok(()) } } pub(crate) fn parse_file_path(path: &str) -> Result { parse_file_path_impl(path).map_err(PathError::into_lix_error) } fn parse_file_path_impl(path: &str) -> PathResult { let normalized_path = normalize_file_path_impl(path)?; let segments = normalized_path .split('/') .filter(|segment| !segment.is_empty()) .collect::>(); let file_name = segments .last() .ok_or(PathError::InvalidRootUsage)? .to_string(); let directory_path = if segments.len() > 1 { Some(NormalizedDirectoryPath::from_normalized(format!( "/{}/", segments[..segments.len() - 1].join("/") ))) } else { None }; Ok(ParsedFilePath { normalized_path: NormalizedFilePath::from_normalized(normalized_path), directory_path, name: file_name, }) } pub(crate) fn directory_ancestor_paths(path: &str) -> Vec { ancestor_directory_paths(path) } fn ancestor_directory_paths(path: &str) -> Vec { let segments = path .trim_matches('/') .split('/') .filter(|segment| !segment.is_empty()) .collect::>(); if segments.len() <= 1 { return Vec::new(); } let mut ancestors = Vec::with_capacity(segments.len() - 1); let mut prefix_segments: Vec<&str> = Vec::with_capacity(segments.len() - 1); for segment in segments.iter().take(segments.len() - 1) { prefix_segments.push(segment); ancestors.push(format!("/{}/", prefix_segments.join("/"))); } ancestors } pub(crate) fn parent_directory_path(path: &str) -> Option { let segments = path .trim_matches('/') .split('/') .filter(|segment| !segment.is_empty()) .collect::>(); if segments.len() <= 1 { return None; } Some(format!("/{}/", segments[..segments.len() - 1].join("/"))) } pub(crate) fn directory_name_from_path(path: &str) -> Option { path.trim_matches('/') .split('/') .filter(|segment| !segment.is_empty()) .next_back() .map(|segment| segment.to_string()) } #[cfg(test)] pub(crate) fn compose_directory_path(parent_path: &str, name: &str) -> Result { let normalized_name = normalize_path_segment_impl(name).map_err(PathError::into_lix_error)?; if parent_path == "/" { Ok(format!("/{normalized_name}/")) } else if parent_path.starts_with('/') && parent_path.ends_with('/') { Ok(format!("{parent_path}{normalized_name}/")) } else { Err(PathError::InvalidDirectoryParentPath.into_lix_error()) } } #[cfg(test)] mod tests { use super::*; use iref::iri::Path as IriPath; #[derive(Clone, Copy, Debug)] enum NormalizationKind { File, Directory, Segment, } #[derive(Clone, Copy, Debug)] enum LixFixtureKind { File, Directory, } #[derive(Clone, Copy, Debug)] struct RfcFixture { label: &'static str, input: &'static str, } #[derive(Clone, Copy, Debug)] struct LixProfileFixture { label: &'static str, kind: LixFixtureKind, input: &'static str, oracle_accepts: bool, expected: Result<&'static str, PathError>, } #[derive(Clone, Copy, Debug)] 
struct NormalizationFixture { label: &'static str, kind: NormalizationKind, input: &'static str, expected: &'static str, } fn assert_path_error(result: PathResult, expected: PathError) { assert_eq!(result.unwrap_err(), expected); } fn iri_oracle_accepts(path: &str) -> bool { IriPath::new(path).is_ok() } fn normalize_with_kind(kind: NormalizationKind, input: &str) -> Result { match kind { NormalizationKind::File => { normalize_file_path_impl(input).map_err(PathError::into_lix_error) } NormalizationKind::Directory => normalize_directory_path(input), NormalizationKind::Segment => normalize_path_segment(input), } } fn normalize_file_path(path: &str) -> Result { normalize_file_path_impl(path).map_err(PathError::into_lix_error) } fn assert_lix_profile_fixture(fixture: LixProfileFixture) { assert_eq!( iri_oracle_accepts(fixture.input), fixture.oracle_accepts, "iref oracle mismatch for {} ({})", fixture.label, fixture.input ); match fixture.kind { LixFixtureKind::File => match fixture.expected { Ok(expected) => assert_eq!( normalize_file_path(fixture.input).as_deref(), Ok(expected), "unexpected file result for {} ({})", fixture.label, fixture.input ), Err(expected) => { assert_path_error(normalize_file_path_impl(fixture.input), expected) } }, LixFixtureKind::Directory => match fixture.expected { Ok(expected) => assert_eq!( normalize_directory_path(fixture.input).as_deref(), Ok(expected), "unexpected directory result for {} ({})", fixture.label, fixture.input ), Err(expected) => { assert_path_error(normalize_directory_path_impl(fixture.input), expected) } }, } } const RFC_POSITIVE_FIXTURES: &[RfcFixture] = &[ RfcFixture { label: "absolute unicode file path", input: "/unicodé/段落.md", }, RfcFixture { label: "path with pchar punctuation", input: "/docs/hello:world@x!$&'()*+,;=.md", }, ]; const RFC_NEGATIVE_FIXTURES: &[RfcFixture] = &[ RfcFixture { label: "invalid percent triplet", input: "/docs/%zz.md", }, RfcFixture { label: "truncated percent triplet", input: "/docs/%2", }, RfcFixture { label: "raw space is not allowed in an ipath", input: "/docs/file name.md", }, RfcFixture { label: "raw fragment delimiter is not part of the path grammar", input: "/docs/#hash", }, RfcFixture { label: "private use code point is excluded from ucschar", input: "/docs/\u{E000}.md", }, ]; const LIX_PROFILE_POSITIVE_FIXTURES: &[LixProfileFixture] = &[ LixProfileFixture { label: "root directory is representable", kind: LixFixtureKind::Directory, input: "/", oracle_accepts: true, expected: Ok("/"), }, LixProfileFixture { label: "directory paths require trailing slash", kind: LixFixtureKind::Directory, input: "/docs/", oracle_accepts: true, expected: Ok("/docs/"), }, LixProfileFixture { label: "file paths stay slashless at the end", kind: LixFixtureKind::File, input: "/docs/readme.md", oracle_accepts: true, expected: Ok("/docs/readme.md"), }, ]; const LIX_PROFILE_NEGATIVE_FIXTURES: &[LixProfileFixture] = &[ LixProfileFixture { label: "relative-looking path is valid RFC syntax but not a Lix path", kind: LixFixtureKind::File, input: "docs/readme.md", oracle_accepts: true, expected: Err(PathError::MissingLeadingSlash), }, LixProfileFixture { label: "file paths reject trailing slash even though RFC syntax allows it", kind: LixFixtureKind::File, input: "/docs/", oracle_accepts: true, expected: Err(PathError::UnexpectedTrailingSlashOnFilePath), }, LixProfileFixture { label: "directory paths reject missing trailing slash even though RFC syntax allows it", kind: LixFixtureKind::Directory, input: "/docs", oracle_accepts: true, 
expected: Err(PathError::MissingTrailingSlashOnDirectoryPath), }, LixProfileFixture { label: "empty segments are valid RFC paths but banned by the Lix profile", kind: LixFixtureKind::File, input: "/docs//guide.md", oracle_accepts: true, expected: Err(PathError::EmptySegment), }, LixProfileFixture { label: "root is not a valid file path", kind: LixFixtureKind::File, input: "/", oracle_accepts: true, expected: Err(PathError::InvalidRootUsage), }, LixProfileFixture { label: "percent-encoded spaces are valid URI syntax but not Lix segment identity", kind: LixFixtureKind::File, input: "/docs/%20notes.md", oracle_accepts: true, expected: Err(PathError::InvalidPathSegmentCodePoint), }, LixProfileFixture { label: "bidi formatting is rejected by the Lix validator even though iref accepts it", kind: LixFixtureKind::File, input: "/docs/\u{202E}.md", oracle_accepts: true, expected: Err(PathError::InvalidPathSegmentCodePoint), }, LixProfileFixture { label: "dot segments are valid RFC syntax but banned by the Lix profile", kind: LixFixtureKind::File, input: "/docs/../guide.md", oracle_accepts: true, expected: Err(PathError::DotSegment), }, ]; const NORMALIZATION_FIXTURES: &[NormalizationFixture] = &[ NormalizationFixture { label: "nfc composition happens before validation", kind: NormalizationKind::File, input: "/Cafe\u{0301}.md", expected: "/Café.md", }, NormalizationFixture { label: "percent-encoded segment text is decoded before storage", kind: NormalizationKind::Directory, input: "/docs/%43afe%CC%81/", expected: "/docs/Café/", }, NormalizationFixture { label: "unreserved percent encoding is decoded", kind: NormalizationKind::File, input: "/docs/%7e%41.md", expected: "/docs/~A.md", }, NormalizationFixture { label: "root survives directory normalization", kind: NormalizationKind::Directory, input: "/", expected: "/", }, NormalizationFixture { label: "segment normalization decodes unreserved percent triplets", kind: NormalizationKind::Segment, input: "%7ehello", expected: "~hello", }, ]; #[test] fn rfc_positive_path_fixtures_agree_with_iref() { for fixture in RFC_POSITIVE_FIXTURES { assert!( iri_oracle_accepts(fixture.input), "iref should accept {} ({})", fixture.label, fixture.input ); assert!( normalize_file_path_impl(fixture.input).is_ok(), "lix should accept {} ({})", fixture.label, fixture.input ); } } #[test] fn rfc_negative_path_fixtures_agree_with_iref() { for fixture in RFC_NEGATIVE_FIXTURES { assert!( !iri_oracle_accepts(fixture.input), "iref should reject {} ({})", fixture.label, fixture.input ); assert!( normalize_file_path_impl(fixture.input).is_err(), "lix should reject {} ({})", fixture.label, fixture.input ); } } #[test] fn lix_profile_positive_fixtures_are_pinned() { for fixture in LIX_PROFILE_POSITIVE_FIXTURES { assert_lix_profile_fixture(*fixture); } } #[test] fn lix_profile_negative_fixtures_document_divergence_from_the_oracle() { for fixture in LIX_PROFILE_NEGATIVE_FIXTURES { assert_lix_profile_fixture(*fixture); } } #[test] fn normalization_fixture_table_covers_canonicalization_rules() { for fixture in NORMALIZATION_FIXTURES { assert_eq!( normalize_with_kind(fixture.kind, fixture.input).as_deref(), Ok(fixture.expected), "unexpected normalized value for {} ({})", fixture.label, fixture.input ); } } #[test] fn accepts_normalized_file_paths_with_unicode_and_percent_encoding() { for path in [ "/docs/readme.md", "/a/b/c.txt", "/dash--path", "/unicodé/段落.md", "/docs/hello:world@x!$&'()*+,;=.md", ] { assert!( normalize_file_path(path).is_ok(), "expected valid path {path}" ); } } #[test] 
fn rejects_structural_file_path_anomalies() { assert_path_error(normalize_file_path_impl("/"), PathError::InvalidRootUsage); assert_path_error( normalize_file_path_impl("/trailing/"), PathError::UnexpectedTrailingSlashOnFilePath, ); assert_path_error( normalize_file_path_impl("no-leading"), PathError::MissingLeadingSlash, ); assert_path_error( normalize_file_path_impl("/bad//double"), PathError::EmptySegment, ); } #[test] fn rejects_file_paths_with_dot_segments() { for path in [ "/docs/./file", "/docs/../file", "/docs/%2e/file", "/docs/%2E%2E/file", ] { assert_path_error(normalize_file_path_impl(path), PathError::DotSegment); } } #[test] fn rejects_file_paths_with_invalid_characters() { for path in ["/docs/file?.md", "/docs/#hash", "/docs/file name.md"] { assert_path_error( normalize_file_path_impl(path), PathError::InvalidPathSegmentCodePoint, ); } } #[test] fn rejects_file_paths_and_segments_over_length_limits() { let segment_at_limit = "a".repeat(MAX_CANONICAL_PATH_SEGMENT_BYTES); let path_at_limit = format!("/{segment_at_limit}"); assert_eq!( normalize_file_path(&path_at_limit).as_deref(), Ok(path_at_limit.as_str()) ); let segment_over_limit = "a".repeat(MAX_CANONICAL_PATH_SEGMENT_BYTES + 1); assert_path_error( normalize_file_path_impl(&format!("/{segment_over_limit}")), PathError::SegmentTooLong, ); assert_path_error( normalize_path_segment_impl(&segment_over_limit), PathError::SegmentTooLong, ); let mut segments = Vec::new(); let mut raw_len = 1usize; while raw_len <= MAX_CANONICAL_PATH_BYTES { segments.push("abcd"); raw_len = 1 + segments.join("/").len(); } assert_path_error( normalize_file_path_impl(&format!("/{}", segments.join("/"))), PathError::PathTooLong, ); } #[test] fn rejects_file_paths_with_private_use_and_noncharacter_code_points() { for path in ["/docs/\u{E000}.md", "/docs/\u{FDD0}.md"] { assert_path_error( normalize_file_path_impl(path), PathError::InvalidPathSegmentCodePoint, ); } } #[test] fn rejects_file_paths_with_bidi_formatting_characters() { for path in ["/docs/\u{200E}.md", "/docs/\u{202E}.md"] { assert_path_error( normalize_file_path_impl(path), PathError::InvalidPathSegmentCodePoint, ); } } #[test] fn rejects_default_ignorable_and_invisible_segment_characters() { for path in [ "/docs/a\u{200B}b.md", // ZERO WIDTH SPACE "/docs/a\u{200C}b.md", // ZERO WIDTH NON-JOINER "/docs/a\u{200D}b.md", // ZERO WIDTH JOINER "/docs/a\u{2060}b.md", // WORD JOINER "/docs/a\u{00AD}b.md", // SOFT HYPHEN "/docs/a\u{034F}b.md", // COMBINING GRAPHEME JOINER "/docs/a\u{180E}b.md", // MONGOLIAN VOWEL SEPARATOR "/docs/a\u{FEFF}b.md", // ZERO WIDTH NO-BREAK SPACE ] { assert_path_error( normalize_file_path_impl(path), PathError::InvalidPathSegmentCodePoint, ); } } #[test] fn rejects_unicode_separators_and_leading_combining_marks() { for path in [ "/docs/a\u{00A0}b.md", // NO-BREAK SPACE "/docs/a\u{2028}b.md", // LINE SEPARATOR "/docs/a\u{2029}b.md", // PARAGRAPH SEPARATOR "/docs/\u{0301}.md", // COMBINING ACUTE ACCENT ] { assert_path_error( normalize_file_path_impl(path), PathError::InvalidPathSegmentCodePoint, ); } } #[test] fn validates_percent_encoding_in_file_paths() { assert_eq!( normalize_file_path("/docs/%43afe%CC%81.md").as_deref(), Ok("/docs/Café.md") ); assert_path_error( normalize_file_path_impl("/docs/%zz.md"), PathError::InvalidPercentEncoding, ); assert_path_error( normalize_file_path_impl("/docs/abc%.md"), PathError::InvalidPercentEncoding, ); assert_path_error( normalize_file_path_impl("/docs/abc%2.md"), PathError::InvalidPercentEncoding, ); } #[test] fn 
applies_segment_length_limit_to_canonical_text_not_percent_encoded_boundary_spelling() { let encoded_segment_at_limit = "%61".repeat(MAX_CANONICAL_PATH_SEGMENT_BYTES); let canonical_segment_at_limit = "a".repeat(MAX_CANONICAL_PATH_SEGMENT_BYTES); assert_eq!( normalize_file_path(&format!("/{encoded_segment_at_limit}")).as_deref(), Ok(format!("/{canonical_segment_at_limit}").as_str()) ); assert_eq!( normalize_directory_path(&format!("/{encoded_segment_at_limit}/")).as_deref(), Ok(format!("/{canonical_segment_at_limit}/").as_str()) ); let encoded_segment_over_limit = "%61".repeat(MAX_CANONICAL_PATH_SEGMENT_BYTES + 1); assert_path_error( normalize_file_path_impl(&format!("/{encoded_segment_over_limit}")), PathError::SegmentTooLong, ); assert_path_error( normalize_directory_path_impl(&format!("/{encoded_segment_over_limit}/")), PathError::SegmentTooLong, ); } #[test] fn rejects_raw_path_input_over_length_budget_before_unicode_processing() { let huge_file_path = format!("/{}", "a".repeat(1024 * 1024)); assert_path_error( normalize_file_path_impl(&huge_file_path), PathError::RawPathInputTooLong, ); let huge_directory_path = format!("/{}/", "a".repeat(1024 * 1024)); assert_path_error( normalize_directory_path_impl(&huge_directory_path), PathError::RawPathInputTooLong, ); } #[test] fn rejects_percent_encoded_forbidden_code_points_in_file_paths() { for (path, expected) in [ ("/docs/%00evil.md", PathError::NulByte), ("/docs/%2Fevil.md", PathError::SlashInSegment), ("/docs/%5Cevil.md", PathError::Backslash), ("/docs/%25evil.md", PathError::InvalidPathSegmentCodePoint), ("/docs/%3Fevil.md", PathError::InvalidPathSegmentCodePoint), ("/docs/%23evil.md", PathError::InvalidPathSegmentCodePoint), ( "/docs/%E2%80%AEevil.md", PathError::InvalidPathSegmentCodePoint, ), ( "/docs/%E2%80%8Eevil.md", PathError::InvalidPathSegmentCodePoint, ), ( "/docs/%E2%81%A0evil.md", PathError::InvalidPathSegmentCodePoint, ), ( "/docs/%C2%ADevil.md", PathError::InvalidPathSegmentCodePoint, ), ( "/docs/%CD%8Fevil.md", PathError::InvalidPathSegmentCodePoint, ), ( "/docs/%E1%A0%8Eevil.md", PathError::InvalidPathSegmentCodePoint, ), ( "/docs/%EF%BB%BFevil.md", PathError::InvalidPathSegmentCodePoint, ), ( "/docs/%EF%B7%90evil.md", PathError::InvalidPathSegmentCodePoint, ), ( "/docs/%EE%80%80evil.md", PathError::InvalidPathSegmentCodePoint, ), ("/docs/%FFevil.md", PathError::InvalidPathSegmentCodePoint), ] { assert_path_error(normalize_file_path_impl(path), expected); } } #[test] fn rejects_percent_encoded_forbidden_code_points_in_directory_paths() { for (path, expected) in [ ("/docs/%00evil/", PathError::NulByte), ("/docs/%2Fevil/", PathError::SlashInSegment), ("/docs/%5Cevil/", PathError::Backslash), ( "/docs/%E2%80%AEevil/", PathError::InvalidPathSegmentCodePoint, ), ( "/docs/%E2%80%8Eevil/", PathError::InvalidPathSegmentCodePoint, ), ( "/docs/%E2%81%A0evil/", PathError::InvalidPathSegmentCodePoint, ), ( "/docs/%EF%BB%BFevil/", PathError::InvalidPathSegmentCodePoint, ), ( "/docs/%EF%B7%90evil/", PathError::InvalidPathSegmentCodePoint, ), ( "/docs/%EE%80%80evil/", PathError::InvalidPathSegmentCodePoint, ), ("/docs/%FFevil/", PathError::InvalidPathSegmentCodePoint), ] { assert_path_error(normalize_directory_path_impl(path), expected); } } #[test] fn canonicalizes_percent_encoding_in_file_paths() { assert_eq!( normalize_file_path("/docs/%7e%41%2e%2E.md").as_deref(), Ok("/docs/~A...md") ); assert_path_error( normalize_file_path_impl("/docs/%2fkept%3aencoded"), PathError::SlashInSegment, ); } #[test] fn 
normalization_is_stable_on_renormalization() { let once = normalize_file_path("/docs/%7e/%41.md").expect("first normalization"); let twice = normalize_file_path(&once).expect("second normalization"); assert_eq!(once, twice); } #[test] fn accepts_and_rejects_directory_paths_like_legacy_rules() { for path in ["/", "/docs/", "/docs/guides/", "/unicodé/章节/"] { assert!( normalize_directory_path(path).is_ok(), "expected valid directory path {path}" ); } assert_path_error( normalize_directory_path_impl("/file.md"), PathError::MissingTrailingSlashOnDirectoryPath, ); assert_path_error( normalize_directory_path_impl("/docs"), PathError::MissingTrailingSlashOnDirectoryPath, ); assert_path_error( normalize_directory_path_impl("/docs/ "), PathError::MissingTrailingSlashOnDirectoryPath, ); assert_path_error( normalize_directory_path_impl("/docs/ /"), PathError::InvalidPathSegmentCodePoint, ); assert_path_error( normalize_directory_path_impl("no-leading"), PathError::MissingLeadingSlash, ); assert_path_error( normalize_directory_path_impl("/docs/%zz/"), PathError::InvalidPercentEncoding, ); } #[test] fn canonicalizes_directory_paths() { assert_eq!( normalize_directory_path("/docs/%43afe%CC%81/").as_deref(), Ok("/docs/Café/") ); } #[test] fn rejects_directory_paths_and_segments_over_length_limits() { let segment_at_limit = "a".repeat(MAX_CANONICAL_PATH_SEGMENT_BYTES); let path_at_limit = format!("/{segment_at_limit}/"); assert_eq!( normalize_directory_path(&path_at_limit).as_deref(), Ok(path_at_limit.as_str()) ); let segment_over_limit = "a".repeat(MAX_CANONICAL_PATH_SEGMENT_BYTES + 1); assert_path_error( normalize_directory_path_impl(&format!("/{segment_over_limit}/")), PathError::SegmentTooLong, ); let mut segments = Vec::new(); let mut raw_len = 1usize; while raw_len <= MAX_CANONICAL_PATH_BYTES { segments.push("abcd"); raw_len = 2 + segments.join("/").len(); } assert_path_error( normalize_directory_path_impl(&format!("/{}/", segments.join("/"))), PathError::PathTooLong, ); } #[test] fn rejects_directory_paths_with_dot_segments() { for path in ["/docs/./", "/docs/../", "/docs/%2e/", "/docs/%2E%2E/"] { assert_path_error(normalize_directory_path_impl(path), PathError::DotSegment); } } #[test] fn represents_root_as_a_normalized_directory_path() { let root = NormalizedDirectoryPath::try_from_path("/").expect("root path"); assert_eq!(root.as_str(), "/"); assert_eq!( root, NormalizedDirectoryPath::from_normalized("/".to_string()) ); } #[test] fn root_parent_and_top_level_parent_are_absent() { assert_eq!(parent_directory_path("/"), None); assert_eq!(parent_directory_path("/top-level.txt"), None); } #[test] fn compose_directory_path_under_root() { assert_eq!(compose_directory_path("/", "docs").as_deref(), Ok("/docs/")); } #[test] fn exposes_stable_lix_errors_with_hints() { let missing_leading = normalize_file_path("docs/readme.md").expect_err("leading slash"); assert_eq!(missing_leading.code, "LIX_ERROR_PATH_MISSING_LEADING_SLASH"); assert_eq!(missing_leading.hint(), Some("prefix the path with '/'")); let bad_percent = normalize_file_path("/docs/%zz.md").expect_err("bad percent"); assert_eq!(bad_percent.code, "LIX_ERROR_PATH_INVALID_PERCENT_ENCODING"); assert_eq!( bad_percent.hint(), Some("use valid percent triplets only for URI boundary input; '%' is not allowed in canonical path segments") ); let root_file = normalize_file_path("/").expect_err("root as file"); assert_eq!(root_file.code, "LIX_ERROR_PATH_INVALID_ROOT_USAGE"); assert_eq!( root_file.hint(), Some("use '/' as a directory path, never as a file 
path") ); let long_segment = normalize_file_path(&format!( "/{}", "a".repeat(MAX_CANONICAL_PATH_SEGMENT_BYTES + 1) )) .expect_err("long segment"); assert_eq!(long_segment.code, "LIX_ERROR_PATH_SEGMENT_TOO_LONG"); assert_eq!( long_segment.hint(), Some("keep each canonical path segment at or below 255 bytes") ); let long_input = normalize_file_path(&format!("/{}", "a".repeat(MAX_RAW_PATH_INPUT_BYTES + 1))) .expect_err("long raw input"); assert_eq!(long_input.code, "LIX_ERROR_PATH_INPUT_TOO_LONG"); assert_eq!( long_input.hint(), Some("keep raw path input at or below 16384 bytes") ); } } ================================================ FILE: packages/engine/src/common/identity.rs ================================================ use std::borrow::Borrow; use std::fmt; use std::ops::Deref; use crate::LixError; use serde::{Deserialize, Deserializer, Serialize, Serializer}; macro_rules! canonical_identity_type { ($name:ident, $label:literal) => { #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)] pub struct $name(String); impl $name { pub fn new(value: impl Into) -> Result { let value = value.into(); validate_non_empty_identity_value($label, value).map(Self) } pub fn as_str(&self) -> &str { &self.0 } pub fn into_inner(self) -> String { self.0 } } impl TryFrom for $name { type Error = LixError; fn try_from(value: String) -> Result { Self::new(value) } } impl TryFrom<&str> for $name { type Error = LixError; fn try_from(value: &str) -> Result { Self::new(value) } } impl From<$name> for String { fn from(value: $name) -> Self { value.0 } } impl Deref for $name { type Target = str; fn deref(&self) -> &Self::Target { self.0.as_str() } } impl AsRef for $name { fn as_ref(&self) -> &str { self.0.as_str() } } impl Borrow for $name { fn borrow(&self) -> &str { self.0.as_str() } } impl fmt::Display for $name { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { self.0.fmt(f) } } impl PartialEq<&str> for $name { fn eq(&self, other: &&str) -> bool { self.0 == *other } } impl PartialEq<$name> for &str { fn eq(&self, other: &$name) -> bool { *self == other.0 } } impl Serialize for $name { fn serialize(&self, serializer: S) -> Result where S: Serializer, { serializer.serialize_str(&self.0) } } impl<'de> Deserialize<'de> for $name { fn deserialize(deserializer: D) -> Result where D: Deserializer<'de>, { let value = String::deserialize(deserializer)?; Self::new(value).map_err(serde::de::Error::custom) } } }; } canonical_identity_type!(EntityId, "entity_id"); canonical_identity_type!(FileId, "file_id"); canonical_identity_type!(VersionId, "version_id"); canonical_identity_type!(CanonicalSchemaKey, "schema_key"); canonical_identity_type!(CanonicalPluginKey, "plugin_key"); pub(crate) fn validate_non_empty_identity_value( label: &str, value: impl Into, ) -> Result { let value = value.into(); if value.is_empty() { return Err(LixError::new( LixError::CODE_INVALID_PARAM, format!("{label} must be non-empty"), )); } Ok(value) } pub(crate) fn json_pointer_get<'a>( value: &'a serde_json::Value, pointer: &[String], ) -> Option<&'a serde_json::Value> { let mut current = value; for segment in pointer { match current { serde_json::Value::Object(object) => current = object.get(segment)?, serde_json::Value::Array(array) => { let index = segment.parse::().ok()?; current = array.get(index)?; } _ => return None, } } Some(current) } ================================================ FILE: packages/engine/src/common/json_pointer.rs ================================================ use crate::LixError; pub(crate) fn 
parse_json_pointer(pointer: &str) -> Result, LixError> { if pointer.is_empty() { return Ok(Vec::new()); } if !pointer.starts_with('/') { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!("invalid JSON pointer '{pointer}'"), )); } pointer[1..] .split('/') .map(decode_json_pointer_segment) .collect() } pub(crate) fn format_json_pointer(segments: &[String]) -> String { if segments.is_empty() { return String::new(); } format!( "/{}", segments .iter() .map(|segment| segment.replace('~', "~0").replace('/', "~1")) .collect::>() .join("/") ) } pub(crate) fn top_level_property_name(pointer: &str) -> Result, LixError> { if pointer.is_empty() { return Ok(None); } if !pointer.starts_with('/') { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!("invalid JSON pointer '{pointer}'"), )); } let segment = pointer[1..].split('/').next().unwrap_or_default(); Ok(Some(decode_json_pointer_segment(segment)?)) } fn decode_json_pointer_segment(segment: &str) -> Result { let mut out = String::new(); let mut chars = segment.chars(); while let Some(ch) = chars.next() { if ch != '~' { out.push(ch); continue; } match chars.next() { Some('0') => out.push('~'), Some('1') => out.push('/'), _ => { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, "invalid JSON pointer escape", )) } } } Ok(out) } ================================================ FILE: packages/engine/src/common/metadata.rs ================================================ use crate::LixError; pub(crate) fn parse_row_metadata( value: &str, context: impl AsRef, ) -> Result { let metadata = parse_row_metadata_value(value, context)?; Ok(serde_json::to_string(&metadata).expect("serde_json::Value metadata serializes")) } pub(crate) fn parse_row_metadata_value( value: &str, context: impl AsRef, ) -> Result { let metadata = serde_json::from_str::(value).map_err(|error| { LixError::new( "LIX_ERROR_INVALID_JSON", format!("{} metadata is invalid JSON: {error}", context.as_ref()), ) })?; validate_row_metadata(&metadata, context)?; Ok(metadata) } pub(crate) fn validate_row_metadata( metadata: &serde_json::Value, context: impl AsRef, ) -> Result<(), LixError> { if metadata.is_object() { return Ok(()); } Err(LixError::new( LixError::CODE_SCHEMA_VALIDATION, format!("{} metadata must be a JSON object", context.as_ref()), )) } pub(crate) fn serialize_row_metadata(metadata: &String) -> String { metadata.clone() } ================================================ FILE: packages/engine/src/common/mod.rs ================================================ pub(crate) mod error; pub(crate) mod fingerprint; pub(crate) mod fs_path; pub(crate) mod identity; pub(crate) mod json_pointer; pub(crate) mod metadata; pub(crate) mod types; pub(crate) mod wire; pub use error::LixError; pub(crate) use fingerprint::stable_content_fingerprint_hex; pub(crate) use fs_path::{ directory_ancestor_paths, directory_name_from_path, normalize_directory_path, normalize_path_segment, parent_directory_path, ParsedFilePath, }; pub(crate) use identity::{json_pointer_get, validate_non_empty_identity_value}; pub use identity::{CanonicalPluginKey, CanonicalSchemaKey, EntityId, FileId, VersionId}; pub(crate) use json_pointer::{format_json_pointer, parse_json_pointer, top_level_property_name}; pub(crate) use metadata::{ parse_row_metadata, parse_row_metadata_value, serialize_row_metadata, validate_row_metadata, }; pub use types::{LixNotice, NullableKeyFilter, SqlQueryResult, Value, WriteReceipt}; pub use wire::{WireQueryResult, WireValue}; 
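// Illustrative sketch (not part of the original module): crate-internal callers
// are expected to use the re-exports above rather than reaching into submodules,
// for example:
//
//     use crate::common::{format_json_pointer, parse_json_pointer, LixError};
//
//     fn renormalize_pointer(pointer: &str) -> Result<String, LixError> {
//         // "/a~1b/0" decodes to ["a/b", "0"] and re-encodes to "/a~1b/0".
//         Ok(format_json_pointer(&parse_json_pointer(pointer)?))
//     }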
================================================ FILE: packages/engine/src/common/types.rs ================================================ use std::ops::Deref; #[derive(Debug, Clone, PartialEq, serde::Serialize, serde::Deserialize)] pub enum Value { Null, Boolean(bool), Integer(i64), Real(f64), Text(String), Json(serde_json::Value), Blob(Vec), } #[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)] pub enum NullableKeyFilter { Any, Null, Value(T), } impl Default for NullableKeyFilter { fn default() -> Self { Self::Any } } impl NullableKeyFilter { pub fn is_any(&self) -> bool { matches!(self, Self::Any) } pub fn as_value(&self) -> Option<&T> { match self { Self::Value(value) => Some(value), Self::Any | Self::Null => None, } } pub fn as_ref(&self) -> NullableKeyFilter<&T> { match self { Self::Any => NullableKeyFilter::Any, Self::Null => NullableKeyFilter::Null, Self::Value(value) => NullableKeyFilter::Value(value), } } pub fn from_nullable(value: Option) -> Self { match value { Some(value) => Self::Value(value), None => Self::Null, } } } impl NullableKeyFilter where T: Deref, { pub fn as_deref(&self) -> NullableKeyFilter<&T::Target> { match self { Self::Any => NullableKeyFilter::Any, Self::Null => NullableKeyFilter::Null, Self::Value(value) => NullableKeyFilter::Value(value.deref()), } } } impl NullableKeyFilter { pub fn matches(&self, candidate: Option<&T>) -> bool { match self { Self::Any => true, Self::Null => candidate.is_none(), Self::Value(expected) => candidate == Some(expected), } } } #[derive(Debug, Clone, PartialEq, serde::Serialize, serde::Deserialize)] pub struct SqlQueryResult { pub rows: Vec>, #[serde(default)] pub columns: Vec, #[serde(default)] pub notices: Vec, } #[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)] pub struct LixNotice { pub code: String, pub message: String, #[serde(default, skip_serializing_if = "Option::is_none")] pub hint: Option, } #[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize, Default)] pub struct WriteReceipt { #[serde(default, skip_serializing_if = "Option::is_none")] pub state_commit_sequence: Option, } impl WriteReceipt { pub fn is_empty(&self) -> bool { self.state_commit_sequence.is_none() } } ================================================ FILE: packages/engine/src/common/wire.rs ================================================ use crate::{LixError, LixNotice, SqlQueryResult, Value}; use base64::Engine as _; use serde::{Deserialize, Serialize}; #[derive(Debug, Clone, PartialEq, Serialize, Deserialize)] #[serde(tag = "kind", rename_all = "lowercase")] pub enum WireValue { Null { value: () }, Bool { value: bool }, Int { value: i64 }, Float { value: f64 }, Text { value: String }, Json { value: serde_json::Value }, Blob { base64: String }, } #[derive(Debug, Clone, PartialEq, Serialize, Deserialize)] pub struct WireQueryResult { pub rows: Vec>, #[serde(default)] pub columns: Vec, #[serde(default)] pub notices: Vec, } impl WireValue { pub fn try_from_engine(value: &Value) -> Result { match value { Value::Null => Ok(Self::Null { value: () }), Value::Boolean(value) => Ok(Self::Bool { value: *value }), Value::Integer(value) => Ok(Self::Int { value: *value }), Value::Real(value) => { if !value.is_finite() { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "cannot encode non-finite float value to wire format".to_string(), hint: None, details: None, }); } Ok(Self::Float { value: *value }) } Value::Text(value) => Ok(Self::Text { value: value.clone(), }), 
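// JSON values cross the wire structurally (cloned as serde_json::Value), while
// blob values in the arm below are base64-encoded with the STANDARD alphabet so
// the wire representation stays plain JSON.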
Value::Json(value) => Ok(Self::Json { value: value.clone(), }), Value::Blob(value) => Ok(Self::Blob { base64: base64::engine::general_purpose::STANDARD.encode(value), }), } } pub fn try_into_engine(self) -> Result { match self { Self::Null { .. } => Ok(Value::Null), Self::Bool { value } => Ok(Value::Boolean(value)), Self::Int { value } => Ok(Value::Integer(value)), Self::Float { value } => { if !value.is_finite() { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "cannot decode non-finite float value from wire format" .to_string(), hint: None, details: None, }); } Ok(Value::Real(value)) } Self::Text { value } => Ok(Value::Text(value)), Self::Json { value } => Ok(Value::Json(value)), Self::Blob { base64 } => { let decoded = base64::engine::general_purpose::STANDARD .decode(base64.as_bytes()) .map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("failed to decode wire blob base64: {error}"), hint: None, details: None, })?; Ok(Value::Blob(decoded)) } } } } impl WireQueryResult { pub fn try_from_engine(result: &SqlQueryResult) -> Result { let mut rows = Vec::with_capacity(result.rows.len()); for row in &result.rows { let mut wire_row = Vec::with_capacity(row.len()); for value in row { wire_row.push(WireValue::try_from_engine(value)?); } rows.push(wire_row); } Ok(Self { rows, columns: result.columns.clone(), notices: result.notices.clone(), }) } pub fn try_into_engine(self) -> Result { let mut rows = Vec::with_capacity(self.rows.len()); for row in self.rows { let mut engine_row = Vec::with_capacity(row.len()); for value in row { engine_row.push(value.try_into_engine()?); } rows.push(engine_row); } Ok(SqlQueryResult { rows, columns: self.columns, notices: self.notices, }) } } #[cfg(test)] mod tests { use super::{WireQueryResult, WireValue}; use crate::{LixNotice, SqlQueryResult, Value}; use serde_json::json; #[test] fn value_roundtrip_preserves_all_variants() { let original = vec![ Value::Null, Value::Boolean(true), Value::Integer(42), Value::Real(1.5), Value::Text("hello".to_string()), Value::Json(json!({"hello": "world"})), Value::Blob(vec![1, 2, 3]), ]; for value in original { let wire = WireValue::try_from_engine(&value).expect("to wire should succeed"); let roundtrip = wire .try_into_engine() .expect("from wire to engine should succeed"); assert_eq!(roundtrip, value); } } #[test] fn query_result_roundtrip_preserves_rows_and_columns() { let original = SqlQueryResult { rows: vec![ vec![ Value::Integer(1), Value::Text("a".to_string()), Value::Blob(vec![0x41, 0x42]), ], vec![Value::Null, Value::Boolean(false), Value::Real(2.5)], ], columns: vec!["i".to_string(), "t".to_string(), "b".to_string()], notices: vec![LixNotice { code: "LIX_TEST_NOTICE".to_string(), message: "test notice".to_string(), hint: Some("test hint".to_string()), }], }; let wire = WireQueryResult::try_from_engine(&original).expect("to wire should succeed"); let roundtrip = wire .try_into_engine() .expect("from wire to engine should succeed"); assert_eq!(roundtrip, original); } #[test] fn canonical_json_uses_lowercase_kinds_only() { let wire = WireQueryResult { rows: vec![vec![ WireValue::Null { value: () }, WireValue::Bool { value: true }, WireValue::Int { value: 1 }, WireValue::Float { value: 1.5 }, WireValue::Text { value: "hello".to_string(), }, WireValue::Json { value: json!({"hello": "world"}), }, WireValue::Blob { base64: "AQI=".to_string(), }, ]], columns: vec!["a".to_string()], notices: Vec::new(), }; let serialized = serde_json::to_string(&wire).expect("wire query 
result should serialize to json"); assert!(serialized.contains("\"kind\":\"null\"")); assert!(serialized.contains("\"kind\":\"bool\"")); assert!(serialized.contains("\"kind\":\"int\"")); assert!(serialized.contains("\"kind\":\"float\"")); assert!(serialized.contains("\"kind\":\"text\"")); assert!(serialized.contains("\"kind\":\"json\"")); assert!(serialized.contains("\"kind\":\"blob\"")); assert!(!serialized.contains("\"kind\":\"Null\"")); assert!(!serialized.contains("\"kind\":\"Bool\"")); assert!(!serialized.contains("\"kind\":\"Integer\"")); assert!(!serialized.contains("\"kind\":\"Real\"")); assert!(!serialized.contains("\"kind\":\"Text\"")); assert!(!serialized.contains("\"kind\":\"Json\"")); assert!(!serialized.contains("\"kind\":\"Blob\"")); } #[test] fn null_shape_is_explicitly_canonical() { let value = WireValue::Null { value: () }; let json = serde_json::to_value(value).expect("wire value should serialize"); assert_eq!(json, json!({ "kind": "null", "value": null })); } } ================================================ FILE: packages/engine/src/domain.rs ================================================ use crate::entity_identity::EntityIdentity; use crate::live_state::MaterializedLiveStateRow; use crate::{NullableKeyFilter, GLOBAL_VERSION_ID}; /// Validation/storage coordinate for repository facts. /// /// A domain is the complete scope in which a row identity is meaningful: /// version, durability, and file scope. Projection methods on this type are /// deliberately named so callers cannot silently erase part of the coordinate. #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)] pub(crate) struct Domain { version_id: String, untracked: bool, file_scope: DomainFileScope, } impl Domain { pub(crate) fn exact_file( version_id: impl Into, untracked: bool, file_id: Option, ) -> Self { Self { version_id: version_id.into(), untracked, file_scope: DomainFileScope::Exact(file_id), } } pub(crate) fn any_file(version_id: impl Into, untracked: bool) -> Self { Self { version_id: version_id.into(), untracked, file_scope: DomainFileScope::Any, } } pub(crate) fn schema_catalog(version_id: impl Into, untracked: bool) -> Self { Self::any_file(version_id, untracked) } pub(crate) fn for_live_row(row: &MaterializedLiveStateRow) -> Self { Self::exact_file(row.version_id.clone(), row.untracked, row.file_id.clone()) } pub(crate) fn schema_catalog_domain(&self) -> Self { // Schema definitions are version + durability scoped. They are not // owned by a data file, so schema catalog lookup deliberately erases // row file scope into `Any`. 
Self::schema_catalog(self.version_id.clone(), self.untracked) } pub(crate) fn version_id(&self) -> &str { &self.version_id } pub(crate) fn untracked(&self) -> bool { self.untracked } pub(crate) fn fingerprint_component(&self) -> String { let file_scope = match &self.file_scope { DomainFileScope::Any => "*".to_string(), DomainFileScope::Exact(Some(file_id)) => format!("={file_id}"), DomainFileScope::Exact(None) => "=".to_string(), }; format!("{}|{}|{}", self.version_id, self.untracked, file_scope) } #[cfg(test)] pub(crate) fn file_scope(&self) -> &DomainFileScope { &self.file_scope } pub(crate) fn is_exact_file(&self, file_id: &Option) -> bool { matches!(&self.file_scope, DomainFileScope::Exact(exact) if exact == file_id) } pub(crate) fn with_untracked(&self, untracked: bool) -> Self { Self { version_id: self.version_id.clone(), untracked, file_scope: self.file_scope.clone(), } } pub(crate) fn with_file_scope(&self, file_scope: DomainFileScope) -> Self { Self { version_id: self.version_id.clone(), untracked: self.untracked, file_scope, } } pub(crate) fn with_exact_file_scope(&self, file_id: Option) -> Self { self.with_file_scope(DomainFileScope::Exact(file_id)) } pub(crate) fn file_filters(&self) -> Vec> { match &self.file_scope { DomainFileScope::Any => Vec::new(), DomainFileScope::Exact(file_id) => vec![nullable_filter_from_option(file_id)], } } pub(crate) fn contains(&self, row: &MaterializedLiveStateRow) -> bool { row.version_id == self.version_id && row.untracked == self.untracked && committed_row_is_exact_version_scoped(row, &self.version_id) && match &self.file_scope { DomainFileScope::Any => true, DomainFileScope::Exact(file_id) => row.file_id == *file_id, } } fn reachable_target_domains(&self) -> Vec { if self.untracked { vec![self.with_untracked(false), self.clone()] } else { vec![self.clone()] } } fn source_domains_that_can_reach(&self) -> Vec { if self.untracked { vec![self.clone()] } else { vec![self.clone(), self.with_untracked(true)] } } fn can_reach(&self, target: &Self) -> bool { self.version_id == target.version_id && self.file_scope == target.file_scope && (self.untracked || !target.untracked) } pub(crate) fn schema_catalog_domains(&self) -> Vec { self.schema_catalog_domain().reachable_target_domains() } pub(crate) fn fk_target_domains(&self) -> Vec { self.reachable_target_domains() } pub(crate) fn fk_source_domains_for_target(&self) -> Vec { self.source_domains_that_can_reach() } pub(crate) fn file_owner_domains(&self) -> Vec { self.reachable_target_domains() } pub(crate) fn directory_parent_domains(&self) -> Vec { self.reachable_target_domains() } pub(crate) fn version_descriptor_domains_for_ref_delete(&self) -> Vec { self.source_domains_that_can_reach() } pub(crate) fn file_scoped_row_domains_for_file_descriptor_delete(&self) -> Vec { self.source_domains_that_can_reach() } pub(crate) fn validation_scope_contains_constraint_domain(&self, target: &Self) -> bool { self.can_reach(target) } pub(crate) fn tombstone_domain_affects_validation_scope( &self, validation_scope: &Self, ) -> bool { self.can_reach(validation_scope) } } #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)] pub(crate) enum DomainFileScope { Any, Exact(Option), } #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)] pub(crate) struct DomainRowIdentity { domain: Domain, schema_key: String, entity_id: EntityIdentity, } impl DomainRowIdentity { pub(crate) fn new( domain: Domain, schema_key: impl Into, entity_id: EntityIdentity, ) -> Self { Self { domain, schema_key: schema_key.into(), entity_id, 
} } pub(crate) fn from_live_row(row: &MaterializedLiveStateRow) -> Self { Self::new( Domain::for_live_row(row), row.schema_key.clone(), row.entity_id.clone(), ) } pub(crate) fn in_domain( domain: Domain, schema_key: impl Into, entity_id: EntityIdentity, ) -> Self { Self::new(domain, schema_key, entity_id) } #[cfg(test)] pub(crate) fn exact( version_id: impl Into, untracked: bool, file_id: Option, schema_key: impl Into, entity_id: EntityIdentity, ) -> Self { Self::new( Domain::exact_file(version_id, untracked, file_id), schema_key, entity_id, ) } pub(crate) fn with_domain(&self, domain: Domain) -> Self { Self { domain, schema_key: self.schema_key.clone(), entity_id: self.entity_id.clone(), } } pub(crate) fn domain(&self) -> &Domain { &self.domain } pub(crate) fn schema_key(&self) -> &str { &self.schema_key } pub(crate) fn schema_key_owned(&self) -> String { self.schema_key.clone() } pub(crate) fn entity_id(&self) -> &EntityIdentity { &self.entity_id } pub(crate) fn entity_id_owned(&self) -> EntityIdentity { self.entity_id.clone() } pub(crate) fn matches_parts( &self, domain: &Domain, schema_key: &str, entity_id: &EntityIdentity, ) -> bool { &self.domain == domain && self.schema_key == schema_key && &self.entity_id == entity_id } pub(crate) fn reachable_target_identities(&self) -> Vec { self.domain .fk_target_domains() .into_iter() .map(|domain| self.with_domain(domain)) .collect() } pub(crate) fn source_identities_that_can_reach(&self) -> Vec { self.domain .fk_source_domains_for_target() .into_iter() .map(|domain| self.with_domain(domain)) .collect() } } #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)] pub(crate) struct DomainSchemaIdentity { domain: Domain, schema_key: String, } impl DomainSchemaIdentity { pub(crate) fn new(domain: Domain, schema_key: impl Into) -> Self { Self { domain: domain.schema_catalog_domain(), schema_key: schema_key.into(), } } pub(crate) fn fingerprint_component(&self) -> String { format!( "{}|{}", self.domain.fingerprint_component(), self.schema_key ) } } pub(crate) fn committed_row_is_exact_version_scoped( row: &MaterializedLiveStateRow, version_id: &str, ) -> bool { row.version_id == version_id && row.global == (row.version_id == GLOBAL_VERSION_ID) } fn nullable_filter_from_option(value: &Option) -> NullableKeyFilter { match value { Some(value) => NullableKeyFilter::Value(value.clone()), None => NullableKeyFilter::Null, } } ================================================ FILE: packages/engine/src/engine.rs ================================================ use std::sync::Arc; use crate::binary_cas::BinaryCasContext; use crate::catalog::CatalogContext; use crate::commit_graph::CommitGraphContext; use crate::commit_store::CommitStoreContext; use crate::entity_identity::EntityIdentity; use crate::init::InitReceipt; use crate::live_state::LiveStateContext; use crate::live_state::LiveStateRowRequest; use crate::session::SessionContext; use crate::storage::{StorageContext, StorageWriteSet}; use crate::tracked_state::TrackedStateContext; use crate::untracked_state::UntrackedStateContext; use crate::version::{VersionContext, VersionRefReader}; use crate::GLOBAL_VERSION_ID; use crate::{Backend, LixError, NullableKeyFilter}; #[derive(Clone)] pub struct Engine { storage: StorageContext, tracked_state: Arc, live_state: Arc, version_ctx: Arc, binary_cas: Arc, commit_store: Arc, catalog_context: Arc, } impl Engine { /// Seeds an empty backend with the engine repository bootstrap facts. 
/// /// Initialization is a storage lifecycle operation, separate from runtime /// construction. Call this before `Engine::new(...)` for a brand-new /// backend. pub async fn initialize( backend: Box, ) -> Result { let backend: Arc = Arc::from(backend); let storage = StorageContext::new(backend); let commit_store = CommitStoreContext::new(); crate::init::initialize( storage, &commit_store, &TrackedStateContext::new(), &UntrackedStateContext::new(), ) .await } /// Creates a clean DataFusion-first engine over an initialized backend. /// /// SessionContext, execution, and transaction overlays are layered below the /// instance instead of being hidden behind a legacy boot path. pub async fn new(backend: Box) -> Result { let backend: Arc = Arc::from(backend); let storage = StorageContext::new(backend); let tracked_state = Arc::new(TrackedStateContext::new()); let untracked_state = Arc::new(UntrackedStateContext::new()); let commit_store = Arc::new(CommitStoreContext::new()); let commit_graph = CommitGraphContext::new(); let live_state = Arc::new(LiveStateContext::new( tracked_state.as_ref().clone(), *untracked_state, commit_graph, )); let version_ctx = Arc::new(VersionContext::new(Arc::clone(&untracked_state))); assert_initialized(storage.clone(), live_state.as_ref()).await?; // SessionContext::execute later projects these stable state contexts into one // execution-scoped SQL context, optionally wrapped by a transaction // overlay for writes. Ok(Self { binary_cas: Arc::new(BinaryCasContext::new()), commit_store, storage, tracked_state, live_state, version_ctx, catalog_context: Arc::new(CatalogContext::new()), }) } pub(crate) fn storage(&self) -> StorageContext { self.storage.clone() } /// Loads the current commit head for a version. /// /// This is the public engine-level form of the typed `version_ref` context: /// callers should not need to know that version heads are represented as /// untracked `lix_version_ref` rows in live_state. pub async fn load_version_head_commit_id( &self, version_id: &str, ) -> Result, LixError> { let mut transaction = self.storage.begin_read_transaction().await?; let result = self .version_ctx .ref_reader(transaction.as_mut()) .load_head_commit_id(version_id) .await; match result { Ok(result) => { transaction.rollback().await?; Ok(result) } Err(error) => { let _ = transaction.rollback().await; Err(error) } } } pub async fn open_session( &self, active_version_id: impl Into, ) -> Result { SessionContext::open( active_version_id.into(), self.storage(), Arc::clone(&self.live_state), Arc::clone(&self.tracked_state), Arc::clone(&self.binary_cas), Arc::clone(&self.commit_store), Arc::clone(&self.version_ctx), Arc::clone(&self.catalog_context), ) .await } pub async fn open_workspace_session(&self) -> Result { SessionContext::open_workspace( self.storage(), Arc::clone(&self.live_state), Arc::clone(&self.tracked_state), Arc::clone(&self.binary_cas), Arc::clone(&self.commit_store), Arc::clone(&self.version_ctx), Arc::clone(&self.catalog_context), ) .await } /// Materializes the tracked serving projection root for one version from commit_store. /// /// This is intentionally an engine-level operation: callers should not need /// to know which KV namespaces back changelog, commit graph, or tracked /// state. The current version head is read from the live-state facade so /// materialization uses the same moving-ref visibility as normal execution. 
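    ///
    /// Illustrative usage (hypothetical version id, not a doctest):
    ///
    /// ```ignore
    /// // Rebuild the tracked serving projection root for one version.
    /// // Returns a version-not-found error when the version has no head commit.
    /// engine.rebuild_tracked_state_for_version("version_1").await?;
    /// ```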
pub async fn rebuild_tracked_state_for_version( &self, version_id: &str, ) -> Result<(), LixError> { let head_commit_id = self .load_version_head_commit_id(version_id) .await? .ok_or_else(|| { LixError::version_not_found( version_id.to_string(), "rebuild_tracked_state_for_version", "target", ) })?; let storage = self.storage(); let mut transaction = storage.begin_write_transaction().await?; let mut writes = StorageWriteSet::new(); let materialize_result = self .tracked_state .materializer( transaction.as_mut(), &mut writes, self.commit_store.as_ref(), ) .materialize_root_at(&head_commit_id) .await; if let Err(error) = materialize_result { let _ = transaction.rollback().await; return Err(error); } if let Err(error) = writes.apply(&mut transaction.as_mut()).await { let _ = transaction.rollback().await; return Err(error); } transaction.commit().await } } async fn assert_initialized( storage: StorageContext, live_state: &LiveStateContext, ) -> Result<(), LixError> { let mut transaction = storage.begin_read_transaction().await?; let reader = live_state.reader(transaction.as_mut()); let result = reader .load_row(&LiveStateRowRequest { schema_key: "lix_key_value".to_string(), version_id: GLOBAL_VERSION_ID.to_string(), entity_id: EntityIdentity::single("lix_id"), file_id: NullableKeyFilter::Null, }) .await; let initialized = match result { Ok(row) => { transaction.rollback().await?; row.is_some() } Err(error) => { let _ = transaction.rollback().await; return Err(error); } }; if initialized { return Ok(()); } Err(LixError::new( "LIX_ERROR_NOT_INITIALIZED", "engine backend is not initialized; call Engine::initialize(...) before Engine::new(...)", )) } ================================================ FILE: packages/engine/src/entity_identity.rs ================================================ use serde_json::Value as JsonValue; use crate::common::json_pointer_get; use crate::LixError; /// Logical entity identity derived from a schema primary key. /// /// Keep this as typed tuple data inside engine. SQL `entity_id` surfaces /// should use the JSON-array projection. 
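///
/// Illustrative sketch of the projection (values are hypothetical):
/// `EntityIdentity::single("user_1")` encodes to the SQL `entity_id` text
/// `["user_1"]` via `as_json_array_text()`, and a composite key derived with
/// `from_primary_key_paths(...)` encodes to e.g. `["namespace","42"]`.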
#[derive( Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash, serde::Serialize, serde::Deserialize, )] pub(crate) struct EntityIdentity { pub(crate) parts: Vec, } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) enum EntityIdentityError { EmptyPrimaryKey, EmptyPrimaryKeyPath { index: usize }, EmptyPrimaryKeyValue { index: usize }, MissingPrimaryKeyValue { index: usize }, UnsupportedPrimaryKeyValue { index: usize }, InvalidEncodedEntityIdentity, } impl std::fmt::Display for EntityIdentityError { fn fmt(&self, formatter: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match self { Self::EmptyPrimaryKey => { write!(formatter, "primary key must contain at least one path") } Self::EmptyPrimaryKeyPath { index } => { write!( formatter, "primary-key path at index {index} must not be empty" ) } Self::EmptyPrimaryKeyValue { index } => { write!( formatter, "primary-key value at index {index} must not be empty" ) } Self::MissingPrimaryKeyValue { index } => { write!(formatter, "primary-key value at index {index} is missing") } Self::UnsupportedPrimaryKeyValue { index } => write!( formatter, "primary-key value at index {index} must be a JSON string" ), Self::InvalidEncodedEntityIdentity => { write!( formatter, "encoded entity identity must be a non-empty JSON array of strings" ) } } } } impl EntityIdentity { pub(crate) fn single(value: impl Into) -> Self { Self { parts: vec![value.into()], } } #[cfg(test)] pub(crate) fn tuple(parts: Vec) -> Result { if parts.is_empty() { return Err(EntityIdentityError::EmptyPrimaryKey); } if let Some((index, _)) = parts.iter().enumerate().find(|(_, part)| part.is_empty()) { return Err(EntityIdentityError::EmptyPrimaryKeyValue { index }); } Ok(Self { parts }) } pub(crate) fn from_primary_key_paths( snapshot: &JsonValue, primary_key_paths: &[Vec], ) -> Result { if primary_key_paths.is_empty() { return Err(EntityIdentityError::EmptyPrimaryKey); } let mut parts = Vec::with_capacity(primary_key_paths.len()); for (index, path) in primary_key_paths.iter().enumerate() { if path.is_empty() { return Err(EntityIdentityError::EmptyPrimaryKeyPath { index }); } let Some(value) = json_pointer_get(snapshot, path) else { return Err(EntityIdentityError::MissingPrimaryKeyValue { index }); }; parts.push(string_part_from_json_value(value, index)?); } Ok(Self { parts }) } pub(crate) fn as_json_array_value(&self) -> Result { if self.parts.is_empty() { return Err(LixError::unknown( "entity identity must contain at least one primary-key part", )); } Ok(JsonValue::Array( self.parts .iter() .map(|part| JsonValue::String(part.clone())) .collect(), )) } pub(crate) fn as_json_array_text(&self) -> Result { serde_json::to_string(&self.as_json_array_value()?).map_err(|error| { LixError::unknown(format!("failed to encode entity id as JSON: {error}")) }) } pub(crate) fn as_single_string(&self) -> Result<&str, LixError> { if self.parts.is_empty() { return Err(LixError::unknown( "entity identity must contain at least one primary-key part", )); } if let [value] = self.parts.as_slice() { return Ok(value.as_str()); } Err(LixError::unknown( "entity identity is not a single string primary-key tuple", )) } pub(crate) fn as_single_string_owned(&self) -> Result { Ok(self.as_single_string()?.to_owned()) } pub(crate) fn from_json_array_text(entity_id: &str) -> Result { let value = serde_json::from_str::(entity_id) .map_err(|_| EntityIdentityError::InvalidEncodedEntityIdentity)?; Self::from_json_array_value(&value) } pub(crate) fn from_json_array_value( entity_id: &JsonValue, ) -> Result { let 
JsonValue::Array(values) = entity_id else { return Err(EntityIdentityError::InvalidEncodedEntityIdentity); }; if values.is_empty() { return Err(EntityIdentityError::EmptyPrimaryKey); } let mut parts = Vec::with_capacity(values.len()); for (index, value) in values.iter().enumerate() { parts.push(string_part_from_json_value(value, index)?); } Ok(Self { parts }) } } fn string_part_from_json_value( value: &JsonValue, index: usize, ) -> Result { match value { JsonValue::String(value) if value.is_empty() => { Err(EntityIdentityError::EmptyPrimaryKeyValue { index }) } JsonValue::String(value) => Ok(value.clone()), _ => Err(EntityIdentityError::UnsupportedPrimaryKeyValue { index }), } } pub(crate) fn canonical_json_text(value: &JsonValue) -> serde_json::Result { serde_json::to_string(&canonical_json_value(value)) } fn canonical_json_value(value: &JsonValue) -> JsonValue { match value { JsonValue::Array(values) => { JsonValue::Array(values.iter().map(canonical_json_value).collect()) } JsonValue::Object(object) => { let mut entries = object.iter().collect::>(); entries.sort_by(|(left, _), (right, _)| left.cmp(right)); let mut canonical = serde_json::Map::new(); for (key, value) in entries { canonical.insert(key.clone(), canonical_json_value(value)); } JsonValue::Object(canonical) } _ => value.clone(), } } #[cfg(test)] mod tests { use serde_json::json; use super::*; #[test] fn single_string_identity_projects_to_single_string() { let identity = EntityIdentity::single("plain-id"); assert_eq!( identity.as_single_string().expect("projection should work"), "plain-id" ); } #[test] fn single_identity_projects_to_json_array_entity_id() { let identity = EntityIdentity::single("plain-id"); assert_eq!( identity .as_json_array_text() .expect("projection should work"), "[\"plain-id\"]" ); } #[test] fn composite_identity_projects_to_json_array_entity_id() { let identity = EntityIdentity::tuple(vec!["namespace".to_string(), "42".to_string()]) .expect("tuple identity"); assert_eq!( identity .as_json_array_text() .expect("projection should work"), "[\"namespace\",\"42\"]" ); } #[test] fn entity_id_json_array_roundtrips() { let identity = EntityIdentity::tuple(vec!["namespace".to_string(), "42".to_string()]) .expect("tuple identity"); let encoded = identity .as_json_array_text() .expect("projection should work"); assert_eq!( EntityIdentity::from_json_array_text(&encoded).expect("decode should work"), identity ); } #[test] fn entity_id_json_array_rejects_empty_string_part() { assert_eq!( EntityIdentity::from_json_array_text("[\"\"]"), Err(EntityIdentityError::EmptyPrimaryKeyValue { index: 0 }) ); } #[test] fn tuple_rejects_empty_string_part() { assert_eq!( EntityIdentity::tuple(vec!["namespace".to_string(), "".to_string()]), Err(EntityIdentityError::EmptyPrimaryKeyValue { index: 1 }) ); } #[test] fn entity_id_json_array_does_not_collide_on_delimiter_like_values() { let left = EntityIdentity::tuple(vec!["a~b".to_string(), "c".to_string()]) .expect("left tuple identity"); let right = EntityIdentity::tuple(vec!["a".to_string(), "b~c".to_string()]) .expect("right tuple identity"); assert_ne!( left.as_json_array_text().expect("left should encode"), right.as_json_array_text().expect("right should encode") ); } #[test] fn composite_identity_rejects_single_string_projection() { let identity = EntityIdentity::tuple(vec!["namespace".to_string(), "42".to_string()]) .expect("tuple identity"); assert!(identity.as_single_string().is_err()); } #[test] fn composite_identity_does_not_collide_on_delimiter_like_values() { let left = 
EntityIdentity::tuple(vec!["a~b".to_string(), "1".to_string()]) .expect("left tuple identity"); let right = EntityIdentity::tuple(vec!["a".to_string(), "b~1".to_string()]) .expect("right tuple identity"); assert_ne!( left.as_json_array_text().expect("left should encode"), right.as_json_array_text().expect("right should encode") ); } #[test] fn from_primary_key_paths_derives_ordered_parts() { let snapshot = json!({ "namespace": "messages", "locale": "en" }); let identity = EntityIdentity::from_primary_key_paths( &snapshot, &[vec!["namespace".to_string()], vec!["locale".to_string()]], ) .expect("primary key should derive"); assert_eq!( identity, EntityIdentity { parts: vec!["messages".to_string(), "en".to_string()], } ); } #[test] fn entity_id_json_array_rejects_non_string_parts() { assert_eq!( EntityIdentity::from_json_array_text("[\"namespace\",42]"), Err(EntityIdentityError::UnsupportedPrimaryKeyValue { index: 1 }) ); assert_eq!( EntityIdentity::from_json_array_text("[\"namespace\",null]"), Err(EntityIdentityError::UnsupportedPrimaryKeyValue { index: 1 }) ); assert_eq!( EntityIdentity::from_json_array_text("[[\"nested\"]]"), Err(EntityIdentityError::UnsupportedPrimaryKeyValue { index: 0 }) ); } #[test] fn from_primary_key_paths_rejects_non_string_parts() { let snapshot = json!({ "namespace": "messages", "index": 7 }); assert_eq!( EntityIdentity::from_primary_key_paths( &snapshot, &[vec!["namespace".to_string()], vec!["index".to_string()],], ), Err(EntityIdentityError::UnsupportedPrimaryKeyValue { index: 1 }) ); } #[test] fn from_primary_key_paths_rejects_empty_string_parts() { let snapshot = json!({ "namespace": "messages", "id": "" }); assert_eq!( EntityIdentity::from_primary_key_paths( &snapshot, &[vec!["namespace".to_string()], vec!["id".to_string()],], ), Err(EntityIdentityError::EmptyPrimaryKeyValue { index: 1 }) ); } #[test] fn from_primary_key_paths_rejects_nested_json_parts() { let snapshot = json!({ "entity_id": ["welcome.title", "en"], "schema_key": "message" }); assert_eq!( EntityIdentity::from_primary_key_paths( &snapshot, &[ vec!["entity_id".to_string()], vec!["schema_key".to_string()], ], ), Err(EntityIdentityError::UnsupportedPrimaryKeyValue { index: 0 }) ); } #[test] fn from_primary_key_paths_rejects_missing_parts() { let snapshot = json!({ "id": "a" }); assert_eq!( EntityIdentity::from_primary_key_paths(&snapshot, &[vec!["missing".to_string()]]), Err(EntityIdentityError::MissingPrimaryKeyValue { index: 0 }) ); } } ================================================ FILE: packages/engine/src/functions/context.rs ================================================ use crate::functions::{ state, DeterministicFunctionProvider, DeterministicSequence, FunctionProvider, FunctionProviderHandle, SharedFunctionProvider, SystemFunctionProvider, }; use crate::live_state::LiveStateReader; use crate::storage::StorageWriteSet; use crate::LixError; /// Execution-scoped runtime function context. /// /// Lower layers should only receive function providers. This context owns the /// lifecycle at the session/transaction boundary: prepare the right function /// source before execution and persist deterministic sequence progress after /// successful execution. pub(crate) struct FunctionContext { functions: FunctionProviderHandle, bookkeeping_timestamp: String, } impl FunctionContext { /// Prepares the runtime function provider for one execution. /// /// If deterministic mode is absent or disabled, the context uses system /// functions. If enabled, it starts from the persisted sequence + 1. 
pub(crate) async fn prepare(live_state: &dyn LiveStateReader) -> Result { let mode = state::load_mode(live_state).await?; let mut bookkeeping_functions = SystemFunctionProvider; let bookkeeping_timestamp = bookkeeping_functions.timestamp(); if !mode.enabled { return Ok(Self { functions: SharedFunctionProvider::new( Box::new(SystemFunctionProvider) as Box ), bookkeeping_timestamp, }); } let sequence = state::load_sequence(live_state).await?; Ok(Self { functions: SharedFunctionProvider::new(Box::new(DeterministicFunctionProvider::new( sequence.next_sequence(), mode.timestamp_shuffle, )) as Box), bookkeeping_timestamp, }) } /// Returns the engine-owned provider used by SQL and transaction staging. pub(crate) fn provider(&self) -> FunctionProviderHandle { self.functions.clone() } /// Persists deterministic sequence progress if this execution used any. /// /// System functions report no sequence state, so this is a no-op when /// deterministic mode is disabled. pub(crate) async fn stage_persist_if_needed( &self, writes: &mut StorageWriteSet, ) -> Result<(), LixError> { let Some(highest_seen) = self.functions.deterministic_sequence_persist_highest_seen() else { return Ok(()); }; state::stage_sequence( writes, DeterministicSequence { highest_seen }, &self.bookkeeping_timestamp, ) .await } } #[cfg(test)] mod tests { use std::sync::Arc; use crate::backend::testing::UnitTestBackend; use crate::functions::state::{DETERMINISTIC_MODE_KEY, DETERMINISTIC_SEQUENCE_KEY}; use crate::functions::{state::load_sequence, DeterministicSequence}; use crate::live_state::LiveStateContext; use crate::storage::StorageContext; use crate::GLOBAL_VERSION_ID; use super::*; fn live_state_context() -> LiveStateContext { LiveStateContext::new( crate::tracked_state::TrackedStateContext::new(), crate::untracked_state::UntrackedStateContext::new(), crate::commit_graph::CommitGraphContext::new(), ) } #[tokio::test] async fn prepare_uses_system_functions_when_mode_missing() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let live_state = live_state_context(); let reader = live_state.reader(storage.clone()); let context = FunctionContext::prepare(&reader) .await .expect("runtime context should prepare"); assert_eq!( context .provider() .deterministic_sequence_persist_highest_seen(), None ); } #[tokio::test] async fn prepare_starts_deterministic_functions_at_sequence_zero() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let live_state = live_state_context(); crate::test_support::seed_global_version_head(storage.clone()).await; write_key_value( storage.clone(), DETERMINISTIC_MODE_KEY, serde_json::json!({ "enabled": true, }), ) .await; let reader = live_state.reader(storage.clone()); let context = FunctionContext::prepare(&reader) .await .expect("runtime context should prepare"); let functions = context.provider(); assert_eq!( functions.call_uuid_v7(), "01920000-0000-7000-8000-000000000000" ); assert_eq!(functions.call_timestamp(), "1970-01-01T00:00:00.001Z"); assert_eq!( context .provider() .deterministic_sequence_persist_highest_seen(), Some(1) ); } #[tokio::test] async fn prepare_continues_from_persisted_sequence() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let live_state = live_state_context(); crate::test_support::seed_global_version_head(storage.clone()).await; write_key_value( storage.clone(), DETERMINISTIC_MODE_KEY, serde_json::json!({ "enabled": true, }), ) 
.await; write_key_value( storage.clone(), DETERMINISTIC_SEQUENCE_KEY, serde_json::json!(41), ) .await; let reader = live_state.reader(storage.clone()); let context = FunctionContext::prepare(&reader) .await .expect("runtime context should prepare"); let functions = context.provider(); assert_eq!( functions.call_uuid_v7(), "01920000-0000-7000-8000-00000000002a" ); assert_eq!( context .provider() .deterministic_sequence_persist_highest_seen(), Some(42) ); } #[tokio::test] async fn persist_if_needed_writes_sequence_when_deterministic_functions_advanced() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let live_state = live_state_context(); crate::test_support::seed_global_version_head(storage.clone()).await; write_key_value( storage.clone(), DETERMINISTIC_MODE_KEY, serde_json::json!({ "enabled": true, }), ) .await; let context = { let reader = live_state.reader(storage.clone()); FunctionContext::prepare(&reader) .await .expect("runtime context should prepare") }; context.provider().call_uuid_v7(); let mut tx = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); context .stage_persist_if_needed(&mut writes) .await .expect("sequence should stage"); writes .apply(&mut tx.as_mut()) .await .expect("sequence should apply"); tx.commit().await.expect("transaction should commit"); let reader = live_state.reader(storage.clone()); let sequence = load_sequence(&reader).await.expect("sequence should load"); assert_eq!(sequence, DeterministicSequence { highest_seen: 0 }); } #[tokio::test] async fn persist_if_needed_is_noop_for_system_functions() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let live_state = live_state_context(); let reader = live_state.reader(storage.clone()); let context = FunctionContext::prepare(&reader) .await .expect("runtime context should prepare"); let tx = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); context .stage_persist_if_needed(&mut writes) .await .expect("persist should no-op"); assert!(writes.is_empty()); tx.commit().await.expect("transaction should commit"); let reader = live_state.reader(storage.clone()); let sequence = load_sequence(&reader) .await .expect("missing sequence should load"); assert_eq!(sequence, DeterministicSequence::uninitialized()); } async fn write_key_value(storage: StorageContext, key: &str, value: serde_json::Value) { let mut tx = storage .begin_write_transaction() .await .expect("transaction should open"); let snapshot_content = serde_json::to_string(&serde_json::json!({ "key": key, "value": value, })) .expect("snapshot should serialize"); let mut writes = StorageWriteSet::new(); let row = crate::untracked_state::UntrackedStateRow { entity_id: crate::entity_identity::EntityIdentity::single(key), schema_key: "lix_key_value".to_string(), file_id: None, snapshot_content: Some(snapshot_content), metadata: None, created_at: "1970-01-01T00:00:00.000Z".to_string(), updated_at: "1970-01-01T00:00:00.000Z".to_string(), global: true, version_id: GLOBAL_VERSION_ID.to_string(), }; crate::untracked_state::UntrackedStateContext::new() .writer(&mut writes) .stage_rows(std::iter::once(row.as_ref())) .expect("test key-value should stage"); writes .apply(&mut tx.as_mut()) .await .expect("test key-value should apply"); tx.commit().await.expect("transaction should commit"); } } 
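The deterministic-function lifecycle is split across the three files that follow: context.rs prepares the provider from persisted state, deterministic.rs derives values from a counter, and state.rs loads and stages the sequence. As a rough, self-contained illustration of why persisting only the highest sequence value seen is enough to keep ids reproducible across runs, here is a minimal sketch; the names below are hypothetical and this is not engine code:

```rust
// Sketch only: mirrors the idea of resuming a deterministic counter from a
// persisted `highest_seen` value. The real implementation lives in
// packages/engine/src/functions/{context,deterministic,state}.rs.

/// Map a counter to a fixed, v7-shaped UUID string (the low 48 bits become the tail).
fn deterministic_uuid_v7(counter: u64) -> String {
    format!(
        "01920000-0000-7000-8000-{:012x}",
        counter & 0x0000_FFFF_FFFF_FFFF
    )
}

struct SketchProvider {
    next: i64,
    highest_seen: Option<i64>,
}

impl SketchProvider {
    /// Resume one past the persisted highest value (-1 means uninitialized).
    fn resume(persisted_highest_seen: i64) -> Self {
        Self {
            next: persisted_highest_seen + 1,
            highest_seen: None,
        }
    }

    /// Hand out the next counter value and remember it for persistence.
    fn take(&mut self) -> i64 {
        let current = self.next;
        self.next += 1;
        self.highest_seen = Some(current);
        current
    }
}

fn main() {
    // First run: an uninitialized sequence starts at 0.
    let mut first_run = SketchProvider::resume(-1);
    assert_eq!(
        deterministic_uuid_v7(first_run.take() as u64),
        "01920000-0000-7000-8000-000000000000"
    );

    // Persist `highest_seen`; a later run resumes right after it.
    let persisted = first_run.highest_seen.expect("a value was taken");
    let mut second_run = SketchProvider::resume(persisted);
    assert_eq!(
        deterministic_uuid_v7(second_run.take() as u64),
        "01920000-0000-7000-8000-000000000001"
    );
}
```

Persisting a single integer rather than every generated value keeps the bookkeeping row small while still letting any later execution continue the sequence without collisions.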
================================================ FILE: packages/engine/src/functions/deterministic.rs ================================================ use crate::functions::FunctionProvider; const DETERMINISTIC_UUID_COUNTER_MASK: u64 = 0x0000_FFFF_FFFF_FFFF; /// Deterministic function provider for engine execution. /// /// The provider is pure runtime state: it does not load or persist the sequence /// itself. Session/transaction code owns that boundary so tests can decide when /// deterministic state is read and written. #[derive(Debug, Clone)] pub(crate) struct DeterministicFunctionProvider { next_sequence: i64, timestamp_shuffle: bool, highest_seen: Option, } impl DeterministicFunctionProvider { pub(crate) fn new(next_sequence: i64, timestamp_shuffle: bool) -> Self { Self { next_sequence, timestamp_shuffle, highest_seen: None, } } pub(crate) fn highest_seen(&self) -> Option { self.highest_seen } fn take_sequence(&mut self) -> i64 { let current = self.next_sequence; self.next_sequence += 1; self.highest_seen = Some(current); current } } impl FunctionProvider for DeterministicFunctionProvider { fn uuid_v7(&mut self) -> String { let counter = self.take_sequence(); let counter_bits = (counter as u64) & DETERMINISTIC_UUID_COUNTER_MASK; format!("01920000-0000-7000-8000-{counter_bits:012x}") } fn timestamp(&mut self) -> String { let counter = self.take_sequence(); let millis = if self.timestamp_shuffle { shuffled_timestamp_millis(counter) } else { counter }; let dt = chrono::DateTime::::from_timestamp_millis(millis) .unwrap_or(chrono::DateTime::::UNIX_EPOCH); dt.to_rfc3339_opts(chrono::SecondsFormat::Millis, true) } fn deterministic_sequence_persist_highest_seen(&self) -> Option { self.highest_seen() } } fn shuffled_timestamp_millis(counter: i64) -> i64 { const WINDOW: i64 = 1000; const MULTIPLIER: i64 = 733; const OFFSET: i64 = 271; let cycle = counter.div_euclid(WINDOW); let within = counter.rem_euclid(WINDOW); let shuffled = (within * MULTIPLIER + OFFSET).rem_euclid(WINDOW); cycle * WINDOW + shuffled } #[cfg(test)] mod tests { use super::*; use crate::functions::DeterministicSequence; #[test] fn deterministic_uuid_uses_sequence_counter() { let mut provider = DeterministicFunctionProvider::new(0, false); assert_eq!(provider.uuid_v7(), "01920000-0000-7000-8000-000000000000"); assert_eq!(provider.uuid_v7(), "01920000-0000-7000-8000-000000000001"); assert_eq!(provider.highest_seen(), Some(1)); } #[test] fn deterministic_timestamp_uses_sequence_counter() { let mut provider = DeterministicFunctionProvider::new(1, false); assert_eq!(provider.timestamp(), "1970-01-01T00:00:00.001Z"); assert_eq!(provider.highest_seen(), Some(1)); } #[test] fn deterministic_timestamp_shuffle_can_be_non_monotonic() { let mut provider = DeterministicFunctionProvider::new(0, true); let first = provider.timestamp(); let second = provider.timestamp(); assert!(second < first); assert_eq!(provider.highest_seen(), Some(1)); } #[test] fn deterministic_sequence_can_start_after_persisted_highest_seen() { let sequence = DeterministicSequence { highest_seen: 41 }; let mut provider = DeterministicFunctionProvider::new(sequence.next_sequence(), false); assert_eq!(provider.uuid_v7(), "01920000-0000-7000-8000-00000000002a"); assert_eq!(provider.highest_seen(), Some(42)); } } ================================================ FILE: packages/engine/src/functions/mod.rs ================================================ //! Engine runtime function boundary. //! //! Sessions prepare one function context per execution. SQL, providers, and //! 
transaction staging receive only a function provider; deterministic mode is //! resolved privately inside this module. mod context; mod deterministic; mod provider; mod state; mod types; pub(crate) use context::FunctionContext; pub(crate) use deterministic::DeterministicFunctionProvider; pub(crate) use provider::{ FunctionProvider, FunctionProviderHandle, SharedFunctionProvider, SystemFunctionProvider, }; pub(crate) use types::{DeterministicMode, DeterministicSequence}; ================================================ FILE: packages/engine/src/functions/provider.rs ================================================ use std::sync::{Arc, Mutex}; use crate::cel::CelFunctionProvider; /// Engine-owned runtime function provider trait. pub(crate) trait FunctionProvider: Send { fn uuid_v7(&mut self) -> String; fn timestamp(&mut self) -> String; fn deterministic_sequence_persist_highest_seen(&self) -> Option<i64> { None } } pub(crate) type FunctionProviderHandle = SharedFunctionProvider<Box<dyn FunctionProvider>>; /// Shareable function provider used across SQL planning, UDFs, and staging. pub(crate) struct SharedFunctionProvider<P> { inner: Arc<Mutex<P>>, }
impl<P> Clone for SharedFunctionProvider<P> { fn clone(&self) -> Self { Self { inner: Arc::clone(&self.inner), } } }
impl<P> SharedFunctionProvider<P> { pub(crate) fn new(provider: P) -> Self { Self { inner: Arc::new(Mutex::new(provider)), } } fn with_lock<R>(&self, f: impl FnOnce(&P) -> R) -> R { let guard = self .inner .lock() .expect("engine function provider mutex poisoned"); f(&guard) } fn with_lock_mut<R>(&self, f: impl FnOnce(&mut P) -> R) -> R { let mut guard = self .inner .lock() .expect("engine function provider mutex poisoned"); f(&mut guard) } }
impl<P> SharedFunctionProvider<P> where P: FunctionProvider, { pub(crate) fn call_uuid_v7(&self) -> String { self.with_lock_mut(|provider| provider.uuid_v7()) } pub(crate) fn call_timestamp(&self) -> String { self.with_lock_mut(|provider| provider.timestamp()) } pub(crate) fn deterministic_sequence_persist_highest_seen(&self) -> Option<i64> { self.with_lock(|provider| provider.deterministic_sequence_persist_highest_seen()) } }
impl<P> CelFunctionProvider for SharedFunctionProvider<P> where P: FunctionProvider + Send + 'static, { fn call_uuid_v7(&self) -> String { SharedFunctionProvider::call_uuid_v7(self) } fn call_timestamp(&self) -> String { SharedFunctionProvider::call_timestamp(self) } }
impl<P> FunctionProvider for SharedFunctionProvider<P>
where P: FunctionProvider, { fn uuid_v7(&mut self) -> String { self.call_uuid_v7() } fn timestamp(&mut self) -> String { self.call_timestamp() } fn deterministic_sequence_persist_highest_seen(&self) -> Option { SharedFunctionProvider::deterministic_sequence_persist_highest_seen(self) } } impl FunctionProvider for Box where T: FunctionProvider + ?Sized, { fn uuid_v7(&mut self) -> String { (**self).uuid_v7() } fn timestamp(&mut self) -> String { (**self).timestamp() } fn deterministic_sequence_persist_highest_seen(&self) -> Option { (**self).deterministic_sequence_persist_highest_seen() } } /// System-backed engine function provider. #[derive(Debug, Default, Clone, Copy)] pub(crate) struct SystemFunctionProvider; impl FunctionProvider for SystemFunctionProvider { fn uuid_v7(&mut self) -> String { uuid::Uuid::now_v7().to_string() } fn timestamp(&mut self) -> String { chrono::Utc::now().to_rfc3339_opts(chrono::SecondsFormat::Millis, true) } } ================================================ FILE: packages/engine/src/functions/state.rs ================================================ use serde_json::Value as JsonValue; use std::sync::Arc; use crate::entity_identity::EntityIdentity; use crate::functions::{DeterministicMode, DeterministicSequence}; use crate::json_store::NormalizedJson; use crate::live_state::{LiveStateReader, LiveStateRowRequest, MaterializedLiveStateRow}; use crate::storage::StorageWriteSet; use crate::untracked_state::UntrackedStateContext; use crate::untracked_state::UntrackedStateRow; use crate::GLOBAL_VERSION_ID; use crate::{LixError, NullableKeyFilter}; pub(crate) const DETERMINISTIC_MODE_KEY: &str = "lix_deterministic_mode"; pub(crate) const DETERMINISTIC_SEQUENCE_KEY: &str = "lix_deterministic_sequence_number"; const KEY_VALUE_SCHEMA_KEY: &str = "lix_key_value"; /// Loads deterministic-mode settings from visible live state. /// /// Missing mode means deterministic execution is disabled. Malformed mode rows /// are errors because they would make runtime function behavior ambiguous. pub(crate) async fn load_mode( live_state: &dyn LiveStateReader, ) -> Result { let Some(row) = load_key_value_row(live_state, DETERMINISTIC_MODE_KEY).await? else { return Ok(DeterministicMode::disabled()); }; let value = key_value_payload(&row, DETERMINISTIC_MODE_KEY)?; parse_mode_value(value) } /// Loads the persisted deterministic sequence position. /// /// Missing sequence means no deterministic values have been produced yet, so /// execution starts at sequence zero. pub(crate) async fn load_sequence( live_state: &dyn LiveStateReader, ) -> Result { let Some(row) = load_key_value_row(live_state, DETERMINISTIC_SEQUENCE_KEY).await? else { return Ok(DeterministicSequence::uninitialized()); }; let value = key_value_payload(&row, DETERMINISTIC_SEQUENCE_KEY)?; parse_sequence_value(value) } /// Persists the highest deterministic sequence value used by an execution. /// /// The row is untracked global `lix_key_value` state: it is durable local /// runtime state, not a changelog fact. 
pub(crate) async fn stage_sequence( writes: &mut StorageWriteSet, sequence: DeterministicSequence, timestamp: &str, ) -> Result<(), LixError> { let snapshot_content = serde_json::to_string(&serde_json::json!({ "key": DETERMINISTIC_SEQUENCE_KEY, "value": sequence.highest_seen, })) .map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("deterministic sequence snapshot serialization failed: {error}"), ) })?; let snapshot = NormalizedJson::from_arc_unchecked(Arc::from(snapshot_content.as_str())); let row = deterministic_key_value_row(DETERMINISTIC_SEQUENCE_KEY, snapshot.as_str(), timestamp)?; UntrackedStateContext::new() .writer(writes) .stage_rows(std::iter::once(row.as_ref())) } async fn load_key_value_row( live_state: &dyn LiveStateReader, key: &str, ) -> Result, LixError> { live_state .load_row(&LiveStateRowRequest { schema_key: KEY_VALUE_SCHEMA_KEY.to_string(), version_id: GLOBAL_VERSION_ID.to_string(), entity_id: EntityIdentity::single(key), file_id: NullableKeyFilter::Null, }) .await } fn key_value_payload(row: &MaterializedLiveStateRow, key: &str) -> Result { let snapshot_content = row.snapshot_content.as_deref().ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", format!("deterministic key-value row '{key}' is missing snapshot_content"), ) })?; let snapshot = serde_json::from_str::(snapshot_content).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("deterministic key-value row '{key}' has invalid JSON: {error}"), ) })?; let stored_key = snapshot.get("key").and_then(JsonValue::as_str); if stored_key != Some(key) { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("deterministic key-value row '{key}' has mismatched key field"), )); } snapshot.get("value").cloned().ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", format!("deterministic key-value row '{key}' is missing value"), ) }) } fn parse_mode_value(value: JsonValue) -> Result { let Some(object) = value.as_object() else { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "deterministic mode value must be an object", )); }; let enabled = object .get("enabled") .and_then(JsonValue::as_bool) .unwrap_or(false); if !enabled { return Ok(DeterministicMode::disabled()); } let timestamp_shuffle = object .get("timestamp_shuffle") .and_then(JsonValue::as_bool) .unwrap_or(false); Ok(DeterministicMode { enabled, timestamp_shuffle, }) } fn parse_sequence_value(value: JsonValue) -> Result { let Some(highest_seen) = value.as_i64() else { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "deterministic sequence value must be an integer", )); }; Ok(DeterministicSequence { highest_seen }) } fn deterministic_key_value_row( key: &str, snapshot_content: &str, timestamp: &str, ) -> Result { Ok(UntrackedStateRow { entity_id: crate::entity_identity::EntityIdentity::single(key), schema_key: KEY_VALUE_SCHEMA_KEY.to_string(), file_id: None, snapshot_content: Some(snapshot_content.to_string()), metadata: None, created_at: timestamp.to_string(), updated_at: timestamp.to_string(), global: true, version_id: GLOBAL_VERSION_ID.to_string(), }) } #[cfg(test)] mod tests { use std::sync::Arc; use crate::backend::testing::UnitTestBackend; use crate::live_state::{LiveStateContext, LiveStateRowRequest}; use crate::storage::StorageContext; use super::*; fn live_state_context() -> LiveStateContext { LiveStateContext::new( crate::tracked_state::TrackedStateContext::new(), crate::untracked_state::UntrackedStateContext::new(), crate::commit_graph::CommitGraphContext::new(), ) } #[tokio::test] async fn missing_mode_is_disabled() { let backend = 
Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let live_state = live_state_context(); let reader = live_state.reader(storage.clone()); let mode = load_mode(&reader) .await .expect("missing mode should decode"); assert_eq!(mode, DeterministicMode::disabled()); } #[tokio::test] async fn valid_mode_decodes_flags() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let live_state = live_state_context(); crate::test_support::seed_global_version_head(storage.clone()).await; write_test_key_value( storage.clone(), DETERMINISTIC_MODE_KEY, serde_json::json!({ "enabled": true, "timestamp_shuffle": true, }), ) .await; let reader = live_state.reader(storage.clone()); let mode = load_mode(&reader).await.expect("valid mode should decode"); assert_eq!( mode, DeterministicMode { enabled: true, timestamp_shuffle: true, } ); } #[tokio::test] async fn missing_sequence_is_uninitialized() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let live_state = live_state_context(); let reader = live_state.reader(storage.clone()); let sequence = load_sequence(&reader) .await .expect("missing sequence should decode"); assert_eq!(sequence, DeterministicSequence::uninitialized()); } #[tokio::test] async fn valid_sequence_decodes_highest_seen() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let live_state = live_state_context(); crate::test_support::seed_global_version_head(storage.clone()).await; write_test_key_value( storage.clone(), DETERMINISTIC_SEQUENCE_KEY, serde_json::json!(41), ) .await; let reader = live_state.reader(storage.clone()); let sequence = load_sequence(&reader) .await .expect("valid sequence should decode"); assert_eq!(sequence, DeterministicSequence { highest_seen: 41 }); assert_eq!(sequence.next_sequence(), 42); } #[tokio::test] async fn write_sequence_persists_untracked_global_key_value() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let live_state = live_state_context(); crate::test_support::seed_global_version_head(storage.clone()).await; let mut tx = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); stage_sequence( &mut writes, DeterministicSequence { highest_seen: 7 }, "1970-01-01T00:00:00.000Z", ) .await .expect("sequence should stage"); writes .apply(&mut tx.as_mut()) .await .expect("sequence should apply"); tx.commit().await.expect("transaction should commit"); let reader = live_state.reader(storage.clone()); let row = reader .load_row(&LiveStateRowRequest { schema_key: KEY_VALUE_SCHEMA_KEY.to_string(), version_id: GLOBAL_VERSION_ID.to_string(), entity_id: crate::entity_identity::EntityIdentity::single( DETERMINISTIC_SEQUENCE_KEY, ), file_id: NullableKeyFilter::Null, }) .await .expect("sequence row should load") .expect("sequence row should exist"); assert!(row.untracked); assert!(row.global); assert_eq!(row.change_id, None); assert_eq!(row.commit_id, None); assert_eq!( row.snapshot_content.as_deref(), Some("{\"key\":\"lix_deterministic_sequence_number\",\"value\":7}") ); } async fn write_test_key_value(storage: StorageContext, key: &str, value: JsonValue) { let mut tx = storage .begin_write_transaction() .await .expect("transaction should open"); let snapshot_content = serde_json::to_string(&serde_json::json!({ "key": key, "value": value, })) .expect("snapshot should 
serialize"); let mut writes = StorageWriteSet::new(); let row = deterministic_key_value_row(key, &snapshot_content, "1970-01-01T00:00:00.000Z") .expect("test key-value should canonicalize"); UntrackedStateContext::new() .writer(&mut writes) .stage_rows(std::iter::once(row.as_ref())) .expect("test key-value should stage"); writes .apply(&mut tx.as_mut()) .await .expect("test key-value should apply"); tx.commit().await.expect("transaction should commit"); } } ================================================ FILE: packages/engine/src/functions/types.rs ================================================ /// Decoded deterministic-mode setting. /// /// Storage can decide where this setting lives. The type only describes the /// behavior engine should apply while preparing runtime functions. #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(crate) struct DeterministicMode { pub(crate) enabled: bool, pub(crate) timestamp_shuffle: bool, } impl DeterministicMode { pub(crate) fn disabled() -> Self { Self { enabled: false, timestamp_shuffle: false, } } } /// Persisted deterministic sequence position. /// /// `highest_seen` is the last sequence value returned by the runtime provider. /// The next deterministic execution starts at `highest_seen + 1`. #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(crate) struct DeterministicSequence { pub(crate) highest_seen: i64, } impl DeterministicSequence { pub(crate) fn uninitialized() -> Self { Self { highest_seen: -1 } } pub(crate) fn next_sequence(self) -> i64 { self.highest_seen + 1 } } ================================================ FILE: packages/engine/src/init.rs ================================================ use crate::commit_store::{Change, CommitDraftRef, CommitStoreContext}; use crate::entity_identity::EntityIdentity; use crate::functions::{ FunctionProvider, FunctionProviderHandle, SharedFunctionProvider, SystemFunctionProvider, }; use crate::json_store::{JsonRef, JsonStoreContext, JsonWritePlacementRef, NormalizedJsonRef}; use crate::schema::{ registered_schema_entity_id, schema_key_from_definition, seed_schema_definitions, }; use crate::storage::{StorageContext, StorageWriteSet}; use crate::tracked_state::{TrackedStateContext, TrackedStateDeltaRef}; use crate::untracked_state::{UntrackedStateContext, UntrackedStateRow}; use crate::version::{VERSION_DESCRIPTOR_SCHEMA_KEY, VERSION_REF_SCHEMA_KEY}; use crate::LixError; use crate::GLOBAL_VERSION_ID; use serde_json::json; #[cfg(test)] use std::sync::Arc; const KEY_VALUE_SCHEMA_KEY: &str = "lix_key_value"; const LIX_ID_KEY: &str = "lix_id"; const WORKSPACE_VERSION_KEY: &str = "lix_workspace_version_id"; const REGISTERED_SCHEMA_KEY: &str = "lix_registered_schema"; /// Pure seed plan for initializing an engine repository. /// /// Tracked bootstrap facts go to the commit store. Moving refs such as /// `lix_version_ref` are seeded as untracked local state so repository heads /// can advance without becoming commit members. 
pub(crate) struct InitSeedPlan { commit: InitSeedCommit, changes: Vec, untracked_rows: Vec, pub(crate) receipt: InitReceipt, } #[derive(Debug, Clone, PartialEq, Eq)] struct InitSeedCommit { id: String, change_id: String, parent_ids: Vec, author_account_ids: Vec, created_at: String, } #[derive(Debug, Clone, PartialEq, Eq)] struct InitSeedChange { id: String, entity_id: EntityIdentity, schema_key: String, snapshot_content: String, created_at: String, } #[derive(Debug, Clone, PartialEq, Eq)] struct InitSeedLiveRow { entity_id: EntityIdentity, schema_key: String, snapshot_content: String, created_at: String, updated_at: String, global: bool, version_id: String, } /// Values generated while planning the initial repository seed. #[derive(Debug, Clone, PartialEq, Eq)] pub struct InitReceipt { pub lix_id: String, pub global_version_id: String, pub main_version_id: String, pub initial_commit_id: String, } /// Builds the canonical bootstrap changes for a new engine repository. /// /// The initial commit tracks durable content rows. Version refs are moving /// pointers and therefore live in untracked local state instead of the commit. pub(crate) fn plan_init_seed(functions: FunctionProviderHandle) -> Result { let main_version_id = functions.call_uuid_v7(); let lix_id = functions.call_uuid_v7(); let initial_commit_id = functions.call_uuid_v7(); let timestamp = functions.call_timestamp(); let mut registered_schema_changes = Vec::new(); for schema in seed_schema_definitions() { let key = schema_key_from_definition(schema)?; registered_schema_changes.push(canonical_change( functions.call_uuid_v7(), registered_schema_entity_id(&key.schema_key)?, REGISTERED_SCHEMA_KEY, registered_schema_snapshot(schema)?, ×tamp, )); } let global_version_descriptor_change = canonical_change( GLOBAL_VERSION_ID.to_string(), EntityIdentity::single(GLOBAL_VERSION_ID), VERSION_DESCRIPTOR_SCHEMA_KEY, version_descriptor_snapshot(GLOBAL_VERSION_ID, "global", true)?, ×tamp, ); let main_version_descriptor_change = canonical_change( functions.call_uuid_v7(), EntityIdentity::single(&main_version_id), VERSION_DESCRIPTOR_SCHEMA_KEY, version_descriptor_snapshot(&main_version_id, "main", false)?, ×tamp, ); let kv_lix_id_change = canonical_change( functions.call_uuid_v7(), EntityIdentity::single(LIX_ID_KEY), KEY_VALUE_SCHEMA_KEY, key_value_snapshot(LIX_ID_KEY, &lix_id)?, ×tamp, ); let initial_commit = InitSeedCommit { id: initial_commit_id.clone(), change_id: functions.call_uuid_v7(), parent_ids: Vec::new(), author_account_ids: Vec::new(), created_at: timestamp.clone(), }; let global_version_ref_row = untracked_row( EntityIdentity::single(GLOBAL_VERSION_ID), VERSION_REF_SCHEMA_KEY, version_ref_snapshot(GLOBAL_VERSION_ID, &initial_commit_id)?, ×tamp, ); let main_version_ref_row = untracked_row( EntityIdentity::single(&main_version_id), VERSION_REF_SCHEMA_KEY, version_ref_snapshot(&main_version_id, &initial_commit_id)?, ×tamp, ); let workspace_version_row = untracked_row( EntityIdentity::single(WORKSPACE_VERSION_KEY), KEY_VALUE_SCHEMA_KEY, key_value_snapshot(WORKSPACE_VERSION_KEY, &main_version_id)?, ×tamp, ); Ok(InitSeedPlan { commit: initial_commit, changes: registered_schema_changes .into_iter() .chain([ global_version_descriptor_change, main_version_descriptor_change, kv_lix_id_change, ]) .collect(), untracked_rows: vec![ global_version_ref_row, main_version_ref_row, workspace_version_row, ], receipt: InitReceipt { lix_id, global_version_id: GLOBAL_VERSION_ID.to_string(), main_version_id, initial_commit_id, }, }) } /// Initializes an 
empty engine repository in one backend transaction. /// /// The pure seed planner decides which bootstrap facts exist. This function is /// only responsible for durably writing those facts to their owning stores: /// commit_store for tracked changes, and live_state for the serving projection /// plus untracked moving refs. pub(crate) async fn initialize( storage: StorageContext, commit_store: &CommitStoreContext, tracked_state: &TrackedStateContext, untracked_state: &UntrackedStateContext, ) -> Result { let functions = SharedFunctionProvider::new( Box::new(SystemFunctionProvider) as Box ); let plan = plan_init_seed(functions)?; let receipt = plan.receipt.clone(); let mut transaction = storage.begin_write_transaction().await?; let mut writes = StorageWriteSet::new(); let authored_changes = plan .changes .iter() .map(seed_change_to_commit_store_change) .collect::, _>>()?; JsonStoreContext::new().writer().stage_batch( &mut writes, JsonWritePlacementRef::CommitPack { commit_id: &plan.commit.id, pack_id: 0, }, plan.changes .iter() .map(|change| NormalizedJsonRef::new(change.snapshot_content.as_str())), )?; let staged_commit = { let commit = CommitDraftRef { id: &plan.commit.id, change_id: &plan.commit.change_id, parent_ids: &plan.commit.parent_ids, author_account_ids: &plan.commit.author_account_ids, created_at: &plan.commit.created_at, }; let mut writer = commit_store.writer(transaction.as_mut(), &mut writes); writer .stage_tracked_commit_draft( commit, authored_changes.iter().map(Change::as_ref).collect(), Vec::new(), ) .await? }; let untracked_rows = plan .untracked_rows .iter() .map(untracked_state_row_from_seed) .collect::, _>>()?; { untracked_state .writer(&mut writes) .stage_rows(untracked_rows.iter().map(|row| row.as_ref()))?; let deltas = authored_changes .iter() .zip(&staged_commit.authored_locators) .map(|(change, locator)| TrackedStateDeltaRef { change: change.as_ref(), locator: locator.as_ref(), created_at: &change.created_at, updated_at: &change.created_at, }) .collect::>(); let mut writer = tracked_state.writer(transaction.as_mut(), &mut writes); writer .stage_delta(&receipt.initial_commit_id, None, &deltas) .await?; } writes.apply(&mut transaction.as_mut()).await?; transaction.commit().await?; Ok(receipt) } fn seed_change_to_commit_store_change(change: &InitSeedChange) -> Result { Ok(Change { id: change.id.clone(), entity_id: change.entity_id.clone(), schema_key: change.schema_key.clone(), file_id: None, snapshot_ref: Some(JsonRef::for_content(change.snapshot_content.as_bytes())), metadata_ref: None, created_at: change.created_at.clone(), }) } fn untracked_state_row_from_seed(row: &InitSeedLiveRow) -> Result { Ok(UntrackedStateRow { entity_id: row.entity_id.clone(), schema_key: row.schema_key.clone(), file_id: None, snapshot_content: Some(row.snapshot_content.clone()), metadata: None, created_at: row.created_at.clone(), updated_at: row.updated_at.clone(), global: row.global, version_id: row.version_id.clone(), }) } fn untracked_row( entity_id: EntityIdentity, schema_key: &str, snapshot_content: String, timestamp: &str, ) -> InitSeedLiveRow { InitSeedLiveRow { entity_id, schema_key: schema_key.to_string(), snapshot_content, created_at: timestamp.to_string(), updated_at: timestamp.to_string(), global: true, version_id: GLOBAL_VERSION_ID.to_string(), } } fn canonical_change( id: String, entity_id: EntityIdentity, schema_key: &str, snapshot_content: String, created_at: &str, ) -> InitSeedChange { InitSeedChange { id, entity_id, schema_key: schema_key.to_string(), snapshot_content, 
created_at: created_at.to_string(), } } fn version_descriptor_snapshot(id: &str, name: &str, hidden: bool) -> Result { encode_snapshot(json!({ "id": id, "name": name, "hidden": hidden, })) } fn key_value_snapshot(key: &str, value: &str) -> Result { encode_snapshot(json!({ "key": key, "value": value, })) } fn registered_schema_snapshot(schema: &serde_json::Value) -> Result { encode_snapshot(json!({ "value": schema, })) } fn version_ref_snapshot(id: &str, commit_id: &str) -> Result { encode_snapshot(json!({ "id": id, "commit_id": commit_id, })) } fn encode_snapshot(value: serde_json::Value) -> Result { serde_json::to_string(&value).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("engine init seed snapshot serialization failed: {error}"), ) }) } #[cfg(test)] mod tests { use serde_json::Value as JsonValue; use super::*; use crate::backend::{testing::UnitTestBackend, Backend}; use crate::functions::{FunctionProvider, SharedFunctionProvider}; use crate::storage::StorageContext; use crate::tracked_state::TrackedStateContext; use crate::untracked_state::UntrackedStateContext; #[test] fn plan_init_seed_returns_tracked_changes_and_untracked_workspace_state() { let plan = plan_init_seed(test_functions()).expect("init seed should plan"); assert_eq!(plan.changes.len(), seed_schema_definitions().len() + 3); assert_eq!(plan.untracked_rows.len(), 3); assert_eq!(plan.receipt.global_version_id, GLOBAL_VERSION_ID); assert_eq!(plan.receipt.main_version_id, "test-uuid-1"); assert_eq!(plan.receipt.lix_id, "test-uuid-2"); assert_eq!(plan.receipt.initial_commit_id, "test-uuid-3"); } #[test] fn plan_init_seed_commit_header_tracks_schema_registrations_descriptor_and_lix_id_changes() { let plan = plan_init_seed(test_functions()).expect("init seed should plan"); assert_eq!(plan.commit.id, plan.receipt.initial_commit_id); assert_eq!(plan.commit.change_id, "test-uuid-21"); assert!(plan.commit.parent_ids.is_empty()); assert!(plan.commit.author_account_ids.is_empty()); assert_eq!(plan.commit.created_at, "test-timestamp-1"); let change_ids = plan .changes .iter() .map(|change| change.id.as_str()) .collect::>(); assert_eq!(change_ids.len(), seed_schema_definitions().len() + 3); assert!(change_ids.contains(&"global")); assert!(!change_ids.contains(&plan.commit.change_id.as_str())); let registered_schema_change_ids = plan .changes .iter() .filter(|change| change.schema_key == REGISTERED_SCHEMA_KEY) .map(|change| change.id.as_str()) .collect::>(); for change_id in registered_schema_change_ids { assert!(change_ids.contains(&change_id)); } } #[test] fn plan_init_seed_registers_seed_schemas_as_initial_commit_rows() { let plan = plan_init_seed(test_functions()).expect("init seed should plan"); let registered_schema_changes = plan .changes .iter() .filter(|change| change.schema_key == REGISTERED_SCHEMA_KEY) .collect::>(); assert_eq!( registered_schema_changes.len(), seed_schema_definitions().len() ); assert!(registered_schema_changes.iter().any(|change| { snapshot(change) .pointer("/value/x-lix-key") .and_then(JsonValue::as_str) == Some(REGISTERED_SCHEMA_KEY) })); assert!(registered_schema_changes.iter().any(|change| { snapshot(change) .pointer("/value/x-lix-key") .and_then(JsonValue::as_str) == Some(KEY_VALUE_SCHEMA_KEY) })); } #[test] fn plan_init_seed_version_refs_point_to_initial_commit() { let plan = plan_init_seed(test_functions()).expect("init seed should plan"); let version_refs = plan .untracked_rows .iter() .filter(|row| row.schema_key == VERSION_REF_SCHEMA_KEY) .collect::>(); assert_eq!(version_refs.len(), 
2); assert!(plan .changes .iter() .all(|change| change.schema_key != VERSION_REF_SCHEMA_KEY)); for row in version_refs { assert_eq!(row.schema_key, VERSION_REF_SCHEMA_KEY); assert_eq!(row.version_id, GLOBAL_VERSION_ID); let snapshot = untracked_snapshot(row); assert_eq!( snapshot.get("commit_id").and_then(JsonValue::as_str), Some(plan.receipt.initial_commit_id.as_str()) ); } } #[test] fn plan_init_seed_workspace_version_points_to_main_version() { let plan = plan_init_seed(test_functions()).expect("init seed should plan"); let workspace_row = plan .untracked_rows .iter() .find(|row| { row.schema_key == KEY_VALUE_SCHEMA_KEY && row.entity_id == crate::entity_identity::EntityIdentity::single(WORKSPACE_VERSION_KEY) }) .expect("workspace version row should exist"); assert_eq!(workspace_row.version_id, GLOBAL_VERSION_ID); assert!(workspace_row.global); let snapshot = untracked_snapshot(workspace_row); assert_eq!( snapshot.get("key").and_then(JsonValue::as_str), Some(WORKSPACE_VERSION_KEY) ); assert_eq!( snapshot.get("value").and_then(JsonValue::as_str), Some(plan.receipt.main_version_id.as_str()) ); } #[tokio::test] async fn initialize_writes_initial_commit_through_commit_store() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend); let commit_store = CommitStoreContext::new(); let tracked_state = TrackedStateContext::new(); let untracked_state = UntrackedStateContext::new(); let receipt = initialize( storage.clone(), &commit_store, &tracked_state, &untracked_state, ) .await .expect("engine should initialize"); let reader = commit_store.reader(storage.clone()); let commit = reader .load_commit(&receipt.initial_commit_id) .await .expect("commit should load") .expect("initial commit should exist"); assert_eq!(commit.id, receipt.initial_commit_id); assert_eq!(commit.change_pack_count, 1); assert_eq!(commit.membership_pack_count, 0); let change_pack = reader .load_change_pack(&commit.id, 0) .await .expect("change pack should load") .expect("initial change pack should exist"); assert_eq!(change_pack.len(), seed_schema_definitions().len() + 3); assert!(change_pack .iter() .all(|change| change.id != commit.change_id)); let entries = reader .load_change_index_entries(&[commit.change_id.clone(), "global".to_string()]) .await .expect("change index should load"); assert!(entries[0].is_some()); assert!(entries[1].is_some()); } fn snapshot(change: &InitSeedChange) -> JsonValue { serde_json::from_str(&change.snapshot_content).expect("snapshot should be JSON") } fn untracked_snapshot(row: &InitSeedLiveRow) -> JsonValue { serde_json::from_str(&row.snapshot_content).expect("snapshot should be JSON") } fn test_functions() -> FunctionProviderHandle { SharedFunctionProvider::new( Box::new(TestFunctionProvider::default()) as Box ) } #[derive(Default)] struct TestFunctionProvider { uuid_count: usize, timestamp_count: usize, } impl FunctionProvider for TestFunctionProvider { fn uuid_v7(&mut self) -> String { self.uuid_count += 1; format!("test-uuid-{}", self.uuid_count) } fn timestamp(&mut self) -> String { self.timestamp_count += 1; format!("test-timestamp-{}", self.timestamp_count) } } } ================================================ FILE: packages/engine/src/json_store/compression.rs ================================================ use crate::LixError; #[cfg(not(target_arch = "wasm32"))] pub(crate) fn compress_json_payload(json_data: &[u8]) -> Result, LixError> { zstd::bulk::compress(json_data, 1).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), 
message: format!("json compression failed: {error}"), hint: None, details: None, }) } #[cfg(target_arch = "wasm32")] pub(crate) fn compress_json_payload(json_data: &[u8]) -> Result, LixError> { Ok(ruzstd::encoding::compress_to_vec( json_data, ruzstd::encoding::CompressionLevel::Fastest, )) } #[cfg(not(target_arch = "wasm32"))] pub(crate) fn decode_json_zstd_payload( compressed_payload: &[u8], uncompressed_len: usize, hash_hex: &str, ) -> Result, LixError> { zstd::bulk::decompress(compressed_payload, uncompressed_len).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("json decompression failed for ref '{hash_hex}': {error}"), hint: None, details: None, }) } #[cfg(target_arch = "wasm32")] pub(crate) fn decode_json_zstd_payload( compressed_payload: &[u8], _uncompressed_len: usize, _hash_hex: &str, ) -> Result, LixError> { use std::io::Read as _; let mut decoder = ruzstd::decoding::StreamingDecoder::new(compressed_payload).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("json decompression failed: {error}"), hint: None, details: None, })?; let mut output = Vec::new(); decoder.read_to_end(&mut output).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("json decompression failed: {error}"), hint: None, details: None, })?; Ok(output) } #[cfg(test)] mod tests { use super::*; #[test] fn zstd_payload_roundtrips() { let json = "zstd-friendly text ".repeat(2048); let compressed = compress_json_payload(json.as_bytes()).expect("should compress"); assert!(compressed.len() < json.len()); let hash_hex = blake3::hash(json.as_bytes()).to_hex().to_string(); let decoded = decode_json_zstd_payload(&compressed, json.len(), &hash_hex).expect("should decode"); assert_eq!(decoded, json.as_bytes()); } } ================================================ FILE: packages/engine/src/json_store/context.rs ================================================ use crate::json_store::store; use crate::json_store::types::{ JsonLoadBatch, JsonLoadRequestRef, JsonProjection, JsonProjectionBatch, JsonProjectionLoadRequestRef, JsonRef, JsonValueBatch, JsonWritePlacementRef, NormalizedJsonRef, }; use crate::storage::{KvGetGroup, StorageReader, StorageWriteSet}; use crate::LixError; use std::collections::{HashMap, HashSet}; const PACK_LOCAL_MAX_JSON_BYTES: usize = 64 * 1024; #[derive(Debug, Clone, Copy)] pub(crate) struct JsonStoreContext; impl JsonStoreContext { pub(crate) fn new() -> Self { Self } pub(crate) fn reader(&self, store: S) -> JsonStoreReader where S: StorageReader, { JsonStoreReader { store } } pub(crate) fn writer(&self) -> JsonStoreWriter { JsonStoreWriter::new() } pub(crate) async fn load_bytes_many( &self, store: &mut impl StorageReader, request: JsonLoadRequestRef<'_>, ) -> Result { store::load_json_bytes_many_in_scope(store, request.refs, request.scope) .await .map(JsonLoadBatch::new) } pub(crate) fn commit_pack_get_group(&self, commit_id: &str, pack_id: u32) -> KvGetGroup { KvGetGroup { namespace: store::JSON_PACK_NAMESPACE.to_string(), keys: vec![store::pack_key(commit_id, pack_id)], } } pub(crate) fn decode_pack_refs(&self, bytes: &[u8]) -> Result, LixError> { store::decode_json_pack_refs(bytes) } } pub(crate) struct JsonStoreReader { store: S, } impl Clone for JsonStoreReader where S: Clone, { fn clone(&self) -> Self { Self { store: self.store.clone(), } } } impl JsonStoreReader where S: StorageReader, { pub(crate) async fn load_bytes_many( &mut self, request: JsonLoadRequestRef<'_>, ) -> Result { 
store::load_json_bytes_many_in_scope(&mut self.store, request.refs, request.scope) .await .map(JsonLoadBatch::new) } pub(crate) async fn load_values_many( &mut self, request: JsonLoadRequestRef<'_>, ) -> Result { let refs = request.refs; let values = self .load_bytes_many(request) .await? .into_values() .into_iter() .enumerate() .map(|(index, bytes)| match bytes { Some(bytes) => serde_json::from_slice(&bytes).map(Some).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!( "json ref '{}' is invalid JSON: {error}", refs[index].to_hex() ), ) }), None => Ok(None), }) .collect::, _>>()?; Ok(JsonValueBatch::new(values)) } pub(crate) async fn load_projections_many( &mut self, request: JsonProjectionLoadRequestRef<'_>, ) -> Result { let values = self .load_values_many(JsonLoadRequestRef { refs: request.refs, scope: request.scope, }) .await? .into_values() .into_iter() .map(|value| { value.map(|value| { JsonProjection::new( request .paths .iter() .map(|path| value.pointer(path.as_str()).cloned()) .collect(), ) }) }) .collect(); Ok(JsonProjectionBatch::new(values)) } } pub(crate) struct JsonStoreWriter; #[derive(Debug, Clone, Default)] pub(crate) struct JsonStageBatchReport { pub(crate) refs: Vec, pub(crate) pack_indexes: HashMap<[u8; 32], usize>, } impl JsonStoreWriter { fn new() -> Self { Self } pub(crate) fn stage_batch<'a>( &mut self, writes: &mut StorageWriteSet, placement: JsonWritePlacementRef<'a>, payloads: impl IntoIterator>, ) -> Result, LixError> { self.stage_batch_report(writes, placement, payloads) .map(|report| report.refs) } pub(crate) fn stage_batch_report<'a>( &mut self, writes: &mut StorageWriteSet, placement: JsonWritePlacementRef<'a>, payloads: impl IntoIterator>, ) -> Result { let mut unique_encoded = Vec::new(); let mut order = Vec::new(); let mut seen = HashSet::new(); for payload in payloads { let encoded = match payload.trusted_json_ref() { Some(json_ref) => store::encode_json_str_with_ref(payload.normalized(), json_ref)?, None => store::encode_json_str(payload.normalized())?, }; let hash: [u8; 32] = encoded .json_ref .as_hash_bytes() .try_into() .expect("json ref hash is fixed size"); #[cfg(feature = "storage-benches")] crate::storage_bench::record_json_store_stage_bytes(hash); order.push(encoded.json_ref); if seen.insert(hash) { unique_encoded.push(encoded); } } let pack_local = matches!(placement, JsonWritePlacementRef::CommitPack { .. 
}); let mut pack_indexes = HashMap::new(); if let JsonWritePlacementRef::CommitPack { commit_id, pack_id } = placement { let pack_entries = unique_encoded .iter() .filter(|encoded| encoded.uncompressed_len <= PACK_LOCAL_MAX_JSON_BYTES) .collect::>(); for (index, encoded) in pack_entries.iter().enumerate() { pack_indexes.insert(*encoded.json_ref.as_hash_array(), index); } if !pack_entries.is_empty() { let encoded_pack = store::encode_json_pack(&pack_entries)?; writes.put( store::JSON_PACK_NAMESPACE, store::pack_key(commit_id, pack_id), encoded_pack, ); } } for encoded in &unique_encoded { if pack_local && encoded.uncompressed_len <= PACK_LOCAL_MAX_JSON_BYTES { continue; } writes.put( store::JSON_NAMESPACE, encoded.json_ref.as_hash_bytes().to_vec(), store::encode_direct_json_payload(encoded), ); } Ok(JsonStageBatchReport { refs: order, pack_indexes, }) } } #[cfg(test)] mod tests { use std::sync::Arc; use super::*; use crate::backend::testing::UnitTestBackend; use crate::json_store::types::JsonReadScopeRef; use crate::storage::StorageContext; #[tokio::test] async fn commit_local_batch_writes_pack_without_direct_rows() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let context = JsonStoreContext::new(); let first = "{\"value\":\"first\"}"; let second = "{\"value\":\"second\"}"; let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); context .writer() .stage_batch( &mut writes, JsonWritePlacementRef::CommitPack { commit_id: "commit-a", pack_id: 0, }, [ NormalizedJsonRef::new(first), NormalizedJsonRef::new(second), ], ) .expect("json pack should stage"); writes .apply(&mut transaction.as_mut()) .await .expect("json pack should apply"); transaction .commit() .await .expect("transaction should commit"); let refs = [ JsonRef::for_content(first.as_bytes()), JsonRef::for_content(second.as_bytes()), ]; let unknown = context .reader(storage.clone()) .load_bytes_many(JsonLoadRequestRef { refs: &refs, scope: JsonReadScopeRef::OutOfBand, }) .await .expect("unknown load should check direct rows"); assert_eq!(unknown.into_values(), vec![None, None]); let pack_ids = [0]; let packed = context .reader(storage.clone()) .load_bytes_many(JsonLoadRequestRef { refs: &refs, scope: JsonReadScopeRef::CommitPacks { commit_id: "commit-a", pack_ids: &pack_ids, }, }) .await .expect("packed load should hydrate"); assert_eq!( packed.into_values(), vec![ Some(first.as_bytes().to_vec()), Some(second.as_bytes().to_vec()) ] ); } #[tokio::test] async fn commit_local_batch_dedupes_pack_payloads_but_returns_request_order() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let context = JsonStoreContext::new(); let first = "{\"value\":\"first\"}"; let second = "{\"value\":\"second\"}"; let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); let staged_refs = context .writer() .stage_batch( &mut writes, JsonWritePlacementRef::CommitPack { commit_id: "commit-a", pack_id: 0, }, [ NormalizedJsonRef::new(first), NormalizedJsonRef::new(first), NormalizedJsonRef::new(second), ], ) .expect("json pack should stage"); writes .apply(&mut transaction.as_mut()) .await .expect("json pack should apply"); transaction .commit() .await .expect("transaction should commit"); let first_ref = JsonRef::for_content(first.as_bytes()); let second_ref = JsonRef::for_content(second.as_bytes()); assert_eq!(staged_refs, vec![first_ref, first_ref, 
second_ref]); let refs = [first_ref, second_ref]; let unknown = context .reader(storage.clone()) .load_bytes_many(JsonLoadRequestRef { refs: &refs, scope: JsonReadScopeRef::OutOfBand, }) .await .expect("unknown load should check direct rows"); assert_eq!(unknown.into_values(), vec![None, None]); let pack_ids = [0]; let packed = context .reader(storage.clone()) .load_bytes_many(JsonLoadRequestRef { refs: &refs, scope: JsonReadScopeRef::CommitPacks { commit_id: "commit-a", pack_ids: &pack_ids, }, }) .await .expect("packed load should hydrate"); assert_eq!( packed.into_values(), vec![ Some(first.as_bytes().to_vec()), Some(second.as_bytes().to_vec()) ] ); } #[tokio::test] async fn commit_local_batch_accepts_trusted_prehashed_payload() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let context = JsonStoreContext::new(); let json = "{\"value\":\"prehashed\"}"; let json_ref = JsonRef::for_content(json.as_bytes()); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); let refs = context .writer() .stage_batch( &mut writes, JsonWritePlacementRef::CommitPack { commit_id: "commit-a", pack_id: 0, }, [NormalizedJsonRef::trusted_prehashed(json, json_ref)], ) .expect("prehashed json should stage"); assert_eq!(refs, vec![json_ref]); writes .apply(&mut transaction.as_mut()) .await .expect("json pack should apply"); transaction .commit() .await .expect("transaction should commit"); let pack_ids = [0]; let packed = context .reader(storage.clone()) .load_bytes_many(JsonLoadRequestRef { refs: &refs, scope: JsonReadScopeRef::CommitPacks { commit_id: "commit-a", pack_ids: &pack_ids, }, }) .await .expect("prehashed payload should hydrate"); assert_eq!(packed.into_values(), vec![Some(json.as_bytes().to_vec())]); } } ================================================ FILE: packages/engine/src/json_store/encoded.rs ================================================ use crate::json_store::types::JsonRef; use std::borrow::Cow; #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(crate) enum JsonCodec { Raw, Zstd, } pub(crate) struct EncodedJson<'a> { pub(crate) json_ref: JsonRef, pub(crate) codec: JsonCodec, pub(crate) uncompressed_len: usize, pub(crate) data: Cow<'a, [u8]>, } ================================================ FILE: packages/engine/src/json_store/mod.rs ================================================ pub(crate) mod compression; pub(crate) mod context; mod encoded; pub(crate) mod store; pub(crate) mod types; #[allow(unused_imports)] pub(crate) use context::{JsonStoreContext, JsonStoreReader, JsonStoreWriter}; pub(crate) use types::{ JsonLoadRequestRef, JsonReadScopeRef, JsonRef, JsonWritePlacementRef, NormalizedJson, NormalizedJsonRef, }; ================================================ FILE: packages/engine/src/json_store/store.rs ================================================ use crate::json_store::compression::{compress_json_payload, decode_json_zstd_payload}; use crate::json_store::encoded::{EncodedJson, JsonCodec}; use crate::json_store::types::{JsonReadScopeRef, JsonRef}; use crate::storage::{KvGetGroup, KvGetRequest, StorageReader}; use crate::LixError; use std::borrow::Cow; use std::collections::HashMap; pub(crate) const JSON_NAMESPACE: &str = "json_store.json"; pub(crate) const JSON_PACK_NAMESPACE: &str = "json_store.pack"; const STORED_JSON_MAGIC: &[u8] = b"lix-json:v1"; const STORED_JSON_HEADER_LEN: usize = STORED_JSON_MAGIC.len() + 1 + 8; const STORED_JSON_PACK_MAGIC: &[u8] = 
b"lix-json-pack:v2"; const STORED_JSON_PACK_ENTRY_HEADER_LEN: usize = 32 + 1 + 4 + 4 + 4; const ZSTD_MIN_JSON_BYTES: usize = 16 * 1024; const MIN_ZSTD_SAVINGS_BYTES: usize = 128; struct StoredJsonPayload<'a> { codec: JsonCodec, uncompressed_len: usize, data: &'a [u8], } struct JsonPackLayout { directory_start: usize, payload_start: usize, count: usize, } struct JsonPackEntry<'a> { hash: [u8; 32], payload: StoredJsonPayload<'a>, } #[derive(Debug, Clone, Copy, PartialEq, Eq)] enum JsonHashCheck { /// Hot reads trust the local storage layer and pack directory. Content /// hashes are computed at write time; exhaustive verification belongs in /// explicit integrity-check/fsck callers rather than every row scan. TrustedHotRead, Verify, } enum OrderedSinglePackProbe { Hit(Vec>>), MissPresent(Vec), MissAbsent, } fn raw_json_ref_for_content(json: &str) -> JsonRef { JsonRef::from_hash(blake3::hash(json.as_bytes())) } pub(crate) fn json_ref_for_content(bytes: &[u8]) -> JsonRef { JsonRef::for_content(bytes) } #[cfg(test)] fn encode_json(json: &str) -> Result, LixError> { encode_json_for_storage(json) } fn encode_json_for_storage(json: &str) -> Result, LixError> { let raw_ref = raw_json_ref_for_content(json); encode_json_for_storage_with_ref(json, raw_ref) } fn encode_json_for_storage_with_ref( json: &str, raw_ref: JsonRef, ) -> Result, LixError> { let raw_data = json.as_bytes(); if raw_data.len() >= ZSTD_MIN_JSON_BYTES { let compressed = compress_json_payload(raw_data)?; if raw_data.len().saturating_sub(compressed.len()) >= MIN_ZSTD_SAVINGS_BYTES { return Ok(EncodedJson { json_ref: raw_ref, codec: JsonCodec::Zstd, uncompressed_len: json.len(), data: Cow::Owned(compressed), }); } } Ok(EncodedJson { json_ref: raw_ref, codec: JsonCodec::Raw, uncompressed_len: json.len(), data: Cow::Borrowed(raw_data), }) } pub(crate) fn encode_json_str(json: &str) -> Result, LixError> { encode_json_for_storage(json) } pub(crate) fn encode_json_str_with_ref( json: &str, json_ref: JsonRef, ) -> Result, LixError> { debug_assert_eq!(JsonRef::for_content(json.as_bytes()), json_ref); encode_json_for_storage_with_ref(json, json_ref) } pub(crate) fn encode_direct_json_payload(encoded_json: &EncodedJson<'_>) -> Vec { encode_stored_json_payload(encoded_json) } pub(crate) fn pack_key(commit_id: &str, pack_id: u32) -> Vec { let commit_id = commit_id.as_bytes(); let mut key = Vec::with_capacity(4 + commit_id.len() + 4); key.extend_from_slice(&(commit_id.len() as u32).to_be_bytes()); key.extend_from_slice(commit_id); key.extend_from_slice(&pack_id.to_be_bytes()); key } pub(crate) fn decode_json_pack_refs(bytes: &[u8]) -> Result, LixError> { let layout = json_pack_layout(bytes)?; let mut refs = Vec::with_capacity(layout.count); for index in 0..layout.count { refs.push(JsonRef::from_hash_bytes( json_pack_entry(bytes, &layout, index)?.hash, )); } Ok(refs) } pub(crate) fn encode_json_pack(entries: &[&EncodedJson<'_>]) -> Result, LixError> { let mut directory_len = STORED_JSON_PACK_MAGIC.len() + 4 + entries.len() * STORED_JSON_PACK_ENTRY_HEADER_LEN; let payload_len = entries .iter() .map(|entry| entry.data.as_ref().len()) .sum::(); let mut out = Vec::with_capacity(directory_len + payload_len); out.extend_from_slice(STORED_JSON_PACK_MAGIC); out.extend_from_slice(&(entries.len() as u32).to_be_bytes()); let mut offset = 0usize; for entry in entries { let data = entry.data.as_ref(); out.extend_from_slice(entry.json_ref.as_hash_bytes()); out.push(json_codec_byte(entry.codec)); out.extend_from_slice(&json_pack_u32( entry.uncompressed_len, 
"uncompressed length", )?); out.extend_from_slice(&json_pack_u32(offset, "payload offset")?); out.extend_from_slice(&json_pack_u32(data.len(), "payload length")?); offset = offset.checked_add(data.len()).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "json_store pack payload offset overflow", ) })?; } for entry in entries { out.extend_from_slice(entry.data.as_ref()); } directory_len = out.len() - payload_len; debug_assert_eq!( directory_len, STORED_JSON_PACK_MAGIC.len() + 4 + entries.len() * STORED_JSON_PACK_ENTRY_HEADER_LEN ); Ok(out) } fn json_pack_u32(value: usize, field: &str) -> Result<[u8; 4], LixError> { let value = u32::try_from(value).map_err(|_| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("json_store pack {field} exceeds u32"), ) })?; Ok(value.to_be_bytes()) } pub(crate) fn encode_json_bytes_for_storage(bytes: &[u8]) -> Result<(JsonRef, Vec), LixError> { let json = std::str::from_utf8(bytes).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("json bytes are invalid UTF-8: {error}"), ) })?; let json_ref = JsonRef::from_hash(blake3::hash(bytes)); encode_json_str_for_storage_with_ref(json, json_ref) } pub(crate) fn encode_json_str_for_storage_with_ref( json: &str, json_ref: JsonRef, ) -> Result<(JsonRef, Vec), LixError> { let encoded_json = encode_json_for_storage_with_ref(json, json_ref)?; let json_ref = encoded_json.json_ref.clone(); Ok((json_ref, encode_stored_json_payload(&encoded_json))) } async fn load_json_bytes_direct( store: &mut impl StorageReader, json_ref: &JsonRef, ) -> Result>, LixError> { let result = store .get_values(KvGetRequest { groups: vec![KvGetGroup { namespace: JSON_NAMESPACE.to_string(), keys: vec![json_ref.as_hash_bytes().to_vec()], }], }) .await? .groups .into_iter() .next() .and_then(|group| group.single_value_owned()); let Some(bytes) = result else { return Ok(None); }; let stored_payload = decode_stored_json_payload(&bytes)?; let _ = store; decode_json_payload(json_ref, stored_payload, JsonHashCheck::TrustedHotRead).map(Some) } pub(crate) async fn load_json_bytes_many_in_scope( store: &mut impl StorageReader, json_refs: &[JsonRef], scope: JsonReadScopeRef<'_>, ) -> Result>>, LixError> { load_json_bytes_many_in_scope_with_hash_check( store, json_refs, scope, JsonHashCheck::TrustedHotRead, ) .await } pub(crate) async fn verify_json_bytes_many_in_scope( store: &mut impl StorageReader, json_refs: &[JsonRef], scope: JsonReadScopeRef<'_>, ) -> Result>>, LixError> { load_json_bytes_many_in_scope_with_hash_check(store, json_refs, scope, JsonHashCheck::Verify) .await } async fn load_json_bytes_many_in_scope_with_hash_check( store: &mut impl StorageReader, json_refs: &[JsonRef], scope: JsonReadScopeRef<'_>, hash_check: JsonHashCheck, ) -> Result>>, LixError> { if json_refs.is_empty() { return Ok(Vec::new()); } let ordered_single_pack_probe = if let JsonReadScopeRef::CommitPacks { commit_id, pack_ids: [pack_id], } = scope { let probe = load_ordered_single_pack(store, json_refs, commit_id, *pack_id, hash_check).await?; if let OrderedSinglePackProbe::Hit(values) = probe { return Ok(values); } Some(probe) } else { None }; let mut unique_keys = Vec::new(); let mut unique_refs = Vec::new(); let mut key_indexes = HashMap::<[u8; 32], usize>::new(); let mut requested_indexes = Vec::with_capacity(json_refs.len()); let mut has_duplicate_refs = false; for json_ref in json_refs { let hash = *json_ref.as_hash_array(); let index = match key_indexes.get(&hash) { Some(index) => { has_duplicate_refs = true; *index } None => { let index = 
unique_keys.len(); key_indexes.insert(hash, index); unique_keys.push(hash.to_vec()); unique_refs.push(*json_ref); index } }; requested_indexes.push(index); } let mut unique_values = match scope { JsonReadScopeRef::OutOfBand => vec![None; unique_refs.len()], JsonReadScopeRef::CommitPacks { commit_id, pack_ids: [pack_id], } => match &ordered_single_pack_probe { Some(OrderedSinglePackProbe::MissPresent(stored_pack)) => { load_from_single_pack_bytes(stored_pack, &unique_refs, hash_check)? } Some(OrderedSinglePackProbe::MissAbsent) => vec![None; unique_refs.len()], _ => { let pack_ids = [*pack_id]; load_from_packs(store, &unique_refs, commit_id, &pack_ids, hash_check).await? } }, JsonReadScopeRef::CommitPacks { commit_id, pack_ids, } => load_from_packs(store, &unique_refs, commit_id, pack_ids, hash_check).await?, }; let missing = unique_values .iter() .enumerate() .filter_map(|(index, value)| value.is_none().then_some(index)) .collect::>(); if missing.is_empty() { return Ok(json_values_in_request_order( unique_values, requested_indexes, has_duplicate_refs, )); } let result = store .get_values(KvGetRequest { groups: vec![KvGetGroup { namespace: JSON_NAMESPACE.to_string(), keys: missing .iter() .map(|&index| unique_keys[index].clone()) .collect(), }], }) .await?; let group = result.groups.into_iter().next().ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "json_store batch load returned no result group", ) })?; if group.len() != missing.len() { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "json_store batch load returned {} values for {} requested refs", group.len(), missing.len() ), )); } for (index, stored_bytes) in group.values_iter().enumerate() { let unique_index = missing[index]; let Some(stored_bytes) = stored_bytes else { continue; }; let stored_payload = decode_stored_json_payload(stored_bytes)?; let _ = store; unique_values[unique_index] = Some(decode_json_payload( &unique_refs[unique_index], stored_payload, hash_check, )?); } Ok(json_values_in_request_order( unique_values, requested_indexes, has_duplicate_refs, )) } fn json_values_in_request_order( unique_values: Vec>>, requested_indexes: Vec, has_duplicate_refs: bool, ) -> Vec>> { if !has_duplicate_refs { debug_assert_eq!(requested_indexes.len(), unique_values.len()); debug_assert!(requested_indexes .iter() .copied() .enumerate() .all(|(request_index, unique_index)| request_index == unique_index)); return unique_values; } requested_indexes .into_iter() .map(|index| unique_values[index].clone()) .collect() } async fn load_ordered_single_pack( store: &mut impl StorageReader, requested_refs: &[JsonRef], commit_id: &str, pack_id: u32, hash_check: JsonHashCheck, ) -> Result { let result = store .get_values(KvGetRequest { groups: vec![KvGetGroup { namespace: JSON_PACK_NAMESPACE.to_string(), keys: vec![pack_key(commit_id, pack_id)], }], }) .await?; let group = result.groups.into_iter().next().ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "json_store ordered pack load returned no result group", ) })?; if group.len() != 1 { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "json_store ordered pack load returned {} values for 1 requested pack", group.len() ), )); } let Some(stored_pack) = group.value(0).flatten() else { return Ok(OrderedSinglePackProbe::MissAbsent); }; let mut values = vec![None; requested_refs.len()]; if load_json_pack_values_in_request_order(stored_pack, hash_check, requested_refs, &mut values)? 
{ Ok(OrderedSinglePackProbe::Hit(values)) } else { Ok(OrderedSinglePackProbe::MissPresent(stored_pack.to_vec())) } } fn load_from_single_pack_bytes( stored_pack: &[u8], unique_refs: &[JsonRef], hash_check: JsonHashCheck, ) -> Result>>, LixError> { let mut values = vec![None; unique_refs.len()]; if load_json_pack_values_in_request_order(stored_pack, hash_check, unique_refs, &mut values)? { return Ok(values); } let wanted = unique_refs .iter() .enumerate() .map(|(index, json_ref)| (*json_ref.as_hash_array(), index)) .collect::>(); load_json_pack_values(stored_pack, hash_check, &wanted, &mut values)?; Ok(values) } async fn load_from_packs( store: &mut impl StorageReader, unique_refs: &[JsonRef], commit_id: &str, pack_ids: &[u32], hash_check: JsonHashCheck, ) -> Result>>, LixError> { let mut values = vec![None; unique_refs.len()]; if pack_ids.is_empty() || unique_refs.is_empty() { return Ok(values); } let keys = pack_ids .iter() .map(|&pack_id| pack_key(commit_id, pack_id)) .collect::>(); let result = store .get_values(KvGetRequest { groups: vec![KvGetGroup { namespace: JSON_PACK_NAMESPACE.to_string(), keys, }], }) .await?; let group = result.groups.into_iter().next().ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "json_store pack load returned no result group", ) })?; if pack_ids.len() == 1 && group.len() == 1 { if let Some(stored_pack) = group.value(0).flatten() { if load_json_pack_values_in_request_order( stored_pack, hash_check, unique_refs, &mut values, )? { return Ok(values); } } } let wanted = unique_refs .iter() .enumerate() .map(|(index, json_ref)| (*json_ref.as_hash_array(), index)) .collect::>(); for stored_pack in group.values_iter().flatten() { load_json_pack_values(stored_pack, hash_check, &wanted, &mut values)?; } Ok(values) } fn encode_stored_json_payload(encoded_json: &EncodedJson<'_>) -> Vec { let mut out = Vec::with_capacity(STORED_JSON_HEADER_LEN + encoded_json.data.as_ref().len()); out.extend_from_slice(STORED_JSON_MAGIC); out.push(json_codec_byte(encoded_json.codec)); out.extend_from_slice(&(encoded_json.uncompressed_len as u64).to_be_bytes()); out.extend_from_slice(encoded_json.data.as_ref()); out } fn decode_stored_json_payload(bytes: &[u8]) -> Result, LixError> { if bytes.len() < STORED_JSON_HEADER_LEN { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "stored JSON payload is truncated", )); } if &bytes[..STORED_JSON_MAGIC.len()] != STORED_JSON_MAGIC { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "stored JSON payload has invalid header", )); } let codec = read_json_codec(bytes[STORED_JSON_MAGIC.len()])?; let len_start = STORED_JSON_MAGIC.len() + 1; let len_end = len_start + 8; let uncompressed_len = u64::from_be_bytes( bytes[len_start..len_end] .try_into() .expect("stored JSON length header is fixed size"), ) as usize; Ok(StoredJsonPayload { codec, uncompressed_len, data: &bytes[len_end..], }) } fn json_codec_byte(codec: JsonCodec) -> u8 { match codec { JsonCodec::Raw => 0, JsonCodec::Zstd => 1, } } fn read_json_codec(byte: u8) -> Result { match byte { 0 => Ok(JsonCodec::Raw), 1 => Ok(JsonCodec::Zstd), _ => Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("stored JSON payload has unknown codec byte {byte}"), )), } } fn decode_json_payload( json_ref: &JsonRef, stored_payload: StoredJsonPayload<'_>, hash_check: JsonHashCheck, ) -> Result, LixError> { let data = match stored_payload.codec { JsonCodec::Raw => Ok(stored_payload.data.to_vec()), JsonCodec::Zstd => decode_json_zstd_payload( stored_payload.data, stored_payload.uncompressed_len, &json_ref.to_hex(), 
), }?; if data.len() != stored_payload.uncompressed_len { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "json ref '{}' decoded to {} bytes, expected {}", json_ref.to_hex(), data.len(), stored_payload.uncompressed_len ), )); } if hash_check == JsonHashCheck::Verify { let actual_hash = blake3::hash(&data); if actual_hash.as_bytes() != json_ref.as_hash_bytes() { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("json ref '{}' hash mismatch", json_ref.to_hex()), )); } } Ok(data) } fn load_json_pack_values_in_request_order( bytes: &[u8], hash_check: JsonHashCheck, requested_refs: &[JsonRef], values: &mut [Option>], ) -> Result { if values.len() < requested_refs.len() { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, "json_store ordered pack load has fewer result slots than refs", )); } let layout = json_pack_layout(bytes)?; if layout.count != requested_refs.len() { return Ok(false); } for (index, json_ref) in requested_refs.iter().enumerate() { let entry = json_pack_entry(bytes, &layout, index)?; if &entry.hash != json_ref.as_hash_array() { for value in &mut values[..index] { *value = None; } return Ok(false); } values[index] = Some(decode_json_payload(json_ref, entry.payload, hash_check)?); } Ok(true) } fn load_json_pack_values( bytes: &[u8], hash_check: JsonHashCheck, wanted: &HashMap<[u8; 32], usize>, values: &mut [Option>], ) -> Result<(), LixError> { let layout = json_pack_layout(bytes)?; for index in 0..layout.count { let entry = json_pack_entry(bytes, &layout, index)?; let Some(&value_index) = wanted.get(&entry.hash) else { continue; }; let json_ref = JsonRef::from_hash_bytes(entry.hash); values[value_index] = Some(decode_json_payload(&json_ref, entry.payload, hash_check)?); } Ok(()) } fn json_pack_layout(bytes: &[u8]) -> Result { if bytes.len() < STORED_JSON_PACK_MAGIC.len() + 4 { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "stored JSON pack is truncated", )); } if &bytes[..STORED_JSON_PACK_MAGIC.len()] != STORED_JSON_PACK_MAGIC { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "stored JSON pack has invalid header", )); } let count_start = STORED_JSON_PACK_MAGIC.len(); let count_end = count_start + 4; let count = u32::from_be_bytes( bytes[count_start..count_end] .try_into() .expect("json pack count header is fixed size"), ) as usize; let directory_start = count_end; let directory_len = count .checked_mul(STORED_JSON_PACK_ENTRY_HEADER_LEN) .ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "json pack directory overflow", ) })?; let payload_start = directory_start.checked_add(directory_len).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "json pack payload offset overflow", ) })?; if bytes.len() < payload_start { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "stored JSON pack directory is truncated", )); } Ok(JsonPackLayout { directory_start, payload_start, count, }) } fn json_pack_entry<'a>( bytes: &'a [u8], layout: &JsonPackLayout, index: usize, ) -> Result, LixError> { if index >= layout.count { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, "json pack entry index exceeds directory count", )); } let mut cursor = layout.directory_start + index * STORED_JSON_PACK_ENTRY_HEADER_LEN; let hash: [u8; 32] = bytes[cursor..cursor + 32] .try_into() .expect("json pack hash header is fixed size"); cursor += 32; let codec = read_json_codec(bytes[cursor])?; cursor += 1; let uncompressed_len = u32::from_be_bytes( bytes[cursor..cursor + 4] .try_into() .expect("json pack uncompressed length is fixed size"), ) as usize; cursor += 4; let 
offset = u32::from_be_bytes( bytes[cursor..cursor + 4] .try_into() .expect("json pack payload offset is fixed size"), ) as usize; cursor += 4; let len = u32::from_be_bytes( bytes[cursor..cursor + 4] .try_into() .expect("json pack payload length is fixed size"), ) as usize; let data_start = layout.payload_start.checked_add(offset).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "json pack entry offset overflow", ) })?; let data_end = data_start.checked_add(len).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "json pack entry length overflow", ) })?; if data_end > bytes.len() { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "stored JSON pack entry payload is truncated", )); } Ok(JsonPackEntry { hash, payload: StoredJsonPayload { codec, uncompressed_len, data: &bytes[data_start..data_end], }, }) } #[cfg(test)] mod tests { use std::sync::Arc; use super::*; use crate::backend::testing::UnitTestBackend; use crate::storage::{StorageContext, StorageWriteSet}; #[tokio::test] async fn json_roundtrips_raw_payload() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let json = "{\"value\":\"small\"}"; let encoded = encode_json(json).expect("json should encode"); assert_eq!(encoded.codec, JsonCodec::Raw); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); writes.put( JSON_NAMESPACE, encoded.json_ref.as_hash_bytes().to_vec(), encode_stored_json_payload(&encoded), ); writes .apply(&mut transaction.as_mut()) .await .expect("json should store"); transaction .commit() .await .expect("transaction should commit"); let mut store = storage.clone(); assert_eq!( load_json_bytes_direct(&mut store, &encoded.json_ref) .await .expect("json should load"), Some(json.as_bytes().to_vec()) ); } #[tokio::test] async fn json_batch_load_roundtrips_in_request_order() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let first = encode_json("{\"value\":\"first\"}").expect("first json should encode"); let second = encode_json("{\"value\":\"second\"}").expect("second json should encode"); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); writes.put( JSON_NAMESPACE, first.json_ref.as_hash_bytes().to_vec(), encode_stored_json_payload(&first), ); writes.put( JSON_NAMESPACE, second.json_ref.as_hash_bytes().to_vec(), encode_stored_json_payload(&second), ); writes .apply(&mut transaction.as_mut()) .await .expect("json should store"); transaction .commit() .await .expect("transaction should commit"); let mut store = storage.clone(); let values = load_json_bytes_many_in_scope( &mut store, &[second.json_ref, first.json_ref, second.json_ref], JsonReadScopeRef::OutOfBand, ) .await .expect("json batch should load"); assert_eq!( values, vec![ Some(second.data.as_ref().to_vec()), Some(first.data.as_ref().to_vec()), Some(second.data.as_ref().to_vec()), ] ); } #[tokio::test] async fn verified_batch_load_rejects_hash_mismatch() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let requested_ref = JsonRef::for_content(br#"{"value":"requested"}"#); let stored = encode_json("{\"value\":\"different\"}").expect("stored json should encode"); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); writes.put( JSON_NAMESPACE, requested_ref.as_hash_bytes().to_vec(), encode_stored_json_payload(&stored), ); 
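// Note on the fixture above: the row is keyed by `requested_ref`, but its
// payload was encoded from different JSON, so the stored bytes no longer
// hash to the key. The trusted hot-read below returns the bytes unchecked;
// only the verify path recomputes the blake3 hash and rejects the mismatch.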
writes .apply(&mut transaction.as_mut()) .await .expect("json should store"); transaction .commit() .await .expect("transaction should commit"); let mut store = storage.clone(); let trusted = load_json_bytes_many_in_scope( &mut store, &[requested_ref], JsonReadScopeRef::OutOfBand, ) .await .expect("trusted hot read should not hash-check"); assert_eq!(trusted, vec![Some(stored.data.as_ref().to_vec())]); let mut store = storage.clone(); let error = verify_json_bytes_many_in_scope( &mut store, &[requested_ref], JsonReadScopeRef::OutOfBand, ) .await .expect_err("verified read should reject mismatched content address"); assert!( error.to_string().contains("hash mismatch"), "error should mention hash mismatch: {error}" ); } #[tokio::test] async fn verified_pack_load_checks_only_requested_entries() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let good = encode_json("{\"value\":\"good\"}").expect("good json should encode"); let bad_ref = JsonRef::for_content(br#"{"value":"expected"}"#); let bad = encode_json_for_storage_with_ref("{\"value\":\"wrong\"}", bad_ref) .expect("bad json should encode with mismatched ref"); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); writes.put( JSON_PACK_NAMESPACE, pack_key("commit-a", 0), encode_json_pack(&[&good, &bad]).expect("pack should encode"), ); writes .apply(&mut transaction.as_mut()) .await .expect("json pack should store"); transaction .commit() .await .expect("transaction should commit"); let pack_ids = [0]; let mut store = storage.clone(); let good_values = verify_json_bytes_many_in_scope( &mut store, &[good.json_ref], JsonReadScopeRef::CommitPacks { commit_id: "commit-a", pack_ids: &pack_ids, }, ) .await .expect("unrequested bad pack entry should not be decoded"); assert_eq!(good_values, vec![Some(good.data.as_ref().to_vec())]); let mut store = storage.clone(); let error = verify_json_bytes_many_in_scope( &mut store, &[bad_ref], JsonReadScopeRef::CommitPacks { commit_id: "commit-a", pack_ids: &pack_ids, }, ) .await .expect_err("requested bad pack entry should be verified"); assert!( error.to_string().contains("hash mismatch"), "error should mention hash mismatch: {error}" ); } #[test] fn json_pack_directory_uses_compact_u32_fields() { let first = encode_json("{\"value\":\"first\"}").expect("first json should encode"); let second = encode_json("{\"value\":\"second\"}").expect("second json should encode"); let pack = encode_json_pack(&[&first, &second]).expect("pack should encode"); let payload_len = first.data.as_ref().len() + second.data.as_ref().len(); assert_eq!(STORED_JSON_PACK_ENTRY_HEADER_LEN, 32 + 1 + 4 + 4 + 4); assert_eq!( pack.len(), STORED_JSON_PACK_MAGIC.len() + 4 + 2 * STORED_JSON_PACK_ENTRY_HEADER_LEN + payload_len ); } #[test] fn json_pack_u32_rejects_oversized_directory_fields() { let error = json_pack_u32((u32::MAX as usize) + 1, "payload offset") .expect_err("oversized pack directory field should reject"); assert!( error.to_string().contains("payload offset exceeds u32"), "error should identify oversized field: {error}" ); } #[test] fn ordered_pack_load_fast_path_requires_exact_pack_order() { let first = encode_json("{\"value\":\"first\"}").expect("first json should encode"); let second = encode_json("{\"value\":\"second\"}").expect("second json should encode"); let pack = encode_json_pack(&[&first, &second]).expect("pack should encode"); let mut values = vec![None, None]; let loaded = 
load_json_pack_values_in_request_order( &pack, JsonHashCheck::Verify, &[first.json_ref, second.json_ref], &mut values, ) .expect("ordered pack load should parse"); assert!(loaded); assert_eq!( values, vec![ Some(first.data.as_ref().to_vec()), Some(second.data.as_ref().to_vec()), ] ); let mut values = vec![None, None]; let loaded = load_json_pack_values_in_request_order( &pack, JsonHashCheck::Verify, &[second.json_ref, first.json_ref], &mut values, ) .expect("unordered refs should fall back without error"); assert!(!loaded); assert_eq!(values, vec![None, None]); } #[tokio::test] async fn pack_batch_load_falls_back_for_unordered_refs() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let first = encode_json("{\"value\":\"first\"}").expect("first json should encode"); let second = encode_json("{\"value\":\"second\"}").expect("second json should encode"); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); writes.put( JSON_PACK_NAMESPACE, pack_key("commit-a", 0), encode_json_pack(&[&first, &second]).expect("pack should encode"), ); writes .apply(&mut transaction.as_mut()) .await .expect("json pack should store"); transaction .commit() .await .expect("transaction should commit"); let pack_ids = [0]; let mut store = storage.clone(); let values = load_json_bytes_many_in_scope( &mut store, &[second.json_ref, first.json_ref], JsonReadScopeRef::CommitPacks { commit_id: "commit-a", pack_ids: &pack_ids, }, ) .await .expect("unordered refs should load through fallback"); assert_eq!( values, vec![ Some(second.data.as_ref().to_vec()), Some(first.data.as_ref().to_vec()), ] ); } #[tokio::test] async fn ordered_pack_probe_falls_back_to_direct_rows() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let packed = encode_json("{\"value\":\"packed\"}").expect("packed json should encode"); let direct = encode_json("{\"value\":\"direct\"}").expect("direct json should encode"); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); writes.put( JSON_PACK_NAMESPACE, pack_key("commit-a", 0), encode_json_pack(&[&packed]).expect("pack should encode"), ); writes.put( JSON_NAMESPACE, direct.json_ref.as_hash_bytes().to_vec(), encode_stored_json_payload(&direct), ); writes .apply(&mut transaction.as_mut()) .await .expect("json rows should store"); transaction .commit() .await .expect("transaction should commit"); let pack_ids = [0]; let mut store = storage.clone(); let values = load_json_bytes_many_in_scope( &mut store, &[direct.json_ref], JsonReadScopeRef::CommitPacks { commit_id: "commit-a", pack_ids: &pack_ids, }, ) .await .expect("mismatched ordered pack probe should fall back to direct rows"); assert_eq!(values, vec![Some(direct.data.as_ref().to_vec())]); } } ================================================ FILE: packages/engine/src/json_store/types.rs ================================================ use std::sync::Arc; use crate::LixError; #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct NormalizedJson(Arc); impl NormalizedJson { pub(crate) fn from_arc_unchecked(normalized: Arc) -> Self { Self(normalized) } pub(crate) fn from_value(value: &serde_json::Value, context: &str) -> Result { let normalized: Arc = serde_json::to_string(value) .map_err(|error| { LixError::new( LixError::CODE_UNKNOWN, format!("{context} failed to serialize as normalized JSON: {error}"), ) })? 
.into(); Ok(Self(normalized)) } pub(crate) fn as_str(&self) -> &str { self.0.as_ref() } pub(crate) fn as_bytes(&self) -> &[u8] { self.as_str().as_bytes() } } #[derive(Debug, Clone, Copy, PartialEq, Eq, serde::Serialize, serde::Deserialize)] pub(crate) struct JsonRef { hash: [u8; 32], } impl JsonRef { pub(crate) fn from_hash(hash: blake3::Hash) -> Self { Self { hash: *hash.as_bytes(), } } pub(crate) fn from_hash_bytes(hash: [u8; 32]) -> Self { Self { hash } } pub(crate) fn for_content(bytes: &[u8]) -> Self { Self::from_hash(blake3::hash(bytes)) } pub(crate) fn as_hash_bytes(&self) -> &[u8] { &self.hash } pub(crate) fn as_hash_array(&self) -> &[u8; 32] { &self.hash } pub(crate) fn to_hex(&self) -> String { self.hash.iter().map(|byte| format!("{byte:02x}")).collect() } } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(crate) struct NormalizedJsonRef<'a> { normalized: &'a str, trusted_json_ref: Option, } impl<'a> NormalizedJsonRef<'a> { pub(crate) fn new(normalized: &'a str) -> Self { Self { normalized, trusted_json_ref: None, } } /// Uses a caller-owned invariant that `json_ref` was computed from /// `normalized`. This avoids rehashing JSON already normalized by the /// transaction staging boundary. pub(crate) fn trusted_prehashed(normalized: &'a str, json_ref: JsonRef) -> Self { Self { normalized, trusted_json_ref: Some(json_ref), } } pub(crate) fn normalized(&self) -> &'a str { self.normalized } pub(crate) fn trusted_json_ref(&self) -> Option { self.trusted_json_ref } } impl<'a> From<&'a NormalizedJson> for NormalizedJsonRef<'a> { fn from(value: &'a NormalizedJson) -> Self { Self::new(value.as_str()) } } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(crate) enum JsonWritePlacementRef<'a> { CommitPack { commit_id: &'a str, pack_id: u32 }, OutOfBand, } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(crate) enum JsonReadScopeRef<'a> { OutOfBand, CommitPacks { commit_id: &'a str, pack_ids: &'a [u32], }, } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(crate) struct JsonLoadRequestRef<'a> { pub(crate) refs: &'a [JsonRef], pub(crate) scope: JsonReadScopeRef<'a>, } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(crate) struct JsonProjectionLoadRequestRef<'a> { pub(crate) refs: &'a [JsonRef], pub(crate) scope: JsonReadScopeRef<'a>, pub(crate) paths: &'a [JsonProjectionPath], } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct JsonLoadBatch { values: Vec>>, } impl JsonLoadBatch { pub(crate) fn new(values: Vec>>) -> Self { Self { values } } pub(crate) fn values(&self) -> &[Option>] { &self.values } pub(crate) fn into_values(self) -> Vec>> { self.values } } #[derive(Debug, Clone, PartialEq)] pub(crate) struct JsonValueBatch { values: Vec>, } impl JsonValueBatch { pub(crate) fn new(values: Vec>) -> Self { Self { values } } pub(crate) fn values(&self) -> &[Option] { &self.values } pub(crate) fn into_values(self) -> Vec> { self.values } } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct JsonProjectionPath(String); impl JsonProjectionPath { pub(crate) fn new(pointer: impl Into) -> Self { Self(pointer.into()) } pub(crate) fn as_str(&self) -> &str { &self.0 } } #[derive(Debug, Clone, PartialEq)] pub(crate) struct JsonProjection { values: Vec>, } impl JsonProjection { pub(crate) fn new(values: Vec>) -> Self { Self { values } } pub(crate) fn values(&self) -> &[Option] { &self.values } } #[derive(Debug, Clone, PartialEq)] pub(crate) struct JsonProjectionBatch { values: Vec>, } impl JsonProjectionBatch { pub(crate) fn new(values: Vec>) -> Self { Self { values } } pub(crate) fn 
values(&self) -> &[Option<JsonProjection>] {
        &self.values
    }
    pub(crate) fn into_values(self) -> Vec<Option<JsonProjection>> {
        self.values
    }
}


================================================
FILE: packages/engine/src/lib.rs
================================================
mod backend;
mod binary_cas;
pub(crate) mod catalog;
pub(crate) mod cel;
pub(crate) mod commit_graph;
#[allow(dead_code, unused_imports)]
pub(crate) mod commit_store;
mod common;
pub(crate) mod domain;
pub mod engine;
pub(crate) mod entity_identity;
pub(crate) mod functions;
pub(crate) mod init;
#[allow(dead_code)]
pub(crate) mod json_store;
pub(crate) mod live_state;
mod schema;
pub mod session;
pub(crate) mod sql2;
#[allow(dead_code, unused_imports)]
pub(crate) mod storage;
#[cfg(feature = "storage-benches")]
pub mod storage_bench;
#[cfg_attr(feature = "storage-benches", allow(dead_code))]
#[cfg(any(test, feature = "storage-benches"))]
pub(crate) mod test_support;
pub(crate) mod tracked_state;
pub mod transaction;
pub(crate) mod untracked_state;
pub(crate) mod version;
pub mod wasm;

pub use schema::{
    lix_schema_definition, lix_schema_definition_json, validate_lix_schema,
    validate_lix_schema_definition,
};

pub use backend::{
    Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetGroup,
    BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest,
    BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch,
    BackendKvWriteGroup, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction,
    BytePage, BytePageBuilder,
};
pub use common::LixError;
pub(crate) use common::{parse_row_metadata, parse_row_metadata_value, serialize_row_metadata};
pub use common::{CanonicalPluginKey, CanonicalSchemaKey, EntityId, FileId, VersionId};
pub use common::{LixNotice, NullableKeyFilter, SqlQueryResult, Value, WriteReceipt};
pub use common::{WireQueryResult, WireValue};
pub use engine::Engine;
pub use init::InitReceipt;
#[cfg(feature = "storage-benches")]
pub use session::optimization9_sql2_bench;
pub use session::{
    CreateVersionOptions, CreateVersionReceipt, MergeChangeStats, MergeConflict,
    MergeConflictChangeKind, MergeConflictKind, MergeConflictSide, MergeVersionOptions,
    MergeVersionOutcome, MergeVersionPreview, MergeVersionPreviewOptions, MergeVersionReceipt,
    SessionContext, SwitchVersionOptions, SwitchVersionReceipt,
};
pub use session::{ExecuteResult, Row, RowRef, TryFromValue};

pub(crate) const GLOBAL_VERSION_ID: &str = "global";


================================================
FILE: packages/engine/src/live_state/context.rs
================================================
use async_trait::async_trait;
use tokio::sync::Mutex;

use crate::commit_graph::CommitGraphContext;
use crate::entity_identity::EntityIdentity;
use crate::live_state::visibility;
use crate::live_state::{
    LiveStateReader, LiveStateRowRequest, LiveStateScanRequest, MaterializedLiveStateRow,
};
use crate::storage::StorageReader;
use crate::tracked_state::{
    MaterializedTrackedStateRow, TrackedStateContext, TrackedStateFilter, TrackedStateProjection,
    TrackedStateRowRequest, TrackedStateScanRequest,
};
use crate::untracked_state::{
    UntrackedStateContext, UntrackedStateRowRequest, UntrackedStateScanRequest,
};
use crate::version::VERSION_REF_SCHEMA_KEY;
use crate::LixError;
use crate::NullableKeyFilter;
use crate::GLOBAL_VERSION_ID;

const COMMIT_SCHEMA_KEY: &str = "lix_commit";
const COMMIT_EDGE_SCHEMA_KEY: &str = "lix_commit_edge";

/// Serving facade for visible live-state reads.
/// /// Live state composes the rebuildable tracked projection with the durable /// untracked local overlay. Lower stores own persistence; this facade owns the /// visibility rule. pub(crate) struct LiveStateContext { tracked_state: TrackedStateContext, untracked_state: UntrackedStateContext, commit_graph: CommitGraphContext, } impl LiveStateContext { pub(crate) fn new( tracked_state: TrackedStateContext, untracked_state: UntrackedStateContext, commit_graph: CommitGraphContext, ) -> Self { Self { tracked_state, untracked_state, commit_graph, } } /// Creates a visible live-state reader over a caller-provided KV store. pub(crate) fn reader(&self, store: S) -> LiveStateStoreReader where S: StorageReader, { LiveStateStoreReader { store: Mutex::new(store), tracked_state: self.tracked_state.clone(), untracked_state: self.untracked_state, commit_graph: self.commit_graph.clone(), } } } /// Visible live-state reader backed by a caller-provided KV store. pub(crate) struct LiveStateStoreReader { store: Mutex, tracked_state: TrackedStateContext, untracked_state: UntrackedStateContext, commit_graph: CommitGraphContext, } impl LiveStateStoreReader where S: StorageReader, { pub(crate) async fn scan_rows( &self, request: &LiveStateScanRequest, ) -> Result, LixError> { let mut store = self.store.lock().await; let scope = scan_scope(&mut *store, &self.untracked_state, request).await?; let derived_rows = scan_commit_derived_rows(&mut *store, &self.commit_graph, request, &scope).await?; let mut tracked_rows = Vec::new(); if request.filter.untracked != Some(true) && !is_commit_derived_only_request(request) { for version_id in &scope.storage_version_ids { let Some(commit_id) = load_version_ref_commit_id(&mut *store, &self.untracked_state, version_id) .await? else { continue; }; let tracked_request = tracked_scan_request_from_live(request); let source = tracked_source_from_version_id(version_id); let store: &mut dyn StorageReader = &mut *store; tracked_rows.extend( self.tracked_state .reader(store) .scan_rows_at_commit(&commit_id, &tracked_request) .await? .into_iter() .map(|row| project_tracked_row(row, version_id, source)), ); } } let untracked_rows = if request.filter.untracked != Some(false) { let store: &mut dyn StorageReader = &mut *store; self.untracked_state .reader(store) .scan_rows(&untracked_scan_request_from_live( request, &scope.storage_version_ids, )) .await? .into_iter() .map(MaterializedLiveStateRow::from) .collect::>() } else { Vec::new() }; let mut rows = if request.filter.untracked.is_some() { tracked_rows .into_iter() .chain(untracked_rows) .chain(derived_rows) .collect() } else { crate::live_state::overlay::overlay_untracked_rows(tracked_rows, untracked_rows) .into_iter() .chain(derived_rows) .collect() }; rows = visibility::resolve_scan_rows( rows, &scope.projection_version_ids, request.filter.include_tombstones, ); if let Some(limit) = request.limit { rows.truncate(limit); } Ok(rows) } pub(crate) async fn load_row( &self, request: &LiveStateRowRequest, ) -> Result, LixError> { let mut store = self.store.lock().await; if !version_ref_exists(&mut *store, &self.untracked_state, &request.version_id).await? 
{ return Ok(None); } if is_commit_derived_schema(&request.schema_key) && request.file_id == NullableKeyFilter::Null { let scope = LiveStateScanScope { storage_version_ids: vec![request.version_id.clone()], projection_version_ids: vec![request.version_id.clone()], }; let rows = scan_commit_derived_rows( &mut *store, &self.commit_graph, &LiveStateScanRequest { filter: crate::live_state::LiveStateFilter { schema_keys: vec![request.schema_key.clone()], entity_ids: vec![request.entity_id.clone()], version_ids: vec![request.version_id.clone()], file_ids: vec![NullableKeyFilter::Null], untracked: Some(false), include_tombstones: false, ..Default::default() }, limit: Some(1), ..Default::default() }, &scope, ) .await?; if let Some(row) = rows.into_iter().next() { return Ok(Some(row)); } } for candidate in load_row_candidates(request) { match candidate.source { LiveStateLookupSource::Untracked => { let store: &mut dyn StorageReader = &mut *store; if let Some(row) = self .untracked_state .reader(store) .load_row(&untracked_row_request_from_live( request, &candidate.version_id, )) .await? { return Ok(Some(visibility::project_loaded_row( MaterializedLiveStateRow::from(row), &request.version_id, &candidate.version_id, ))); } } LiveStateLookupSource::Tracked => { let Some(commit_id) = load_version_ref_commit_id( &mut *store, &self.untracked_state, &candidate.version_id, ) .await? else { continue; }; let store: &mut dyn StorageReader = &mut *store; let tracked_request = tracked_row_request_from_live(request); let mut rows = self .tracked_state .reader(store) .load_rows_at_commit(&commit_id, &[tracked_request]) .await?; if let Some(row) = rows.pop().flatten() { return Ok(Some(project_tracked_row( row, &request.version_id, tracked_source_from_version_id(&candidate.version_id), ))); } } } } Ok(None) } } #[async_trait] impl LiveStateReader for LiveStateStoreReader where S: StorageReader + Sync, { async fn scan_rows( &self, request: &LiveStateScanRequest, ) -> Result, LixError> { LiveStateStoreReader::scan_rows(self, request).await } async fn load_row( &self, request: &LiveStateRowRequest, ) -> Result, LixError> { LiveStateStoreReader::load_row(self, request).await } } async fn scan_commit_derived_rows( store: &mut dyn StorageReader, commit_graph: &CommitGraphContext, request: &LiveStateScanRequest, scope: &LiveStateScanScope, ) -> Result, LixError> { if request.filter.untracked == Some(true) || !request_may_include_commit_derived(request) { return Ok(Vec::new()); } if !file_filter_allows_null(&request.filter.file_ids) { return Ok(Vec::new()); } let version_ids = if scope.projection_version_ids.is_empty() { vec![GLOBAL_VERSION_ID.to_string()] } else { scope.projection_version_ids.clone() }; let mut graph = commit_graph.reader(store); let commits = graph.all_commits().await?; let include_commit = schema_filter_allows(&request.filter.schema_keys, COMMIT_SCHEMA_KEY); let include_commit_edge = schema_filter_allows(&request.filter.schema_keys, COMMIT_EDGE_SCHEMA_KEY); let mut rows = Vec::new(); for version_id in &version_ids { if include_commit { for commit in &commits { rows.push(commit_row(commit, version_id)?); } } if include_commit_edge { for edge in graph.commit_edges(&commits) { rows.push(commit_edge_row(&edge, version_id)?); } } } rows.retain(|row| { (request.filter.entity_ids.is_empty() || request.filter.entity_ids.contains(&row.entity_id)) && (request.filter.version_ids.is_empty() || request.filter.version_ids.contains(&row.version_id)) }); Ok(rows) } fn request_may_include_commit_derived(request: 
&LiveStateScanRequest) -> bool { request.filter.schema_keys.is_empty() || request .filter .schema_keys .iter() .any(|schema_key| is_commit_derived_schema(schema_key)) } fn is_commit_derived_only_request(request: &LiveStateScanRequest) -> bool { !request.filter.schema_keys.is_empty() && request .filter .schema_keys .iter() .all(|schema_key| is_commit_derived_schema(schema_key)) } fn is_commit_derived_schema(schema_key: &str) -> bool { matches!(schema_key, COMMIT_SCHEMA_KEY | COMMIT_EDGE_SCHEMA_KEY) } fn schema_filter_allows(schema_keys: &[String], schema_key: &str) -> bool { schema_keys.is_empty() || schema_keys.iter().any(|candidate| candidate == schema_key) } fn file_filter_allows_null(file_ids: &[NullableKeyFilter]) -> bool { file_ids.is_empty() || file_ids .iter() .any(|file_id| matches!(file_id, NullableKeyFilter::Any | NullableKeyFilter::Null)) } fn commit_row( commit: &crate::commit_graph::CommitGraphCommit, version_id: &str, ) -> Result { let snapshot_content = serde_json::to_string(&serde_json::json!({ "id": commit.commit_id, })) .map_err(|error| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to encode derived lix_commit snapshot: {error}"), ) })?; Ok(MaterializedLiveStateRow { entity_id: EntityIdentity::single(commit.commit_id.clone()), schema_key: COMMIT_SCHEMA_KEY.to_string(), file_id: None, snapshot_content: Some(snapshot_content), metadata: None, deleted: false, created_at: commit.change.created_at.clone(), updated_at: commit.change.created_at.clone(), global: true, change_id: Some(commit.change.id.clone()), commit_id: Some(commit.commit_id.clone()), untracked: false, version_id: version_id.to_string(), }) } fn commit_edge_row( edge: &crate::commit_graph::CommitGraphEdge, version_id: &str, ) -> Result { let snapshot_content = serde_json::to_string(&serde_json::json!({ "parent_id": edge.parent_commit_id, "child_id": edge.child_commit_id, "parent_order": edge.parent_order, })) .map_err(|error| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("failed to encode derived lix_commit_edge snapshot: {error}"), ) })?; Ok(MaterializedLiveStateRow { entity_id: EntityIdentity { parts: vec![edge.parent_commit_id.clone(), edge.child_commit_id.clone()], }, schema_key: COMMIT_EDGE_SCHEMA_KEY.to_string(), file_id: None, snapshot_content: Some(snapshot_content), metadata: None, deleted: false, created_at: "1970-01-01T00:00:00.000Z".to_string(), updated_at: "1970-01-01T00:00:00.000Z".to_string(), global: true, change_id: None, commit_id: Some(edge.child_commit_id.clone()), untracked: false, version_id: version_id.to_string(), }) } fn tracked_scan_request_from_live(request: &LiveStateScanRequest) -> TrackedStateScanRequest { TrackedStateScanRequest { filter: TrackedStateFilter { schema_keys: request.filter.schema_keys.clone(), entity_ids: request.filter.entity_ids.clone(), file_ids: request.filter.file_ids.clone(), // Scan tombstones internally so version-local tombstones can hide // global fallback rows before the serving facade filters them. 
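            // Example: when a version-local tombstone shadows an entity that also
            // has a global tracked row, dropping tombstones at this layer would
            // let the global row leak back into the merged scan result.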
include_tombstones: true, }, projection: TrackedStateProjection { columns: request.projection.columns.clone(), }, limit: None, } } fn untracked_scan_request_from_live( request: &LiveStateScanRequest, version_ids: &[String], ) -> UntrackedStateScanRequest { let mut filter: crate::untracked_state::UntrackedStateFilter = request.filter.clone().into(); filter.version_ids = version_ids.to_vec(); UntrackedStateScanRequest { filter, projection: crate::untracked_state::UntrackedStateProjection { columns: request.projection.columns.clone(), }, limit: None, } } #[derive(Debug, Clone, PartialEq, Eq)] struct LiveStateScanScope { storage_version_ids: Vec, projection_version_ids: Vec, } async fn scan_scope( store: &mut dyn StorageReader, untracked_state: &UntrackedStateContext, request: &LiveStateScanRequest, ) -> Result { if request.filter.version_ids.is_empty() { return Ok(LiveStateScanScope { storage_version_ids: all_version_ref_ids(store, untracked_state).await?, projection_version_ids: Vec::new(), }); } let mut projection_version_ids = Vec::new(); for version_id in &request.filter.version_ids { if version_ref_exists(store, untracked_state, version_id).await? { projection_version_ids.push(version_id.clone()); } } let storage_version_ids = visibility::expanded_version_ids(&projection_version_ids); Ok(LiveStateScanScope { storage_version_ids, projection_version_ids, }) } async fn all_version_ref_ids( store: &mut dyn StorageReader, untracked_state: &UntrackedStateContext, ) -> Result, LixError> { let rows = untracked_state .reader(store) .scan_rows(&UntrackedStateScanRequest { filter: crate::untracked_state::UntrackedStateFilter { schema_keys: vec![VERSION_REF_SCHEMA_KEY.to_string()], version_ids: vec![GLOBAL_VERSION_ID.to_string()], ..Default::default() }, ..Default::default() }) .await?; rows.into_iter() .map(|row| row.entity_id.as_single_string_owned()) .collect() } async fn load_version_ref_commit_id( store: &mut dyn StorageReader, untracked_state: &UntrackedStateContext, version_id: &str, ) -> Result, LixError> { let Some(row) = untracked_state .reader(store) .load_row(&UntrackedStateRowRequest { schema_key: VERSION_REF_SCHEMA_KEY.to_string(), version_id: GLOBAL_VERSION_ID.to_string(), entity_id: crate::entity_identity::EntityIdentity::single(version_id), file_id: crate::NullableKeyFilter::Null, }) .await? else { return Ok(None); }; let Some(snapshot_content) = row.snapshot_content.as_deref() else { return Ok(None); }; let snapshot = serde_json::from_str::(snapshot_content).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("live_state version-ref snapshot parse failed: {error}"), ) })?; Ok(snapshot .get("commit_id") .and_then(serde_json::Value::as_str) .map(str::to_string)) } async fn version_ref_exists( store: &mut dyn StorageReader, untracked_state: &UntrackedStateContext, version_id: &str, ) -> Result { Ok( load_version_ref_commit_id(store, untracked_state, version_id) .await? 
.is_some(), ) } #[derive(Debug, Clone, Copy, PartialEq, Eq)] enum TrackedRowSource { Global, Version, } fn tracked_source_from_version_id(version_id: &str) -> TrackedRowSource { if version_id == GLOBAL_VERSION_ID { TrackedRowSource::Global } else { TrackedRowSource::Version } } fn project_tracked_row( row: MaterializedTrackedStateRow, view_version_id: &str, source: TrackedRowSource, ) -> MaterializedLiveStateRow { MaterializedLiveStateRow { entity_id: row.entity_id, schema_key: row.schema_key, file_id: row.file_id, snapshot_content: row.snapshot_content, metadata: row.metadata, deleted: row.deleted, created_at: row.created_at, updated_at: row.updated_at, global: source == TrackedRowSource::Global, change_id: Some(row.change_id), commit_id: Some(row.commit_id), untracked: false, version_id: view_version_id.to_string(), } } #[derive(Debug, Clone, Copy, PartialEq, Eq)] enum LiveStateLookupSource { Untracked, Tracked, } #[derive(Debug, Clone, PartialEq, Eq)] struct LiveStateLookupCandidate { source: LiveStateLookupSource, version_id: String, } fn load_row_candidates(request: &LiveStateRowRequest) -> Vec { let mut candidates = vec![ LiveStateLookupCandidate { source: LiveStateLookupSource::Untracked, version_id: request.version_id.clone(), }, LiveStateLookupCandidate { source: LiveStateLookupSource::Tracked, version_id: request.version_id.clone(), }, ]; if request.version_id != GLOBAL_VERSION_ID { candidates.extend([ LiveStateLookupCandidate { source: LiveStateLookupSource::Untracked, version_id: GLOBAL_VERSION_ID.to_string(), }, LiveStateLookupCandidate { source: LiveStateLookupSource::Tracked, version_id: GLOBAL_VERSION_ID.to_string(), }, ]); } candidates } fn untracked_row_request_from_live( request: &LiveStateRowRequest, version_id: &str, ) -> crate::untracked_state::UntrackedStateRowRequest { crate::untracked_state::UntrackedStateRowRequest { schema_key: request.schema_key.clone(), version_id: version_id.to_string(), entity_id: request.entity_id.clone(), file_id: request.file_id.clone(), } } fn tracked_row_request_from_live(request: &LiveStateRowRequest) -> TrackedStateRowRequest { TrackedStateRowRequest { schema_key: request.schema_key.clone(), entity_id: request.entity_id.clone(), file_id: request.file_id.clone(), } } #[cfg(test)] mod tests { use std::sync::Arc; use super::*; use crate::backend::{testing::UnitTestBackend, Backend}; use crate::commit_store::{CommitDraftRef, CommitStoreContext}; use crate::entity_identity::EntityIdentity; use crate::json_store::{ JsonStoreContext, JsonWritePlacementRef, NormalizedJson, NormalizedJsonRef, }; use crate::live_state::LiveStateFilter; use crate::storage::{StorageContext, StorageWriteSet, StorageWriteTransaction}; use crate::tracked_state::{TrackedStateDeltaRef, TrackedStateScanRequest}; use crate::untracked_state::{MaterializedUntrackedStateRow, UntrackedStateContext}; use crate::NullableKeyFilter; use serde_json::json; const COMMIT_SCHEMA_KEY: &str = "lix_commit"; fn live_state_context() -> LiveStateContext { LiveStateContext::new( crate::tracked_state::TrackedStateContext::new(), crate::untracked_state::UntrackedStateContext::new(), crate::commit_graph::CommitGraphContext::new(), ) } async fn write_untracked_rows_to_store( store: &mut (impl StorageWriteTransaction + ?Sized), rows: &[MaterializedUntrackedStateRow], ) { let mut writes = StorageWriteSet::new(); let canonical_rows = rows .iter() .map(|row| crate::test_support::untracked_state_row_from_materialized(&mut writes, row)) .collect::, _>>() .expect("untracked rows should canonicalize"); 
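        // Route the canonical rows through the real untracked writer so the
        // live-state facade under test reads them back through the same overlay
        // persistence path that production code uses.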
UntrackedStateContext::new() .writer(&mut writes) .stage_rows(canonical_rows.iter().map(|row| row.as_ref())) .expect("untracked rows should write"); writes .apply(store) .await .expect("untracked rows should apply"); } async fn write_empty_commits_to_store( store: &mut (impl StorageWriteTransaction + ?Sized), commit_ids: &[&str], ) { let mut writes = StorageWriteSet::new(); for commit_id in commit_ids { let commit_change_id = format!("{commit_id}:commit"); CommitStoreContext::new() .writer(&mut *store, &mut writes) .stage_commit_draft( CommitDraftRef { id: commit_id, change_id: &commit_change_id, parent_ids: &[], author_account_ids: &[], created_at: "1970-01-01T00:00:00.000Z", }, Vec::new(), Vec::new(), ) .await .expect("empty commit should stage"); } writes .apply(store) .await .expect("empty commits should apply"); } async fn stage_materialized_live_rows( store: &mut (impl StorageReader + ?Sized), writes: &mut StorageWriteSet, _json_writer: &mut crate::json_store::JsonStoreWriter, rows: &[MaterializedLiveStateRow], ) -> Result<(), LixError> { let mut untracked_rows = Vec::new(); let mut tracked_rows_by_commit = std::collections::BTreeMap::< String, Vec<(crate::commit_store::Change, String, String)>, >::new(); let mut parent_by_commit = std::collections::BTreeMap::>::new(); for row in rows { if row.untracked { let materialized = crate::untracked_state::MaterializedUntrackedStateRow::from(row); let canonical = crate::test_support::untracked_state_row_from_materialized( writes, &materialized, )?; untracked_rows.push(canonical); continue; } let materialized = MaterializedTrackedStateRow::try_from(row)?; let commit_id = row.commit_id.clone().ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "test tracked row missing commit_id") })?; if row.schema_key == COMMIT_SCHEMA_KEY { parent_by_commit.insert( commit_id.clone(), parent_commit_id_from_test_commit_row(row)?, ); } if row.schema_key != COMMIT_SCHEMA_KEY { let change = crate::test_support::tracked_change_from_materialized(&materialized)?; stage_tracked_materialized_json(writes, &commit_id, &materialized)?; tracked_rows_by_commit.entry(commit_id).or_default().push(( change, materialized.created_at, materialized.updated_at, )); } } UntrackedStateContext::new() .writer(writes) .stage_rows(untracked_rows.iter().map(|row| row.as_ref()))?; for (commit_id, rows) in tracked_rows_by_commit { let parent_commit_id = parent_by_commit.remove(&commit_id).flatten(); let parent_ids = parent_commit_id .as_ref() .map(|parent| vec![parent.clone()]) .unwrap_or_default(); let commit_change_id = format!("{commit_id}:commit"); let commit = CommitDraftRef { id: &commit_id, change_id: &commit_change_id, parent_ids: &parent_ids, author_account_ids: &[], created_at: rows .first() .map(|(change, _, _)| change.created_at.as_str()) .unwrap_or("1970-01-01T00:00:00.000Z"), }; let staged = CommitStoreContext::new() .writer(&mut *store, writes) .stage_tracked_commit_draft( commit, rows.iter().map(|(change, _, _)| change.as_ref()).collect(), Vec::new(), ) .await?; let deltas = rows .iter() .zip(&staged.authored_locators) .map( |((change, created_at, updated_at), locator)| TrackedStateDeltaRef { change: change.as_ref(), locator: locator.as_ref(), created_at, updated_at, }, ) .collect::>(); TrackedStateContext::new() .writer(&mut *store, writes) .stage_delta(&commit_id, parent_commit_id.as_deref(), &deltas) .await?; } Ok(()) } fn stage_tracked_materialized_json( writes: &mut StorageWriteSet, commit_id: &str, row: &MaterializedTrackedStateRow, ) -> Result<(), LixError> { let mut 
payloads = Vec::new(); if let Some(snapshot) = row.snapshot_content.as_deref() { payloads.push(NormalizedJson::from_arc_unchecked(Arc::from(snapshot))); } if let Some(metadata) = row.metadata.as_ref() { payloads.push(NormalizedJson::from_arc_unchecked(Arc::from( crate::serialize_row_metadata(metadata), ))); } JsonStoreContext::new().writer().stage_batch( writes, JsonWritePlacementRef::CommitPack { commit_id, pack_id: 0, }, payloads .iter() .map(|payload| NormalizedJsonRef::from(payload)), )?; Ok(()) } fn parent_commit_id_from_test_commit_row( row: &MaterializedLiveStateRow, ) -> Result, LixError> { let Some(metadata) = row.metadata.as_deref() else { return Ok(None); }; let metadata = serde_json::from_str::(metadata).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("test commit row has invalid metadata: {error}"), ) })?; Ok(metadata .get("test_parents") .and_then(serde_json::Value::as_array) .and_then(|parents| parents.first()) .and_then(serde_json::Value::as_str) .map(str::to_string)) } #[tokio::test] async fn live_state_overlays_untracked_rows() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let live_state = live_state_context(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let mut writes = StorageWriteSet::new(); let mut json_writer = JsonStoreContext::new().writer(); { stage_materialized_live_rows( transaction.as_mut(), &mut writes, &mut json_writer, &[tracked_row_with_commit( "tracked-value", Some("change-tracked"), "commit-tracked", )], ) .await .expect("tracked row should stage"); } writes .apply(&mut transaction.as_mut()) .await .expect("tracked row should apply"); } write_untracked_rows_to_store( transaction.as_mut(), &[ version_ref_row("global", "commit-tracked"), untracked_row("untracked-value"), ], ) .await; transaction.commit().await.expect("commit should persist"); let rows = scan_selected_tab_at(&live_state, storage.clone(), "global", false) .await .expect("scan should succeed"); assert_eq!(rows.len(), 1); assert_eq!( rows[0].snapshot_content.as_deref(), Some("{\"value\":\"untracked-value\"}") ); assert!(rows[0].untracked); assert_eq!(rows[0].change_id, None); let loaded = live_state .reader(storage.clone()) .load_row(&LiveStateRowRequest { schema_key: "lix_key_value".to_string(), version_id: "global".to_string(), entity_id: crate::entity_identity::EntityIdentity::single("selected-tab"), file_id: NullableKeyFilter::Null, }) .await .expect("load should succeed") .expect("overlay row should be visible"); assert!(loaded.untracked); assert_eq!( loaded.snapshot_content.as_deref(), Some("{\"value\":\"untracked-value\"}") ); } #[tokio::test] async fn tracked_row_is_visible_without_untracked_overlay() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let live_state = live_state_context(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let mut writes = StorageWriteSet::new(); let mut json_writer = JsonStoreContext::new().writer(); { stage_materialized_live_rows( transaction.as_mut(), &mut writes, &mut json_writer, &[tracked_row_with_commit( "tracked-value", Some("change-tracked"), "commit-tracked", )], ) .await .expect("tracked row should stage"); } writes .apply(&mut transaction.as_mut()) .await .expect("tracked row should apply"); } write_untracked_rows_to_store( transaction.as_mut(), &[version_ref_row("global", "commit-tracked")], ) 
.await; transaction.commit().await.expect("commit should persist"); let loaded = load_selected_tab(&live_state, storage.clone()) .await .expect("load should succeed") .expect("tracked row should be visible"); assert!(!loaded.untracked); assert_eq!(loaded.change_id.as_deref(), Some("change-tracked")); assert_eq!( loaded.snapshot_content.as_deref(), Some("{\"value\":\"tracked-value\"}") ); } #[tokio::test] async fn deleting_untracked_row_reveals_tracked_row() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let live_state = live_state_context(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let mut writes = StorageWriteSet::new(); let mut json_writer = JsonStoreContext::new().writer(); { stage_materialized_live_rows( transaction.as_mut(), &mut writes, &mut json_writer, &[tracked_row_with_commit( "tracked-value", Some("change-tracked"), "commit-tracked", )], ) .await .expect("tracked row should stage"); } writes .apply(&mut transaction.as_mut()) .await .expect("tracked row should apply"); } write_untracked_rows_to_store( transaction.as_mut(), &[ version_ref_row("global", "commit-tracked"), untracked_row("untracked-value"), ], ) .await; { let mut writes = StorageWriteSet::new(); let identity = crate::untracked_state::UntrackedStateIdentity { version_id: "global".to_string(), schema_key: "lix_key_value".to_string(), entity_id: EntityIdentity::single("selected-tab"), file_id: None, }; UntrackedStateContext::new() .writer(&mut writes) .stage_delete_rows(std::iter::once(identity.as_ref())); writes .apply(&mut transaction.as_mut()) .await .expect("untracked row should delete"); } transaction.commit().await.expect("commit should persist"); let loaded = load_selected_tab(&live_state, storage.clone()) .await .expect("load should succeed") .expect("tracked row should be visible again"); assert!(!loaded.untracked); assert_eq!(loaded.change_id.as_deref(), Some("change-tracked")); assert_eq!( loaded.snapshot_content.as_deref(), Some("{\"value\":\"tracked-value\"}") ); } #[tokio::test] async fn load_row_falls_back_to_global_tracked_row_for_requested_version() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let live_state = live_state_context(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let rows = [tracked_row_with_commit( "global-tracked", Some("change-global"), "commit-global", )]; let mut writes = StorageWriteSet::new(); let mut json_writer = JsonStoreContext::new().writer(); { stage_materialized_live_rows( transaction.as_mut(), &mut writes, &mut json_writer, &rows, ) .await .expect("tracked row should stage"); } writes .apply(&mut transaction.as_mut()) .await .expect("tracked row should apply"); } write_untracked_rows_to_store( transaction.as_mut(), &[ version_ref_row("global", "commit-global"), version_ref_row("version-a", "commit-version-a"), ], ) .await; write_empty_commits_to_store(transaction.as_mut(), &["commit-version-a"]).await; transaction.commit().await.expect("commit should persist"); let loaded = load_selected_tab_at(&live_state, storage.clone(), "version-a") .await .expect("load should succeed") .expect("global row should be visible for requested version"); assert_eq!(loaded.version_id, "version-a"); assert!(loaded.global); assert!(!loaded.untracked); assert_eq!( loaded.snapshot_content.as_deref(), Some("{\"value\":\"global-tracked\"}") ); } 
#[tokio::test] async fn main_sees_global_row_by_reading_global_root_separately() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let tracked_state = TrackedStateContext::new(); let live_state = LiveStateContext::new( tracked_state.clone(), UntrackedStateContext::new(), crate::commit_graph::CommitGraphContext::new(), ); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let rows = [tracked_row_with_commit( "global-tracked", Some("change-global"), "commit-global", )]; let mut writes = StorageWriteSet::new(); let mut json_writer = JsonStoreContext::new().writer(); { stage_materialized_live_rows( transaction.as_mut(), &mut writes, &mut json_writer, &rows, ) .await .expect("global tracked row should stage"); } writes .apply(&mut transaction.as_mut()) .await .expect("global tracked row should apply"); } write_untracked_rows_to_store( transaction.as_mut(), &[ version_ref_row("global", "commit-global"), version_ref_row("main", "commit-main"), ], ) .await; write_empty_commits_to_store(transaction.as_mut(), &["commit-main"]).await; transaction.commit().await.expect("commit should persist"); let loaded = load_selected_tab_at(&live_state, storage.clone(), "main") .await .expect("load should succeed") .expect("global row should be projected into main"); assert_eq!(loaded.version_id, "main"); assert!(loaded.global); assert_eq!( loaded.snapshot_content.as_deref(), Some("{\"value\":\"global-tracked\"}") ); let main_root_rows = scan_tracked_root(&tracked_state, storage.clone(), "commit-main").await; assert_eq!( main_root_rows.len(), 0, "global fallback must come from the global root, not a copied main root row" ); } #[tokio::test] async fn load_row_prefers_requested_version_over_global() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let live_state = live_state_context(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let rows = [ tracked_row_with_commit("global-tracked", Some("change-global"), "commit-global"), tracked_row_at_with_commit( "version-a", "version-tracked", Some("change-version"), "commit-version", ), ]; let mut writes = StorageWriteSet::new(); let mut json_writer = JsonStoreContext::new().writer(); { stage_materialized_live_rows( transaction.as_mut(), &mut writes, &mut json_writer, &rows, ) .await .expect("tracked rows should stage"); } writes .apply(&mut transaction.as_mut()) .await .expect("tracked rows should apply"); } write_untracked_rows_to_store( transaction.as_mut(), &[ version_ref_row("global", "commit-global"), version_ref_row("version-a", "commit-version"), ], ) .await; transaction.commit().await.expect("commit should persist"); let loaded = load_selected_tab_at(&live_state, storage.clone(), "version-a") .await .expect("load should succeed") .expect("version row should be visible"); assert_eq!(loaded.version_id, "version-a"); assert!(!loaded.untracked); assert_eq!( loaded.snapshot_content.as_deref(), Some("{\"value\":\"version-tracked\"}") ); } #[tokio::test] async fn main_override_hides_global_row() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let live_state = live_state_context(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let rows = [ tracked_row_with_commit("global-tracked", Some("change-global"), "commit-global"), 
tracked_row_at_with_commit( "main", "main-tracked", Some("change-main"), "commit-main", ), ]; let mut writes = StorageWriteSet::new(); let mut json_writer = JsonStoreContext::new().writer(); { stage_materialized_live_rows( transaction.as_mut(), &mut writes, &mut json_writer, &rows, ) .await .expect("tracked rows should stage"); } writes .apply(&mut transaction.as_mut()) .await .expect("tracked rows should apply"); } write_untracked_rows_to_store( transaction.as_mut(), &[ version_ref_row("global", "commit-global"), version_ref_row("main", "commit-main"), ], ) .await; transaction.commit().await.expect("commit should persist"); let loaded = load_selected_tab_at(&live_state, storage.clone(), "main") .await .expect("load should succeed") .expect("main row should be visible"); assert_eq!(loaded.version_id, "main"); assert!(!loaded.global); assert_eq!( loaded.snapshot_content.as_deref(), Some("{\"value\":\"main-tracked\"}") ); } #[tokio::test] async fn load_row_prefers_requested_untracked_over_requested_tracked_and_global_rows() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let live_state = live_state_context(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let rows = [ tracked_row_with_commit("global-tracked", Some("change-global"), "commit-global"), tracked_row_at_with_commit( "version-a", "version-tracked", Some("change-version"), "commit-version", ), ]; let mut writes = StorageWriteSet::new(); let mut json_writer = JsonStoreContext::new().writer(); { stage_materialized_live_rows( transaction.as_mut(), &mut writes, &mut json_writer, &rows, ) .await .expect("tracked rows should stage"); } writes .apply(&mut transaction.as_mut()) .await .expect("tracked rows should apply"); } write_untracked_rows_to_store( transaction.as_mut(), &[ version_ref_row("global", "commit-global"), version_ref_row("version-a", "commit-version"), untracked_row_at("global", "global-untracked"), untracked_row_at("version-a", "version-untracked"), ], ) .await; transaction.commit().await.expect("commit should persist"); let loaded = load_selected_tab_at(&live_state, storage.clone(), "version-a") .await .expect("load should succeed") .expect("version untracked row should be visible"); assert_eq!(loaded.version_id, "version-a"); assert!(loaded.untracked); assert_eq!( loaded.snapshot_content.as_deref(), Some("{\"value\":\"version-untracked\"}") ); } #[tokio::test] async fn scan_rows_overlays_requested_version_over_global() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let live_state = live_state_context(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let rows = [ tracked_row_with_commit("global-tracked", Some("change-global"), "commit-global"), tracked_row_at_with_commit( "version-a", "version-tracked", Some("change-version"), "commit-version", ), ]; let mut writes = StorageWriteSet::new(); let mut json_writer = JsonStoreContext::new().writer(); { stage_materialized_live_rows( transaction.as_mut(), &mut writes, &mut json_writer, &rows, ) .await .expect("rows should stage"); } writes .apply(&mut transaction.as_mut()) .await .expect("rows should apply"); } write_untracked_rows_to_store( transaction.as_mut(), &[ version_ref_row("global", "commit-global"), version_ref_row("version-a", "commit-version"), ], ) .await; transaction.commit().await.expect("commit should persist"); let rows = 
scan_selected_tab_at(&live_state, storage.clone(), "version-a", false) .await .expect("scan should succeed"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].version_id, "version-a"); assert_eq!( rows[0].snapshot_content.as_deref(), Some("{\"value\":\"version-tracked\"}") ); } #[tokio::test] async fn scan_rows_projects_global_row_into_requested_version() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let live_state = live_state_context(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let rows = [tracked_row_with_commit( "global-tracked", Some("change-global"), "commit-global", )]; let mut writes = StorageWriteSet::new(); let mut json_writer = JsonStoreContext::new().writer(); { stage_materialized_live_rows( transaction.as_mut(), &mut writes, &mut json_writer, &rows, ) .await .expect("rows should stage"); } writes .apply(&mut transaction.as_mut()) .await .expect("rows should apply"); } write_untracked_rows_to_store( transaction.as_mut(), &[ version_ref_row("global", "commit-global"), version_ref_row("version-a", "commit-version-a"), ], ) .await; write_empty_commits_to_store(transaction.as_mut(), &["commit-version-a"]).await; transaction.commit().await.expect("commit should persist"); let rows = scan_selected_tab_at(&live_state, storage.clone(), "version-a", false) .await .expect("scan should succeed"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].version_id, "version-a"); assert!(rows[0].global); assert_eq!( rows[0].snapshot_content.as_deref(), Some("{\"value\":\"global-tracked\"}") ); } #[tokio::test] async fn scan_rows_does_not_project_global_rows_into_missing_version() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let live_state = live_state_context(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let rows = [tracked_row_with_commit( "global-tracked", Some("change-global"), "commit-global", )]; let mut writes = StorageWriteSet::new(); let mut json_writer = JsonStoreContext::new().writer(); { stage_materialized_live_rows( transaction.as_mut(), &mut writes, &mut json_writer, &rows, ) .await .expect("tracked row should stage"); } writes .apply(&mut transaction.as_mut()) .await .expect("tracked row should apply"); } write_untracked_rows_to_store( transaction.as_mut(), &[version_ref_row("global", "commit-global")], ) .await; transaction.commit().await.expect("commit should persist"); let rows = scan_selected_tab_at(&live_state, storage.clone(), "missing-version", false) .await .expect("scan should succeed"); assert_eq!( rows.len(), 0, "global rows must not be projected into a missing version scope" ); } #[tokio::test] async fn winning_tombstone_hides_row_unless_tombstones_are_included() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let live_state = live_state_context(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let rows = [ tracked_row_with_commit("global-tracked", Some("change-global"), "commit-global"), tombstone_tracked_row_at_with_commit( "version-a", Some("change-tombstone"), "commit-version", ), ]; let mut writes = StorageWriteSet::new(); let mut json_writer = JsonStoreContext::new().writer(); { stage_materialized_live_rows( transaction.as_mut(), &mut writes, &mut json_writer, &rows, ) .await .expect("rows should stage"); } 
writes .apply(&mut transaction.as_mut()) .await .expect("rows should apply"); } write_untracked_rows_to_store( transaction.as_mut(), &[ version_ref_row("global", "commit-global"), version_ref_row("version-a", "commit-version"), ], ) .await; transaction.commit().await.expect("commit should persist"); let hidden = scan_selected_tab_at(&live_state, storage.clone(), "version-a", false) .await .expect("scan should succeed"); assert_eq!(hidden.len(), 0); let with_tombstone = scan_selected_tab_at(&live_state, storage.clone(), "version-a", true) .await .expect("scan should succeed"); assert_eq!(with_tombstone.len(), 1); assert_eq!(with_tombstone[0].version_id, "version-a"); assert_eq!(with_tombstone[0].snapshot_content, None); } #[tokio::test] async fn main_tombstone_hides_global_row() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let live_state = live_state_context(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let rows = [ tracked_row_with_commit("global-tracked", Some("change-global"), "commit-global"), tombstone_tracked_row_at_with_commit( "main", Some("change-main-tombstone"), "commit-main", ), ]; let mut writes = StorageWriteSet::new(); let mut json_writer = JsonStoreContext::new().writer(); { stage_materialized_live_rows( transaction.as_mut(), &mut writes, &mut json_writer, &rows, ) .await .expect("tracked rows should stage"); } writes .apply(&mut transaction.as_mut()) .await .expect("tracked rows should apply"); } write_untracked_rows_to_store( transaction.as_mut(), &[ version_ref_row("global", "commit-global"), version_ref_row("main", "commit-main"), ], ) .await; transaction.commit().await.expect("commit should persist"); let hidden = scan_selected_tab_at(&live_state, storage.clone(), "main", false) .await .expect("scan should succeed"); assert_eq!(hidden.len(), 0); let tombstones = scan_selected_tab_at(&live_state, storage.clone(), "main", true) .await .expect("scan should succeed"); assert_eq!(tombstones.len(), 1); assert_eq!(tombstones[0].version_id, "main"); assert!(!tombstones[0].global); assert_eq!(tombstones[0].snapshot_content, None); } #[tokio::test] async fn writer_allows_commit_fact_to_share_the_touched_version_commit_id() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let live_state = live_state_context(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let rows = [ tracked_row_at_with_commit( "version-a", "version-row", Some("change-version"), "commit-version", ), commit_live_state_row("commit-version"), ]; let mut writes = StorageWriteSet::new(); let mut json_writer = JsonStoreContext::new().writer(); { stage_materialized_live_rows( transaction.as_mut(), &mut writes, &mut json_writer, &rows, ) .await .expect("commit facts are changelog projections, not root-local rows"); } writes .apply(&mut transaction.as_mut()) .await .expect("commit fact rows should apply"); } write_untracked_rows_to_store( transaction.as_mut(), &[version_ref_row("version-a", "commit-version")], ) .await; transaction.commit().await.expect("commit should persist"); let loaded = load_selected_tab_at(&live_state, storage.clone(), "version-a") .await .expect("load should succeed") .expect("version row should be visible"); assert_eq!( loaded.snapshot_content.as_deref(), Some("{\"value\":\"version-row\"}") ); } #[tokio::test] async fn 
writer_uses_first_parent_as_merge_root_base() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let mut seed_transaction = storage .begin_write_transaction() .await .expect("seed transaction should open"); let mut writes = StorageWriteSet::new(); { CommitStoreContext::new() .writer(&mut seed_transaction.as_mut(), &mut writes) .stage_commit_draft( CommitDraftRef { id: "parent-left", change_id: "parent-left:commit", parent_ids: &[], author_account_ids: &[], created_at: "1970-01-01T00:00:00.000Z", }, Vec::new(), Vec::new(), ) .await .expect("first parent commit should stage"); TrackedStateContext::new() .writer(&mut seed_transaction.as_mut(), &mut writes) .stage_delta("parent-left", None, &[]) .await .expect("first parent root should exist"); } writes .apply(&mut seed_transaction.as_mut()) .await .expect("first parent root should apply"); seed_transaction .commit() .await .expect("seed transaction should commit"); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let rows = [ tracked_row_at_with_commit( "version-a", "version-row", Some("change-version"), "commit-merge", ), commit_live_state_row_with_parents( "commit-merge", &["parent-left", "parent-right"], ), ]; let mut writes = StorageWriteSet::new(); let mut json_writer = JsonStoreContext::new().writer(); { stage_materialized_live_rows( transaction.as_mut(), &mut writes, &mut json_writer, &rows, ) .await .expect("merge commit should use first parent as tracked-root base"); } writes .apply(&mut transaction.as_mut()) .await .expect("merge commit rows should apply"); } } #[tokio::test] async fn non_global_root_does_not_store_global_rows() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let tracked_state = TrackedStateContext::new(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); { let rows = [ tracked_row_with_commit("global-tracked", Some("change-global"), "commit-global"), tracked_row_at_with_commit( "main", "main-tracked", Some("change-main"), "commit-main", ), ]; let mut writes = StorageWriteSet::new(); let mut json_writer = JsonStoreContext::new().writer(); { stage_materialized_live_rows( transaction.as_mut(), &mut writes, &mut json_writer, &rows, ) .await .expect("tracked rows should stage"); } writes .apply(&mut transaction.as_mut()) .await .expect("tracked rows should apply"); } transaction.commit().await.expect("commit should persist"); let global_root_rows = scan_tracked_root(&tracked_state, storage.clone(), "commit-global").await; assert_eq!(global_root_rows.len(), 1); assert_eq!( global_root_rows[0].snapshot_content.as_deref(), Some("{\"value\":\"global-tracked\"}") ); let main_root_rows = scan_tracked_root(&tracked_state, storage.clone(), "commit-main").await; assert_eq!(main_root_rows.len(), 1); assert_eq!( main_root_rows[0].snapshot_content.as_deref(), Some("{\"value\":\"main-tracked\"}") ); } async fn load_selected_tab( live_state: &LiveStateContext, storage: StorageContext, ) -> Result, LixError> { live_state .reader(storage) .load_row(&LiveStateRowRequest { schema_key: "lix_key_value".to_string(), version_id: "global".to_string(), entity_id: crate::entity_identity::EntityIdentity::single("selected-tab"), file_id: NullableKeyFilter::Null, }) .await } async fn load_selected_tab_at( live_state: &LiveStateContext, storage: StorageContext, version_id: &str, ) -> Result, LixError> { live_state 
.reader(storage) .load_row(&LiveStateRowRequest { schema_key: "lix_key_value".to_string(), version_id: version_id.to_string(), entity_id: crate::entity_identity::EntityIdentity::single("selected-tab"), file_id: NullableKeyFilter::Null, }) .await } async fn scan_selected_tab_at( live_state: &LiveStateContext, storage: StorageContext, version_id: &str, include_tombstones: bool, ) -> Result, LixError> { live_state .reader(storage) .scan_rows(&LiveStateScanRequest { filter: LiveStateFilter { schema_keys: vec!["lix_key_value".to_string()], entity_ids: vec![crate::entity_identity::EntityIdentity::single( "selected-tab", )], version_ids: vec![version_id.to_string()], file_ids: vec![NullableKeyFilter::Null], include_tombstones, ..LiveStateFilter::default() }, ..LiveStateScanRequest::default() }) .await } async fn scan_tracked_root( tracked_state: &TrackedStateContext, storage: StorageContext, commit_id: &str, ) -> Vec { tracked_state .reader(storage) .scan_rows_at_commit( commit_id, &TrackedStateScanRequest { filter: TrackedStateFilter { include_tombstones: true, ..Default::default() }, ..Default::default() }, ) .await .expect("tracked root should scan") } fn tracked_row_with_commit( value: &str, change_id: Option<&str>, commit_id: &str, ) -> MaterializedLiveStateRow { tracked_row_at_with_commit("global", value, change_id, commit_id) } fn tracked_row_at_with_commit( version_id: &str, value: &str, change_id: Option<&str>, commit_id: &str, ) -> MaterializedLiveStateRow { MaterializedLiveStateRow { entity_id: identity("selected-tab"), schema_key: "lix_key_value".to_string(), file_id: None, snapshot_content: Some(format!("{{\"value\":\"{value}\"}}")), metadata: None, deleted: false, created_at: "2026-01-01T00:00:00Z".to_string(), updated_at: "2026-01-01T00:00:00Z".to_string(), global: version_id == "global", change_id: change_id.map(str::to_string), commit_id: Some(commit_id.to_string()), untracked: false, version_id: version_id.to_string(), } } fn tombstone_tracked_row_at_with_commit( version_id: &str, change_id: Option<&str>, commit_id: &str, ) -> MaterializedLiveStateRow { MaterializedLiveStateRow { snapshot_content: None, deleted: true, ..tracked_row_at_with_commit(version_id, "ignored", change_id, commit_id) } } fn untracked_row(value: &str) -> MaterializedUntrackedStateRow { untracked_row_at("global", value) } fn untracked_row_at(version_id: &str, value: &str) -> MaterializedUntrackedStateRow { MaterializedUntrackedStateRow { entity_id: identity("selected-tab"), schema_key: "lix_key_value".to_string(), file_id: None, snapshot_content: Some(format!("{{\"value\":\"{value}\"}}")), metadata: None, deleted: false, created_at: "2026-01-01T00:00:00Z".to_string(), updated_at: "2026-01-01T00:00:00Z".to_string(), global: version_id == "global", version_id: version_id.to_string(), } } fn version_ref_row(version_id: &str, commit_id: &str) -> MaterializedUntrackedStateRow { MaterializedUntrackedStateRow { entity_id: identity(version_id), schema_key: "lix_version_ref".to_string(), file_id: None, snapshot_content: Some( serde_json::to_string(&json!({ "id": version_id, "commit_id": commit_id, })) .expect("version ref should serialize"), ), metadata: None, deleted: false, created_at: "2026-01-01T00:00:00Z".to_string(), updated_at: "2026-01-01T00:00:00Z".to_string(), global: true, version_id: "global".to_string(), } } fn commit_live_state_row(commit_id: &str) -> MaterializedLiveStateRow { commit_live_state_row_with_parents(commit_id, &[]) } fn commit_live_state_row_with_parents( commit_id: &str, parent_ids: 
&[&str],
) -> MaterializedLiveStateRow {
    let mut row = commit_live_state_row_with_snapshot(
        commit_id,
        json!({
            "id": commit_id,
        }),
    );
    row.metadata = Some(
        serde_json::to_string(&json!({ "test_parents": parent_ids }))
            .expect("test metadata should serialize"),
    );
    row
}

fn commit_live_state_row_with_snapshot(
    commit_id: &str,
    snapshot: serde_json::Value,
) -> MaterializedLiveStateRow {
    MaterializedLiveStateRow {
        entity_id: identity(commit_id),
        schema_key: COMMIT_SCHEMA_KEY.to_string(),
        file_id: None,
        snapshot_content: Some(
            serde_json::to_string(&snapshot).expect("commit snapshot should serialize"),
        ),
        metadata: None,
        deleted: false,
        created_at: "2026-01-01T00:00:00Z".to_string(),
        updated_at: "2026-01-01T00:00:00Z".to_string(),
        global: true,
        change_id: Some(format!("change-{commit_id}")),
        commit_id: Some(commit_id.to_string()),
        untracked: false,
        version_id: "global".to_string(),
    }
}

fn identity(entity_id: &str) -> EntityIdentity {
    EntityIdentity::single(entity_id)
}
}

================================================
FILE: packages/engine/src/live_state/mod.rs
================================================
mod context;
mod overlay;
mod reader;
mod types;
mod visibility;

#[allow(unused_imports)]
pub(crate) use context::{LiveStateContext, LiveStateStoreReader};
#[allow(unused_imports)]
pub(crate) use reader::LiveStateReader;
#[allow(unused_imports)]
pub(crate) use types::{
    Bound, LiveStateFilter, LiveStateProjection, LiveStateRowIdentity, LiveStateRowRequest,
    LiveStateScanRequest, MaterializedLiveStateRow, ScanConstraint, ScanField, ScanOperator,
};

================================================
FILE: packages/engine/src/live_state/overlay.rs
================================================
use std::collections::BTreeMap;

use crate::live_state::{LiveStateRowIdentity, MaterializedLiveStateRow};

/// Applies the local untracked overlay to tracked live-state rows.
///
/// The visible live-state contract is "latest local untracked row wins" for
/// the same version/schema/entity/file identity. This keeps SQL providers from
/// knowing whether a visible row came from tracked changelog projection or from
/// local untracked state.
pub(crate) fn overlay_untracked_rows(
    tracked_rows: Vec<MaterializedLiveStateRow>,
    untracked_rows: Vec<MaterializedLiveStateRow>,
) -> Vec<MaterializedLiveStateRow> {
    let mut rows_by_identity = BTreeMap::new();
    for row in tracked_rows {
        rows_by_identity.insert(LiveStateRowIdentity::from_row(&row), row);
    }
    for row in untracked_rows {
        rows_by_identity.insert(LiveStateRowIdentity::from_row(&row), row);
    }
    rows_by_identity.into_values().collect()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn untracked_row_wins_for_same_identity() {
        let tracked = live_row("tracked", false, Some("change-tracked"));
        let untracked = live_row("untracked", true, None);
        let rows = overlay_untracked_rows(vec![tracked], vec![untracked]);
        assert_eq!(rows.len(), 1);
        assert_eq!(
            rows[0].snapshot_content.as_deref(),
            Some("{\"value\":\"untracked\"}")
        );
        assert!(rows[0].untracked);
        assert_eq!(rows[0].change_id, None);
    }

    #[test]
    fn different_identities_are_preserved() {
        let tracked = live_row("tracked", false, Some("change-tracked"));
        let mut untracked = live_row("untracked", true, None);
        untracked.entity_id = crate::entity_identity::EntityIdentity::single("other");
        let rows = overlay_untracked_rows(vec![tracked], vec![untracked]);
        assert_eq!(rows.len(), 2);
    }

    fn live_row(value: &str, untracked: bool, change_id: Option<&str>) -> MaterializedLiveStateRow {
        MaterializedLiveStateRow {
            entity_id: crate::entity_identity::EntityIdentity::single("entity"),
            schema_key: "schema".to_string(),
            file_id: None,
            snapshot_content: Some(format!("{{\"value\":\"{value}\"}}")),
            metadata: None,
            deleted: false,
            created_at: "2026-01-01T00:00:00Z".to_string(),
            updated_at: "2026-01-01T00:00:00Z".to_string(),
            global: true,
            change_id: change_id.map(str::to_string),
            commit_id: None,
            untracked,
            version_id: "global".to_string(),
        }
    }
}
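// Sketch: a further illustration of the "untracked row wins" overlay contract
// documented above. With an empty untracked overlay, tracked rows pass through
// unchanged. This module and test name are illustrative additions, not part of
// the upstream test suite.
#[cfg(test)]
mod overlay_sketch_tests {
    use super::*;

    #[test]
    fn tracked_rows_pass_through_when_overlay_is_empty() {
        let tracked = MaterializedLiveStateRow {
            entity_id: crate::entity_identity::EntityIdentity::single("entity"),
            schema_key: "schema".to_string(),
            file_id: None,
            snapshot_content: Some("{\"value\":\"tracked\"}".to_string()),
            metadata: None,
            deleted: false,
            created_at: "2026-01-01T00:00:00Z".to_string(),
            updated_at: "2026-01-01T00:00:00Z".to_string(),
            global: true,
            change_id: Some("change-tracked".to_string()),
            commit_id: Some("commit-tracked".to_string()),
            untracked: false,
            version_id: "global".to_string(),
        };

        // No untracked rows: the tracked row is returned as-is.
        let rows = overlay_untracked_rows(vec![tracked], Vec::new());
        assert_eq!(rows.len(), 1);
        assert!(!rows[0].untracked);
        assert_eq!(rows[0].change_id.as_deref(), Some("change-tracked"));
    }
}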
================================================
FILE: packages/engine/src/live_state/reader.rs
================================================
use async_trait::async_trait;

use crate::live_state::MaterializedLiveStateRow;
use crate::live_state::{LiveStateRowRequest, LiveStateScanRequest};
use crate::LixError;

/// Minimal engine read model for transaction planning and SQL providers.
///
/// Engine only needs visible state-row reads here. Changelog freshness/catch-up
/// should be added at this boundary later instead of leaking projection internals
/// into sessions or SQL providers.
#[async_trait]
pub(crate) trait LiveStateReader: Send + Sync {
    async fn scan_rows(
        &self,
        request: &LiveStateScanRequest,
    ) -> Result<Vec<MaterializedLiveStateRow>, LixError>;

    async fn load_row(
        &self,
        request: &LiveStateRowRequest,
    ) -> Result<Option<MaterializedLiveStateRow>, LixError>;
}

================================================
FILE: packages/engine/src/live_state/types.rs
================================================
use crate::entity_identity::EntityIdentity;
use crate::tracked_state::MaterializedTrackedStateRow;
use crate::untracked_state::{
    MaterializedUntrackedStateRow, UntrackedStateFilter, UntrackedStateRowRequest,
};
use crate::{NullableKeyFilter, Value};

/// Durable row visible through live_state reads.
///
/// Unlike provider write rows, live-state rows are fully hydrated facts. Missing
/// generated fields should be caught before this type is constructed.
#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)]
pub(crate) struct MaterializedLiveStateRow {
    pub(crate) entity_id: EntityIdentity,
    pub(crate) schema_key: String,
    pub(crate) file_id: Option<String>,
    pub(crate) snapshot_content: Option<String>,
    pub(crate) metadata: Option<String>,
    pub(crate) deleted: bool,
    pub(crate) created_at: String,
    pub(crate) updated_at: String,
    pub(crate) global: bool,
    pub(crate) change_id: Option<String>,
    pub(crate) commit_id: Option<String>,
    pub(crate) untracked: bool,
    pub(crate) version_id: String,
}

impl From<MaterializedUntrackedStateRow> for MaterializedLiveStateRow {
    fn from(row: MaterializedUntrackedStateRow) -> Self {
        MaterializedLiveStateRow {
            entity_id: row.entity_id,
            schema_key: row.schema_key,
            file_id: row.file_id,
            snapshot_content: row.snapshot_content,
            metadata: row.metadata,
            deleted: row.deleted,
            created_at: row.created_at,
            updated_at: row.updated_at,
            global: row.global,
            change_id: None,
            commit_id: None,
            untracked: true,
            version_id: row.version_id,
        }
    }
}

impl TryFrom<&MaterializedLiveStateRow> for MaterializedTrackedStateRow {
    type Error = crate::LixError;

    fn try_from(row: &MaterializedLiveStateRow) -> Result<Self, Self::Error> {
        if row.untracked {
            return Err(crate::LixError::new(
                "LIX_ERROR_UNKNOWN",
                "tracked_state cannot store untracked live-state rows",
            ));
        }
        let Some(change_id) = row.change_id.clone() else {
            return Err(crate::LixError::new(
                "LIX_ERROR_UNKNOWN",
                "tracked_state rows require change_id",
            ));
        };
        let Some(commit_id) = row.commit_id.clone() else {
            return Err(crate::LixError::new(
                "LIX_ERROR_UNKNOWN",
                "tracked_state rows require commit_id",
            ));
        };
        Ok(MaterializedTrackedStateRow {
            entity_id: row.entity_id.clone(),
            schema_key: row.schema_key.clone(),
            file_id: row.file_id.clone(),
            snapshot_content: row.snapshot_content.clone(),
            metadata: row.metadata.clone(),
            deleted: row.deleted,
            created_at: row.created_at.clone(),
            updated_at: row.updated_at.clone(),
            change_id,
            commit_id,
        })
    }
}

impl From<&MaterializedLiveStateRow> for MaterializedUntrackedStateRow {
    fn from(row: &MaterializedLiveStateRow) -> Self {
        MaterializedUntrackedStateRow {
            entity_id: row.entity_id.clone(),
            schema_key: row.schema_key.clone(),
            file_id: row.file_id.clone(),
            snapshot_content: row.snapshot_content.clone(),
            metadata: row.metadata.clone(),
            deleted: row.deleted,
            created_at: row.created_at.clone(),
            updated_at: row.updated_at.clone(),
            global: row.global,
            version_id: row.version_id.clone(),
        }
    }
}

/// Which indexed field a live-state scan constraint applies to.
#[derive(Debug, Clone, PartialEq, serde::Serialize, serde::Deserialize)]
pub(crate) enum ScanField {
    EntityId,
    FileId,
}

/// Inclusive or exclusive range bound.
#[derive(Debug, Clone, PartialEq, serde::Serialize, serde::Deserialize)]
pub(crate) struct Bound {
    pub(crate) value: Value,
    pub(crate) inclusive: bool,
}

/// SQL-free structured scan constraint.
#[derive(Debug, Clone, PartialEq, serde::Serialize, serde::Deserialize)]
pub(crate) struct ScanConstraint {
    pub(crate) field: ScanField,
    pub(crate) operator: ScanOperator,
}

/// Structured scan operator aligned with the current planner/storage split.
#[derive(Debug, Clone, PartialEq, serde::Serialize, serde::Deserialize)]
pub(crate) enum ScanOperator {
    Eq(Value),
    In(Vec<Value>),
    Range {
        lower: Option<Bound>,
        upper: Option<Bound>,
    },
}
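// Sketch: a small illustration of the conversions above. Lifting an untracked
// row into a live-state row marks it as untracked and leaves the tracked-only
// fields empty, which in turn makes the TryFrom conversion back into a tracked
// row fail. The module and test names are illustrative additions, not upstream
// tests.
#[cfg(test)]
mod conversion_sketch_tests {
    use super::*;

    #[test]
    fn untracked_row_lifts_into_an_untracked_live_state_row() {
        let untracked = MaterializedUntrackedStateRow {
            entity_id: EntityIdentity::single("entity"),
            schema_key: "schema".to_string(),
            file_id: None,
            snapshot_content: Some("{\"value\":\"untracked\"}".to_string()),
            metadata: None,
            deleted: false,
            created_at: "2026-01-01T00:00:00Z".to_string(),
            updated_at: "2026-01-01T00:00:00Z".to_string(),
            global: true,
            version_id: "global".to_string(),
        };

        let live = MaterializedLiveStateRow::from(untracked);
        assert!(live.untracked);
        assert_eq!(live.change_id, None);
        assert_eq!(live.commit_id, None);

        // Without change_id/commit_id the row cannot be stored as tracked state.
        assert!(MaterializedTrackedStateRow::try_from(&live).is_err());
    }
}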
/// Identity-centered filter for visible live entities.
#[derive(Debug, Clone, PartialEq, serde::Serialize, serde::Deserialize, Default)]
pub(crate) struct LiveStateFilter {
    #[serde(default)]
    pub(crate) schema_keys: Vec<String>,
    #[serde(default)]
    pub(crate) entity_ids: Vec<EntityIdentity>,
    #[serde(default)]
    pub(crate) version_ids: Vec<String>,
    #[serde(default)]
    pub(crate) file_ids: Vec<NullableKeyFilter<String>>,
    #[serde(default)]
    pub(crate) untracked: Option<bool>,
    #[serde(default)]
    pub(crate) constraints: Vec<ScanConstraint>,
    #[serde(default)]
    pub(crate) include_tombstones: bool,
}

impl From<LiveStateFilter> for UntrackedStateFilter {
    fn from(filter: LiveStateFilter) -> Self {
        Self {
            schema_keys: filter.schema_keys,
            entity_ids: filter.entity_ids,
            version_ids: filter.version_ids,
            file_ids: filter.file_ids,
        }
    }
}

/// Requested property set for a live-state scan.
#[derive(Debug, Clone, PartialEq, serde::Serialize, serde::Deserialize, Default)]
pub(crate) struct LiveStateProjection {
    #[serde(default)]
    pub(crate) columns: Vec<String>,
}

/// First-principles scan request for engine-owned reads.
#[derive(Debug, Clone, PartialEq, serde::Serialize, serde::Deserialize, Default)]
pub(crate) struct LiveStateScanRequest {
    #[serde(default)]
    pub(crate) filter: LiveStateFilter,
    #[serde(default)]
    pub(crate) projection: LiveStateProjection,
    #[serde(default)]
    pub(crate) limit: Option<usize>,
}

/// Point lookup request for one visible live-state row.
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct LiveStateRowRequest {
    pub(crate) schema_key: String,
    pub(crate) version_id: String,
    pub(crate) entity_id: EntityIdentity,
    pub(crate) file_id: NullableKeyFilter<String>,
}

impl From<&LiveStateRowRequest> for UntrackedStateRowRequest {
    fn from(request: &LiveStateRowRequest) -> Self {
        Self {
            schema_key: request.schema_key.clone(),
            version_id: request.version_id.clone(),
            entity_id: request.entity_id.clone(),
            file_id: request.file_id.clone(),
        }
    }
}

/// Stable visible-row identity used for overlay composition.
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
pub(crate) struct LiveStateRowIdentity {
    pub(crate) version_id: String,
    pub(crate) schema_key: String,
    pub(crate) entity_id: EntityIdentity,
    pub(crate) file_id: Option<String>,
}

impl LiveStateRowIdentity {
    pub(crate) fn from_row(row: &MaterializedLiveStateRow) -> Self {
        Self {
            version_id: row.version_id.clone(),
            schema_key: row.schema_key.clone(),
            entity_id: row.entity_id.clone(),
            file_id: row.file_id.clone(),
        }
    }
}

================================================
FILE: packages/engine/src/live_state/visibility.rs
================================================
use std::collections::BTreeMap;

use crate::live_state::{LiveStateRowIdentity, MaterializedLiveStateRow};
use crate::GLOBAL_VERSION_ID;

/// Expands a version-scoped storage read so global candidates are available for
/// the visibility overlay.
pub(crate) fn expanded_version_ids(version_ids: &[String]) -> Vec<String> {
    if version_ids.is_empty() {
        return Vec::new();
    }
    let mut expanded = version_ids.to_vec();
    if version_ids
        .iter()
        .any(|version_id| version_id != GLOBAL_VERSION_ID)
        && !expanded
            .iter()
            .any(|version_id| version_id == GLOBAL_VERSION_ID)
    {
        expanded.push(GLOBAL_VERSION_ID.to_string());
    }
    expanded
}
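// Sketch: the projection contract described below also holds when one global row
// is resolved for several requested versions at once; each version scope receives
// its own projected copy while keeping `global = true`. This module is an
// illustrative addition, not an upstream test.
#[cfg(test)]
mod projection_sketch_tests {
    use super::*;

    #[test]
    fn global_row_is_projected_into_each_requested_version() {
        let global_row = MaterializedLiveStateRow {
            entity_id: crate::entity_identity::EntityIdentity::single("entity"),
            schema_key: "schema".to_string(),
            file_id: None,
            snapshot_content: Some("{\"value\":\"global-value\"}".to_string()),
            metadata: None,
            deleted: false,
            created_at: "2026-01-01T00:00:00Z".to_string(),
            updated_at: "2026-01-01T00:00:00Z".to_string(),
            global: true,
            change_id: Some("change-global".to_string()),
            commit_id: Some("commit-global".to_string()),
            untracked: false,
            version_id: "global".to_string(),
        };

        let rows = resolve_scan_rows(
            vec![global_row],
            &["version-a".to_string(), "version-b".to_string()],
            false,
        );

        // One projected copy per requested version, both still flagged global.
        assert_eq!(rows.len(), 2);
        assert!(rows.iter().all(|row| row.global));
        let mut version_ids: Vec<_> = rows.iter().map(|row| row.version_id.clone()).collect();
        version_ids.sort();
        assert_eq!(
            version_ids,
            vec!["version-a".to_string(), "version-b".to_string()]
        );
    }
}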
/// Resolves raw tracked/untracked candidates into the rows visible for a scan.
///
/// Global rows are projected into each requested version scope, but keep
/// `global = true`. Version-scoped rows win over projected global rows for the
/// same identity. Tombstones participate in winning and are filtered only after
/// visibility is resolved. This projection is a read concern; constraint
/// validation remains exact storage-scope local unless a validator explicitly
/// opts into overlay semantics.
pub(crate) fn resolve_scan_rows(
    rows: Vec<MaterializedLiveStateRow>,
    requested_version_ids: &[String],
    include_tombstones: bool,
) -> Vec<MaterializedLiveStateRow> {
    let mut rows = project_global_rows_into_requested_versions(rows, requested_version_ids);
    if !include_tombstones {
        rows.retain(|row| !row.deleted);
    }
    rows
}

/// Resolves a row loaded through a concrete storage version into the row visible
/// to the requested version scope.
pub(crate) fn project_loaded_row(
    mut row: MaterializedLiveStateRow,
    requested_version_id: &str,
    matched_version_id: &str,
) -> MaterializedLiveStateRow {
    if row.global && requested_version_id != GLOBAL_VERSION_ID {
        row.version_id = requested_version_id.to_string();
    } else if matched_version_id == GLOBAL_VERSION_ID && requested_version_id != GLOBAL_VERSION_ID {
        row.version_id = requested_version_id.to_string();
    }
    row
}

fn project_global_rows_into_requested_versions(
    rows: Vec<MaterializedLiveStateRow>,
    requested_version_ids: &[String],
) -> Vec<MaterializedLiveStateRow> {
    if requested_version_ids.is_empty() {
        return rows;
    }
    let mut rows_by_identity = BTreeMap::<LiveStateRowIdentity, MaterializedLiveStateRow>::new();
    for requested_version_id in requested_version_ids {
        for row in &rows {
            if row.version_id == GLOBAL_VERSION_ID {
                let mut projected = row.clone();
                projected.version_id = requested_version_id.clone();
                rows_by_identity.insert(LiveStateRowIdentity::from_row(&projected), projected);
            }
        }
        for row in rows
            .iter()
            .filter(|row| row.version_id == *requested_version_id)
        {
            rows_by_identity.insert(LiveStateRowIdentity::from_row(row), row.clone());
        }
    }
    rows_by_identity.into_values().collect()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn expands_requested_version_with_global_candidates() {
        assert_eq!(
            expanded_version_ids(&["version-a".to_string()]),
            vec!["version-a".to_string(), "global".to_string()]
        );
        assert_eq!(
            expanded_version_ids(&["global".to_string()]),
            vec!["global".to_string()]
        );
    }

    #[test]
    fn scan_projects_global_row_into_requested_version() {
        let rows = resolve_scan_rows(
            vec![row_at(
                "global",
                "global-value",
                true,
                Some("change-global"),
            )],
            &["version-a".to_string()],
            false,
        );
        assert_eq!(rows.len(), 1);
        assert_eq!(rows[0].version_id, "version-a");
        assert!(rows[0].global);
        assert_eq!(
            rows[0].snapshot_content.as_deref(),
            Some("{\"value\":\"global-value\"}")
        );
    }

    #[test]
    fn scan_prefers_requested_version_row_over_projected_global_row() {
        let rows = resolve_scan_rows(
            vec![
                row_at("global", "global-value", true, Some("change-global")),
                row_at("version-a", "version-value", false, Some("change-version")),
            ],
            &["version-a".to_string()],
            false,
        );
        assert_eq!(rows.len(), 1);
        assert_eq!(rows[0].version_id, "version-a");
        assert!(!rows[0].global);
        assert_eq!(
            rows[0].snapshot_content.as_deref(),
            Some("{\"value\":\"version-value\"}")
        );
    }

    #[test]
    fn version_tombstone_hides_global_row_after_visibility_resolution() {
        let rows = resolve_scan_rows(
            vec![
                row_at("global", "global-value", true, Some("change-global")),
                tombstone_at("version-a", false, Some("change-tombstone")),
            ],
            &["version-a".to_string()],
            false,
        );
        assert!(rows.is_empty());
    }

    #[test]
    fn tombstone_can_be_returned_when_requested() {
        let rows = resolve_scan_rows(
            vec![
                row_at("global", "global-value", true, Some("change-global")),
                tombstone_at("version-a", false, Some("change-tombstone")),
            ],
            &["version-a".to_string()],
            true,
        );
        assert_eq!(rows.len(), 1);
        assert_eq!(rows[0].version_id, "version-a");
        assert_eq!(rows[0].snapshot_content, None);
    }

    #[test]
    fn
loaded_global_row_is_projected_into_requested_version() { let row = project_loaded_row( row_at("global", "global-value", true, Some("change-global")), "version-a", "global", ); assert_eq!(row.version_id, "version-a"); assert!(row.global); } fn row_at( version_id: &str, value: &str, global: bool, change_id: Option<&str>, ) -> MaterializedLiveStateRow { MaterializedLiveStateRow { entity_id: crate::entity_identity::EntityIdentity::single("entity"), schema_key: "schema".to_string(), file_id: None, snapshot_content: Some(format!("{{\"value\":\"{value}\"}}")), metadata: None, deleted: false, created_at: "2026-01-01T00:00:00Z".to_string(), updated_at: "2026-01-01T00:00:00Z".to_string(), global, change_id: change_id.map(str::to_string), commit_id: Some("commit".to_string()), untracked: false, version_id: version_id.to_string(), } } fn tombstone_at( version_id: &str, global: bool, change_id: Option<&str>, ) -> MaterializedLiveStateRow { MaterializedLiveStateRow { snapshot_content: None, deleted: true, ..row_at(version_id, "ignored", global, change_id) } } } ================================================ FILE: packages/engine/src/plugin/archive.rs ================================================ use std::collections::{BTreeMap, BTreeSet}; use std::io::{Cursor, Read}; use std::path::{Component, Path}; use serde_json::Value as JsonValue; use zip::read::ZipArchive; use crate::schema::{schema_key_from_definition, validate_lix_schema_definition}; use crate::LixError; use super::{parse_plugin_manifest_json, InstalledPlugin, PluginManifest}; #[derive(Debug, Clone)] pub(crate) struct ParsedPluginArchive { pub manifest: PluginManifest, pub schemas: Vec, } pub(crate) fn parse_plugin_archive_for_install( archive_bytes: &[u8], ) -> Result { let files = read_archive_files_for_install(archive_bytes)?; let manifest_bytes = files.get("manifest.json").ok_or_else(|| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "Plugin archive must contain manifest.json".to_string(), hint: None, details: None, })?; let manifest_raw = std::str::from_utf8(manifest_bytes).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("Plugin archive manifest.json must be UTF-8: {error}"), hint: None, details: None, })?; let validated_manifest = parse_plugin_manifest_json(manifest_raw)?; let entry_path = normalize_archive_path_for_install(&validated_manifest.manifest.entry)?; let wasm_bytes = files .get(&entry_path) .ok_or_else(|| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "Plugin archive is missing manifest entry file '{}'", validated_manifest.manifest.entry ), hint: None, details: None, })? 
.clone(); ensure_valid_plugin_wasm_for_install(&wasm_bytes)?; let mut schemas = Vec::with_capacity(validated_manifest.manifest.schemas.len()); let mut seen_schema_keys = BTreeSet::<(String, String)>::new(); for schema_path in &validated_manifest.manifest.schemas { let normalized_schema_path = normalize_archive_path_for_install(schema_path)?; let schema_bytes = files.get(&normalized_schema_path).ok_or_else(|| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("Plugin archive is missing schema file '{schema_path}'"), hint: None, details: None, })?; let schema_json: JsonValue = serde_json::from_slice(schema_bytes).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "Plugin archive schema '{schema_path}' is invalid JSON: {error}" ), hint: None, details: None, })?; validate_lix_schema_definition(&schema_json)?; let schema_key = schema_key_from_definition(&schema_json)?; if !seen_schema_keys.insert(schema_key.schema_key.clone()) { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "Plugin archive declares duplicate schema '{}'", schema_key.schema_key ), hint: None, details: None, }); } schemas.push(schema_json); } Ok(ParsedPluginArchive { manifest: validated_manifest.manifest, schemas, }) } pub(crate) fn load_installed_plugin_from_archive_bytes( plugin_key: &str, archive_path: &str, archive_bytes: &[u8], ) -> Result { let files = read_plugin_archive_files(archive_path, archive_bytes)?; let manifest_bytes = files.get("manifest.json").ok_or_else(|| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "plugin materialization: archive '{}' is missing manifest.json", archive_path ), hint: None, details: None, })?; let manifest_raw = std::str::from_utf8(manifest_bytes).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "plugin materialization: archive '{}' manifest.json must be UTF-8: {error}", archive_path ), hint: None, details: None, })?; let validated_manifest = parse_plugin_manifest_json(manifest_raw)?; if validated_manifest.manifest.key != plugin_key { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "plugin materialization: archive '{}' key mismatch: path key '{}' vs manifest key '{}'", archive_path, plugin_key, validated_manifest.manifest.key ), hint: None, details: None, }); } let entry_path = normalize_plugin_archive_path_for_materialization(&validated_manifest.manifest.entry)?; let wasm = files.get(&entry_path).ok_or_else(|| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "plugin materialization: archive '{}' is missing entry file '{}'", archive_path, validated_manifest.manifest.entry ), hint: None, details: None, })?; ensure_valid_plugin_wasm_for_materialization(wasm)?; let manifest = validated_manifest.manifest; let content_type = manifest.file_match.content_type; Ok(InstalledPlugin { key: manifest.key, runtime: manifest.runtime, api_version: manifest.api_version, path_glob: manifest.file_match.path_glob, content_type, entry: manifest.entry, manifest_json: validated_manifest.normalized_json, wasm: wasm.clone(), }) } fn read_archive_files_for_install( archive_bytes: &[u8], ) -> Result>, LixError> { if archive_bytes.is_empty() { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "Plugin archive bytes must not be empty".to_string(), hint: None, details: None, }); } let mut archive = ZipArchive::new(Cursor::new(archive_bytes)).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: 
format!("Plugin archive is not a valid zip file: {error}"), hint: None, details: None, })?; let mut files = BTreeMap::>::new(); for index in 0..archive.len() { let mut entry = archive.by_index(index).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("Failed to read plugin archive entry at index {index}: {error}"), hint: None, details: None, })?; let raw_name = entry.name().to_string(); if entry.is_dir() { continue; } if is_symlink_mode(entry.unix_mode()) { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("Plugin archive entry '{raw_name}' must not be a symlink"), hint: None, details: None, }); } let normalized_path = normalize_archive_path_for_install(&raw_name)?; let mut bytes = Vec::new(); entry.read_to_end(&mut bytes).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("Failed to read plugin archive entry '{raw_name}': {error}"), hint: None, details: None, })?; if files.insert(normalized_path.clone(), bytes).is_some() { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("Plugin archive contains duplicate entry '{normalized_path}'"), hint: None, details: None, }); } } Ok(files) } fn read_plugin_archive_files( archive_path: &str, archive_bytes: &[u8], ) -> Result>, LixError> { if archive_bytes.is_empty() { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "plugin materialization: archive '{}' is empty", archive_path ), hint: None, details: None, }); } let mut archive = ZipArchive::new(Cursor::new(archive_bytes)).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "plugin materialization: archive '{}' is not a valid zip file: {error}", archive_path ), hint: None, details: None, })?; let mut files = BTreeMap::>::new(); for index in 0..archive.len() { let mut entry = archive.by_index(index).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "plugin materialization: failed to read archive '{}' entry index {}: {error}", archive_path, index ), hint: None, details: None, })?; let entry_name = entry.name().to_string(); let normalized_path = normalize_plugin_archive_path_for_materialization(&entry_name)?; if normalized_path.ends_with('/') { continue; } let mut bytes = Vec::new(); entry.read_to_end(&mut bytes).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "plugin materialization: failed to read archive '{}' entry '{}': {error}", archive_path, entry_name ), hint: None, details: None, })?; files.insert(normalized_path, bytes); } Ok(files) } fn normalize_archive_path_for_install(path: &str) -> Result { if path.is_empty() { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "Plugin archive path must not be empty".to_string(), hint: None, details: None, }); } if path.starts_with('/') || path.starts_with('\\') { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("Plugin archive path '{path}' must be relative"), hint: None, details: None, }); } if path.contains('\\') { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("Plugin archive path '{path}' must use forward slash separators"), hint: None, details: None, }); } let mut segments = Vec::::new(); for component in Path::new(path).components() { match component { Component::Normal(value) => { let segment = value.to_str().ok_or_else(|| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "Plugin archive path '{path}' 
contains non-UTF-8 components" ), hint: None, details: None, })?; if segment.is_empty() { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("Plugin archive path '{path}' is invalid"), hint: None, details: None, }); } segments.push(segment.to_string()); } Component::CurDir | Component::ParentDir | Component::RootDir | Component::Prefix(_) => { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "Plugin archive path '{path}' must not contain traversal or absolute components" ), hint: None, details: None, }) } } } if segments.is_empty() { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("Plugin archive path '{path}' is invalid"), hint: None, details: None, }); } Ok(segments.join("/")) } fn normalize_plugin_archive_path_for_materialization(path: &str) -> Result { let raw_path = Path::new(path); if raw_path.is_absolute() { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "plugin materialization: archive path '{}' must be relative", path ), hint: None, details: None, }); } let mut normalized = Vec::new(); for component in raw_path.components() { match component { Component::Normal(part) => normalized.push(part.to_string_lossy().to_string()), Component::CurDir => {} Component::ParentDir | Component::RootDir | Component::Prefix(_) => { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "plugin materialization: archive path '{}' must not escape the archive root", path ), hint: None, details: None, }); } } } if normalized.is_empty() { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "plugin materialization: archive path must not be empty".to_string(), hint: None, details: None, }); } Ok(normalized.join("/")) } fn ensure_valid_plugin_wasm_for_install(wasm_bytes: &[u8]) -> Result<(), LixError> { if wasm_bytes.is_empty() { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "Plugin wasm bytes must not be empty".to_string(), hint: None, details: None, }); } if wasm_bytes.len() < 8 || !wasm_bytes.starts_with(&[0x00, 0x61, 0x73, 0x6d]) { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "Plugin wasm bytes must start with a valid wasm header".to_string(), hint: None, details: None, }); } Ok(()) } fn ensure_valid_plugin_wasm_for_materialization(bytes: &[u8]) -> Result<(), LixError> { const WASM_MAGIC: &[u8; 4] = b"\0asm"; if bytes.len() < WASM_MAGIC.len() || &bytes[..WASM_MAGIC.len()] != WASM_MAGIC { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "plugin materialization: entry file must be a valid WebAssembly module" .to_string(), hint: None, details: None, }); } Ok(()) } fn is_symlink_mode(mode: Option) -> bool { const MODE_FILE_TYPE_MASK: u32 = 0o170000; const MODE_SYMLINK: u32 = 0o120000; mode.is_some_and(|value| (value & MODE_FILE_TYPE_MASK) == MODE_SYMLINK) } ================================================ FILE: packages/engine/src/plugin/component.rs ================================================ use std::sync::Arc; use crate::common::LixError; use crate::wasm::{WasmComponentInstance, WasmLimits, WasmRuntime}; use super::InstalledPlugin; #[derive(Clone)] pub(crate) struct CachedPluginComponent { pub(crate) wasm: Vec, pub(crate) instance: Arc, } const APPLY_CHANGES_EXPORTS: &[&str] = &["apply-changes", "api#apply-changes"]; pub(crate) trait PluginComponentHost { fn plugin_component_cache( &self, ) -> &std::sync::Mutex>; fn wasm_runtime(&self) -> &Arc; } pub(crate) async fn 
load_or_init_plugin_component( host: &impl PluginComponentHost, plugin: &InstalledPlugin, ) -> Result, LixError> { { let guard = host.plugin_component_cache().lock().map_err(|_| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "plugin component cache lock poisoned".to_string(), hint: None, details: None, })?; if let Some(cached) = guard.get(&plugin.key) { if cached.wasm == plugin.wasm { return Ok(cached.instance.clone()); } } } let initialized = host .wasm_runtime() .init_component(plugin.wasm.clone(), WasmLimits::default()) .await?; let mut guard = host.plugin_component_cache().lock().map_err(|_| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "plugin component cache lock poisoned".to_string(), hint: None, details: None, })?; if let Some(cached) = guard.get(&plugin.key) { if cached.wasm == plugin.wasm { return Ok(cached.instance.clone()); } } guard.insert( plugin.key.clone(), CachedPluginComponent { wasm: plugin.wasm.clone(), instance: initialized.clone(), }, ); Ok(initialized) } pub(crate) async fn apply_changes_with_plugin( host: &impl PluginComponentHost, plugin: &InstalledPlugin, payload: &[u8], ) -> Result, LixError> { let instance = load_or_init_plugin_component(host, plugin).await?; invoke_apply_changes_export(instance.as_ref(), payload).await } async fn invoke_apply_changes_export( instance: &dyn WasmComponentInstance, payload: &[u8], ) -> Result, LixError> { let mut errors = Vec::new(); for export in APPLY_CHANGES_EXPORTS { match instance.call(export, payload).await { Ok(output) => return Ok(output), Err(error) => errors.push(format!("{export}: {}", error.message)), } } Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "plugin materialization: failed to call apply-changes export ({})", errors.join("; ") ), hint: None, details: None, }) } #[cfg(test)] mod tests { use super::*; use crate::plugin::{InstalledPlugin, PluginRuntime}; use crate::wasm::WasmRuntime; use async_trait::async_trait; use std::sync::atomic::{AtomicUsize, Ordering}; struct TestHost { wasm_runtime: Arc, plugin_component_cache: std::sync::Mutex>, } impl PluginComponentHost for TestHost { fn plugin_component_cache( &self, ) -> &std::sync::Mutex> { &self.plugin_component_cache } fn wasm_runtime(&self) -> &Arc { &self.wasm_runtime } } #[derive(Default)] struct CountingRuntime { init_calls: Arc, } struct NoopComponent; #[async_trait(?Send)] impl WasmRuntime for CountingRuntime { async fn init_component( &self, _bytes: Vec, _limits: WasmLimits, ) -> Result, LixError> { self.init_calls.fetch_add(1, Ordering::SeqCst); Ok(Arc::new(NoopComponent)) } } #[async_trait(?Send)] impl WasmComponentInstance for NoopComponent { async fn call(&self, _export: &str, _input: &[u8]) -> Result, LixError> { Ok(Vec::new()) } } #[tokio::test] async fn component_cache_reinitializes_when_same_key_wasm_changes() { let runtime = Arc::new(CountingRuntime::default()); let host = TestHost { wasm_runtime: runtime.clone(), plugin_component_cache: std::sync::Mutex::new(Default::default()), }; let mut plugin = InstalledPlugin { key: "k".to_string(), runtime: PluginRuntime::WasmComponentV1, api_version: "0.1.0".to_string(), path_glob: "*.json".to_string(), content_type: None, entry: "plugin.wasm".to_string(), manifest_json: "{}".to_string(), wasm: vec![1], }; load_or_init_plugin_component(&host, &plugin) .await .expect("first init should succeed"); load_or_init_plugin_component(&host, &plugin) .await .expect("second lookup should reuse cache"); assert_eq!(runtime.init_calls.load(Ordering::SeqCst), 1); 
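        // The cache in load_or_init_plugin_component is keyed by plugin key but is only
        // reused when the stored wasm bytes match the plugin's current bytes; swapping
        // the bytes below must therefore trigger a second init_component call.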
plugin.wasm = vec![2]; load_or_init_plugin_component(&host, &plugin) .await .expect("changed wasm should reinitialize instance"); assert_eq!(runtime.init_calls.load(Ordering::SeqCst), 2); } } ================================================ FILE: packages/engine/src/plugin/install.rs ================================================ //! Plugin install write helpers. //! //! This module owns plugin archive parsing, registered-schema staging, and the //! prepared write construction needed to install a plugin into the engine. use std::collections::BTreeMap; use async_trait::async_trait; use serde_json::{json, Value as JsonValue}; use crate::catalog::{ResolvedRelation, SurfaceRegistry}; use crate::common::stable_content_fingerprint_hex; use crate::common::{NormalizedDirectoryPath, ParsedFilePath}; use crate::plugin::{ parse_plugin_archive_for_install, plugin_storage_archive_file_id, plugin_storage_archive_path, ParsedPluginArchive, PLUGIN_STORAGE_ROOT_DIRECTORY_PATH, }; use crate::schema::{schema_key_from_definition, validate_lix_schema_definition}; use crate::sql::{ ChangeBatch, CommitPreconditions, ExpectedHead, IdempotencyKey, OptionalTextPatch, PlanEffects, PlannedFilesystemDescriptor, PlannedFilesystemFile, PlannedFilesystemState, PlannedStateRow, PreparedWriteOperationKind, PreparedWriteStatementKind, PublicChange, ResultContract, SchemaLiveTableRequirement, SemanticEffect, WriteDiagnosticContext, WriteLane, WriteMode, }; use crate::streams::{ state_commit_stream_changes_from_changes, StateCommitStreamOperation, StateCommitStreamRuntimeMetadata, }; use crate::transaction::{ PreparedPublicSurfaceRegistryEffect, PreparedPublicSurfaceRegistryMutation, PreparedPublicWrite, PreparedPublicWriteContract, PreparedPublicWriteExecution, PreparedPublicWriteMaterialization, PreparedPublicWritePlanArtifact, PreparedResolvedWritePartition, PreparedResolvedWritePlan, PreparedWriteArtifact, PreparedWriteFunctionBindings, PreparedWriteStatement, }; use crate::{LixError, Value}; use crate::transaction::WriteCommand; const REGISTERED_SCHEMA_STORAGE_SCHEMA_KEY: &str = "lix_registered_schema"; const FILESYSTEM_DESCRIPTOR_SCHEMA_KEY: &str = "lix_file_descriptor"; const FILESYSTEM_BINARY_BLOB_REF_SCHEMA_KEY: &str = "lix_binary_blob_ref"; #[derive(Clone)] pub(crate) struct PluginInstallWriteContext { function_bindings: PreparedWriteFunctionBindings, public_surface_registry: SurfaceRegistry, target_version_id: String, active_account_ids: Vec, origin_key: Option, } impl PluginInstallWriteContext { pub(crate) fn new( function_bindings: PreparedWriteFunctionBindings, public_surface_registry: SurfaceRegistry, target_version_id: impl Into, active_account_ids: Vec, origin_key: Option, ) -> Self { Self { function_bindings, public_surface_registry, target_version_id: target_version_id.into(), active_account_ids, origin_key, } } fn target_version_id(&self) -> &str { &self.target_version_id } } #[async_trait(?Send)] pub(crate) trait PluginInstallWriteExecutor { fn plugin_install_write_context(&self) -> PluginInstallWriteContext; fn stage_prepared_write_statement(&mut self, statement: WriteCommand) -> Result<(), LixError>; async fn resolve_directory_id( &mut self, path: &NormalizedDirectoryPath, ) -> Result, LixError>; } pub(crate) async fn install_plugin_archive_with_writer( archive_bytes: &[u8], executor: &mut dyn PluginInstallWriteExecutor, ) -> Result<(), LixError> { let parsed = parse_plugin_archive_for_install(archive_bytes)?; install_plugin_with_writer(executor, &parsed, archive_bytes).await } pub(crate) fn 
prepare_registered_schema_write_statement( schema: &JsonValue, context: &PluginInstallWriteContext, ) -> Result { prepare_registered_schema_write_statement_from_schemas(std::slice::from_ref(schema), context) } async fn install_plugin_with_writer( executor: &mut dyn PluginInstallWriteExecutor, parsed: &ParsedPluginArchive, archive_bytes: &[u8], ) -> Result<(), LixError> { let plugin_install_context = executor.plugin_install_write_context(); if !parsed.schemas.is_empty() { executor.stage_prepared_write_statement( prepare_registered_schema_write_statement_from_schemas( &parsed.schemas, &plugin_install_context, )?, )?; } let plugin_root = NormalizedDirectoryPath::from_normalized(PLUGIN_STORAGE_ROOT_DIRECTORY_PATH.to_string()); let plugin_directory_id = executor .resolve_directory_id(&plugin_root) .await? .ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", format!( "plugin storage directory '{}' is missing", PLUGIN_STORAGE_ROOT_DIRECTORY_PATH ), ) })?; executor.stage_prepared_write_statement(prepare_plugin_archive_write_statement( parsed, archive_bytes, &plugin_directory_id, &plugin_install_context, )?)?; Ok(()) } #[derive(Clone)] struct RegisteredSchemaRowSpec { entity_id: String, registered_schema_key: String, snapshot: JsonValue, schema_json: JsonValue, } fn prepare_registered_schema_write_statement_from_schemas( schemas: &[JsonValue], context: &PluginInstallWriteContext, ) -> Result { let target = require_resolved_surface( &context.public_surface_registry, "lix_registered_schema_by_version", )?; let schema_rows = schemas .iter() .map(registered_schema_row_spec_from_json) .collect::, _>>()?; let intended_post_state = schema_rows .iter() .map(|row| registered_schema_planned_row(row, context.target_version_id())) .collect::>(); let changes = schema_rows .iter() .map(|row| PublicChange { entity_id: row.entity_id.clone(), schema_key: REGISTERED_SCHEMA_STORAGE_SCHEMA_KEY.to_string(), file_id: None, plugin_key: None, snapshot_content: Some(row.snapshot.to_string()), metadata: None, version_id: context.target_version_id().to_string(), origin_key: context.origin_key.clone(), }) .collect::>(); let schema_live_table_requirements = schema_rows .iter() .map(|row| SchemaLiveTableRequirement { schema_key: row.registered_schema_key.clone(), schema_definition: Some(row.schema_json.clone()), }) .collect::>(); prepare_public_tracked_write_statement( context, target, "lix_registered_schema_by_version", intended_post_state, PlannedFilesystemState::default(), changes, schema_live_table_requirements, PreparedPublicSurfaceRegistryEffect::ApplyMutations( schema_rows .iter() .map( |row| PreparedPublicSurfaceRegistryMutation::UpsertRegisteredSchemaSnapshot { snapshot: row.snapshot.clone(), }, ) .collect(), ), "semantic.register_schema", ) } fn prepare_plugin_archive_write_statement( parsed: &ParsedPluginArchive, archive_bytes: &[u8], plugin_directory_id: &str, context: &PluginInstallWriteContext, ) -> Result { let target = require_resolved_surface(&context.public_surface_registry, "lix_file_by_version")?; let archive_id = plugin_storage_archive_file_id(parsed.manifest.key.as_str()); let archive_path = plugin_storage_archive_path(parsed.manifest.key.as_str())?; let parsed_path = ParsedFilePath::try_from_path(&archive_path)?; let descriptor = PlannedFilesystemDescriptor { directory_id: plugin_directory_id.to_string(), name: parsed_path.name.clone(), metadata: None, hidden: false, }; let target_version_id = context.target_version_id(); let filesystem_state = PlannedFilesystemState { files: [( (archive_id.clone(), 
target_version_id.to_string()), PlannedFilesystemFile { file_id: archive_id.clone(), version_id: target_version_id.to_string(), untracked: false, descriptor: Some(descriptor.clone()), metadata_patch: OptionalTextPatch::Unchanged, data: Some(archive_bytes.to_vec()), deleted: false, }, )] .into_iter() .collect(), }; let intended_post_state = vec![ plugin_archive_file_descriptor_row(&archive_id, target_version_id, &descriptor), plugin_archive_binary_blob_ref_row(&archive_id, target_version_id, archive_bytes)?, ]; let changes = intended_post_state .iter() .map(planned_row_to_public_change) .collect::, _>>()?; prepare_public_tracked_write_statement( context, target, "lix_file_by_version", intended_post_state, filesystem_state, changes, Vec::new(), PreparedPublicSurfaceRegistryEffect::None, "semantic.install_plugin_archive", ) } fn registered_schema_row_spec_from_json( schema: &JsonValue, ) -> Result { validate_lix_schema_definition(schema)?; let schema_key = schema_key_from_definition(schema)?; Ok(RegisteredSchemaRowSpec { entity_id: schema_key.entity_id(), registered_schema_key: schema_key.schema_key, snapshot: json!({ "value": schema }), schema_json: schema.clone(), }) } fn registered_schema_planned_row( row: &RegisteredSchemaRowSpec, target_version_id: &str, ) -> PlannedStateRow { let mut values = BTreeMap::new(); values.insert("entity_id".to_string(), Value::Text(row.entity_id.clone())); values.insert( "schema_key".to_string(), Value::Text(REGISTERED_SCHEMA_STORAGE_SCHEMA_KEY.to_string()), ); values.insert("file_id".to_string(), Value::Null); values.insert("plugin_key".to_string(), Value::Null); values.insert( "snapshot_content".to_string(), Value::Json(row.snapshot.clone()), ); values.insert( "version_id".to_string(), Value::Text(target_version_id.to_string()), ); PlannedStateRow { entity_id: row.entity_id.clone(), schema_key: REGISTERED_SCHEMA_STORAGE_SCHEMA_KEY.to_string(), version_id: Some(target_version_id.to_string()), values, origin_key: None, tombstone: false, } } fn plugin_archive_file_descriptor_row( archive_id: &str, target_version_id: &str, descriptor: &PlannedFilesystemDescriptor, ) -> PlannedStateRow { let snapshot_content = json!({ "id": archive_id, "directory_id": descriptor.directory_id, "name": descriptor.name, "hidden": descriptor.hidden, }) .to_string(); let mut values = BTreeMap::new(); values.insert("entity_id".to_string(), Value::Text(archive_id.to_string())); values.insert( "schema_key".to_string(), Value::Text(FILESYSTEM_DESCRIPTOR_SCHEMA_KEY.to_string()), ); values.insert("file_id".to_string(), Value::Null); values.insert("plugin_key".to_string(), Value::Null); values.insert( "snapshot_content".to_string(), Value::Text(snapshot_content), ); values.insert( "version_id".to_string(), Value::Text(target_version_id.to_string()), ); PlannedStateRow { entity_id: archive_id.to_string(), schema_key: FILESYSTEM_DESCRIPTOR_SCHEMA_KEY.to_string(), version_id: Some(target_version_id.to_string()), values, origin_key: None, tombstone: false, } } fn plugin_archive_binary_blob_ref_row( archive_id: &str, target_version_id: &str, archive_bytes: &[u8], ) -> Result { let size_bytes = u64::try_from(archive_bytes.len()).map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", format!( "plugin archive '{}' exceeds supported size range", archive_id ), ) })?; let snapshot_content = json!({ "id": archive_id, "blob_hash": stable_content_fingerprint_hex(archive_bytes), "size_bytes": size_bytes, }) .to_string(); let mut values = BTreeMap::new(); values.insert("entity_id".to_string(), 
Value::Text(archive_id.to_string())); values.insert( "schema_key".to_string(), Value::Text(FILESYSTEM_BINARY_BLOB_REF_SCHEMA_KEY.to_string()), ); values.insert("file_id".to_string(), Value::Text(archive_id.to_string())); values.insert("plugin_key".to_string(), Value::Null); values.insert( "snapshot_content".to_string(), Value::Text(snapshot_content), ); values.insert( "version_id".to_string(), Value::Text(target_version_id.to_string()), ); Ok(PlannedStateRow { entity_id: archive_id.to_string(), schema_key: FILESYSTEM_BINARY_BLOB_REF_SCHEMA_KEY.to_string(), version_id: Some(target_version_id.to_string()), values, origin_key: None, tombstone: false, }) } fn prepare_public_tracked_write_statement( context: &PluginInstallWriteContext, target: ResolvedRelation, relation_name: &str, intended_post_state: Vec, filesystem_state: PlannedFilesystemState, changes: Vec, schema_live_table_requirements: Vec, public_surface_registry_effect: PreparedPublicSurfaceRegistryEffect, idempotency_purpose: &str, ) -> Result { let semantic_effects = semantic_plan_effects_from_changes(&changes, context.origin_key.as_deref())?; let write_payload = json!({ "rows": intended_post_state.iter().map(summarize_planned_row).collect::>(), "changes": changes.iter().map(summarize_change).collect::>(), "filesystem_files": filesystem_state.files.keys().cloned().collect::>(), }); WriteCommand::build( PreparedWriteStatement { statement_kind: PreparedWriteStatementKind::Write, result_contract: ResultContract::DmlNoReturning, artifact: PreparedWriteArtifact::PublicWrite(PreparedPublicWrite { contract: PreparedPublicWriteContract { operation_kind: PreparedWriteOperationKind::Insert, target, on_conflict_action: None, requested_version_id: Some(context.target_version_id().to_string()), active_account_ids: context.active_account_ids.clone(), origin_key: context.origin_key.clone(), resolved_write_plan: Some(PreparedResolvedWritePlan { partitions: vec![PreparedResolvedWritePartition { execution_mode: WriteMode::Tracked, authoritative_pre_state_rows: Vec::new(), intended_post_state, filesystem_state, }], }), }, execution: PreparedPublicWritePlanArtifact::Materialize( PreparedPublicWriteMaterialization { partitions: vec![PreparedPublicWriteExecution { execution_mode: WriteMode::Tracked, intended_post_state: Vec::new(), schema_live_table_requirements, change_batch: Some(ChangeBatch { changes: changes.clone(), write_lane: WriteLane::GlobalAdmin, origin_key: context.origin_key.clone(), semantic_effects: semantic_effect_markers_from_changes(&changes), }), create_preconditions: Some(CommitPreconditions { write_lane: WriteLane::GlobalAdmin, expected_head: ExpectedHead::CurrentHead, idempotency_key: semantic_idempotency_key( idempotency_purpose, &write_payload, )?, }), semantic_effects, persist_filesystem_payloads_before_write: false, }], }, ), }), diagnostic_context: WriteDiagnosticContext::new(vec![relation_name.to_string()]), public_surface_registry_effect, }, &context.function_bindings, ) } fn semantic_plan_effects_from_changes( changes: &[PublicChange], origin_key: Option<&str>, ) -> Result { Ok(PlanEffects { state_commit_stream_changes: state_commit_stream_changes_from_changes( changes, StateCommitStreamOperation::Insert, StateCommitStreamRuntimeMetadata::from_runtime_origin_key(origin_key), )?, ..PlanEffects::default() }) } fn semantic_effect_markers_from_changes(changes: &[PublicChange]) -> Vec { changes .iter() .map(|change| SemanticEffect { effect_key: "state.upsert".to_string(), target: format!( "{}:{}@{}", change.schema_key, 
change.entity_id, change.version_id ), }) .collect() } fn planned_row_to_public_change(row: &PlannedStateRow) -> Result { Ok(PublicChange { entity_id: row.entity_id.clone(), schema_key: row.schema_key.clone(), file_id: planned_row_text_value(row, "file_id"), plugin_key: planned_row_text_value(row, "plugin_key"), snapshot_content: if row.tombstone { None } else { planned_row_json_text_value(row, "snapshot_content") }, metadata: planned_row_json_text_value(row, "metadata"), version_id: row .version_id .clone() .or_else(|| planned_row_text_value(row, "version_id")) .ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "semantic tracked write requires a concrete version_id", ) })?, origin_key: row.origin_key.clone(), }) } fn planned_row_text_value(row: &PlannedStateRow, key: &str) -> Option { match row.values.get(key) { Some(Value::Text(value)) => Some(value.clone()), Some(Value::Integer(value)) => Some(value.to_string()), Some(Value::Boolean(value)) => Some(value.to_string()), Some(Value::Real(value)) => Some(value.to_string()), _ => None, } } fn planned_row_json_text_value(row: &PlannedStateRow, key: &str) -> Option { match row.values.get(key) { Some(Value::Json(value)) => Some(value.to_string()), _ => planned_row_text_value(row, key), } } fn semantic_idempotency_key( purpose: &str, payload: &JsonValue, ) -> Result { let bytes = serde_json::to_vec(payload).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("semantic idempotency payload serialization failed: {error}"), ) })?; Ok(IdempotencyKey( json!({ "purpose": purpose, "fingerprint": stable_content_fingerprint_hex(&bytes), }) .to_string(), )) } fn summarize_change(change: &PublicChange) -> JsonValue { json!({ "entity_id": change.entity_id, "schema_key": change.schema_key, "file_id": change.file_id, "plugin_key": change.plugin_key, "version_id": change.version_id, "origin_key": change.origin_key, "snapshot_content": change.snapshot_content.as_ref().map(|snapshot| { stable_content_fingerprint_hex(snapshot.as_bytes()) }), }) } fn summarize_planned_row(row: &PlannedStateRow) -> JsonValue { json!({ "entity_id": row.entity_id, "schema_key": row.schema_key, "version_id": row.version_id, "tombstone": row.tombstone, "values": row .values .iter() .map(|(key, value)| { ( key.clone(), match value { Value::Null => json!({ "kind": "null" }), Value::Text(text) => json!({ "kind": "text", "sha256": stable_content_fingerprint_hex(text.as_bytes()), "len": text.len(), }), Value::Json(value) => { let encoded = value.to_string(); json!({ "kind": "json", "sha256": stable_content_fingerprint_hex(encoded.as_bytes()), "len": encoded.len(), }) } Value::Blob(bytes) => json!({ "kind": "blob", "sha256": stable_content_fingerprint_hex(bytes), "len": bytes.len(), }), Value::Integer(value) => json!({ "kind": "integer", "value": value }), Value::Real(value) => json!({ "kind": "real", "value": value }), Value::Boolean(value) => json!({ "kind": "boolean", "value": value }), }, ) }) .collect::>(), }) } fn require_resolved_surface( public_surface_registry: &SurfaceRegistry, relation_name: &str, ) -> Result { public_surface_registry .bind_relation_name(relation_name) .ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", format!("public surface '{relation_name}' is not registered"), ) }) } ================================================ FILE: packages/engine/src/plugin/manifest.rs ================================================ use std::sync::OnceLock; use globset::{Glob, GlobBuilder}; use jsonschema::{Draft, JSONSchema}; use serde::{Deserialize, Serialize}; use 
serde_json::Value as JsonValue; use crate::LixError; static PLUGIN_MANIFEST_SCHEMA: OnceLock = OnceLock::new(); static PLUGIN_MANIFEST_VALIDATOR: OnceLock> = OnceLock::new(); #[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)] #[serde(rename_all = "kebab-case")] pub enum PluginRuntime { WasmComponentV1, } #[allow(dead_code)] impl PluginRuntime { pub fn as_str(self) -> &'static str { match self { Self::WasmComponentV1 => "wasm-component-v1", } } pub fn from_str(value: &str) -> Option { match value { "wasm-component-v1" => Some(Self::WasmComponentV1), _ => None, } } } #[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)] pub struct PluginManifest { pub key: String, pub runtime: PluginRuntime, pub api_version: String, #[serde(rename = "match")] pub file_match: PluginMatch, #[serde(default)] pub detect_changes: Option, pub entry: String, pub schemas: Vec, } #[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)] pub struct PluginMatch { pub path_glob: String, #[serde(default)] pub content_type: Option, } #[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)] #[serde(rename_all = "snake_case")] pub enum PluginContentType { Text, Binary, } #[derive(Debug, Clone, PartialEq, Eq)] pub struct ValidatedPluginManifest { pub manifest: PluginManifest, pub normalized_json: String, } #[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)] pub struct DetectChangesConfig { #[serde(default)] pub state_context: Option, } #[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)] pub struct DetectStateContextConfig { #[serde(default)] pub include_active_state: Option, #[serde(default)] pub columns: Option>, } #[allow(dead_code)] impl DetectStateContextConfig { pub fn includes_active_state(&self) -> bool { self.include_active_state.unwrap_or(false) } pub fn resolved_columns_or_default(&self) -> Option> { if !self.includes_active_state() { return None; } Some( self.columns .clone() .unwrap_or_else(|| StateContextColumn::default_active_state_columns().to_vec()), ) } } #[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)] #[serde(rename_all = "snake_case")] pub enum StateContextColumn { EntityId, SchemaKey, SchemaVersion, SnapshotContent, FileId, PluginKey, VersionId, ChangeId, Metadata, CreatedAt, UpdatedAt, } #[allow(dead_code)] impl StateContextColumn { pub const fn default_active_state_columns() -> &'static [StateContextColumn] { &[ StateContextColumn::EntityId, StateContextColumn::SchemaKey, StateContextColumn::SchemaVersion, StateContextColumn::SnapshotContent, ] } } pub fn parse_plugin_manifest_json(raw: &str) -> Result { let manifest_json: JsonValue = serde_json::from_str(raw).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("Plugin manifest must be valid JSON: {error}"), hint: None, details: None, })?; validate_plugin_manifest_json(&manifest_json)?; let manifest: PluginManifest = serde_json::from_value(manifest_json.clone()).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("Plugin manifest does not match expected shape: {error}"), hint: None, details: None, })?; validate_path_glob(&manifest.file_match.path_glob)?; let normalized_json = serde_json::to_string(&manifest_json).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("Failed to normalize plugin manifest JSON: {error}"), hint: None, details: None, })?; Ok(ValidatedPluginManifest { manifest, normalized_json, }) } pub fn select_best_glob_match<'a, T, C: Copy + PartialEq>( path: 
&str, file_content_type: Option, candidates: &'a [T], glob: impl Fn(&T) -> &str, required_content_type: impl Fn(&T) -> Option, ) -> Option<&'a T> { let mut selected: Option<&T> = None; let mut selected_rank: Option<(u8, i32)> = None; for candidate in candidates { let pattern = glob(candidate); if !glob_matches_path(pattern, path) { continue; } if let (Some(actual_type), Some(required_type)) = (file_content_type, required_content_type(candidate)) { if actual_type != required_type { continue; } } let rank = glob_specificity_rank(pattern); match selected_rank { None => { selected = Some(candidate); selected_rank = Some(rank); } Some(existing_rank) if rank > existing_rank => { selected = Some(candidate); selected_rank = Some(rank); } _ => {} } } selected } pub fn glob_matches_path(glob: &str, path: &str) -> bool { let normalized_glob = glob.trim(); let normalized_path = path.trim(); if normalized_glob.is_empty() || normalized_path.is_empty() { return false; } if is_catch_all_glob(normalized_glob) { return true; } GlobBuilder::new(normalized_glob) .literal_separator(false) .case_insensitive(true) .build() .map(|compiled| compiled.compile_matcher().is_match(normalized_path)) .unwrap_or(false) } fn validate_path_glob(glob: &str) -> Result<(), LixError> { Glob::new(glob).map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("Invalid plugin manifest: match.path_glob is invalid: {error}"), hint: None, details: None, })?; Ok(()) } fn validate_plugin_manifest_json(manifest: &JsonValue) -> Result<(), LixError> { let validator = plugin_manifest_validator()?; if let Err(errors) = validator.validate(manifest) { let details = format_validation_errors(errors); return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("Invalid plugin manifest: {details}"), hint: None, details: None, }); } Ok(()) } fn glob_specificity_rank(glob: &str) -> (u8, i32) { let normalized = glob.trim(); if is_catch_all_glob(normalized) { return (0, i32::MIN); } (1, glob_specificity_score(normalized)) } fn glob_specificity_score(glob: &str) -> i32 { let mut literal_chars = 0i32; let mut wildcard_chars = 0i32; for ch in glob.chars() { match ch { '*' | '?' 
| '[' | ']' | '{' | '}' => wildcard_chars += 1, _ => literal_chars += 1, } } literal_chars - wildcard_chars } fn is_catch_all_glob(glob: &str) -> bool { glob == "*" || glob == "**/*" || glob == "**" } fn plugin_manifest_validator() -> Result<&'static JSONSchema, LixError> { let result = PLUGIN_MANIFEST_VALIDATOR.get_or_init(|| { let mut options = JSONSchema::options(); options.with_meta_schemas(); if plugin_manifest_schema() .get("$schema") .and_then(JsonValue::as_str) .is_some_and(|url| url == "https://json-schema.org/draft/2020-12/schema") { options.with_draft(Draft::Draft202012); } options .compile(plugin_manifest_schema()) .map_err(|error| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!("Failed to compile plugin manifest schema: {error}"), hint: None, details: None, }) }); match result { Ok(schema) => Ok(schema), Err(error) => Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: error.message.clone(), hint: None, details: None, }), } } fn plugin_manifest_schema() -> &'static JsonValue { PLUGIN_MANIFEST_SCHEMA.get_or_init(|| { let raw = include_str!("./plugin_manifest.schema.json"); serde_json::from_str(raw).expect("plugin_manifest.schema.json must be valid JSON") }) } fn format_validation_errors<'a>( errors: impl Iterator>, ) -> String { let mut parts = Vec::new(); for error in errors { let path = error.instance_path.to_string(); let message = error.to_string(); if path.is_empty() { parts.push(message); } else { parts.push(format!("{path} {message}")); } } if parts.is_empty() { "Unknown validation error".to_string() } else { parts.join("; ") } } #[cfg(test)] mod tests { use super::{ parse_plugin_manifest_json, DetectStateContextConfig, PluginContentType, StateContextColumn, }; #[test] fn resolved_columns_returns_none_when_active_state_is_not_enabled() { let config = DetectStateContextConfig { include_active_state: None, columns: None, }; assert_eq!(config.resolved_columns_or_default(), None); } #[test] fn resolved_columns_uses_defaults_when_columns_are_omitted() { let config = DetectStateContextConfig { include_active_state: Some(true), columns: None, }; assert_eq!( config.resolved_columns_or_default(), Some(StateContextColumn::default_active_state_columns().to_vec()) ); } #[test] fn resolved_columns_uses_explicit_column_selection() { let config = DetectStateContextConfig { include_active_state: Some(true), columns: Some(vec![ StateContextColumn::EntityId, StateContextColumn::SchemaKey, ]), }; assert_eq!( config.resolved_columns_or_default(), Some(vec![ StateContextColumn::EntityId, StateContextColumn::SchemaKey ]) ); } #[test] fn parses_valid_manifest() { let validated = parse_plugin_manifest_json( r#"{ "key":"plugin_json", "runtime":"wasm-component-v1", "api_version":"0.1.0", "match":{"path_glob":"*.json"}, "entry":"plugin.wasm", "schemas":["schema/default.json"] }"#, ) .expect("manifest should parse"); assert_eq!(validated.manifest.key, "plugin_json"); assert_eq!(validated.manifest.runtime.as_str(), "wasm-component-v1"); assert_eq!(validated.manifest.entry, "plugin.wasm"); } #[test] fn rejects_invalid_manifest() { let err = parse_plugin_manifest_json( r#"{ "runtime":"wasm-component-v1", "api_version":"0.1.0", "match":{"path_glob":"*.json"}, "entry":"plugin.wasm", "schemas":["schema/default.json"] }"#, ) .expect_err("manifest should be invalid"); assert!(err.message.contains("Invalid plugin manifest")); assert!(err.message.contains("key")); } #[test] fn rejects_invalid_path_glob() { let err = parse_plugin_manifest_json( r#"{ "key":"plugin_markdown", 
"runtime":"wasm-component-v1", "api_version":"0.1.0", "match":{"path_glob":"*.{md,mdx"}, "entry":"plugin.wasm", "schemas":["schema/default.json"] }"#, ) .expect_err("invalid glob should fail"); assert!(err.message.contains("match.path_glob")); } #[test] fn parses_manifest_with_content_type_match_filter() { let validated = parse_plugin_manifest_json( r#"{ "key":"plugin_text", "runtime":"wasm-component-v1", "api_version":"0.1.0", "match":{"path_glob":"**/*", "content_type":"text"}, "entry":"plugin.wasm", "schemas":["schema/default.json"] }"#, ) .expect("manifest should parse"); assert_eq!( validated.manifest.file_match.content_type, Some(PluginContentType::Text) ); } #[test] fn parses_manifest_with_active_state_columns() { let validated = parse_plugin_manifest_json( r#"{ "key":"plugin_markdown", "runtime":"wasm-component-v1", "api_version":"0.1.0", "match":{"path_glob":"*.{md,mdx}"}, "entry":"plugin.wasm", "schemas":["schema/default.json"], "detect_changes": { "state_context": { "include_active_state": true, "columns": ["entity_id", "schema_key", "snapshot_content"] } } }"#, ) .expect("manifest should parse"); let state_context = validated .manifest .detect_changes .expect("detect_changes should be present") .state_context .expect("state_context should be present"); assert_eq!(state_context.include_active_state, Some(true)); assert_eq!( state_context.columns, Some(vec![ StateContextColumn::EntityId, StateContextColumn::SchemaKey, StateContextColumn::SnapshotContent ]) ); } #[test] fn parses_manifest_with_active_state_and_default_columns() { let validated = parse_plugin_manifest_json( r#"{ "key":"plugin_markdown", "runtime":"wasm-component-v1", "api_version":"0.1.0", "match":{"path_glob":"*.md"}, "entry":"plugin.wasm", "schemas":["schema/default.json"], "detect_changes": { "state_context": { "include_active_state": true } } }"#, ) .expect("manifest should parse"); let state_context = validated .manifest .detect_changes .expect("detect_changes should be present") .state_context .expect("state_context should be present"); assert_eq!( state_context.resolved_columns_or_default(), Some(StateContextColumn::default_active_state_columns().to_vec()) ); } } ================================================ FILE: packages/engine/src/plugin/materializer.rs ================================================ use std::collections::BTreeSet; use std::sync::{Arc, RwLock}; use async_trait::async_trait; use crate::common::LixError; use crate::live_state::{list_installed_plugin_archive_refs, PluginArchiveRef}; use crate::Backend; use super::component::{apply_changes_with_plugin, PluginComponentHost}; use super::{ load_installed_plugin_from_archive_bytes, plugin_key_from_archive_path, PluginContentType, PluginRuntime, }; #[derive(Debug, Clone, PartialEq, Eq)] pub struct InstalledPlugin { pub key: String, pub runtime: PluginRuntime, pub api_version: String, pub path_glob: String, pub content_type: Option, pub entry: String, pub manifest_json: String, pub wasm: Vec, } #[async_trait(?Send)] pub trait FilesystemPluginMaterializer { async fn load_installed_plugins(&self) -> Result, LixError>; async fn apply_plugin_changes( &self, plugin: &InstalledPlugin, payload: &[u8], ) -> Result, LixError>; } pub(crate) trait PluginMaterializationHost: PluginComponentHost { fn plugin_backend(&self) -> &Arc; fn installed_plugins_cache(&self) -> &RwLock>>; } pub(crate) async fn load_installed_plugins_with_runtime_cache( host: &impl PluginMaterializationHost, ) -> Result, LixError> { if let Some(cached) = host .installed_plugins_cache() 
.read() .map_err(|_| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "installed plugin cache lock poisoned".to_string(), hint: None, details: None, })? .clone() { return Ok(cached); } let plugins = load_installed_plugins_from_backend(host).await?; let mut guard = host .installed_plugins_cache() .write() .map_err(|_| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "installed plugin cache lock poisoned".to_string(), hint: None, details: None, })?; *guard = Some(plugins.clone()); Ok(plugins) } pub(crate) async fn load_installed_plugins_from_backend( host: &impl PluginMaterializationHost, ) -> Result, LixError> { load_installed_plugins_from_backend_state(host.plugin_backend().as_ref()).await } pub(crate) async fn load_installed_plugins_from_backend_state( backend: &dyn Backend, ) -> Result, LixError> { let archive_refs = list_installed_plugin_archive_refs(backend).await?; let mut plugins = Vec::with_capacity(archive_refs.len()); for archive_ref in archive_refs { plugins.push( load_installed_plugin_from_archive_ref_with_backend(backend, &archive_ref).await?, ); } Ok(plugins) } pub(crate) async fn load_installed_plugin_from_archive_ref_with_backend( backend: &dyn Backend, archive_ref: &PluginArchiveRef, ) -> Result { let Some(plugin_key) = plugin_key_from_archive_path(&archive_ref.path) else { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "plugin materialization: unsupported plugin archive path '{}'", archive_ref.path ), hint: None, details: None, }); }; let binary_cas = crate::binary_cas::BinaryCasContext::new(); let mut reader = binary_cas.reader(backend); let archive_hash = crate::binary_cas::BlobHash::from_hex(&archive_ref.blob_hash)?; let archive_bytes = reader .load_bytes_many(&[archive_hash]) .await? .into_vec() .into_iter() .next() .flatten() .ok_or_else(|| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "plugin materialization: missing plugin archive blob '{}' for file '{}' ({})", archive_ref.blob_hash, archive_ref.path, archive_ref.file_id ), hint: None, details: None, })?; if archive_bytes.is_empty() { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "plugin materialization: archive '{}' is empty", archive_ref.path ), hint: None, details: None, }); } load_installed_plugin_from_archive_bytes(&plugin_key, &archive_ref.path, &archive_bytes) } pub(crate) async fn list_installed_plugin_manifest_keys( backend: &dyn Backend, ) -> Result, LixError> { Ok(load_installed_plugins_from_backend_state(backend) .await? .into_iter() .map(|plugin| plugin.key) .collect()) } #[allow(dead_code)] pub(crate) async fn installed_plugin_manifest_key_exists( backend: &dyn Backend, plugin_key: &str, ) -> Result { Ok(list_installed_plugin_manifest_keys(backend) .await? 
.contains(plugin_key)) } pub(crate) fn invalidate_installed_plugins_cache( host: &impl PluginMaterializationHost, ) -> Result<(), LixError> { let mut guard = host .installed_plugins_cache() .write() .map_err(|_| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "installed plugin cache lock poisoned".to_string(), hint: None, details: None, })?; *guard = None; let mut component_guard = host.plugin_component_cache().lock().map_err(|_| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "plugin component cache lock poisoned".to_string(), hint: None, details: None, })?; component_guard.clear(); Ok(()) } #[async_trait(?Send)] impl FilesystemPluginMaterializer for T where T: PluginMaterializationHost, { async fn load_installed_plugins(&self) -> Result, LixError> { load_installed_plugins_with_runtime_cache(self).await } async fn apply_plugin_changes( &self, plugin: &InstalledPlugin, payload: &[u8], ) -> Result, LixError> { apply_changes_with_plugin(self, plugin, payload).await } } #[cfg(test)] mod tests { use super::*; use crate::binary_cas::codec::{ binary_blob_hash_bytes, encode_binary_cas_chunk, encode_binary_cas_manifest, encode_binary_cas_manifest_chunk, BinaryCasManifest, BinaryChunkCodec, }; use crate::binary_cas::kv::{ BINARY_CAS_CHUNK_NAMESPACE, BINARY_CAS_MANIFEST_CHUNK_NAMESPACE, BINARY_CAS_MANIFEST_NAMESPACE, }; use crate::{ BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction, BytePageBuilder, }; use async_trait::async_trait; use std::io::{Cursor, Write}; use zip::write::SimpleFileOptions; use zip::{CompressionMethod, ZipWriter}; struct InstalledPluginLookupBackend { archive_bytes: Vec, } struct PluginLookupTransaction { archive_bytes: Vec, } #[async_trait] impl Backend for InstalledPluginLookupBackend { async fn begin_read_transaction( &self, ) -> Result, LixError> { Ok(Box::new(PluginLookupTransaction { archive_bytes: self.archive_bytes.clone(), })) } async fn begin_write_transaction( &self, ) -> Result, LixError> { Ok(Box::new(PluginLookupTransaction { archive_bytes: self.archive_bytes.clone(), })) } } #[async_trait] impl BackendReadTransaction for PluginLookupTransaction { async fn get_values( &mut self, request: BackendKvGetRequest, ) -> Result { let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let namespace = group.namespace.clone(); let mut values = BytePageBuilder::with_capacity(group.keys.len(), 0); let mut present = Vec::with_capacity(group.keys.len()); for key in group.keys { if let Some(value) = test_kv_get(&self.archive_bytes, &group.namespace, &key)? { values.push(value); present.push(true); } else { values.push([]); present.push(false); } } groups.push(BackendKvValueGroup::new(namespace, values.finish(), present)); } Ok(BackendKvValueBatch { groups }) } async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result { let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let namespace = group.namespace.clone(); let exists = group .keys .iter() .map(|key| test_kv_get(&self.archive_bytes, &group.namespace, key)) .collect::, LixError>>()? 
.into_iter() .map(|value| value.is_some()) .collect(); groups.push(BackendKvExistsGroup { namespace, exists, }); } Ok(BackendKvExistsBatch { groups }) } async fn scan_keys( &mut self, request: BackendKvScanRequest, ) -> Result { let entries = test_kv_scan(&self.archive_bytes, request)?; Ok(BackendKvKeyPage { keys: entries.keys, resume_after: entries.resume_after, }) } async fn scan_values( &mut self, request: BackendKvScanRequest, ) -> Result { let entries = test_kv_scan(&self.archive_bytes, request)?; Ok(BackendKvValuePage { values: entries.values, resume_after: entries.resume_after, }) } async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> Result { test_kv_scan(&self.archive_bytes, request) } async fn rollback(self: Box) -> Result<(), LixError> { Ok(()) } } #[async_trait] impl BackendWriteTransaction for PluginLookupTransaction { async fn write_kv_batch(&mut self, _batch: BackendKvWriteBatch) -> Result { Err(LixError::new( "LIX_ERROR_UNKNOWN", "plugin lookup test backend is read-only", )) } async fn commit(self: Box) -> Result<(), LixError> { Ok(()) } } fn test_kv_get( archive_bytes: &[u8], namespace: &str, key: &[u8], ) -> Result>, LixError> { match (namespace, key) { (BINARY_CAS_MANIFEST_NAMESPACE, key) if key == binary_blob_hash_bytes(archive_bytes).as_slice() => { Ok(Some(encode_binary_cas_manifest( &BinaryCasManifest::Chunked { size_bytes: archive_bytes.len() as u64, chunk_count: 1, }, ))) } (BINARY_CAS_CHUNK_NAMESPACE, key) if key == binary_blob_hash_bytes(archive_bytes).as_slice() => { Ok(Some(encode_binary_cas_chunk( BinaryChunkCodec::Raw, archive_bytes.len() as u64, archive_bytes, ))) } _ => Ok(None), } } fn test_kv_scan( archive_bytes: &[u8], request: BackendKvScanRequest, ) -> Result { if request.namespace != BINARY_CAS_MANIFEST_CHUNK_NAMESPACE { return Ok(BackendKvEntryPage { keys: BytePageBuilder::new().finish(), values: BytePageBuilder::new().finish(), resume_after: None, }); } let blob_hash = binary_blob_hash_bytes(archive_bytes); let chunk_hash = binary_blob_hash_bytes(archive_bytes); let mut key = blob_hash.to_vec(); key.extend_from_slice(&0u64.to_be_bytes()); let include = match request.range { BackendKvScanRange::Prefix(prefix) => key.starts_with(&prefix), BackendKvScanRange::Range { start, end } => key >= start && key < end, }; if !include || request.after.as_deref().is_some_and(|after| key.as_slice() <= after) { return Ok(BackendKvEntryPage { keys: BytePageBuilder::new().finish(), values: BytePageBuilder::new().finish(), resume_after: None, }); } let value = encode_binary_cas_manifest_chunk(&chunk_hash, archive_bytes.len() as u64); let mut keys = BytePageBuilder::with_capacity(1, key.len()); let mut values = BytePageBuilder::with_capacity(1, value.len()); let mut resume_after = None; if request.limit > 0 { resume_after = Some(key.clone()); keys.push(&key); values.push(&value); } let resume_after = (request.limit == 0).then_some(resume_after).flatten(); Ok(BackendKvEntryPage { keys: keys.finish(), values: values.finish(), resume_after, }) } fn build_archive(entries: &[(&str, &[u8])]) -> Vec { let options = SimpleFileOptions::default().compression_method(CompressionMethod::Stored); let cursor = Cursor::new(Vec::new()); let mut writer = ZipWriter::new(cursor); for (path, bytes) in entries { writer .start_file(*path, options) .expect("archive entry start should succeed"); writer .write_all(bytes) .expect("archive entry write should succeed"); } writer .finish() .expect("archive finish should succeed") .into_inner() } fn 
build_plugin_archive(manifest_json: &str) -> Vec { let wasm = [0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]; build_archive(&[ ("manifest.json", manifest_json.as_bytes()), ("plugin.wasm", &wasm), ]) } fn plugin_manifest_json(key: &str) -> String { format!( r#"{{ "key":"{key}", "runtime":"wasm-component-v1", "api_version":"0.1.0", "match":{{"path_glob":"*.json"}}, "entry":"plugin.wasm", "schemas":["schema/plugin_json_schema.json"] }}"# ) } #[tokio::test] async fn installed_plugin_manifest_key_exists_reads_installed_manifest_keys() { let backend = InstalledPluginLookupBackend { archive_bytes: build_plugin_archive(&plugin_manifest_json("plugin_json")), }; assert!( installed_plugin_manifest_key_exists(&backend, "plugin_json") .await .expect("installed manifest key lookup should succeed") ); assert!( !installed_plugin_manifest_key_exists(&backend, "missing_plugin") .await .expect("missing manifest key lookup should succeed") ); } } ================================================ FILE: packages/engine/src/plugin/mod.rs ================================================ //! Plugin subsystem root. //! //! Phase 1 establishes `crate::plugin::*` as the owner path for plugin-domain //! code under concrete plugin-owned modules instead of legacy ownership-neutral //! buckets. mod archive; pub(crate) mod component; mod manifest; mod materializer; mod storage; pub(crate) use archive::{ load_installed_plugin_from_archive_bytes, parse_plugin_archive_for_install, ParsedPluginArchive, }; #[allow(unused_imports)] pub(crate) use manifest::{ glob_matches_path, parse_plugin_manifest_json, select_best_glob_match, DetectChangesConfig, DetectStateContextConfig, PluginContentType, PluginManifest, PluginMatch, PluginRuntime, StateContextColumn, ValidatedPluginManifest, }; #[allow(unused_imports)] pub(crate) use materializer::{ installed_plugin_manifest_key_exists, invalidate_installed_plugins_cache, list_installed_plugin_manifest_keys, load_installed_plugins_from_backend_state, load_installed_plugins_with_runtime_cache, FilesystemPluginMaterializer, InstalledPlugin, PluginMaterializationHost, }; #[allow(unused_imports)] pub(crate) use storage::{ plugin_key_from_archive_path, plugin_storage_archive_file_id, plugin_storage_archive_path, PLUGIN_ARCHIVE_FILE_EXTENSION, PLUGIN_STORAGE_ROOT_DIRECTORY_PATH, }; ================================================ FILE: packages/engine/src/plugin/plugin_manifest.json ================================================ { "$schema": "https://json-schema.org/draft/2020-12/schema", "type": "object", "additionalProperties": false, "required": [ "key", "runtime", "api_version", "match", "entry", "schemas" ], "properties": { "key": { "type": "string", "minLength": 1, "maxLength": 128, "pattern": "^[a-z][a-z0-9_-]*$" }, "runtime": { "type": "string", "enum": [ "wasm-component-v1" ] }, "api_version": { "type": "string", "pattern": "^[0-9]+\\.[0-9]+\\.[0-9]+$" }, "match": { "type": "object", "additionalProperties": false, "required": ["path_glob"], "properties": { "path_glob": { "type": "string", "minLength": 1 }, "content_type": { "type": "string", "enum": ["text", "binary"] } } }, "detect_changes": { "type": "object", "additionalProperties": false, "properties": { "state_context": { "type": "object", "additionalProperties": false, "properties": { "include_active_state": { "type": "boolean" }, "columns": { "type": "array", "minItems": 1, "uniqueItems": true, "items": { "type": "string", "enum": [ "entity_id", "schema_key", "snapshot_content", "file_id", "plugin_key", "version_id", 
"change_id", "metadata", "created_at", "updated_at" ] }, "contains": { "const": "entity_id" } } }, "allOf": [ { "if": { "properties": { "include_active_state": { "const": true } }, "required": [ "include_active_state" ] }, "then": {}, "else": { "not": { "required": [ "columns" ] } } } ] } } }, "entry": { "type": "string", "minLength": 1 }, "schemas": { "type": "array", "minItems": 1, "items": { "type": "string", "minLength": 1 } } } } ================================================ FILE: packages/engine/src/plugin/storage.rs ================================================ use crate::LixError; pub const PLUGIN_STORAGE_ROOT_DIRECTORY_PATH: &str = "/.lix/plugins/"; pub const PLUGIN_ARCHIVE_FILE_EXTENSION: &str = ".lixplugin"; pub fn plugin_storage_archive_file_id(plugin_key: &str) -> String { format!("lix_plugin_archive::{plugin_key}") } pub fn plugin_storage_archive_path(plugin_key: &str) -> Result { validate_plugin_key_segment(plugin_key)?; Ok(format!( "{PLUGIN_STORAGE_ROOT_DIRECTORY_PATH}{plugin_key}{PLUGIN_ARCHIVE_FILE_EXTENSION}" )) } pub fn plugin_key_from_archive_path(path: &str) -> Option { let file_name = path.strip_prefix(PLUGIN_STORAGE_ROOT_DIRECTORY_PATH)?; let plugin_key = file_name.strip_suffix(PLUGIN_ARCHIVE_FILE_EXTENSION)?; if plugin_key.is_empty() || plugin_key == "." || plugin_key == ".." || plugin_key.contains('/') || plugin_key.contains('\\') { return None; } Some(plugin_key.to_string()) } fn validate_plugin_key_segment(plugin_key: &str) -> Result<(), LixError> { if plugin_key.is_empty() || plugin_key == "." || plugin_key == ".." || plugin_key.contains('/') || plugin_key.contains('\\') { return Err(LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "plugin key '{}' must be a single relative path segment", plugin_key ), hint: None, details: None, }); } Ok(()) } #[cfg(test)] mod tests { use super::{plugin_key_from_archive_path, plugin_storage_archive_path}; #[test] fn computes_storage_archive_paths() { assert_eq!( plugin_storage_archive_path("plugin_json").expect("path should build"), "/.lix/plugins/plugin_json.lixplugin" ); } #[test] fn extracts_plugin_key_from_storage_path() { assert_eq!( plugin_key_from_archive_path("/.lix/plugins/plugin_json.lixplugin"), Some("plugin_json".to_string()) ); assert_eq!( plugin_key_from_archive_path("/.lix/plugins/nested/plugin.lixplugin"), None ); } } ================================================ FILE: packages/engine/src/schema/annotations/defaults.rs ================================================ use serde_json::{Map as JsonMap, Value as JsonValue}; use crate::cel::{CelEvaluator, CelFunctionProvider}; use crate::LixError; pub(crate) fn apply_schema_defaults
<P>
(
    snapshot: &mut JsonMap<String, JsonValue>,
    schema: &JsonValue,
    evaluator: &CelEvaluator,
    functions: P,
    schema_key: &str,
) -> Result<bool, LixError>
where
    P: CelFunctionProvider,
{
    apply_schema_defaults_with_context(
        snapshot,
        schema,
        &snapshot.clone(),
        evaluator,
        functions,
        schema_key,
    )
}

pub(crate) fn apply_schema_defaults_with_shared_runtime<P>
(
    snapshot: &mut JsonMap<String, JsonValue>,
    schema: &JsonValue,
    functions: P,
    schema_key: &str,
) -> Result<bool, LixError>
where
    P: CelFunctionProvider,
{
    apply_schema_defaults(
        snapshot,
        schema,
        crate::cel::shared_runtime(),
        functions,
        schema_key,
    )
}

pub(crate) fn apply_schema_defaults_with_context<P>
( snapshot: &mut JsonMap, schema: &JsonValue, context: &JsonMap, evaluator: &CelEvaluator, functions: P, schema_key: &str, ) -> Result where P: CelFunctionProvider, { let Some(properties) = schema.get("properties").and_then(|value| value.as_object()) else { return Ok(false); }; let mut ordered_properties: Vec<(&String, &JsonValue)> = properties.iter().collect(); ordered_properties.sort_by(|(left_name, _), (right_name, _)| left_name.cmp(right_name)); let mut changed = false; for (field_name, field_schema) in ordered_properties { if snapshot.contains_key(field_name) { continue; } if let Some(expression) = field_schema .get("x-lix-default") .and_then(|value| value.as_str()) { let value = evaluator .evaluate_with_functions(expression, context, functions.clone()) .map_err(|err| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: format!( "failed to evaluate x-lix-default for '{}.{}': {}", schema_key, field_name, err.message ), hint: None, details: None, })?; snapshot.insert(field_name.clone(), value); changed = true; continue; } if let Some(default_value) = field_schema.get("default") { snapshot.insert(field_name.clone(), default_value.clone()); changed = true; } } Ok(changed) } #[cfg(test)] mod tests { use serde_json::{json, Map as JsonMap, Value as JsonValue}; use crate::cel::{CelEvaluator, CelFunctionProvider}; use super::apply_schema_defaults_with_context; #[test] fn applies_x_lix_default_for_missing_fields() { let evaluator = CelEvaluator::new(); let schema = json!({ "properties": { "slug": { "type": "string", "x-lix-default": "name + '-slug'" } } }); let mut snapshot = JsonMap::new(); snapshot.insert("name".to_string(), JsonValue::String("sample".to_string())); let context = snapshot.clone(); let changed = apply_schema_defaults_with_context( &mut snapshot, &schema, &context, &evaluator, fixed_functions(), "test_schema", "1", ) .expect("apply defaults"); assert!(changed); assert_eq!( snapshot.get("slug"), Some(&JsonValue::String("sample-slug".to_string())) ); } #[test] fn x_lix_default_overrides_json_default() { let evaluator = CelEvaluator::new(); let schema = json!({ "properties": { "status": { "type": "string", "default": "literal", "x-lix-default": "'computed'" } } }); let mut snapshot = JsonMap::new(); let context = snapshot.clone(); let changed = apply_schema_defaults_with_context( &mut snapshot, &schema, &context, &evaluator, fixed_functions(), "test_schema", "1", ) .expect("apply defaults"); assert!(changed); assert_eq!( snapshot.get("status"), Some(&JsonValue::String("computed".to_string())) ); } #[test] fn does_not_default_explicit_null_values() { let evaluator = CelEvaluator::new(); let schema = json!({ "properties": { "status": { "type": "string", "x-lix-default": "'computed'" } } }); let mut snapshot = JsonMap::new(); snapshot.insert("status".to_string(), JsonValue::Null); let context = snapshot.clone(); let changed = apply_schema_defaults_with_context( &mut snapshot, &schema, &context, &evaluator, fixed_functions(), "test_schema", "1", ) .expect("apply defaults"); assert!(!changed); assert_eq!(snapshot.get("status"), Some(&JsonValue::Null)); } #[test] fn applies_cel_defaults_in_stable_sorted_field_order() { #[derive(Clone)] struct CountingFunctions { next: std::sync::Arc, } impl CelFunctionProvider for CountingFunctions { fn call_uuid_v7(&self) -> String { let current = self.next.fetch_add(1, std::sync::atomic::Ordering::SeqCst); format!("uuid-{current}") } fn call_timestamp(&self) -> String { let current = self.next.fetch_add(1, std::sync::atomic::Ordering::SeqCst); 
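            // Hand out a distinct token per call so the sorted-order test can
            // observe which default expression was evaluated first.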
format!("ts-{current}") } } let evaluator = CelEvaluator::new(); let schema = json!({ "properties": { "z_uuid": { "type": "string", "x-lix-default": "lix_uuid_v7()" }, "a_timestamp": { "type": "string", "x-lix-default": "lix_timestamp()" } } }); let mut snapshot = JsonMap::new(); let context = snapshot.clone(); let changed = apply_schema_defaults_with_context( &mut snapshot, &schema, &context, &evaluator, CountingFunctions { next: std::sync::Arc::new(std::sync::atomic::AtomicI64::new(0)), }, "test_schema", "1", ) .expect("apply defaults"); assert!(changed); assert_eq!( snapshot.get("a_timestamp"), Some(&JsonValue::String("ts-0".to_string())) ); assert_eq!( snapshot.get("z_uuid"), Some(&JsonValue::String("uuid-1".to_string())) ); } #[derive(Clone)] struct FixedFunctions; impl CelFunctionProvider for FixedFunctions { fn call_uuid_v7(&self) -> String { "uuid-fixed".to_string() } fn call_timestamp(&self) -> String { "1970-01-01T00:00:00.000Z".to_string() } } fn fixed_functions() -> FixedFunctions { FixedFunctions } } ================================================ FILE: packages/engine/src/schema/annotations/mod.rs ================================================ pub(crate) mod defaults; ================================================ FILE: packages/engine/src/schema/builtin/lix_account.json ================================================ { "x-lix-key": "lix_account", "x-lix-primary-key": [ "/id" ], "type": "object", "properties": { "id": { "type": "string", "x-lix-default": "lix_uuid_v7()" }, "name": { "type": "string" } }, "required": [ "id", "name" ], "additionalProperties": false } ================================================ FILE: packages/engine/src/schema/builtin/lix_active_account.json ================================================ { "x-lix-key": "lix_active_account", "x-lix-primary-key": [ "/account_id" ], "x-lix-foreign-keys": [ { "properties": [ "/account_id" ], "references": { "schemaKey": "lix_account", "properties": [ "/id" ] } } ], "type": "object", "properties": { "account_id": { "type": "string" } }, "required": [ "account_id" ], "additionalProperties": false } ================================================ FILE: packages/engine/src/schema/builtin/lix_binary_blob_ref.json ================================================ { "x-lix-key": "lix_binary_blob_ref", "description": "Metadata pointer from a file version to its binary payload in internal CAS storage.", "x-lix-primary-key": [ "/id" ], "type": "object", "properties": { "id": { "type": "string", "description": "File/entity identifier (matches lix_file.id) for this binary reference row." }, "blob_hash": { "type": "string", "description": "BLAKE3 content hash used as the canonical CAS key for the binary payload." }, "size_bytes": { "type": "integer", "minimum": 0, "description": "Logical uncompressed file size in bytes for the referenced binary payload." } }, "required": [ "id", "blob_hash", "size_bytes" ], "additionalProperties": false } ================================================ FILE: packages/engine/src/schema/builtin/lix_change.json ================================================ { "x-lix-key": "lix_change", "description": "A change records one edit to a Lix entity, including what changed, when it changed, and which entity was affected.", "x-lix-primary-key": [ "/id" ], "type": "object", "properties": { "id": { "type": "string", "x-lix-default": "lix_uuid_v7()", "description": "Stable identifier for this change." 
}, "entity_id": { "type": "array", "description": "Canonical JSON primary-key tuple for the entity this change applies to, scoped by (`schema_key`, `file_id`). Values are ordered according to the target schema's `x-lix-primary-key`.", "items": { "type": "string" }, "minItems": 1 }, "schema_key": { "type": "string", "description": "Schema identifier of the entity (e.g. `lix_file_descriptor`, `lix_commit`, or a user-registered key)." }, "file_id": { "type": [ "string", "null" ], "description": "Filesystem-scoped file identifier when the change belongs to a file; NULL for engine-internal entities (commits, versions, settings)." }, "metadata": { "type": [ "object", "null" ], "description": "Optional user-provided JSON metadata attached to the change; NULL when nothing was supplied." }, "created_at": { "type": "string", "examples": [ "2026-05-08T17:42:31.123Z" ], "description": "ISO-8601 timestamp at which the change was recorded (set via `lix_timestamp()` at write time)." }, "snapshot_content": { "type": [ "object", "null" ], "description": "Entity JSON body at this change; NULL represents a tombstone (deletion)." } }, "required": [ "id", "entity_id", "schema_key", "file_id", "created_at" ], "additionalProperties": false } ================================================ FILE: packages/engine/src/schema/builtin/lix_change_author.json ================================================ { "x-lix-key": "lix_change_author", "x-lix-primary-key": [ "/change_id", "/account_id" ], "x-lix-foreign-keys": [ { "properties": [ "/change_id" ], "references": { "schemaKey": "lix_change", "properties": [ "/id" ] } }, { "properties": [ "/account_id" ], "references": { "schemaKey": "lix_account", "properties": [ "/id" ] } } ], "type": "object", "properties": { "change_id": { "type": "string" }, "account_id": { "type": "string" } }, "required": [ "change_id", "account_id" ], "additionalProperties": false } ================================================ FILE: packages/engine/src/schema/builtin/lix_commit.json ================================================ { "x-lix-key": "lix_commit", "description": "A commit is a stable point in project history. Versions point to commits. Use lix_commit_edge to inspect parent commits.", "examples": [ { "id": "commit_01jexample" } ], "x-lix-primary-key": [ "/id" ], "type": "object", "properties": { "id": { "type": "string", "x-lix-default": "lix_uuid_v7()", "description": "Stable identifier of this commit." } }, "required": [ "id" ], "additionalProperties": false } ================================================ FILE: packages/engine/src/schema/builtin/lix_commit_edge.json ================================================ { "x-lix-key": "lix_commit_edge", "description": "Direct parent relationship between two commits. Merge commits have one row per parent. 
The first parent is useful for showing mainline history or comparing a merge commit against the commit that was checked out before the merge.", "examples": [ { "parent_id": "commit-main", "child_id": "commit-merge", "parent_order": 0 }, { "parent_id": "commit-feature", "child_id": "commit-merge", "parent_order": 1 } ], "x-lix-primary-key": ["/child_id", "/parent_order"], "x-lix-unique": [["/parent_id", "/child_id"]], "x-lix-foreign-keys": [ { "properties": ["/parent_id"], "references": { "schemaKey": "lix_commit", "properties": ["/id"] } }, { "properties": ["/child_id"], "references": { "schemaKey": "lix_commit", "properties": ["/id"] } } ], "type": "object", "properties": { "parent_id": { "type": "string", "description": "Identifier of the parent commit." }, "child_id": { "type": "string", "description": "Identifier of the child commit." }, "parent_order": { "type": "integer", "minimum": 0, "examples": [0, 1], "description": "Zero-based position of this parent in the child commit's ordered parent list. The first parent has order 0; additional merge parents have order 1, 2, and so on." } }, "required": ["parent_id", "child_id", "parent_order"], "additionalProperties": false } ================================================ FILE: packages/engine/src/schema/builtin/lix_directory_descriptor.json ================================================ { "x-lix-key": "lix_directory_descriptor", "x-lix-primary-key": [ "/id" ], "x-lix-unique": [ [ "/parent_id", "/name" ] ], "x-lix-foreign-keys": [ { "properties": [ "/parent_id" ], "references": { "schemaKey": "lix_directory_descriptor", "properties": [ "/id" ] } } ], "type": "object", "properties": { "id": { "type": "string", "x-lix-default": "lix_uuid_v7()" }, "parent_id": { "type": [ "string", "null" ] }, "name": { "type": "string", "pattern": "^(?!\\.{1,2}$)[^/\\\\]+$" }, "hidden": { "type": "boolean", "x-lix-default": "false" } }, "required": [ "id", "parent_id", "name" ], "additionalProperties": false } ================================================ FILE: packages/engine/src/schema/builtin/lix_file_descriptor.json ================================================ { "x-lix-key": "lix_file_descriptor", "x-lix-primary-key": [ "/id" ], "x-lix-unique": [ [ "/directory_id", "/name" ] ], "x-lix-foreign-keys": [ { "properties": [ "/directory_id" ], "references": { "schemaKey": "lix_directory_descriptor", "properties": [ "/id" ] } } ], "type": "object", "properties": { "id": { "type": "string", "x-lix-default": "lix_uuid_v7()" }, "directory_id": { "type": [ "string", "null" ] }, "name": { "type": "string", "pattern": "^(?!\\.{1,2}$)[^/\\\\]+$" }, "hidden": { "type": "boolean", "x-lix-default": "false" } }, "required": [ "id", "directory_id", "name" ], "additionalProperties": false } ================================================ FILE: packages/engine/src/schema/builtin/lix_key_value.json ================================================ { "x-lix-key": "lix_key_value", "x-lix-primary-key": [ "/key" ], "type": "object", "properties": { "key": { "type": "string" }, "value": { "description": "Arbitrary JSON value. 
This field stays in the JSON domain even when different rows hold different JSON kinds.", "anyOf": [ { "type": "object" }, { "type": "array" }, { "type": "string" }, { "type": "number" }, { "type": "boolean" }, { "type": "null" } ] } }, "required": [ "key", "value" ], "additionalProperties": false } ================================================ FILE: packages/engine/src/schema/builtin/lix_label.json ================================================ { "x-lix-key": "lix_label", "description": "Catalog of labels that can be assigned to arbitrary live Lix rows through lix_label_assignment.", "x-lix-primary-key": [ "/id" ], "x-lix-unique": [ [ "/name" ] ], "type": "object", "properties": { "id": { "type": "string", "x-lix-default": "lix_uuid_v7()", "description": "Stable label identifier. Label assignments reference this value." }, "name": { "type": "string", "description": "Human-readable label name. Unique across labels." } }, "required": [ "id", "name" ], "additionalProperties": false } ================================================ FILE: packages/engine/src/schema/builtin/lix_label_assignment.json ================================================ { "x-lix-key": "lix_label_assignment", "description": "Mapping table that assigns a label to any live Lix row addressed by (target_entity_id, target_schema_key, target_file_id). The state foreign-key tuple is ordered as [0] target_entity_id, [1] target_schema_key, [2] target_file_id.", "x-lix-primary-key": [ "/id" ], "x-lix-unique": [ [ "/target_entity_id", "/target_schema_key", "/target_file_id", "/label_id" ] ], "x-lix-state-foreign-keys": [ [ "/target_entity_id", "/target_schema_key", "/target_file_id" ] ], "x-lix-foreign-keys": [ { "properties": [ "/label_id" ], "references": { "schemaKey": "lix_label", "properties": [ "/id" ] } } ], "type": "object", "properties": { "id": { "type": "string", "x-lix-default": "lix_uuid_v7()", "description": "Stable identifier for this label assignment row." }, "target_entity_id": { "type": "array", "description": "Target row entity_id. This is slot [0] in x-lix-state-foreign-keys and must be the canonical JSON array of string primary-key parts.", "items": { "type": "string" }, "minItems": 1 }, "target_schema_key": { "type": "string", "description": "Target row schema key. This is slot [1] in x-lix-state-foreign-keys." }, "target_file_id": { "type": [ "string", "null" ], "description": "Target row file scope. This is slot [2] in x-lix-state-foreign-keys; null targets global rows." }, "label_id": { "type": "string", "description": "Label assigned to the target row. References lix_label.id." } }, "required": [ "id", "target_entity_id", "target_schema_key", "target_file_id", "label_id" ], "additionalProperties": false } ================================================ FILE: packages/engine/src/schema/builtin/lix_registered_schema.json ================================================ { "x-lix-key": "lix_registered_schema", "x-lix-primary-key": [ "/value/x-lix-key" ], "type": "object", "properties": { "value": { "type": "object", "properties": { "x-lix-key": { "type": "string" } }, "required": [ "x-lix-key" ], "additionalProperties": true } }, "required": [ "value" ], "additionalProperties": false } ================================================ FILE: packages/engine/src/schema/builtin/lix_version_descriptor.json ================================================ { "x-lix-key": "lix_version_descriptor", "description": "User-facing version metadata (name and visibility) for a branch-like version. 
The stable identity of a version; the matching `lix_version_ref` carries the moving head pointer. The catalog's `lix_version` surface joins this descriptor with its ref to present a single user-visible version row.", "x-lix-primary-key": [ "/id" ], "x-lix-unique": [ [ "/name" ] ], "type": "object", "properties": { "id": { "type": "string", "x-lix-default": "lix_uuid_v7()", "description": "Stable version identifier (UUIDv7). Referenced by `lix_version_ref.id`." }, "name": { "type": "string", "description": "Human-readable version name (e.g. `main`, `feature-x`) shown in version listings and CLI output." }, "hidden": { "type": "boolean", "default": false, "description": "When true, the version is filtered from default listings (CLI, catalog views); operations by explicit id still succeed." } }, "required": [ "id", "name" ], "additionalProperties": false } ================================================ FILE: packages/engine/src/schema/builtin/lix_version_ref.json ================================================ { "x-lix-key": "lix_version_ref", "description": "Version head pointer. Records which commit a version should currently resolve to in the local runtime. Intentionally not part of canonical commit membership: refs may be reset client-side after sync without introducing content conflicts. Each `lix_version_descriptor.id` has exactly one `lix_version_ref` row.", "x-lix-primary-key": [ "/id" ], "x-lix-foreign-keys": [ { "properties": [ "/id" ], "references": { "schemaKey": "lix_version_descriptor", "properties": [ "/id" ] } }, { "properties": [ "/commit_id" ], "references": { "schemaKey": "lix_commit", "properties": [ "/id" ] } } ], "type": "object", "properties": { "id": { "type": "string", "x-lix-default": "lix_uuid_v7()", "description": "Version identifier whose head pointer is being stored; matches `lix_version_descriptor.id`." }, "commit_id": { "type": "string", "description": "Commit the version should currently resolve to in the local runtime (references `lix_commit.id`)." 
} }, "required": [ "id", "commit_id" ], "additionalProperties": false } ================================================ FILE: packages/engine/src/schema/builtin/mod.rs ================================================ use serde_json::Value as JsonValue; use std::sync::OnceLock; use crate::schema::lix_schema_definition; const LIX_REGISTERED_SCHEMA_KEY: &str = "lix_registered_schema"; const LIX_KEY_VALUE_SCHEMA_KEY: &str = "lix_key_value"; const LIX_ACCOUNT_SCHEMA_KEY: &str = "lix_account"; const LIX_ACTIVE_ACCOUNT_SCHEMA_KEY: &str = "lix_active_account"; const LIX_LABEL_SCHEMA_KEY: &str = "lix_label"; const LIX_LABEL_ASSIGNMENT_SCHEMA_KEY: &str = "lix_label_assignment"; const LIX_CHANGE_SCHEMA_KEY: &str = "lix_change"; const LIX_CHANGE_AUTHOR_SCHEMA_KEY: &str = "lix_change_author"; const LIX_COMMIT_SCHEMA_KEY: &str = "lix_commit"; const LIX_VERSION_DESCRIPTOR_SCHEMA_KEY: &str = "lix_version_descriptor"; const LIX_VERSION_REF_SCHEMA_KEY: &str = "lix_version_ref"; const LIX_COMMIT_EDGE_SCHEMA_KEY: &str = "lix_commit_edge"; const LIX_FILE_DESCRIPTOR_SCHEMA_KEY: &str = "lix_file_descriptor"; const LIX_DIRECTORY_DESCRIPTOR_SCHEMA_KEY: &str = "lix_directory_descriptor"; const LIX_BINARY_BLOB_REF_SCHEMA_KEY: &str = "lix_binary_blob_ref"; const LIX_REGISTERED_SCHEMA_JSON: &str = include_str!("lix_registered_schema.json"); const LIX_KEY_VALUE_SCHEMA_JSON: &str = include_str!("lix_key_value.json"); const LIX_ACCOUNT_SCHEMA_JSON: &str = include_str!("lix_account.json"); const LIX_ACTIVE_ACCOUNT_SCHEMA_JSON: &str = include_str!("lix_active_account.json"); const LIX_LABEL_SCHEMA_JSON: &str = include_str!("lix_label.json"); const LIX_LABEL_ASSIGNMENT_SCHEMA_JSON: &str = include_str!("lix_label_assignment.json"); const LIX_CHANGE_SCHEMA_JSON: &str = include_str!("lix_change.json"); const LIX_CHANGE_AUTHOR_SCHEMA_JSON: &str = include_str!("lix_change_author.json"); const LIX_COMMIT_SCHEMA_JSON: &str = include_str!("lix_commit.json"); const LIX_VERSION_DESCRIPTOR_SCHEMA_JSON: &str = include_str!("lix_version_descriptor.json"); const LIX_VERSION_REF_SCHEMA_JSON: &str = include_str!("lix_version_ref.json"); const LIX_COMMIT_EDGE_SCHEMA_JSON: &str = include_str!("lix_commit_edge.json"); const LIX_FILE_DESCRIPTOR_SCHEMA_JSON: &str = include_str!("lix_file_descriptor.json"); const LIX_DIRECTORY_DESCRIPTOR_SCHEMA_JSON: &str = include_str!("lix_directory_descriptor.json"); const LIX_BINARY_BLOB_REF_SCHEMA_JSON: &str = include_str!("lix_binary_blob_ref.json"); static LIX_REGISTERED_SCHEMA: OnceLock = OnceLock::new(); static LIX_KEY_VALUE_SCHEMA: OnceLock = OnceLock::new(); static LIX_ACCOUNT_SCHEMA: OnceLock = OnceLock::new(); static LIX_ACTIVE_ACCOUNT_SCHEMA: OnceLock = OnceLock::new(); static LIX_LABEL_SCHEMA: OnceLock = OnceLock::new(); static LIX_LABEL_ASSIGNMENT_SCHEMA: OnceLock = OnceLock::new(); static LIX_CHANGE_SCHEMA: OnceLock = OnceLock::new(); static LIX_CHANGE_AUTHOR_SCHEMA: OnceLock = OnceLock::new(); static LIX_COMMIT_SCHEMA: OnceLock = OnceLock::new(); static LIX_VERSION_DESCRIPTOR_SCHEMA: OnceLock = OnceLock::new(); static LIX_VERSION_REF_SCHEMA: OnceLock = OnceLock::new(); static LIX_COMMIT_EDGE_SCHEMA: OnceLock = OnceLock::new(); static LIX_FILE_DESCRIPTOR_SCHEMA: OnceLock = OnceLock::new(); static LIX_DIRECTORY_DESCRIPTOR_SCHEMA: OnceLock = OnceLock::new(); static LIX_BINARY_BLOB_REF_SCHEMA: OnceLock = OnceLock::new(); const BUILTIN_SCHEMA_KEYS: &[&str] = &[ LIX_REGISTERED_SCHEMA_KEY, LIX_KEY_VALUE_SCHEMA_KEY, LIX_ACCOUNT_SCHEMA_KEY, LIX_ACTIVE_ACCOUNT_SCHEMA_KEY, LIX_LABEL_SCHEMA_KEY, 
LIX_LABEL_ASSIGNMENT_SCHEMA_KEY, LIX_CHANGE_SCHEMA_KEY, LIX_CHANGE_AUTHOR_SCHEMA_KEY, LIX_COMMIT_SCHEMA_KEY, LIX_VERSION_DESCRIPTOR_SCHEMA_KEY, LIX_VERSION_REF_SCHEMA_KEY, LIX_COMMIT_EDGE_SCHEMA_KEY, LIX_FILE_DESCRIPTOR_SCHEMA_KEY, LIX_DIRECTORY_DESCRIPTOR_SCHEMA_KEY, LIX_BINARY_BLOB_REF_SCHEMA_KEY, ]; pub(super) fn is_seed_schema_key(schema_key: &str) -> bool { BUILTIN_SCHEMA_KEYS.contains(&schema_key) } pub(super) fn seed_schema_definitions() -> Vec<&'static JsonValue> { BUILTIN_SCHEMA_KEYS .iter() .map(|schema_key| { seed_schema_definition(schema_key) .unwrap_or_else(|| panic!("missing seed schema definition for '{schema_key}'")) }) .collect() } pub(super) fn seed_schema_definition(schema_key: &str) -> Option<&'static JsonValue> { match schema_key { LIX_REGISTERED_SCHEMA_KEY => Some( LIX_REGISTERED_SCHEMA.get_or_init(|| parse_registered_schema_with_inlined_definition()), ), LIX_KEY_VALUE_SCHEMA_KEY => { Some(LIX_KEY_VALUE_SCHEMA.get_or_init(|| { parse_builtin_schema("lix_key_value.json", LIX_KEY_VALUE_SCHEMA_JSON) })) } LIX_ACCOUNT_SCHEMA_KEY => Some( LIX_ACCOUNT_SCHEMA .get_or_init(|| parse_builtin_schema("lix_account.json", LIX_ACCOUNT_SCHEMA_JSON)), ), LIX_ACTIVE_ACCOUNT_SCHEMA_KEY => Some(LIX_ACTIVE_ACCOUNT_SCHEMA.get_or_init(|| { parse_builtin_schema("lix_active_account.json", LIX_ACTIVE_ACCOUNT_SCHEMA_JSON) })), LIX_LABEL_SCHEMA_KEY => Some( LIX_LABEL_SCHEMA .get_or_init(|| parse_builtin_schema("lix_label.json", LIX_LABEL_SCHEMA_JSON)), ), LIX_LABEL_ASSIGNMENT_SCHEMA_KEY => Some(LIX_LABEL_ASSIGNMENT_SCHEMA.get_or_init(|| { parse_builtin_schema( "lix_label_assignment.json", LIX_LABEL_ASSIGNMENT_SCHEMA_JSON, ) })), LIX_CHANGE_SCHEMA_KEY => Some( LIX_CHANGE_SCHEMA .get_or_init(|| parse_builtin_schema("lix_change.json", LIX_CHANGE_SCHEMA_JSON)), ), LIX_CHANGE_AUTHOR_SCHEMA_KEY => Some(LIX_CHANGE_AUTHOR_SCHEMA.get_or_init(|| { parse_builtin_schema("lix_change_author.json", LIX_CHANGE_AUTHOR_SCHEMA_JSON) })), LIX_COMMIT_SCHEMA_KEY => Some( LIX_COMMIT_SCHEMA .get_or_init(|| parse_builtin_schema("lix_commit.json", LIX_COMMIT_SCHEMA_JSON)), ), LIX_VERSION_DESCRIPTOR_SCHEMA_KEY => { Some(LIX_VERSION_DESCRIPTOR_SCHEMA.get_or_init(|| { parse_builtin_schema( "lix_version_descriptor.json", LIX_VERSION_DESCRIPTOR_SCHEMA_JSON, ) })) } LIX_VERSION_REF_SCHEMA_KEY => Some(LIX_VERSION_REF_SCHEMA.get_or_init(|| { parse_builtin_schema("lix_version_ref.json", LIX_VERSION_REF_SCHEMA_JSON) })), LIX_COMMIT_EDGE_SCHEMA_KEY => Some(LIX_COMMIT_EDGE_SCHEMA.get_or_init(|| { parse_builtin_schema("lix_commit_edge.json", LIX_COMMIT_EDGE_SCHEMA_JSON) })), LIX_FILE_DESCRIPTOR_SCHEMA_KEY => Some(LIX_FILE_DESCRIPTOR_SCHEMA.get_or_init(|| { parse_builtin_schema("lix_file_descriptor.json", LIX_FILE_DESCRIPTOR_SCHEMA_JSON) })), LIX_DIRECTORY_DESCRIPTOR_SCHEMA_KEY => { Some(LIX_DIRECTORY_DESCRIPTOR_SCHEMA.get_or_init(|| { parse_builtin_schema( "lix_directory_descriptor.json", LIX_DIRECTORY_DESCRIPTOR_SCHEMA_JSON, ) })) } LIX_BINARY_BLOB_REF_SCHEMA_KEY => Some(LIX_BINARY_BLOB_REF_SCHEMA.get_or_init(|| { parse_builtin_schema("lix_binary_blob_ref.json", LIX_BINARY_BLOB_REF_SCHEMA_JSON) })), _ => None, } } #[allow(dead_code)] pub(crate) fn builtin_schema_json(schema_key: &str) -> Option<&'static str> { match schema_key { LIX_REGISTERED_SCHEMA_KEY => Some(LIX_REGISTERED_SCHEMA_JSON), LIX_KEY_VALUE_SCHEMA_KEY => Some(LIX_KEY_VALUE_SCHEMA_JSON), LIX_ACCOUNT_SCHEMA_KEY => Some(LIX_ACCOUNT_SCHEMA_JSON), LIX_ACTIVE_ACCOUNT_SCHEMA_KEY => Some(LIX_ACTIVE_ACCOUNT_SCHEMA_JSON), LIX_LABEL_SCHEMA_KEY => Some(LIX_LABEL_SCHEMA_JSON), 
LIX_LABEL_ASSIGNMENT_SCHEMA_KEY => Some(LIX_LABEL_ASSIGNMENT_SCHEMA_JSON), LIX_CHANGE_SCHEMA_KEY => Some(LIX_CHANGE_SCHEMA_JSON), LIX_CHANGE_AUTHOR_SCHEMA_KEY => Some(LIX_CHANGE_AUTHOR_SCHEMA_JSON), LIX_COMMIT_SCHEMA_KEY => Some(LIX_COMMIT_SCHEMA_JSON), LIX_VERSION_DESCRIPTOR_SCHEMA_KEY => Some(LIX_VERSION_DESCRIPTOR_SCHEMA_JSON), LIX_VERSION_REF_SCHEMA_KEY => Some(LIX_VERSION_REF_SCHEMA_JSON), LIX_COMMIT_EDGE_SCHEMA_KEY => Some(LIX_COMMIT_EDGE_SCHEMA_JSON), LIX_FILE_DESCRIPTOR_SCHEMA_KEY => Some(LIX_FILE_DESCRIPTOR_SCHEMA_JSON), LIX_DIRECTORY_DESCRIPTOR_SCHEMA_KEY => Some(LIX_DIRECTORY_DESCRIPTOR_SCHEMA_JSON), LIX_BINARY_BLOB_REF_SCHEMA_KEY => Some(LIX_BINARY_BLOB_REF_SCHEMA_JSON), _ => None, } } fn parse_builtin_schema(file_name: &str, raw_json: &str) -> JsonValue { serde_json::from_str(raw_json).unwrap_or_else(|error| { panic!("builtin schema file '{file_name}' must contain valid JSON: {error}") }) } fn parse_registered_schema_with_inlined_definition() -> JsonValue { let mut schema = parse_builtin_schema("lix_registered_schema.json", LIX_REGISTERED_SCHEMA_JSON); let value_schema = schema .pointer_mut("/properties/value") .expect("lix_registered_schema.json must define /properties/value"); let value_schema_object = value_schema .as_object_mut() .expect("lix_registered_schema.json /properties/value must be an object"); value_schema_object.insert( "allOf".to_string(), JsonValue::Array(vec![lix_schema_definition().clone()]), ); schema } #[cfg(test)] mod tests { use super::{seed_schema_definition, BUILTIN_SCHEMA_KEYS}; #[test] fn builtin_schemas_load_without_extra_override_metadata() { for schema_key in BUILTIN_SCHEMA_KEYS { seed_schema_definition(schema_key).expect("schema should exist"); } } #[test] fn registered_schema_value_inlines_lix_schema_definition() { let schema = seed_schema_definition("lix_registered_schema").expect("schema should exist"); let all_of = schema .pointer("/properties/value/allOf") .and_then(|value| value.as_array()) .expect("registered schema value must define allOf array"); assert_eq!(all_of.len(), 1); assert_eq!(all_of[0], *crate::schema::lix_schema_definition()); } } ================================================ FILE: packages/engine/src/schema/compatibility.rs ================================================ use std::collections::{BTreeMap, BTreeSet}; use serde_json::Value as JsonValue; use crate::common::top_level_property_name; use crate::entity_identity::canonical_json_text; use crate::LixError; const DOC_ONLY_SCHEMA_FIELDS: &[&str] = &["$comment", "deprecated", "description", "title"]; const CONSTRAINT_FIELDS: &[&str] = &[ "x-lix-primary-key", "x-lix-unique", "x-lix-foreign-keys", "x-lix-state-foreign-keys", ]; /// Validates that `next` is a compatible amendment of `previous`. /// /// The 0.6 schema model treats `x-lix-key` as the durable relation identity. /// Same-key amendments may widen accepted data by adding optional top-level /// properties, but they must not alter identity, constraints, requiredness, or /// existing field semantics. Nested object schemas are deliberately frozen for /// 0.6; recursive schema evolution is a later, explicit feature. /// /// Primary-key column order is semantic because it defines composite /// `entity_id` tuple order, so primary keys are never normalized. Relational /// constraints are frozen even when a particular addition could be /// retroactively safe, such as a new FK on a new optional property. That is a /// deliberate MVP rule we may relax later. 
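///
/// A minimal sketch (hypothetical `library_book` schema, built with
/// `serde_json::json!`; not an exhaustive compatibility matrix):
///
/// ```ignore
/// let previous = json!({
///     "x-lix-key": "library_book",
///     "type": "object",
///     "x-lix-primary-key": ["/id"],
///     "properties": { "id": { "type": "string" } },
///     "required": ["id"],
///     "additionalProperties": false
/// });
///
/// // Widening with a new optional, unconstrained property is accepted.
/// let mut next = previous.clone();
/// next["properties"]["subtitle"] = json!({ "type": "string" });
/// assert!(validate_schema_amendment(&previous, &next).is_ok());
///
/// // Touching the primary key is rejected.
/// let mut incompatible = previous.clone();
/// incompatible["x-lix-primary-key"] = json!(["/subtitle"]);
/// assert!(validate_schema_amendment(&previous, &incompatible).is_err());
/// ```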
pub(crate) fn validate_schema_amendment( previous: &JsonValue, next: &JsonValue, ) -> Result<(), LixError> { let previous_key = schema_key(previous, "previous")?; let next_key = schema_key(next, "next")?; if previous_key != next_key { return schema_amendment_error(format!( "schema amendment must keep x-lix-key stable; previous '{previous_key}', next '{next_key}'" )); } require_additional_properties_false(previous, "previous", previous_key)?; require_additional_properties_false(next, "next", next_key)?; validate_constraints_unchanged(previous, next, previous_key)?; let changed_top_level_semantic_keys = changed_top_level_semantic_keys(previous, next); if !changed_top_level_semantic_keys.is_empty() { return schema_amendment_error(format!( "schema '{previous_key}' cannot change top-level schema semantics: {}", changed_top_level_semantic_keys.join(", ") )); } let previous_required = string_set_field(previous, "required", "previous", previous_key)?; let next_required = string_set_field(next, "required", "next", next_key)?; if previous_required != next_required { return schema_amendment_error(format!( "schema '{previous_key}' cannot amend required properties" )); } let previous_properties = properties_field(previous, "previous", previous_key)?; let next_properties = properties_field(next, "next", next_key)?; for (property_name, previous_property_schema) in &previous_properties { let Some(next_property_schema) = next_properties.get(property_name) else { return schema_amendment_error(format!( "schema '{previous_key}' cannot remove property '/{property_name}'" )); }; if strip_doc_only_fields(previous_property_schema) != strip_doc_only_fields(next_property_schema) { return schema_amendment_error(format!( "schema '{previous_key}' cannot change existing property '/{property_name}' except for doc-only fields" )); } } let constrained_property_names = constrained_top_level_property_names(next)?; for property_name in next_properties.keys() { if previous_properties.contains_key(property_name) { continue; } if next_required.contains(property_name) { return schema_amendment_error(format!( "schema '{previous_key}' cannot add required property '/{property_name}'" )); } if constrained_property_names.contains(property_name) { return schema_amendment_error(format!( "schema '{previous_key}' cannot add property '/{property_name}' as part of primary, unique, or foreign-key constraints" )); } } Ok(()) } fn schema_key<'a>(schema: &'a JsonValue, side: &str) -> Result<&'a str, LixError> { schema .get("x-lix-key") .and_then(JsonValue::as_str) .ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!("{side} schema must include string x-lix-key"), ) }) } fn require_additional_properties_false( schema: &JsonValue, side: &str, schema_key: &str, ) -> Result<(), LixError> { if schema.get("additionalProperties") == Some(&JsonValue::Bool(false)) { return Ok(()); } schema_amendment_error(format!( "{side} schema '{schema_key}' must set additionalProperties to false" )) } fn validate_constraints_unchanged( previous: &JsonValue, next: &JsonValue, schema_key: &str, ) -> Result<(), LixError> { // Primary-key column order is semantic because it defines composite // entity_id tuple order, so it is compared directly and never normalized. 
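// By contrast, the remaining constraint lists below are compared
// order-insensitively: normalized_constraint_list sorts each list's entries
// by their canonical JSON text before comparing, so purely cosmetic
// reordering of x-lix-unique / x-lix-foreign-keys / x-lix-state-foreign-keys
// entries is tolerated, while the contents of each entry (including the
// pointer order inside an entry) remain semantic.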
if previous.get("x-lix-primary-key") != next.get("x-lix-primary-key") { return schema_amendment_error(format!( "schema '{schema_key}' cannot amend constraint field 'x-lix-primary-key'" )); } for field in [ "x-lix-unique", "x-lix-foreign-keys", "x-lix-state-foreign-keys", ] { if normalized_constraint_list(previous.get(field), field)? != normalized_constraint_list(next.get(field), field)? { return schema_amendment_error(format!( "schema '{schema_key}' cannot amend constraint field '{field}'" )); } } Ok(()) } fn normalized_constraint_list( value: Option<&JsonValue>, field: &str, ) -> Result, LixError> { let Some(value) = value else { return Ok(Vec::new()); }; let Some(values) = value.as_array() else { return schema_amendment_error(format!( "schema constraint field '{field}' must be an array" )); }; let mut values = values.clone(); values.sort_by(|left, right| { let left = canonical_json_text(left) .expect("canonical json from in-memory serde_json::Value cannot fail"); let right = canonical_json_text(right) .expect("canonical json from in-memory serde_json::Value cannot fail"); left.cmp(&right) }); Ok(values) } fn properties_field( schema: &JsonValue, side: &str, schema_key: &str, ) -> Result, LixError> { match schema.get("properties") { Some(JsonValue::Object(object)) => Ok(object .iter() .map(|(key, value)| (key.clone(), value.clone())) .collect()), Some(_) => schema_amendment_error(format!( "{side} schema '{schema_key}' field 'properties' must be an object" )), None => Ok(BTreeMap::new()), } } fn string_set_field( schema: &JsonValue, field: &str, side: &str, schema_key: &str, ) -> Result, LixError> { let Some(value) = schema.get(field) else { return Ok(BTreeSet::new()); }; let Some(values) = value.as_array() else { return schema_amendment_error(format!( "{side} schema '{schema_key}' field '{field}' must be an array of strings" )); }; values .iter() .map(|value| { value.as_str().map(str::to_string).ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "{side} schema '{schema_key}' field '{field}' must be an array of strings" ), ) }) }) .collect() } fn strip_doc_only_fields(value: &JsonValue) -> JsonValue { match value { JsonValue::Object(object) => JsonValue::Object( object .iter() .filter(|(key, _)| !DOC_ONLY_SCHEMA_FIELDS.contains(&key.as_str())) .map(|(key, value)| (key.clone(), strip_doc_only_fields(value))) .collect(), ), JsonValue::Array(values) => { JsonValue::Array(values.iter().map(strip_doc_only_fields).collect()) } _ => value.clone(), } } fn top_level_semantic_fields(schema: &JsonValue) -> BTreeMap { let JsonValue::Object(object) = strip_doc_only_fields(schema) else { return BTreeMap::new(); }; object .into_iter() .filter(|(key, _)| { key != "properties" && key != "required" && !CONSTRAINT_FIELDS.contains(&key.as_str()) }) .collect() } fn changed_top_level_semantic_keys(previous: &JsonValue, next: &JsonValue) -> Vec { let previous = top_level_semantic_fields(previous); let next = top_level_semantic_fields(next); previous .keys() .chain(next.keys()) .collect::>() .into_iter() .filter(|key| previous.get(*key) != next.get(*key)) .cloned() .collect() } fn constrained_top_level_property_names(schema: &JsonValue) -> Result, LixError> { let mut names = BTreeSet::new(); collect_top_level_pointer_names(schema.get("x-lix-primary-key"), &mut names)?; if let Some(unique_groups) = schema.get("x-lix-unique").and_then(JsonValue::as_array) { for group in unique_groups { collect_top_level_pointer_names(Some(group), &mut names)?; } } if let Some(foreign_keys) = schema 
.get("x-lix-foreign-keys") .and_then(JsonValue::as_array) { for foreign_key in foreign_keys { collect_top_level_pointer_names(foreign_key.get("properties"), &mut names)?; } } if let Some(foreign_keys) = schema .get("x-lix-state-foreign-keys") .and_then(JsonValue::as_array) { for foreign_key in foreign_keys { collect_top_level_pointer_names(Some(foreign_key), &mut names)?; } } Ok(names) } fn collect_top_level_pointer_names( value: Option<&JsonValue>, names: &mut BTreeSet, ) -> Result<(), LixError> { let Some(value) = value else { return Ok(()); }; let Some(pointers) = value.as_array() else { return schema_amendment_error( "schema constraint fields must contain arrays of JSON Pointers".to_string(), ); }; for pointer in pointers { let Some(pointer) = pointer.as_str() else { return schema_amendment_error( "schema constraint fields must contain JSON Pointer strings".to_string(), ); }; if let Some(name) = top_level_property_name(pointer)? { names.insert(name); } } Ok(()) } fn schema_amendment_error(message: String) -> Result { Err(LixError::new(LixError::CODE_SCHEMA_DEFINITION, message)) } #[cfg(test)] mod tests { use serde_json::{json, Value as JsonValue}; use super::validate_schema_amendment; fn base_schema() -> JsonValue { json!({ "x-lix-key": "library_book", "type": "object", "x-lix-primary-key": ["/id"], "x-lix-unique": [["/isbn"]], "x-lix-foreign-keys": [ { "properties": ["/author_id"], "references": { "schemaKey": "library_author", "properties": ["/id"] } } ], "x-lix-state-foreign-keys": [ ["/target_entity_id", "/target_schema_key", "/target_file_id"] ], "properties": { "id": { "type": "string", "description": "Stable id" }, "isbn": { "type": "string" }, "title": { "type": "string", "title": "Title" }, "author_id": { "type": "string" }, "target_entity_id": { "type": "array", "items": { "type": "string" } }, "target_schema_key": { "type": "string" }, "target_file_id": { "type": ["string", "null"] } }, "required": [ "id", "isbn", "title", "author_id", "target_entity_id", "target_schema_key", "target_file_id" ], "additionalProperties": false }) } #[test] fn allows_doc_only_changes_on_existing_properties() { let previous = base_schema(); let mut next = base_schema(); next["description"] = json!("A library book relation"); next["title"] = json!("Library Book"); next["$comment"] = json!("Top-level schema docs"); next["deprecated"] = json!(false); next["properties"]["title"]["description"] = json!("Human readable title"); next["properties"]["title"]["title"] = json!("Book title"); next["properties"]["title"]["$comment"] = json!("Shown in schema docs"); next["properties"]["title"]["deprecated"] = json!(true); validate_schema_amendment(&previous, &next).expect("doc-only changes are compatible"); } #[test] fn allows_adding_optional_property() { let previous = base_schema(); let mut next = base_schema(); next["properties"]["subtitle"] = json!({ "type": "string", "description": "Optional subtitle" }); validate_schema_amendment(&previous, &next) .expect("optional property addition is compatible"); } #[test] fn allows_empty_properties_to_grow_with_optional_properties() { let previous = json!({ "x-lix-key": "library_empty", "type": "object", "properties": {}, "additionalProperties": false }); let next = json!({ "x-lix-key": "library_empty", "type": "object", "properties": { "title": { "type": "string" } }, "additionalProperties": false }); validate_schema_amendment(&previous, &next) .expect("optional property addition from an empty schema is compatible"); } #[test] fn 
accepts_cosmetic_constraint_list_reordering() { let mut previous = base_schema(); previous["x-lix-unique"] = json!([["/isbn"], ["/title"]]); previous["x-lix-foreign-keys"] = json!([ { "properties": ["/author_id"], "references": { "schemaKey": "library_author", "properties": ["/id"] } }, { "properties": ["/isbn"], "references": { "schemaKey": "library_isbn", "properties": ["/id"] } } ]); previous["x-lix-state-foreign-keys"] = json!([ ["/target_entity_id", "/target_schema_key", "/target_file_id"], ["/other_entity_id", "/other_schema_key", "/other_file_id"] ]); let mut next = previous.clone(); next["x-lix-unique"] = json!([["/title"], ["/isbn"]]); next["x-lix-foreign-keys"] = json!([ { "properties": ["/isbn"], "references": { "schemaKey": "library_isbn", "properties": ["/id"] } }, { "properties": ["/author_id"], "references": { "schemaKey": "library_author", "properties": ["/id"] } } ]); next["x-lix-state-foreign-keys"] = json!([ ["/other_entity_id", "/other_schema_key", "/other_file_id"], ["/target_entity_id", "/target_schema_key", "/target_file_id"] ]); validate_schema_amendment(&previous, &next) .expect("cosmetic constraint list ordering should not matter"); } #[test] fn rejects_required_set_shrink() { let previous = base_schema(); let mut next = base_schema(); next["required"] = json!([ "id", "isbn", "author_id", "target_entity_id", "target_schema_key", "target_file_id" ]); let error = validate_schema_amendment(&previous, &next) .expect_err("required properties must be frozen"); assert!( error.message.contains("required properties"), "unexpected error: {error:?}" ); } #[test] fn rejects_schema_key_change() { let previous = base_schema(); let mut next = base_schema(); next["x-lix-key"] = json!("library_periodical"); let error = validate_schema_amendment(&previous, &next).expect_err("schema key must be stable"); assert!( error.message.contains("x-lix-key"), "unexpected error: {error:?}" ); } #[test] fn rejects_additional_properties_change() { let previous = base_schema(); let mut next = base_schema(); next["additionalProperties"] = json!(true); let error = validate_schema_amendment(&previous, &next) .expect_err("additionalProperties must remain false"); assert!( error.message.contains("additionalProperties"), "unexpected error: {error:?}" ); } #[test] fn rejects_primary_key_change() { let previous = base_schema(); let mut next = base_schema(); next["x-lix-primary-key"] = json!(["/isbn"]); let error = validate_schema_amendment(&previous, &next) .expect_err("primary-key changes are incompatible"); assert!( error.message.contains("x-lix-primary-key"), "unexpected error: {error:?}" ); } #[test] fn rejects_primary_key_reordering() { let mut previous = base_schema(); previous["x-lix-primary-key"] = json!(["/id", "/isbn"]); let mut next = previous.clone(); next["x-lix-primary-key"] = json!(["/isbn", "/id"]); let error = validate_schema_amendment(&previous, &next) .expect_err("primary-key column order is semantic"); assert!( error.message.contains("x-lix-primary-key"), "unexpected error: {error:?}" ); } #[test] fn rejects_unique_constraint_change() { let previous = base_schema(); let mut next = base_schema(); next["x-lix-unique"] = json!([["/title"]]); let error = validate_schema_amendment(&previous, &next) .expect_err("unique changes are incompatible"); assert!( error.message.contains("x-lix-unique"), "unexpected error: {error:?}" ); } #[test] fn rejects_foreign_key_change() { let previous = base_schema(); let mut next = base_schema(); next["x-lix-foreign-keys"][0]["references"]["schemaKey"] = 
json!("library_person"); let error = validate_schema_amendment(&previous, &next) .expect_err("foreign-key changes are incompatible"); assert!( error.message.contains("x-lix-foreign-keys"), "unexpected error: {error:?}" ); } #[test] fn rejects_inner_foreign_key_pointer_reordering() { let mut previous = base_schema(); previous["x-lix-foreign-keys"] = json!([ { "properties": ["/author_id", "/isbn"], "references": { "schemaKey": "library_author", "properties": ["/id", "/isbn"] } } ]); let mut next = previous.clone(); next["x-lix-foreign-keys"] = json!([ { "properties": ["/isbn", "/author_id"], "references": { "schemaKey": "library_author", "properties": ["/isbn", "/id"] } } ]); let error = validate_schema_amendment(&previous, &next) .expect_err("FK tuple order is semantic and must remain frozen"); assert!( error.message.contains("x-lix-foreign-keys"), "unexpected error: {error:?}" ); } #[test] fn rejects_state_foreign_key_change() { let previous = base_schema(); let mut next = base_schema(); next["x-lix-state-foreign-keys"] = json!([]); let error = validate_schema_amendment(&previous, &next) .expect_err("state foreign-key changes are incompatible"); assert!( error.message.contains("x-lix-state-foreign-keys"), "unexpected error: {error:?}" ); } #[test] fn rejects_existing_property_type_change() { let previous = base_schema(); let mut next = base_schema(); next["properties"]["title"]["type"] = json!("number"); let error = validate_schema_amendment(&previous, &next) .expect_err("existing property semantics must not change"); assert!( error.message.contains("/title"), "unexpected error: {error:?}" ); } #[test] fn rejects_nested_object_property_addition() { let mut previous = base_schema(); previous["properties"]["metadata"] = json!({ "type": "object", "properties": { "source": { "type": "string" } }, "additionalProperties": false }); let mut next = previous.clone(); next["properties"]["metadata"]["properties"]["page"] = json!({ "type": "number" }); let error = validate_schema_amendment(&previous, &next) .expect_err("nested schema amendments are frozen for MVP"); assert!( error.message.contains("/metadata"), "unexpected error: {error:?}" ); } #[test] fn rejects_top_level_type_change() { let previous = base_schema(); let mut next = base_schema(); next["type"] = json!("array"); let error = validate_schema_amendment(&previous, &next) .expect_err("top-level schema semantics must not change"); assert!( error.message.contains("top-level schema semantics"), "unexpected error: {error:?}" ); } #[test] fn rejects_top_level_examples_change_and_names_field() { let previous = base_schema(); let mut next = base_schema(); next["examples"] = json!([{ "title": "Example" }]); let error = validate_schema_amendment(&previous, &next) .expect_err("examples are not an amendment annotation in the MVP"); assert!( error.message.contains("examples"), "unexpected error: {error:?}" ); } #[test] fn rejects_existing_property_default_change() { let mut previous = base_schema(); let mut next = base_schema(); previous["properties"]["title"]["default"] = json!("Untitled"); next["properties"]["title"]["default"] = json!("Draft"); let error = validate_schema_amendment(&previous, &next) .expect_err("existing defaults must not change"); assert!( error.message.contains("/title"), "unexpected error: {error:?}" ); } #[test] fn rejects_removed_property() { let previous = base_schema(); let mut next = base_schema(); next["properties"].as_object_mut().unwrap().remove("title"); let error = validate_schema_amendment(&previous, &next) 
.expect_err("properties must not be removed"); assert!( error.message.contains("remove property '/title'"), "unexpected error: {error:?}" ); } #[test] fn rejects_added_required_property() { let previous = base_schema(); let mut next = base_schema(); next["properties"]["subtitle"] = json!({ "type": "string" }); next["required"] .as_array_mut() .unwrap() .push(json!("subtitle")); let error = validate_schema_amendment(&previous, &next) .expect_err("new properties must be optional"); assert!( error.message.contains("required"), "unexpected error: {error:?}" ); } #[test] fn rejects_added_property_that_is_part_of_existing_constraints() { let mut previous = base_schema(); previous["x-lix-unique"] = json!([["/subtitle"]]); let mut next = previous.clone(); next["properties"]["subtitle"] = json!({ "type": "string" }); let error = validate_schema_amendment(&previous, &next) .expect_err("new properties must not be constraint participants"); assert!( error .message .contains("primary, unique, or foreign-key constraints"), "unexpected error: {error:?}" ); } #[test] fn rejects_required_growth_for_existing_property() { let mut previous = base_schema(); previous["required"] .as_array_mut() .unwrap() .retain(|value| value != "title"); let next = base_schema(); let error = validate_schema_amendment(&previous, &next).expect_err("required set must not grow"); assert!( error.message.contains("cannot amend required properties"), "unexpected error: {error:?}" ); } } ================================================ FILE: packages/engine/src/schema/definition.json ================================================ { "$schema": "https://json-schema.org/draft/2020-12/schema", "title": "Lix Schema Definition", "description": "A Lix schema is a JSON Schema draft 2020-12 document augmented with `x-lix-*` extensions that identify and constrain an entity type. Every schema must declare `x-lix-key` (snake_case identifier, preferably prefixed with a plugin or domain namespace such as `library_book` to avoid collisions) and `additionalProperties: false`; add `x-lix-primary-key` (array of JSON Pointers to required string properties) to make the schema writable. Lix will auto-materialize a public virtual table named after `x-lix-key` with an `INSERT/UPDATE/DELETE` surface. See the `examples` field for a minimal working schema.", "examples": [ { "x-lix-key": "library_book", "type": "object", "x-lix-primary-key": [ "/id" ], "properties": { "id": { "type": "string", "x-lix-default": "lix_uuid_v7()" }, "title": { "type": "string" }, "author": { "type": "string" } }, "required": [ "id", "title" ], "additionalProperties": false } ], "allOf": [ { "$ref": "https://json-schema.org/draft/2020-12/schema" }, { "type": "object", "properties": { "x-lix-unique": { "type": "array", "description": "Array of composite unique constraints. Each inner array is a JSON Pointer (RFC 6901) per participating property, e.g. `[[\"/email\"], [\"/tenant_id\", \"/handle\"]]`.", "items": { "type": "array", "minItems": 1, "uniqueItems": true, "items": { "type": "string", "format": "json-pointer", "description": "JSON Pointer (RFC 6901) to the property, e.g. `/id` or `/nested/field`. Note the leading slash." } } }, "additionalProperties": { "type": "boolean", "const": false, "description": "Objects describing Lix schemas must not allow arbitrary additional properties; set this explicitly to false." 
}, "x-lix-primary-key": { "type": "array", "minItems": 1, "uniqueItems": true, "description": "Primary-key fields as JSON Pointers (RFC 6901) into required string-valued entity properties, e.g. `[\"/id\"]` for a single-column key or `[\"/tenant_id\", \"/handle\"]` for a composite key. Note the leading slash; `\"id\"` without a slash is not a valid pointer.", "items": { "type": "string", "format": "json-pointer", "description": "JSON Pointer (RFC 6901) to a property that participates in the primary key, e.g. `/id` or `/nested/field`." } }, "x-lix-foreign-keys": { "type": "array", "items": { "type": "object", "required": [ "properties", "references" ], "additionalProperties": false, "properties": { "properties": { "type": "array", "minItems": 1, "items": { "type": "string", "format": "json-pointer", "description": "JSON Pointer (RFC 6901) to the local field, e.g. `/author_id` or `/nested/field`." }, "uniqueItems": true, "description": "Local-side participants in the FK, as JSON Pointers (RFC 6901), e.g. `[\"/author_id\"]` or `[\"/tenant_id\", \"/account_id\"]`." }, "references": { "type": "object", "required": [ "schemaKey", "properties" ], "additionalProperties": false, "properties": { "schemaKey": { "type": "string", "description": "The x-lix-key of the referenced schema" }, "properties": { "type": "array", "minItems": 1, "items": { "type": "string", "format": "json-pointer", "description": "JSON Pointer (RFC 6901) to the remote field on the referenced schema, e.g. `/id`." }, "uniqueItems": true, "description": "Remote-side participants on the referenced schema, as JSON Pointers (RFC 6901). Must be the same length as the local `properties`." } } } } } }, "x-lix-state-foreign-keys": { "type": "array", "description": "Foreign keys from local fields to arbitrary live state rows. Each entry is exactly three required local JSON Pointers ordered as `[entity_id, schema_key, file_id]`: index 0 points to the local entity_id JSON array, index 1 points to the local schema_key string, and index 2 points to the local file_id string-or-null. Use explicit null for global file_id targets; omitted fields are invalid. The referenced state row is resolved in the same version.", "items": { "type": "array", "minItems": 3, "maxItems": 3, "uniqueItems": true, "prefixItems": [ { "type": "string", "format": "json-pointer", "description": "[0] Local JSON Pointer for the target entity_id. The value must be a non-empty JSON array of strings." }, { "type": "string", "format": "json-pointer", "description": "[1] Local JSON Pointer for the target schema_key. The value must be a string." }, { "type": "string", "format": "json-pointer", "description": "[2] Local JSON Pointer for the target file_id. The value must be a string or null." } ], "items": false } }, "x-lix-key": { "type": "string", "pattern": "^[a-z][a-z0-9_]*$", "description": "The schema identifier. Must be snake_case (lowercase, underscores) to safely embed in SQL identifiers. Prefix keys with a plugin, app, or domain namespace such as `library_book` or `csv_plugin_cell` to avoid collisions with other schemas.", "examples": [ "library_book", "csv_plugin_cell" ] }, "properties": { "type": "object", "additionalProperties": { "allOf": [ { "$ref": "https://json-schema.org/draft/2020-12/schema" }, { "type": "object", "properties": { "x-lix-default": { "type": "string", "format": "cel", "description": "CEL expression evaluated to produce the default value when the property is omitted. 
Available Lix-registered functions: `lix_uuid_v7()` (RFC 9562 UUIDv7), `lix_timestamp()` (ISO-8601 string). CEL literals are also valid: strings (`'open'`), numbers (`0`, `3.14`), booleans (`true` / `false`), and `null`.", "examples": [ "lix_uuid_v7()", "lix_timestamp()", "false", "'open'" ] } } } ] } } }, "required": [ "x-lix-key", "additionalProperties" ] } ] } ================================================ FILE: packages/engine/src/schema/definition.rs ================================================ use cel::Program; use jsonschema::{Draft, JSONSchema}; use serde_json::Value as JsonValue; use std::collections::BTreeSet; use std::sync::OnceLock; use crate::common::parse_json_pointer; use crate::LixError; static LIX_SCHEMA_DEFINITION: OnceLock = OnceLock::new(); static LIX_SCHEMA_VALIDATOR: OnceLock> = OnceLock::new(); pub fn lix_schema_definition() -> &'static JsonValue { LIX_SCHEMA_DEFINITION.get_or_init(|| { let raw = include_str!("definition.json"); serde_json::from_str(raw).expect("definition.json must be valid JSON") }) } pub fn lix_schema_definition_json() -> &'static str { include_str!("definition.json") } pub fn validate_lix_schema_definition(schema: &JsonValue) -> Result<(), LixError> { if let Some(err) = detect_missing_pointer_slash(schema) { return Err(err); } if let Some(err) = detect_state_foreign_key_tuple_shape(schema) { return Err(err); } let validator = lix_schema_validator()?; if let Err(errors) = validator.validate(schema) { let details = format_lix_schema_validation_errors(errors); return Err(LixError { code: LixError::CODE_SCHEMA_DEFINITION.to_string(), message: format!("Invalid Lix schema definition: {details}"), hint: None, details: None, }); } assert_primary_key_pointers(schema)?; assert_unique_pointers(schema)?; assert_state_foreign_key_pointers(schema)?; assert_known_x_lix_top_level_fields(schema)?; assert_entity_properties_do_not_use_reserved_lix_prefix(schema)?; assert_entity_properties_have_projectable_types(schema)?; Ok(()) } fn assert_entity_properties_do_not_use_reserved_lix_prefix( schema: &JsonValue, ) -> Result<(), LixError> { let Some(schema_key) = schema.get("x-lix-key").and_then(JsonValue::as_str) else { return Ok(()); }; let Some(properties) = schema.get("properties").and_then(JsonValue::as_object) else { return Ok(()); }; for property_name in properties.keys() { if property_name.starts_with("lix") { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "Invalid Lix schema definition: schema '{schema_key}' property '/{property_name}' uses reserved prefix 'lix'." 
), ) .with_hint("Property names starting with 'lix' are reserved for Lix system fields.")); } } Ok(()) } fn assert_entity_properties_have_projectable_types(schema: &JsonValue) -> Result<(), LixError> { let Some(schema_key) = schema.get("x-lix-key").and_then(JsonValue::as_str) else { return Ok(()); }; let Some(properties) = schema.get("properties").and_then(JsonValue::as_object) else { return Ok(()); }; for (property_name, property_schema) in properties { if !schema_property_has_sql_projection_type(property_schema) { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "Invalid Lix schema definition: schema '{schema_key}' property '/{property_name}' must declare a SQL-projectable JSON Schema type" ), ) .with_hint("Use an explicit type such as string, number, integer, boolean, object, array, or a supported union of those types.")); } } Ok(()) } fn schema_property_has_sql_projection_type(schema: &JsonValue) -> bool { let mut kinds = BTreeSet::new(); collect_schema_type_kinds(schema, &mut kinds); kinds.remove("null"); kinds.iter().any(|kind| { matches!( *kind, "boolean" | "integer" | "number" | "string" | "object" | "array" ) }) } fn collect_schema_type_kinds<'a>(schema: &'a JsonValue, out: &mut BTreeSet<&'a str>) { match schema.get("type") { Some(JsonValue::String(kind)) => { out.insert(kind.as_str()); } Some(JsonValue::Array(kinds)) => { for kind in kinds.iter().filter_map(JsonValue::as_str) { out.insert(kind); } } _ => {} } for keyword in ["anyOf", "oneOf", "allOf"] { if let Some(JsonValue::Array(branches)) = schema.get(keyword) { for branch in branches { collect_schema_type_kinds(branch, out); } } } } /// Detect the common no-leading-slash mistake in JSON-Pointer-valued fields /// (`x-lix-primary-key`, `x-lix-unique`, `x-lix-foreign-keys[].properties`, /// `x-lix-foreign-keys[].references.properties`, /// `x-lix-state-foreign-keys[]`) and return a targeted /// error + hint suggesting the fix. /// /// Surfacing this before the meta-schema validator runs replaces the /// generic `format "json-pointer"` failure with a message that tells the /// user exactly what to change (e.g. `"id"` → `"/id"`). 
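///
/// A hedged sketch of the resulting diagnostic (hypothetical schema; the
/// exact wording is produced by the function body below):
///
/// ```ignore
/// let schema = json!({
///     "x-lix-key": "library_book",
///     "type": "object",
///     "x-lix-primary-key": ["id"], // missing the leading slash
///     "properties": { "id": { "type": "string" } },
///     "required": ["id"],
///     "additionalProperties": false
/// });
/// let err = detect_missing_pointer_slash(&schema).expect("should flag missing slash");
/// assert!(err.message.contains("x-lix-primary-key: \"id\" → \"/id\""));
/// assert!(err.hint.as_deref().unwrap_or_default().contains("RFC 6901"));
/// ```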
fn detect_missing_pointer_slash(schema: &JsonValue) -> Option { let mut offenders: Vec<(String, String)> = Vec::new(); fn collect(items: Option<&Vec>, label: &str, out: &mut Vec<(String, String)>) { let Some(items) = items else { return; }; for item in items { if let Some(s) = item.as_str() { if !s.is_empty() && !s.starts_with('/') { out.push((label.to_string(), s.to_string())); } } } } collect( schema .get("x-lix-primary-key") .and_then(JsonValue::as_array), "x-lix-primary-key", &mut offenders, ); if let Some(groups) = schema.get("x-lix-unique").and_then(JsonValue::as_array) { for group in groups { collect(group.as_array(), "x-lix-unique", &mut offenders); } } if let Some(fks) = schema .get("x-lix-foreign-keys") .and_then(JsonValue::as_array) { for fk in fks { collect( fk.get("properties").and_then(JsonValue::as_array), "x-lix-foreign-keys[].properties", &mut offenders, ); collect( fk.get("references") .and_then(|r| r.get("properties")) .and_then(JsonValue::as_array), "x-lix-foreign-keys[].references.properties", &mut offenders, ); } } if let Some(fks) = schema .get("x-lix-state-foreign-keys") .and_then(JsonValue::as_array) { for fk in fks { collect(fk.as_array(), "x-lix-state-foreign-keys", &mut offenders); } } if offenders.is_empty() { return None; } let examples = offenders .iter() .take(3) .map(|(field, value)| format!("{field}: \"{value}\" → \"/{value}\"")) .collect::>() .join("; "); let message = format!( "Invalid Lix schema definition: JSON Pointer values must begin with '/'. Offending entries: {examples}" ); let hint = format!( "Did you mean [\"/{}\"]? JSON Pointer values must prefix property names with '/' (RFC 6901).", offenders[0].1 ); Some( LixError { code: LixError::CODE_SCHEMA_DEFINITION.to_string(), message, hint: None, details: None, } .with_hint(hint), ) } fn detect_state_foreign_key_tuple_shape(schema: &JsonValue) -> Option { let foreign_keys = schema .get("x-lix-state-foreign-keys") .and_then(JsonValue::as_array)?; for (index, foreign_key) in foreign_keys.iter().enumerate() { let Some(local_pointers) = foreign_key.as_array() else { continue; }; if local_pointers.len() != 3 { return Some(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "Invalid Lix schema definition: x-lix-state-foreign-keys[{index}] must contain exactly three JSON Pointers ordered as [entity_id, schema_key, file_id]; [0] entity_id, [1] schema_key, [2] file_id." 
), )); } } None } pub fn validate_lix_schema(schema: &JsonValue, data: &JsonValue) -> Result<(), LixError> { validate_lix_schema_definition(schema)?; let validator = compile_lix_schema(schema)?; if let Err(errors) = validator.validate(data) { let details = format_lix_schema_validation_errors(errors); return Err(LixError { code: LixError::CODE_SCHEMA_VALIDATION.to_string(), message: format!("Data validation failed: {details}"), hint: None, details: None, }); } Ok(()) } fn lix_schema_validator() -> Result<&'static JSONSchema, LixError> { let result = LIX_SCHEMA_VALIDATOR.get_or_init(|| compile_lix_schema(lix_schema_definition())); match result { Ok(schema) => Ok(schema), Err(err) => Err(LixError { code: LixError::CODE_SCHEMA_DEFINITION.to_string(), message: err.message.clone(), hint: None, details: None, }), } } pub(crate) fn compile_lix_schema(schema: &JsonValue) -> Result { let mut options = JSONSchema::options(); options.with_meta_schemas(); if schema_uses_draft_2020_12_without_fragment(schema) { options.with_draft(Draft::Draft202012); } options.should_validate_formats(true); options.with_format("json-pointer", is_json_pointer); options.with_format("cel", is_cel_expression); options.compile(schema).map_err(|err| LixError { code: LixError::CODE_SCHEMA_DEFINITION.to_string(), message: format!("Failed to compile Lix schema definition: {err}"), hint: None, details: None, }) } fn schema_uses_draft_2020_12_without_fragment(schema: &JsonValue) -> bool { schema .get("$schema") .and_then(JsonValue::as_str) .is_some_and(|url| url == "https://json-schema.org/draft/2020-12/schema") } fn is_json_pointer(value: &str) -> bool { parse_json_pointer(value).is_ok() } fn is_cel_expression(value: &str) -> bool { Program::compile(value).is_ok() } fn assert_primary_key_pointers(schema: &JsonValue) -> Result<(), LixError> { let Some(primary_key) = schema .get("x-lix-primary-key") .and_then(|value| value.as_array()) else { return Ok(()); }; for pointer in primary_key { let Some(pointer) = pointer.as_str() else { continue; }; let segments = parse_json_pointer(pointer)?; let Some(property_schema) = (!segments.is_empty()) .then(|| schema_property(schema, &segments)) .flatten() else { return Err(LixError { code: LixError::CODE_SCHEMA_DEFINITION.to_string(), message: format!( "Invalid Lix schema definition: x-lix-primary-key references missing property \"{}\".", pointer ), hint: None, details: None, }); }; if !schema_property_is_string_only(property_schema) { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "Invalid Lix schema definition: x-lix-primary-key property \"{pointer}\" must have type \"string\"." ), )); } if !schema_pointer_is_required(schema, &segments) { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "Invalid Lix schema definition: x-lix-primary-key property \"{pointer}\" must be required." 
), )); } } Ok(()) } fn assert_unique_pointers(schema: &JsonValue) -> Result<(), LixError> { let Some(unique_groups) = schema .get("x-lix-unique") .and_then(|value| value.as_array()) else { return Ok(()); }; for group in unique_groups { let Some(group) = group.as_array() else { continue; }; for pointer in group { let Some(pointer) = pointer.as_str() else { continue; }; let segments = parse_json_pointer(pointer)?; if segments.is_empty() || !schema_has_property(schema, &segments) { return Err(LixError { code: LixError::CODE_SCHEMA_DEFINITION.to_string(), message: format!( "Invalid Lix schema definition: x-lix-unique references missing property \"{}\".", pointer ), hint: None, details: None, }); } } } Ok(()) } fn assert_state_foreign_key_pointers(schema: &JsonValue) -> Result<(), LixError> { let Some(foreign_keys) = schema .get("x-lix-state-foreign-keys") .and_then(|value| value.as_array()) else { return Ok(()); }; for (index, foreign_key) in foreign_keys.iter().enumerate() { let Some(local_pointers) = foreign_key.as_array() else { continue; }; if local_pointers.len() != 3 { continue; } let roles = [ ("entity_id", "a non-empty JSON array of strings"), ("schema_key", "a string"), ("file_id", "a string or null"), ]; for (slot, (role, expected)) in roles.iter().enumerate() { let Some(pointer) = local_pointers[slot].as_str() else { continue; }; let segments = parse_json_pointer(pointer)?; let Some(property_schema) = (!segments.is_empty()) .then(|| schema_property(schema, &segments)) .flatten() else { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "Invalid Lix schema definition: x-lix-state-foreign-keys[{index}][{slot}] ({role}) references missing property \"{pointer}\"." ), )); }; if !schema_pointer_is_required(schema, &segments) { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "Invalid Lix schema definition: x-lix-state-foreign-keys[{index}][{slot}] ({role}) property \"{pointer}\" must be required. Tuple order is [entity_id, schema_key, file_id]." ), )); } let valid = match *role { "entity_id" => schema_property_is_string_array(property_schema), "schema_key" => schema_property_is_string_only(property_schema), "file_id" => schema_property_is_string_or_null(property_schema), _ => unreachable!("state foreign key roles are exhaustive"), }; if !valid { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "Invalid Lix schema definition: x-lix-state-foreign-keys[{index}][{slot}] ({role}) property \"{pointer}\" must be {expected}. Tuple order is [entity_id, schema_key, file_id]." 
), )); } } } Ok(()) } fn assert_known_x_lix_top_level_fields(schema: &JsonValue) -> Result<(), LixError> { let Some(object) = schema.as_object() else { return Ok(()); }; for key in object.keys() { if !key.starts_with("x-lix-") { continue; } let known = matches!( key.as_str(), "x-lix-key" | "x-lix-primary-key" | "x-lix-unique" | "x-lix-foreign-keys" | "x-lix-state-foreign-keys" ); if !known { return Err(LixError { code: LixError::CODE_SCHEMA_DEFINITION.to_string(), message: format!( "Invalid Lix schema definition: unknown x-lix field '{}'.", key ), hint: None, details: None, }); } } Ok(()) } fn schema_has_property(schema: &JsonValue, segments: &[String]) -> bool { schema_property(schema, segments).is_some() } fn schema_pointer_is_required(schema: &JsonValue, segments: &[String]) -> bool { if segments.is_empty() { return false; } let mut node = schema; for segment in segments { let required = node .get("required") .and_then(JsonValue::as_array) .map(|required| { required .iter() .any(|required_property| required_property.as_str() == Some(segment)) }) .unwrap_or(false); if !required { return false; } let Some(next) = node .get("properties") .and_then(JsonValue::as_object) .and_then(|properties| properties.get(segment)) else { return false; }; node = next; } true } fn schema_property<'a>(schema: &'a JsonValue, segments: &[String]) -> Option<&'a JsonValue> { let mut node = schema; for segment in segments { let properties = node.get("properties")?.as_object()?; let next = properties.get(segment)?; node = next; } Some(node) } fn schema_property_is_string_only(schema: &JsonValue) -> bool { let mut kinds = BTreeSet::new(); collect_schema_type_kinds(schema, &mut kinds); kinds.len() == 1 && kinds.contains("string") } fn schema_property_is_string_or_null(schema: &JsonValue) -> bool { let mut kinds = BTreeSet::new(); collect_schema_type_kinds(schema, &mut kinds); kinds.remove("null"); kinds.len() == 1 && kinds.contains("string") } fn schema_property_is_string_array(schema: &JsonValue) -> bool { let mut kinds = BTreeSet::new(); collect_schema_type_kinds(schema, &mut kinds); if kinds.len() != 1 || !kinds.contains("array") { return false; } let Some(items) = schema.get("items") else { return false; }; if !schema_property_is_string_only(items) { return false; } schema .get("minItems") .and_then(JsonValue::as_u64) .is_some_and(|min_items| min_items >= 1) } pub(crate) fn format_lix_schema_validation_errors<'a>( errors: impl Iterator>, ) -> String { let mut parts = Vec::new(); for error in errors { let path = error.instance_path.to_string(); let message = error.to_string(); if path.is_empty() { parts.push(message); } else { parts.push(format!("{path} {message}")); } } if parts.is_empty() { "Unknown validation error".to_string() } else { parts.join("; ") } } #[cfg(test)] mod pointer_slash_detection_tests { use super::*; use serde_json::json; fn minimal_schema_with(extras: serde_json::Value) -> JsonValue { let mut obj = json!({ "type": "object", "x-lix-key": "book", "properties": { "id": { "type": "string" }, "author_id": { "type": "string" }, "tenant_id": { "type": "string" }, "handle": { "type": "string" }, }, "required": ["id"], "additionalProperties": false, }); let extras_obj = extras.as_object().expect("extras must be object").clone(); for (k, v) in extras_obj { obj.as_object_mut().unwrap().insert(k, v); } obj } fn err_for(schema: &JsonValue) -> LixError { validate_lix_schema_definition(schema).expect_err("should reject") } #[test] fn primary_key_without_slash_emits_targeted_hint() { let schema = 
minimal_schema_with(json!({ "x-lix-primary-key": ["id"] })); let err = err_for(&schema); assert_eq!( err.code, LixError::CODE_SCHEMA_DEFINITION, "schema-definition errors should carry the categorized code" ); assert!( err.message.contains("must begin with '/'"), "unexpected message: {}", err.message ); assert!( err.message.contains("x-lix-primary-key: \"id\" → \"/id\""), "message should show the fix: {}", err.message ); let hint = err.hint.as_deref().expect("should carry a hint"); assert!( hint.contains("/id"), "hint should show fixed pointer: {hint}" ); assert!( hint.contains("RFC 6901"), "hint should cite the RFC: {hint}" ); } #[test] fn unique_without_slash_emits_targeted_hint() { let schema = minimal_schema_with(json!({ "x-lix-primary-key": ["/id"], "x-lix-unique": [["handle"]], })); let err = err_for(&schema); assert!( err.message .contains("x-lix-unique: \"handle\" → \"/handle\""), "should flag x-lix-unique entry: {}", err.message ); assert!(err.hint.is_some()); } #[test] fn foreign_key_local_without_slash_emits_targeted_hint() { let schema = minimal_schema_with(json!({ "x-lix-primary-key": ["/id"], "x-lix-foreign-keys": [{ "properties": ["author_id"], "references": { "schemaKey": "author", "properties": ["/id"], } }] })); let err = err_for(&schema); assert!( err.message .contains("x-lix-foreign-keys[].properties: \"author_id\" → \"/author_id\""), "should flag FK local entry: {}", err.message ); } #[test] fn foreign_key_remote_without_slash_emits_targeted_hint() { let schema = minimal_schema_with(json!({ "x-lix-primary-key": ["/id"], "x-lix-foreign-keys": [{ "properties": ["/author_id"], "references": { "schemaKey": "author", "properties": ["id"], } }] })); let err = err_for(&schema); assert!( err.message .contains("x-lix-foreign-keys[].references.properties: \"id\" → \"/id\""), "should flag FK remote entry: {}", err.message ); } #[test] fn valid_pointers_pass_pre_check() { let schema = minimal_schema_with(json!({ "x-lix-primary-key": ["/id"], "x-lix-unique": [["/handle"], ["/tenant_id", "/handle"]], "x-lix-foreign-keys": [{ "properties": ["/author_id"], "references": { "schemaKey": "author", "properties": ["/id"], } }] })); assert!(detect_missing_pointer_slash(&schema).is_none()); } #[test] fn draft_2020_12_json_pointer_format_still_asserts() { let schema = json!({ "$schema": "https://json-schema.org/draft/2020-12/schema", "type": "object", "properties": { "pointer": { "type": "string", "format": "json-pointer" } } }); let validator = compile_lix_schema(&schema).expect("2020-12 schema should compile"); assert!(validator.is_valid(&json!({ "pointer": "/id" }))); assert!(!validator.is_valid(&json!({ "pointer": "id" }))); } } ================================================ FILE: packages/engine/src/schema/key.rs ================================================ use serde_json::Value as JsonValue; use crate::entity_identity::EntityIdentity; use crate::LixError; #[derive(Debug, Clone, PartialEq, Eq, Hash)] pub struct SchemaKey { pub schema_key: String, } impl SchemaKey { pub fn new(schema_key: impl Into) -> Self { Self { schema_key: schema_key.into(), } } } pub fn schema_key_from_definition(schema: &JsonValue) -> Result { let object = schema.as_object().ok_or_else(|| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "schema definition must be a JSON object".to_string(), hint: None, details: None, })?; let schema_key = object .get("x-lix-key") .and_then(JsonValue::as_str) .ok_or_else(|| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "schema definition must include 
string x-lix-key".to_string(), hint: None, details: None, })?; Ok(SchemaKey::new(schema_key.to_string())) } pub fn schema_from_registered_snapshot( snapshot: &JsonValue, ) -> Result<(SchemaKey, JsonValue), LixError> { let value = snapshot.get("value").ok_or_else(|| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "registered schema snapshot_content missing value".to_string(), hint: None, details: None, })?; let value = value.as_object().ok_or_else(|| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "registered schema snapshot_content value must be an object".to_string(), hint: None, details: None, })?; let schema_key = value .get("x-lix-key") .and_then(|value| value.as_str()) .ok_or_else(|| LixError { code: "LIX_ERROR_UNKNOWN".to_string(), message: "registered schema value.x-lix-key must be string".to_string(), hint: None, details: None, })?; Ok(( SchemaKey::new(schema_key.to_string()), JsonValue::Object(value.clone()), )) } pub(crate) fn registered_schema_entity_id(schema_key: &str) -> Result { EntityIdentity::from_primary_key_paths( &serde_json::json!({ "value": { "x-lix-key": schema_key, } }), &[vec!["value".to_string(), "x-lix-key".to_string()]], ) .map_err(|error| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!("registered schema identity could not be derived for schema '{schema_key}': {error}"), ) }) } #[cfg(test)] mod tests { use serde_json::json; use super::{schema_from_registered_snapshot, schema_key_from_definition, SchemaKey}; #[test] fn schema_from_registered_snapshot_extracts_key_and_schema() { let snapshot = json!({ "value": { "x-lix-key": "profile", "type": "object" } }); let (key, schema) = schema_from_registered_snapshot(&snapshot).expect("schema is valid"); assert_eq!(key, SchemaKey::new("profile")); assert_eq!(schema["type"], json!("object")); } #[test] fn schema_from_registered_snapshot_requires_value_object() { let snapshot = json!({}); let err = schema_from_registered_snapshot(&snapshot).expect_err("should fail"); assert!(err.message.contains("missing value"), "{err:?}"); } #[test] fn schema_from_registered_snapshot_requires_string_key() { let snapshot = json!({ "value": { "x-lix-key": 1, } }); let err = schema_from_registered_snapshot(&snapshot).expect_err("should fail"); assert!(err.message.contains("x-lix-key"), "{err:?}"); } #[test] fn schema_key_from_definition_extracts_key() { let schema = json!({ "x-lix-key": "users", "type": "object" }); let key = schema_key_from_definition(&schema).expect("schema key"); assert_eq!(key, SchemaKey::new("users")); } } ================================================ FILE: packages/engine/src/schema/mod.rs ================================================ mod builtin; #[allow(dead_code)] pub(crate) mod compatibility; mod definition; mod key; pub(crate) mod seed; #[cfg(test)] mod tests; pub(crate) use compatibility::validate_schema_amendment; pub(crate) use definition::{compile_lix_schema, format_lix_schema_validation_errors}; pub use definition::{ lix_schema_definition, lix_schema_definition_json, validate_lix_schema, validate_lix_schema_definition, }; pub(crate) use key::registered_schema_entity_id; pub use key::{schema_from_registered_snapshot, schema_key_from_definition, SchemaKey}; #[cfg(test)] pub(crate) use seed::seed_schema_definition; pub(crate) use seed::{is_seed_schema_key, seed_schema_definitions}; ================================================ FILE: packages/engine/src/schema/seed.rs ================================================ use serde_json::Value as JsonValue; pub(crate) fn 
is_seed_schema_key(schema_key: &str) -> bool { super::builtin::is_seed_schema_key(schema_key) } #[cfg(test)] pub(crate) fn seed_schema_definition(schema_key: &str) -> Option<&'static JsonValue> { super::builtin::seed_schema_definition(schema_key) } pub(crate) fn seed_schema_definitions() -> Vec<&'static JsonValue> { super::builtin::seed_schema_definitions() } ================================================ FILE: packages/engine/src/schema/tests.rs ================================================ use crate::{validate_lix_schema, validate_lix_schema_definition}; use serde_json::json; #[test] fn validate_lix_schema_definition_passes_for_valid_schema() { let valid_schema = json!({ "x-lix-key": "test_entity", "type": "object", "properties": { "id": { "type": "string" } }, "additionalProperties": false }); assert!(validate_lix_schema_definition(&valid_schema).is_ok()); } #[test] fn validate_lix_schema_definition_rejects_unprojectable_entity_properties() { let schema = json!({ "x-lix-key": "test_entity", "type": "object", "properties": { "id": { "type": "string" }, "kind": {} }, "required": ["id", "kind"], "additionalProperties": false }); let err = validate_lix_schema_definition(&schema).unwrap_err(); assert!( err.to_string().contains("property '/kind'"), "error should identify the unprojectable property: {err:?}" ); assert!( err.to_string().contains("SQL-projectable JSON Schema type"), "error should explain the projection requirement: {err:?}" ); } #[test] fn validate_lix_schema_definition_rejects_reserved_lix_property_prefixes() { for property_name in ["lixcol_entity_id", "lix_internal", "lixfoo"] { let schema = json!({ "x-lix-key": "test_entity", "type": "object", "properties": { "id": { "type": "string" }, property_name: { "type": "string" } }, "required": ["id", property_name], "additionalProperties": false }); let err = validate_lix_schema_definition(&schema) .expect_err("reserved property names should be rejected"); assert!( err.to_string().contains(&format!( "property '/{property_name}' uses reserved prefix 'lix'" )), "error should identify the reserved property name: {err:?}" ); } } #[test] fn validate_lix_schema_definition_throws_for_invalid_schema() { let invalid_schema = json!({ "type": "object", "properties": { "id": { "type": "string" } }, "additionalProperties": false }); let err = validate_lix_schema_definition(&invalid_schema).unwrap_err(); assert!(err.to_string().contains("Invalid Lix schema definition")); } #[test] fn validate_lix_schema_validates_both_schema_and_data_successfully() { let schema = json!({ "x-lix-key": "user", "type": "object", "properties": { "id": { "type": "string" }, "name": { "type": "string" } }, "required": ["id", "name"], "additionalProperties": false }); let valid_data = json!({ "id": "123", "name": "John Doe" }); assert!(validate_lix_schema(&schema, &valid_data).is_ok()); } #[test] fn validate_lix_schema_throws_when_schema_is_invalid() { let invalid_schema = json!({ "type": "object", "properties": { "id": { "type": "string" } }, "additionalProperties": false }); let data = json!({ "id": "123" }); let err = validate_lix_schema(&invalid_schema, &data).unwrap_err(); assert!(err.to_string().contains("Invalid Lix schema definition")); } #[test] fn validate_lix_schema_throws_when_data_does_not_match_schema() { let schema = json!({ "x-lix-key": "user", "type": "object", "properties": { "id": { "type": "string" }, "name": { "type": "string" } }, "required": ["id", "name"], "additionalProperties": false }); let invalid_data = json!({ "id": "123" }); let err = 
validate_lix_schema(&schema, &invalid_data).unwrap_err(); assert!(err.to_string().contains("Data validation failed")); } #[test] fn validate_lix_schema_definition_rejects_when_additional_properties_missing() { let schema = json!({ "x-lix-key": "user", "type": "object", "properties": { "id": { "type": "string" } }, "required": ["id"] }); let err = validate_lix_schema_definition(&schema).unwrap_err(); assert!(err.to_string().contains("Invalid Lix schema definition")); } #[test] fn additional_properties_must_be_false() { let schema_with_additional_props = json!({ "x-lix-key": "user", "type": "object", "properties": { "id": { "type": "string" }, "name": { "type": "string" } }, "required": ["id", "name"], "additionalProperties": true }); assert!(validate_lix_schema_definition(&schema_with_additional_props).is_err()); let valid_schema = json!({ "x-lix-key": "user", "type": "object", "properties": { "id": { "type": "string" }, "name": { "type": "string" } }, "required": ["id", "name"], "additionalProperties": false }); assert!(validate_lix_schema_definition(&valid_schema).is_ok()); let data = json!({ "id": "123", "name": "John Doe", "extraField": "not allowed" }); let err = validate_lix_schema(&valid_schema, &data).unwrap_err(); assert!(err.to_string().contains("Data validation failed")); } #[test] fn validate_lix_schema_definition_rejects_missing_primary_key_properties() { let schema = json!({ "x-lix-key": "missing_pk", "type": "object", "properties": { "value": { "type": "string" } }, "required": ["value"], "x-lix-primary-key": ["/entity_id"], "additionalProperties": false }); let err = validate_lix_schema_definition(&schema).unwrap_err(); assert!(err .to_string() .contains("x-lix-primary-key references missing property")); } #[test] fn validate_lix_schema_definition_rejects_non_string_primary_key_properties() { let schema = json!({ "x-lix-key": "numeric_pk", "type": "object", "properties": { "id": { "type": "number" }, "value": { "type": "string" } }, "required": ["id", "value"], "x-lix-primary-key": ["/id"], "additionalProperties": false }); let err = validate_lix_schema_definition(&schema).unwrap_err(); assert!(err .to_string() .contains("x-lix-primary-key property \"/id\" must have type \"string\"")); } #[test] fn validate_lix_schema_definition_rejects_optional_primary_key_properties() { let schema = json!({ "x-lix-key": "optional_pk", "type": "object", "properties": { "id": { "type": "string" }, "value": { "type": "string" } }, "required": ["value"], "x-lix-primary-key": ["/id"], "additionalProperties": false }); let err = validate_lix_schema_definition(&schema) .expect_err("primary-key property should be required"); assert!(err .to_string() .contains("x-lix-primary-key property \"/id\" must be required")); } #[test] fn validate_lix_schema_definition_rejects_missing_unique_constraint_properties() { let schema = json!({ "x-lix-key": "missing_unique", "type": "object", "properties": { "value": { "type": "string" } }, "x-lix-unique": [["/entity_id", "/value"]], "additionalProperties": false }); let err = validate_lix_schema_definition(&schema).unwrap_err(); assert!(err .to_string() .contains("x-lix-unique references missing property")); } #[test] fn x_key_is_required() { let schema = json!({ "type": "object", "x-lix-key": null, "properties": { "name": { "type": "string" } }, "required": ["name"], "additionalProperties": false }); assert!(validate_lix_schema_definition(&schema).is_err()); } #[test] fn x_lix_key_must_be_snake_case() { let base_schema = json!({ "type": "object", "properties": { 
"name": { "type": "string" } }, "required": ["name"], "additionalProperties": false }); let invalid_keys = [ "Invalid-Key!", "also.invalid", "123starts_with_number", "contains space", "camelCaseKey", "UPPER_CASE", "mixed-Case_Value", ]; for key in invalid_keys { let mut schema = base_schema.clone(); schema["x-lix-key"] = json!(key); assert!(validate_lix_schema_definition(&schema).is_err()); } let valid_keys = ["abc", "abc123", "abc_123", "a", "snake_case_key"]; for key in valid_keys { let mut schema = base_schema.clone(); schema["x-lix-key"] = json!(key); assert!(validate_lix_schema_definition(&schema).is_ok()); } } #[test] fn x_lix_unique_is_optional() { let schema = json!({ "type": "object", "x-lix-key": "mock", "properties": { "name": { "type": "string" } }, "required": ["name"], "additionalProperties": false }); assert!(validate_lix_schema_definition(&schema).is_ok()); } #[test] fn x_lix_unique_must_be_array_of_arrays_when_present() { let schema = json!({ "type": "object", "x-lix-key": "mock", "x-lix-unique": [["/id"], ["/name", "/age"]], "properties": { "id": { "type": "string" }, "name": { "type": "string" }, "age": { "type": "number" } }, "required": ["id", "name", "age"], "additionalProperties": false }); assert!(validate_lix_schema_definition(&schema).is_ok()); } #[test] fn x_lix_unique_fails_with_invalid_structure() { let schema = json!({ "type": "object", "x-lix-key": "mock", "x-lix-unique": ["/id", "/name"], "properties": { "id": { "type": "string" }, "name": { "type": "string" } }, "required": ["id", "name"], "additionalProperties": false }); assert!(validate_lix_schema_definition(&schema).is_err()); } #[test] fn x_lix_primary_key_must_include_at_least_one_unique_pointer() { let base_schema = json!({ "type": "object", "x-lix-key": "mock", "properties": { "id": { "type": "string" } }, "required": ["id"], "additionalProperties": false }); let mut empty_pk = base_schema.clone(); empty_pk["x-lix-primary-key"] = json!([]); assert!(validate_lix_schema_definition(&empty_pk).is_err()); let mut duplicate_pk = base_schema.clone(); duplicate_pk["x-lix-primary-key"] = json!(["/id", "/id"]); assert!(validate_lix_schema_definition(&duplicate_pk).is_err()); let mut valid_pk = base_schema.clone(); valid_pk["x-lix-primary-key"] = json!(["/id"]); assert!(validate_lix_schema_definition(&valid_pk).is_ok()); } #[test] fn x_lix_unique_groups_must_include_unique_pointers() { let base_schema = json!({ "type": "object", "x-lix-key": "mock", "properties": { "id": { "type": "string" }, "email": { "type": "string" } }, "required": ["id", "email"], "additionalProperties": false }); let mut empty_group = base_schema.clone(); empty_group["x-lix-unique"] = json!([[]]); assert!(validate_lix_schema_definition(&empty_group).is_err()); let mut duplicate_pointers = base_schema.clone(); duplicate_pointers["x-lix-unique"] = json!([["/email", "/email"]]); assert!(validate_lix_schema_definition(&duplicate_pointers).is_err()); let mut valid_unique = base_schema.clone(); valid_unique["x-lix-unique"] = json!([["/email"]]); assert!(validate_lix_schema_definition(&valid_unique).is_ok()); } #[test] fn x_lix_entity_views_is_rejected() { let schema = json!({ "type": "object", "x-lix-key": "mock", "x-lix-entity-views": ["lix_state", "lix_state_by_version"], "properties": { "name": { "type": "string" } }, "required": ["name"], "additionalProperties": false }); let err = validate_lix_schema_definition(&schema).expect_err("x-lix-entity-views should be rejected"); assert!(err.to_string().contains("x-lix-entity-views")); } #[test] 
fn x_lix_primary_key_is_optional() { let schema = json!({ "type": "object", "x-lix-key": "mock", "properties": { "name": { "type": "string" } }, "required": ["name"], "additionalProperties": false }); assert!(validate_lix_schema_definition(&schema).is_ok()); } #[test] fn x_lix_primary_key_must_be_array_of_strings_when_present() { let schema = json!({ "type": "object", "x-lix-key": "mock", "x-lix-primary-key": ["/id", "/version"], "properties": { "id": { "type": "string" }, "version": { "type": "string" }, "name": { "type": "string" } }, "required": ["id", "version", "name"], "additionalProperties": false }); assert!(validate_lix_schema_definition(&schema).is_ok()); } #[test] fn x_lix_foreign_keys_is_optional() { let schema = json!({ "type": "object", "x-lix-key": "blog_post", "properties": { "id": { "type": "string" }, "author_id": { "type": "string" } }, "required": ["id", "author_id"], "additionalProperties": false }); assert!(validate_lix_schema_definition(&schema).is_ok()); } #[test] fn x_lix_foreign_keys_with_valid_structure() { let schema = json!({ "type": "object", "x-lix-key": "blog_post", "x-lix-foreign-keys": [ { "properties": ["/author_id"], "references": { "schemaKey": "user_profile", "properties": ["/id"] } }, { "properties": ["/category_id"], "references": { "schemaKey": "post_category", "properties": ["/id"] } } ], "properties": { "id": { "type": "string" }, "author_id": { "type": "string" }, "category_id": { "type": "string" } }, "required": ["id", "author_id", "category_id"], "additionalProperties": false }); assert!(validate_lix_schema_definition(&schema).is_ok()); } #[test] fn x_lix_foreign_keys_reject_duplicate_pointers() { let schema = json!({ "type": "object", "x-lix-key": "invalid_fk_duplicates", "x-lix-foreign-keys": [ { "properties": ["/local", "/local"], "references": { "schemaKey": "remote_schema", "properties": ["/id", "/version"] } } ], "properties": { "local": { "type": "string" } }, "required": ["local"], "additionalProperties": false }); assert!(validate_lix_schema_definition(&schema).is_err()); } #[test] fn x_lix_foreign_keys_fails_without_required_fields() { let schema = json!({ "type": "object", "x-lix-key": "blog_post", "x-lix-foreign-keys": [ { "properties": ["/author_id"] } ], "properties": { "id": { "type": "string" }, "author_id": { "type": "string" } }, "required": ["id", "author_id"], "additionalProperties": false }); assert!(validate_lix_schema_definition(&schema).is_err()); } #[test] fn x_lix_foreign_keys_use_schema_key_identity_only() { let schema = json!({ "type": "object", "x-lix-key": "comment", "x-lix-foreign-keys": [ { "properties": ["/post_id"], "references": { "schemaKey": "blog_post", "properties": ["/id"] } } ], "properties": { "id": { "type": "string" }, "post_id": { "type": "string" } }, "required": ["id", "post_id"], "additionalProperties": false }); assert!(validate_lix_schema_definition(&schema).is_ok()); } #[test] fn x_lix_foreign_keys_rejects_mode_field() { let schema = json!({ "type": "object", "x-lix-key": "child_entity", "x-lix-primary-key": ["/id"], "x-lix-foreign-keys": [ { "properties": ["/parent_id"], "references": { "schemaKey": "parent_entity", "properties": ["/id"] }, "mode": "materialized" } ], "properties": { "id": { "type": "string" }, "parent_id": { "type": "string" } }, "required": ["id", "parent_id"], "additionalProperties": false }); let err = validate_lix_schema_definition(&schema).expect_err("mode should be rejected"); assert!(err.to_string().contains("mode")); } #[test] fn 
x_lix_foreign_keys_rejects_scope_field() { let schema = json!({ "type": "object", "x-lix-key": "child_entity", "x-lix-primary-key": ["/id"], "x-lix-foreign-keys": [ { "properties": ["/parent_id"], "references": { "schemaKey": "parent_entity", "properties": ["/id"] }, "scope": ["file_id"] } ], "properties": { "id": { "type": "string" }, "parent_id": { "type": "string" } }, "required": ["id", "parent_id"], "additionalProperties": false }); let err = validate_lix_schema_definition(&schema).expect_err("scope should be rejected"); assert!(err.to_string().contains("scope")); } #[test] fn x_lix_state_foreign_keys_with_ordered_state_address_tuple() { let schema = json!({ "type": "object", "x-lix-key": "label_assignment", "x-lix-state-foreign-keys": [ ["/target_entity_id", "/target_schema_key", "/target_file_id"] ], "x-lix-foreign-keys": [ { "properties": ["/label_id"], "references": { "schemaKey": "lix_label", "properties": ["/id"] } } ], "properties": { "target_entity_id": { "type": "array", "items": { "type": "string" }, "minItems": 1 }, "target_schema_key": { "type": "string" }, "target_file_id": { "type": ["string", "null"] }, "label_id": { "type": "string" } }, "required": ["target_entity_id", "target_schema_key", "target_file_id", "label_id"], "additionalProperties": false }); assert!(validate_lix_schema_definition(&schema).is_ok()); } #[test] fn x_lix_state_foreign_keys_rejects_wrong_tuple_order_by_type() { let schema = json!({ "type": "object", "x-lix-key": "bad_label_assignment", "x-lix-state-foreign-keys": [ ["/target_schema_key", "/target_entity_id", "/target_file_id"] ], "properties": { "target_entity_id": { "type": "array", "items": { "type": "string" }, "minItems": 1 }, "target_schema_key": { "type": "string" }, "target_file_id": { "type": ["string", "null"] } }, "required": ["target_entity_id", "target_schema_key", "target_file_id"], "additionalProperties": false }); let err = validate_lix_schema_definition(&schema).expect_err("wrong tuple order should be rejected"); assert!( err.message.contains("[entity_id, schema_key, file_id]"), "unexpected error: {err:?}" ); } #[test] fn x_lix_state_foreign_keys_requires_address_tuple_properties() { let schema = json!({ "type": "object", "x-lix-key": "optional_label_assignment", "x-lix-state-foreign-keys": [ ["/target_entity_id", "/target_schema_key", "/target_file_id"] ], "properties": { "target_entity_id": { "type": "array", "items": { "type": "string" }, "minItems": 1 }, "target_schema_key": { "type": "string" }, "target_file_id": { "type": ["string", "null"] } }, "required": ["target_entity_id", "target_schema_key"], "additionalProperties": false }); let err = validate_lix_schema_definition(&schema) .expect_err("state foreign key tuple fields should be required"); assert!( err.message.contains("file_id") && err.message.contains("must be required"), "unexpected error: {err:?}" ); } #[test] fn x_lix_foreign_keys_treat_schema_keys_literally() { let schema = json!({ "type": "object", "x-lix-key": "custom_label_assignment", "x-lix-foreign-keys": [ { "properties": ["/label_id"], "references": { "schemaKey": "label", "properties": ["/id"] } } ], "properties": { "label_id": { "type": "string" } }, "required": ["label_id"], "additionalProperties": false }); assert!(validate_lix_schema_definition(&schema).is_ok()); } #[test] fn x_lix_default_accepts_valid_cel_expression() { let schema = json!({ "type": "object", "x-lix-key": "mock", "properties": { "id": { "type": "string", "x-lix-default": "lix_uuid_v7()" } }, "additionalProperties": false }); 
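// The default above is a CEL expression; the registered "cel" format
// (`is_cel_expression` / `Program::compile`) is presumably what accepts it here
// and rejects the unbalanced "lix_uuid_v7(" in the next test.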
assert!(validate_lix_schema_definition(&schema).is_ok()); } #[test] fn x_lix_default_rejects_invalid_cel_expression() { let schema = json!({ "type": "object", "x-lix-key": "mock", "properties": { "id": { "type": "string", "x-lix-default": "lix_uuid_v7(" } }, "additionalProperties": false }); assert!(validate_lix_schema_definition(&schema).is_err()); } ================================================ FILE: packages/engine/src/session/context.rs ================================================ use std::future::Future; use std::pin::Pin; use std::sync::atomic::{AtomicBool, Ordering}; use std::sync::Arc; use serde_json::Value as JsonValue; use crate::binary_cas::{BinaryCasContext, BlobDataReader}; use crate::catalog::CatalogContext; use crate::commit_graph::{CommitGraphContext, CommitGraphReader}; use crate::commit_store::CommitStoreContext; use crate::entity_identity::EntityIdentity; use crate::functions::FunctionProviderHandle; use crate::json_store::JsonStoreContext; use crate::live_state::{LiveStateContext, LiveStateReader, LiveStateRowRequest}; use crate::sql2::{CommitStoreQuerySource, SqlCommitStoreQuerySource, SqlExecutionContext}; use crate::storage::{ ScopedStorageReader, StorageContext, StorageReadScope, StorageReadTransaction, StorageReader, }; use crate::tracked_state::TrackedStateContext; use crate::transaction::{open_transaction, Transaction}; use crate::version::{ VersionContext, VersionLifecycle, VersionOperation, VersionRefReader, VersionReferenceRole, }; use crate::GLOBAL_VERSION_ID; use crate::{LixError, NullableKeyFilter}; pub(crate) const WORKSPACE_VERSION_KEY: &str = "lix_workspace_version_id"; #[derive(Clone)] pub(crate) enum SessionMode { Pinned { version_id: String }, Workspace, } /// Session-context state for engine execution. /// /// A session context pins the active version selector and shared execution /// services. Each call to `execute(...)` projects this state into a read-only /// SQL context or a transaction-owned write context. /// /// Write transaction invariant: any engine operation that may write must enter /// through `SessionContext::with_write_transaction`. Reads that influence writes /// are only available from that transaction capability, not from session-level /// helpers. 
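// Illustrative sketch (hypothetical, engine-internal usage; not engine code): an
// operation that may write enters through the transaction capability described
// above, so reads that influence the write see the same backend transaction.
//
//     self.with_write_transaction(|transaction| {
//         Box::pin(async move {
//             let version_id = transaction.active_version_id().to_string();
//             // ...stage writes via `transaction.stage_write(...)` here...
//             Ok(version_id)
//         })
//     })
//     .await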
#[derive(Clone)] pub struct SessionContext { pub(super) mode: SessionMode, pub(super) storage: StorageContext, pub(super) live_state: Arc, pub(super) tracked_state: Arc, pub(super) binary_cas: Arc, pub(super) commit_store: Arc, pub(super) version_ctx: Arc, pub(super) catalog_context: Arc, closed: Arc, } impl SessionContext { pub(crate) async fn open_workspace( storage: StorageContext, live_state: Arc, tracked_state: Arc, binary_cas: Arc, commit_store: Arc, version_ctx: Arc, catalog_context: Arc, ) -> Result { let session = Self::new( SessionMode::Workspace, storage, live_state, tracked_state, binary_cas, commit_store, version_ctx, catalog_context, ); session.active_version_id().await?; Ok(session) } pub(crate) async fn open( active_version_id: String, storage: StorageContext, live_state: Arc, tracked_state: Arc, binary_cas: Arc, commit_store: Arc, version_ctx: Arc, catalog_context: Arc, ) -> Result { Ok(Self::new( SessionMode::Pinned { version_id: active_version_id, }, storage, live_state, tracked_state, binary_cas, commit_store, version_ctx, catalog_context, )) } pub(super) fn new( mode: SessionMode, storage: StorageContext, live_state: Arc, tracked_state: Arc, binary_cas: Arc, commit_store: Arc, version_ctx: Arc, catalog_context: Arc, ) -> Self { Self::new_with_closed( mode, storage, live_state, tracked_state, binary_cas, commit_store, version_ctx, catalog_context, Arc::new(AtomicBool::new(false)), ) } pub(super) fn new_with_closed( mode: SessionMode, storage: StorageContext, live_state: Arc, tracked_state: Arc, binary_cas: Arc, commit_store: Arc, version_ctx: Arc, catalog_context: Arc, closed: Arc, ) -> Self { Self { mode, storage, live_state, tracked_state, binary_cas, commit_store, version_ctx, catalog_context, closed, } } /// Releases this logical session handle. This is a lifecycle boundary only: /// successful writes are committed before their operation returns. pub async fn close(&self) -> Result<(), LixError> { self.closed.store(true, Ordering::SeqCst); Ok(()) } pub fn is_closed(&self) -> bool { self.closed.load(Ordering::SeqCst) } pub(crate) fn closed_flag(&self) -> Arc { Arc::clone(&self.closed) } pub(crate) fn ensure_open(&self) -> Result<(), LixError> { if self.is_closed() { return Err(closed_error()); } Ok(()) } /// Resolves the version this session should operate on right now. /// /// This is a read-path helper. Write flows must resolve the active version /// through the transaction capability so the read is scoped to the /// same backend transaction as the writes it influences. /// /// Pinned sessions are pure in-memory views over one version. Workspace /// sessions read the shared workspace selector from untracked global /// `lix_key_value` state so multiple open app sessions can observe the same /// active workspace version. 
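// Illustrative note (derived from `load_workspace_version_id` below): the
// workspace selector is the untracked global `lix_key_value` entity
// `lix_workspace_version_id`; only its snapshot's `"value"` field is read,
// e.g. a snapshot shaped roughly like
//
//     { "value": "<active version id>" }
//
// where `<active version id>` stands in for a real version id.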
pub async fn active_version_id(&self) -> Result { let mut transaction = self.storage.begin_read_transaction().await?; let result = self .active_version_id_from_reader(transaction.as_mut()) .await; match result { Ok(version_id) => { transaction.rollback().await?; Ok(version_id) } Err(error) => { let _ = transaction.rollback().await; Err(error) } } } pub(super) async fn active_version_id_from_reader( &self, reader: &mut S, ) -> Result where S: StorageReader + ?Sized, { self.ensure_open()?; match &self.mode { SessionMode::Pinned { version_id } => Ok(version_id.clone()), SessionMode::Workspace => self.load_workspace_version_id(reader).await, } } async fn load_workspace_version_id(&self, reader: &mut S) -> Result where S: StorageReader + ?Sized, { let row = self .live_state .reader(&mut *reader) .load_row(&LiveStateRowRequest { schema_key: "lix_key_value".to_string(), version_id: GLOBAL_VERSION_ID.to_string(), entity_id: EntityIdentity::single(WORKSPACE_VERSION_KEY), file_id: NullableKeyFilter::Null, }) .await? .ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "workspace version selector is missing lix_key_value:lix_workspace_version_id", ) })?; let snapshot_content = row.snapshot_content.as_deref().ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "workspace version selector is missing snapshot_content", ) })?; let snapshot = serde_json::from_str::(snapshot_content).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("workspace version selector snapshot is invalid JSON: {error}"), ) })?; let version_id = snapshot .get("value") .and_then(JsonValue::as_str) .filter(|value| !value.is_empty()) .ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "workspace version selector value must be a non-empty string", ) })? .to_string(); let version_ref = self.version_ctx.ref_reader(&mut *reader); VersionLifecycle::new(&version_ref) .require_existing_ref( &version_id, VersionOperation::LoadWorkspaceSelector, VersionReferenceRole::WorkspaceSelector, ) .await?; Ok(version_id) } pub(crate) async fn with_write_transaction(&self, f: F) -> Result where F: for<'tx> FnOnce( &'tx mut Transaction, ) -> Pin> + 'tx>>, { self.ensure_open()?; let opened = open_transaction( &self.mode, self.storage.clone(), Arc::clone(&self.live_state), Arc::clone(&self.tracked_state), Arc::clone(&self.binary_cas), Arc::clone(&self.commit_store), Arc::clone(&self.version_ctx), Arc::clone(&self.catalog_context), ) .await?; let mut transaction = opened.transaction; let runtime_functions = opened.runtime_functions; match f(&mut transaction).await { Ok(value) => { transaction.commit(&runtime_functions).await?; Ok(value) } Err(error) => { let _ = transaction.rollback().await; Err(error) } } } } fn closed_error() -> LixError { LixError::new(LixError::CODE_CLOSED, "Lix handle is closed") .with_hint("Open a new Lix handle before calling this method.") } /// Read-only SQL execution context derived from a session. /// /// Write statements re-plan against `Transaction`; this context intentionally /// has no write stager. 
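// Note on the read/write split (summary of `SessionContext::execute` below):
// read statements plan against this context over a short-lived read
// transaction that is rolled back afterwards, while write statements re-plan
// inside `with_write_transaction`, which is why no write stager exists here.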
pub(super) struct SessionSqlExecutionContext<'a> { pub(super) active_version_id: &'a str, pub(super) read_store: ScopedStorageReader>, pub(super) live_state: Arc, pub(super) binary_cas: Arc, pub(super) commit_store: Arc, pub(super) version_ctx: Arc, pub(super) visible_schemas: Vec, pub(super) functions: FunctionProviderHandle, } impl SqlExecutionContext for SessionSqlExecutionContext<'_> { fn active_version_id(&self) -> &str { self.active_version_id } fn live_state(&self) -> Arc { Arc::new(self.live_state.reader(self.read_store.clone())) as Arc } fn commit_store_query_source(&self) -> SqlCommitStoreQuerySource { let read_scope = StorageReadScope::new(self.read_store.clone()); CommitStoreQuerySource { commit_store_reader: Arc::new(self.commit_store.reader(read_scope.store())), json_reader: JsonStoreContext::new().reader(read_scope.store()), } } fn commit_graph(&self) -> Box { Box::new(CommitGraphContext::new().reader(self.read_store.clone())) } fn version_ref(&self) -> Arc { Arc::new(self.version_ctx.ref_reader(self.read_store.clone())) } fn functions(&self) -> FunctionProviderHandle { self.functions.clone() } fn blob_reader(&self) -> Arc { Arc::new(self.binary_cas.reader(self.read_store.clone())) as Arc } fn list_visible_schemas(&self) -> Result, LixError> { Ok(self.visible_schemas.clone()) } } ================================================ FILE: packages/engine/src/session/create_version.rs ================================================ use crate::transaction::types::{TransactionWrite, TransactionWriteMode}; use crate::version::{ version_descriptor_stage_row, version_ref_stage_row, VersionLifecycle, VersionOperation, VersionReferenceRole, }; use crate::LixError; use super::context::SessionContext; /// Options for creating a new version from the session's active version. #[derive(Debug, Clone, PartialEq, Eq)] pub struct CreateVersionOptions { /// Optional caller-provided version id. If omitted, engine generates one. pub id: Option, /// User-facing version name. pub name: String, /// Optional commit id for the new version head. If omitted, the current /// active version head is used. pub from_commit_id: Option, } /// Receipt returned after creating a version. #[derive(Debug, Clone, PartialEq, Eq)] pub struct CreateVersionReceipt { pub id: String, pub name: String, pub hidden: bool, pub commit_id: String, } impl SessionContext { /// Creates a new version from this session's current version head. /// /// Version descriptors are tracked global facts so every version agrees on /// which versions exist. Version refs are untracked global moving pointers, /// so creating a ref does not add another changelog fact. pub async fn create_version( &self, options: CreateVersionOptions, ) -> Result { self.with_write_transaction(|transaction| { Box::pin(async move { let version_id = options .id .unwrap_or_else(|| transaction.functions().call_uuid_v7()); let source_head = if let Some(from_commit_id) = options.from_commit_id { let mut commit_graph = transaction.commit_graph_reader(); VersionLifecycle::require_existing_commit( &mut commit_graph, &from_commit_id, VersionOperation::CreateVersion, VersionReferenceRole::CommitSource, ) .await?; from_commit_id } else { let active_version_id = transaction.active_version_id().to_string(); let reader = transaction.version_ref_reader(); VersionLifecycle::new(&reader) .require_existing_commit_id( &active_version_id, VersionOperation::CreateVersion, VersionReferenceRole::Source, ) .await? 
}; transaction .stage_write(TransactionWrite::Rows { mode: TransactionWriteMode::Insert, rows: vec![ version_descriptor_stage_row(&version_id, &options.name, false), version_ref_stage_row(&version_id, &source_head), ], }) .await?; Ok(CreateVersionReceipt { id: version_id, name: options.name, hidden: false, commit_id: source_head, }) }) }) .await } } ================================================ FILE: packages/engine/src/session/execute.rs ================================================ use std::sync::Arc; use crate::functions::FunctionContext; use crate::sql2; use crate::storage::{StorageReadScope, StorageWriteSet}; use crate::{LixError, LixNotice, SqlQueryResult, Value}; use super::context::{SessionContext, SessionSqlExecutionContext}; /// Result of executing one SQL statement through engine. /// /// Column names live once at the result-set level. Individual rows only own /// values, which keeps the public API row-oriented without copying schema /// metadata into every row. #[derive(Debug, Clone, PartialEq)] pub struct ExecuteResult { columns: Vec, rows: Vec, rows_affected: u64, notices: Vec, } impl ExecuteResult { fn from_sql_query_result(result: SqlQueryResult) -> Self { Self { columns: result.columns, rows: Vec::new(), rows_affected: 0, notices: result.notices, } .with_rows(result.rows) } pub fn from_rows_affected(rows_affected: u64) -> Self { Self { columns: Vec::new(), rows: Vec::new(), rows_affected, notices: Vec::new(), } } pub fn from_rows(columns: Vec, rows: Vec>) -> Self { Self { columns, rows: Vec::new(), rows_affected: 0, notices: Vec::new(), } .with_rows(rows) } fn with_rows(mut self, rows: Vec>) -> Self { let columns = Arc::<[String]>::from(self.columns.clone().into_boxed_slice()); self.rows = rows .into_iter() .map(|values| Row { columns: Arc::clone(&columns), values, }) .collect(); self } /// Returns the result-set column names in row value order. pub fn columns(&self) -> &[String] { &self.columns } /// Returns the owned rows. Use `iter()` for name-based access. pub fn rows(&self) -> &[Row] { &self.rows } /// Iterates rows with borrowed access to the shared column metadata. pub fn iter(&self) -> impl Iterator> { self.rows.iter().map(|row| RowRef { columns: self.columns.as_slice(), values: row.values.as_slice(), }) } /// Returns the number of rows in this result set. pub fn len(&self) -> usize { self.rows.len() } /// Returns true when this result set has no rows. pub fn is_empty(&self) -> bool { self.rows.is_empty() } /// Returns the number of rows affected by a mutation statement. pub fn rows_affected(&self) -> u64 { self.rows_affected } /// Returns non-fatal diagnostics produced while executing the statement. pub fn notices(&self) -> &[LixNotice] { &self.notices } /// Looks up the value for `column_name` on an owned row from this set. pub fn get<'a>(&self, row: &'a Row, column_name: &str) -> Option<&'a Value> { let index = self.column_index(column_name)?; row.get_index(index) } /// Returns the index for a column name. pub fn column_index(&self, column_name: &str) -> Option { self.columns.iter().position(|column| column == column_name) } } /// One owned row returned by a query. #[derive(Debug, Clone, PartialEq)] pub struct Row { columns: Arc<[String]>, values: Vec, } impl Row { /// Returns the values in result-set column order. pub fn values(&self) -> &[Value] { &self.values } /// Returns the value at `index`. 
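// Illustrative sketch (hypothetical usage; the `todo` table and its columns are
// made up): typed, name-based access over an owned row.
//
//     let result = session.execute("SELECT title, done FROM todo", &[]).await?;
//     for row in result.rows() {
//         let title: String = row.get("title")?;
//         let done: bool = row.get("done")?;
//     }
//
// `ExecuteResult::iter()` additionally yields `RowRef` views for zero-copy,
// `Option`-based lookups without per-row column metadata.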
pub fn get_index(&self, index: usize) -> Option<&Value> { self.values.get(index) } /// Returns the raw value for `column_name`, or an error when the column is absent. pub fn value(&self, column_name: &str) -> Result<&Value, LixError> { let index = self.column_index(column_name)?; self.values.get(index).ok_or_else(|| { LixError::new( LixError::CODE_COLUMN_NOT_FOUND, format!( "column '{}' points past row width {}; available columns: {}", column_name, self.values.len(), self.available_columns() ), ) }) } /// Converts the named column to a native Rust value. pub fn get(&self, column_name: &str) -> Result where T: TryFromValue, { T::try_from_value(self.value(column_name)?) } fn column_index(&self, column_name: &str) -> Result { self.columns .iter() .position(|column| column == column_name) .ok_or_else(|| { LixError::new( LixError::CODE_COLUMN_NOT_FOUND, format!( "column '{}' does not exist; available columns: {}", column_name, self.available_columns() ), ) }) } fn available_columns(&self) -> String { if self.columns.is_empty() { "".to_string() } else { self.columns.join(", ") } } } pub trait TryFromValue: Sized { fn try_from_value(value: &Value) -> Result; } impl TryFromValue for Value { fn try_from_value(value: &Value) -> Result { Ok(value.clone()) } } impl TryFromValue for String { fn try_from_value(value: &Value) -> Result { match value { Value::Text(value) => Ok(value.clone()), other => Err(value_type_error("text", other)), } } } impl TryFromValue for bool { fn try_from_value(value: &Value) -> Result { match value { Value::Boolean(value) => Ok(*value), other => Err(value_type_error("boolean", other)), } } } impl TryFromValue for i64 { fn try_from_value(value: &Value) -> Result { match value { Value::Integer(value) => Ok(*value), other => Err(value_type_error("integer", other)), } } } impl TryFromValue for f64 { fn try_from_value(value: &Value) -> Result { match value { Value::Real(value) => Ok(*value), other => Err(value_type_error("real", other)), } } } impl TryFromValue for serde_json::Value { fn try_from_value(value: &Value) -> Result { match value { Value::Json(value) => Ok(value.clone()), other => Err(value_type_error("json", other)), } } } impl TryFromValue for Vec { fn try_from_value(value: &Value) -> Result { match value { Value::Blob(value) => Ok(value.clone()), other => Err(value_type_error("blob", other)), } } } fn value_type_error(expected: &str, actual: &Value) -> LixError { LixError::new( "LIX_ERROR_VALUE_TYPE", format!("expected {expected} value, got {actual:?}"), ) } /// Zero-copy row view with access to the result-set column names. /// /// This is the ergonomic path for callers that want `row.get("column")` /// without storing column metadata on every owned row. #[derive(Debug, Clone, Copy)] pub struct RowRef<'a> { columns: &'a [String], values: &'a [Value], } impl RowRef<'_> { /// Returns the result-set column names in row value order. pub fn columns(&self) -> &[String] { self.columns } /// Returns the row values in result-set column order. pub fn values(&self) -> &[Value] { self.values } /// Returns the value for `column_name`. pub fn get(&self, column_name: &str) -> Option<&Value> { let index = self .columns .iter() .position(|column| column == column_name)?; self.values.get(index) } /// Returns the value at `index`. pub fn get_index(&self, index: usize) -> Option<&Value> { self.values.get(index) } } impl SessionContext { /// Executes one DataFusion SQL statement against this Lix session. /// /// The SQL dialect is DataFusion SQL, not SQLite SQL. 
Positional /// placeholders use `$1`, `$2`, and so on. SQLite-specific catalog tables /// and transaction statements such as `sqlite_master`, `BEGIN`, and /// `COMMIT` are not part of this contract; use `information_schema` for /// catalog inspection. Lix owns transaction boundaries for each statement. pub async fn execute(&self, sql: &str, params: &[Value]) -> Result { self.ensure_open()?; let kind = sql2::classify_statement(sql)?; if kind == sql2::SqlStatementKind::Write { let sql = sql.to_string(); let sql_for_error = sql.clone(); let params = params.to_vec(); return self .with_write_transaction(|transaction| { Box::pin(async move { // Re-plan against the transaction-backed write // session so provider hooks read and stage through the // transaction-owned SQL write context. let tx_plan = sql2::create_write_logical_plan(transaction, &sql).await?; let result = sql2::execute_logical_plan(tx_plan, ¶ms).await?; let affected_rows = affected_rows_from_query_result(result)?; Ok(ExecuteResult::from_rows_affected(affected_rows)) }) }) .await .map_err(|error| normalize_sql_surface_error(error, &sql_for_error)); } let read_scope = StorageReadScope::new(self.storage.begin_read_transaction().await?); let read_result = async { let mut read_store = read_scope.store(); let live_state: Arc = Arc::new(self.live_state.reader(read_store.clone())); let runtime_functions = FunctionContext::prepare(live_state.as_ref()).await?; let functions = runtime_functions.provider(); let active_version_id = self.active_version_id_from_reader(&mut read_store).await?; let visible_schemas = self .catalog_context .schema_jsons_for_sql_read_planning(live_state.as_ref(), &active_version_id) .await?; let ctx = SessionSqlExecutionContext { active_version_id: &active_version_id, read_store, live_state: Arc::clone(&self.live_state), binary_cas: Arc::clone(&self.binary_cas), commit_store: Arc::clone(&self.commit_store), version_ctx: Arc::clone(&self.version_ctx), visible_schemas, functions: functions.clone(), }; let plan = sql2::create_logical_plan(&ctx, sql).await?; let result = sql2::execute_logical_plan(plan, params).await?; drop(ctx); drop(live_state); Ok::<_, LixError>((runtime_functions, result)) }; let (runtime_functions, result) = match read_result.await { Ok(result) => { read_scope.rollback().await?; result } Err(error) => { let _ = read_scope.rollback().await; return Err(normalize_sql_surface_error(error, sql)); } }; self.persist_runtime_functions_if_needed(&runtime_functions) .await?; Ok(ExecuteResult::from_sql_query_result(result)) } /// Persists execution-scoped runtime function state after a successful read. /// /// Reads do not otherwise own a write transaction, but SQL functions such as /// `lix_uuid_v7()` can still advance runtime state. Persisting happens only /// after successful execution so failed reads do not consume durable /// sequence state. 
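// Illustrative example (hypothetical statement; exact SQL shape is only for
// illustration): a read such as
//
//     session.execute("SELECT lix_uuid_v7() AS id", &[]).await?;
//
// never opens a write transaction on its own, yet `lix_uuid_v7()` advances
// runtime function state, which the helper below persists after success.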
async fn persist_runtime_functions_if_needed( &self, runtime_functions: &FunctionContext, ) -> Result<(), LixError> { let mut transaction = self.storage.begin_write_transaction().await?; let mut writes = StorageWriteSet::new(); runtime_functions .stage_persist_if_needed(&mut writes) .await?; if !writes.is_empty() { writes.apply(&mut transaction.as_mut()).await?; } transaction.commit().await } } fn normalize_sql_surface_error(error: LixError, sql: &str) -> LixError { if error.code.starts_with("LIX_ERROR_PATH_") && sql_uses_public_filesystem_path_surface(sql) { return LixError { code: LixError::CODE_INVALID_PARAM.to_string(), ..error }; } if error.code == LixError::CODE_INVALID_JSON_PATH && error .message .to_ascii_lowercase() .contains("uses variadic path segments") { return LixError { code: LixError::CODE_INVALID_PARAM.to_string(), ..error }; } if error.code == LixError::CODE_FOREIGN_KEY { let lower = error.message.to_ascii_lowercase(); if lower.contains("schema 'lix_version_ref'") && lower.contains("target 'lix_commit.") { return LixError { code: LixError::CODE_VERSION_NOT_FOUND.to_string(), ..error }; } } error } fn sql_uses_public_filesystem_path_surface(sql: &str) -> bool { let lower = sql.to_ascii_lowercase(); (lower.contains("lix_file") || lower.contains("lix_directory")) && lower.contains("path") } fn affected_rows_from_query_result(result: SqlQueryResult) -> Result { let Some(first_row) = result.rows.first() else { return Ok(0); }; let Some(first_value) = first_row.first() else { return Ok(0); }; match first_value { Value::Integer(value) if *value >= 0 => Ok(*value as u64), Value::Text(value) => value.parse::().map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("failed to parse affected row count from SQL result: {error}"), ) }), other => Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("expected affected row count, got {other:?}"), )), } } #[cfg(test)] mod tests { use super::*; #[test] fn row_get_converts_native_values_and_value_keeps_wrapper() { let result = ExecuteResult::from_rows( vec!["title".to_string(), "done".to_string()], vec![vec![Value::Text("Hello".to_string()), Value::Boolean(true)]], ); let row = &result.rows()[0]; assert_eq!(row.get::("title").unwrap(), "Hello"); assert!(row.get::("done").unwrap()); assert_eq!( row.value("title").unwrap(), &Value::Text("Hello".to_string()) ); } #[test] fn row_get_errors_on_missing_column_and_wrong_type() { let result = ExecuteResult::from_rows( vec!["title".to_string()], vec![vec![Value::Text("Hello".to_string())]], ); let row = &result.rows()[0]; let missing = row.get::("missing").unwrap_err(); assert_eq!(missing.code, LixError::CODE_COLUMN_NOT_FOUND); assert!(missing.message.contains("available columns: title")); let wrong_type = row.get::("title").unwrap_err(); assert_eq!(wrong_type.code, "LIX_ERROR_VALUE_TYPE"); } } ================================================ FILE: packages/engine/src/session/merge/analysis.rs ================================================ use crate::storage::StorageReader; use crate::tracked_state::{ plan_merge, TrackedStateDiff, TrackedStateDiffRequest, TrackedStateMergePlan, TrackedStateStoreReader, }; use crate::LixError; use super::conflicts::{conflicts_from_plan, MergeConflict}; use super::stats::{stats_from_diff, stats_from_plan, MergeStats}; #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(crate) enum MergeOutcome { AlreadyUpToDate, FastForward, MergeCommitted, } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct MergeCommits { pub(crate) base_commit_id: String, pub(crate) 
target_commit_id: String, pub(crate) source_commit_id: String, } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct MergeAnalysis { pub(crate) outcome: MergeOutcome, pub(crate) commits: MergeCommits, pub(crate) source_diff: TrackedStateDiff, pub(crate) target_diff: TrackedStateDiff, pub(crate) stats: MergeStats, pub(crate) conflicts: Vec, pub(crate) merge_plan: Option, } impl MergeAnalysis { pub(crate) fn merge_plan(&self) -> Option<&TrackedStateMergePlan> { self.merge_plan.as_ref() } } pub(crate) async fn analyze( reader: &mut TrackedStateStoreReader, commits: MergeCommits, ) -> Result where S: StorageReader, { let request = TrackedStateDiffRequest::default(); let source_diff = reader .diff_commits(&commits.base_commit_id, &commits.source_commit_id, &request) .await?; let target_diff = if commits.base_commit_id == commits.source_commit_id || commits.base_commit_id == commits.target_commit_id { TrackedStateDiff::default() } else { reader .diff_commits(&commits.base_commit_id, &commits.target_commit_id, &request) .await? }; let outcome = if commits.base_commit_id == commits.source_commit_id { MergeOutcome::AlreadyUpToDate } else if commits.base_commit_id == commits.target_commit_id { MergeOutcome::FastForward } else { MergeOutcome::MergeCommitted }; let merge_plan = if outcome == MergeOutcome::MergeCommitted { Some(plan_merge(&target_diff, &source_diff)?) } else { None }; let stats = match outcome { MergeOutcome::AlreadyUpToDate => MergeStats::default(), MergeOutcome::FastForward => stats_from_diff(&source_diff), MergeOutcome::MergeCommitted => merge_plan .as_ref() .map(|plan| stats_from_plan(plan, &source_diff)) .transpose()? .unwrap_or_default(), }; let conflicts = merge_plan .as_ref() .map(conflicts_from_plan) .transpose()? .unwrap_or_default(); Ok(MergeAnalysis { outcome, commits, source_diff, target_diff, stats, conflicts, merge_plan, }) } ================================================ FILE: packages/engine/src/session/merge/apply.rs ================================================ use crate::tracked_state::TrackedStateMergePlan; use crate::transaction::types::TransactionAdoptedChange; pub(crate) fn adopted_changes_from_merge_plan( plan: &TrackedStateMergePlan, target_version_id: &str, ) -> Vec { plan.patches .iter() .map(|patch| stage_adopted_change_from_patch(patch, target_version_id)) .collect() } fn stage_adopted_change_from_patch( patch: &crate::tracked_state::TrackedStateMergePatch, target_version_id: &str, ) -> TransactionAdoptedChange { TransactionAdoptedChange { version_id: target_version_id.to_string(), change_id: patch.change_id().to_string(), projected_row: patch.projected_row().clone(), } } ================================================ FILE: packages/engine/src/session/merge/conflicts.rs ================================================ use crate::tracked_state::{ TrackedStateDiffEntry, TrackedStateDiffKind, TrackedStateMergeConflict, TrackedStateMergePlan, }; use crate::LixError; use serde_json::Value as JsonValue; #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct MergeConflict { pub(crate) kind: MergeConflictKind, pub(crate) schema_key: String, pub(crate) entity_id: JsonValue, pub(crate) file_id: Option, pub(crate) target: MergeConflictSide, pub(crate) source: MergeConflictSide, } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(crate) enum MergeConflictKind { SameEntityChanged, } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct MergeConflictSide { pub(crate) kind: MergeConflictChangeKind, pub(crate) before_change_id: Option, pub(crate) 
after_change_id: Option, } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(crate) enum MergeConflictChangeKind { Added, Modified, Removed, } pub(crate) fn conflicts_from_plan( plan: &TrackedStateMergePlan, ) -> Result, LixError> { plan.conflicts.iter().map(conflict_from_tracked).collect() } fn conflict_from_tracked(conflict: &TrackedStateMergeConflict) -> Result { Ok(MergeConflict { kind: MergeConflictKind::SameEntityChanged, schema_key: conflict.identity.schema_key.clone(), entity_id: conflict.identity.entity_id.as_json_array_value()?, file_id: conflict.identity.file_id.clone(), target: conflict_side_from_diff_entry(&conflict.target), source: conflict_side_from_diff_entry(&conflict.source), }) } fn conflict_side_from_diff_entry(entry: &TrackedStateDiffEntry) -> MergeConflictSide { MergeConflictSide { kind: match entry.kind { TrackedStateDiffKind::Added => MergeConflictChangeKind::Added, TrackedStateDiffKind::Modified => MergeConflictChangeKind::Modified, TrackedStateDiffKind::Removed => MergeConflictChangeKind::Removed, }, before_change_id: entry.before.as_ref().map(|row| row.change_id.clone()), after_change_id: entry.after.as_ref().map(|row| row.change_id.clone()), } } ================================================ FILE: packages/engine/src/session/merge/mod.rs ================================================ mod analysis; mod apply; mod conflicts; mod stats; mod version; pub use version::{ MergeChangeStats, MergeConflict, MergeConflictChangeKind, MergeConflictKind, MergeConflictSide, MergeVersionOptions, MergeVersionOutcome, MergeVersionPreview, MergeVersionPreviewOptions, MergeVersionReceipt, }; ================================================ FILE: packages/engine/src/session/merge/stats.rs ================================================ use crate::tracked_state::{ TrackedStateDiff, TrackedStateDiffKind, TrackedStateMergePatch, TrackedStateMergePlan, }; use crate::LixError; #[derive(Debug, Clone, PartialEq, Eq, Default)] pub(crate) struct MergeStats { pub(crate) total: usize, pub(crate) added: usize, pub(crate) modified: usize, pub(crate) removed: usize, } pub(crate) fn stats_from_diff(diff: &TrackedStateDiff) -> MergeStats { let mut stats = MergeStats::default(); for entry in &diff.entries { stats.add(entry.kind); } stats } pub(crate) fn stats_from_plan( plan: &TrackedStateMergePlan, source_diff: &TrackedStateDiff, ) -> Result { let mut stats = MergeStats::default(); for patch in &plan.patches { let identity = patch_identity(patch); let Some(entry) = source_diff .entries .iter() .find(|entry| &entry.identity == identity) else { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "merge analysis could not find source diff entry for adopted schema '{}' entity '{}'", identity.schema_key, identity.entity_id.as_json_array_text()? ), )); }; stats.add(entry.kind); } Ok(stats) } impl MergeStats { fn add(&mut self, kind: TrackedStateDiffKind) { self.total += 1; match kind { TrackedStateDiffKind::Added => self.added += 1, TrackedStateDiffKind::Modified => self.modified += 1, TrackedStateDiffKind::Removed => self.removed += 1, } } } fn patch_identity( patch: &TrackedStateMergePatch, ) -> &crate::tracked_state::TrackedStateDiffIdentity { match patch { TrackedStateMergePatch::Adopt { identity, .. 
} => identity, } } ================================================ FILE: packages/engine/src/session/merge/version.rs ================================================ use serde_json::{json, Value as JsonValue}; use crate::transaction::types::TransactionWrite; use crate::version::{VersionLifecycle, VersionOperation, VersionReferenceRole}; use crate::LixError; use super::analysis::{analyze, MergeCommits, MergeOutcome}; use super::apply::adopted_changes_from_merge_plan; use super::conflicts::{ MergeConflict as AnalysisMergeConflict, MergeConflictChangeKind as AnalysisMergeConflictChangeKind, MergeConflictKind as AnalysisMergeConflictKind, MergeConflictSide as AnalysisMergeConflictSide, }; use super::stats::MergeStats; use crate::session::context::SessionContext; /// Options for merging another version into this session's active version. #[derive(Debug, Clone, PartialEq, Eq)] pub struct MergeVersionOptions { /// Version whose changes should be merged into the active session version. pub source_version_id: String, } /// Options for previewing a merge from another version into this session's /// active version. #[derive(Debug, Clone, PartialEq, Eq)] pub struct MergeVersionPreviewOptions { /// Version whose changes would be merged into the active session version. pub source_version_id: String, } /// Receipt returned after merging a version. #[derive(Debug, Clone, PartialEq, Eq)] pub struct MergeVersionReceipt { pub outcome: MergeVersionOutcome, pub target_version_id: String, pub source_version_id: String, pub base_commit_id: String, pub target_head_before_commit_id: String, pub source_head_before_commit_id: String, pub target_head_after_commit_id: String, pub created_merge_commit_id: Option, pub change_stats: MergeChangeStats, } #[derive(Debug, Clone, PartialEq, Eq, Default)] pub struct MergeChangeStats { pub total: usize, pub added: usize, pub modified: usize, pub removed: usize, } #[derive(Debug, Clone, PartialEq, Eq)] pub struct MergeVersionPreview { pub outcome: MergeVersionOutcome, pub target_version_id: String, pub source_version_id: String, pub base_commit_id: String, pub target_head_commit_id: String, pub source_head_commit_id: String, pub change_stats: MergeChangeStats, pub conflicts: Vec, } #[derive(Debug, Clone, PartialEq, Eq)] pub struct MergeConflict { pub kind: MergeConflictKind, pub schema_key: String, pub entity_id: JsonValue, pub file_id: Option, pub target: MergeConflictSide, pub source: MergeConflictSide, } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub enum MergeConflictKind { SameEntityChanged, } #[derive(Debug, Clone, PartialEq, Eq)] pub struct MergeConflictSide { pub kind: MergeConflictChangeKind, pub before_change_id: Option, pub after_change_id: Option, } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub enum MergeConflictChangeKind { Added, Modified, Removed, } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub enum MergeVersionOutcome { AlreadyUpToDate, FastForward, MergeCommitted, } impl SessionContext { /// Previews merging `source_version_id` into this session's active version /// without advancing refs, staging changes, or creating commits. 
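    //
    // Editor's hedged sketch (not part of the original source): a caller that
    // previews first and only merges when the preview reports no conflicts.
    // The `session` binding and the source version id are assumptions for the
    // example only; the option and receipt types are the ones defined above.
    //
    //     let preview = session
    //         .merge_version_preview(MergeVersionPreviewOptions {
    //             source_version_id: "source-version".to_string(),
    //         })
    //         .await?;
    //     if preview.conflicts.is_empty() {
    //         let receipt = session
    //             .merge_version(MergeVersionOptions {
    //                 source_version_id: "source-version".to_string(),
    //             })
    //             .await?;
    //         assert_eq!(receipt.source_version_id, "source-version");
    //     }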
pub async fn merge_version_preview( &self, options: MergeVersionPreviewOptions, ) -> Result { let source_version_id = options.source_version_id; self.with_write_transaction(|transaction| { Box::pin(async move { let active_version_id = transaction.active_version_id().to_string(); if source_version_id == active_version_id { return Err(LixError::invalid_self_merge(active_version_id)); } let (target_head, source_head) = { let reader = transaction.version_ref_reader(); let lifecycle = VersionLifecycle::new(&reader); let target_head = lifecycle .require_existing_commit_id( &active_version_id, VersionOperation::MergeVersionPreview, VersionReferenceRole::Target, ) .await?; let source_head = lifecycle .require_existing_commit_id( &source_version_id, VersionOperation::MergeVersionPreview, VersionReferenceRole::Source, ) .await?; (target_head, source_head) }; let merge_base = { let mut reader = transaction.commit_graph_reader(); reader.merge_base(&target_head, &source_head).await? }; let analysis = { let mut reader = transaction.tracked_state_reader(); analyze( &mut reader, MergeCommits { base_commit_id: merge_base.commit_id, target_commit_id: target_head, source_commit_id: source_head, }, ) .await? }; Ok(preview_from_analysis( &active_version_id, &source_version_id, &analysis, )) }) }) .await } /// Merges `source_version_id` into this session's active version. /// /// The generated target commit keeps the previous target head as its first /// parent and records the source head as an additional parent, so the /// commit graph preserves branch ancestry while tracked-state storage can /// build the new root by applying source effects onto the target root. pub async fn merge_version( &self, options: MergeVersionOptions, ) -> Result { let source_version_id = options.source_version_id; self.with_write_transaction(|transaction| { Box::pin(async move { let active_version_id = transaction.active_version_id().to_string(); if source_version_id == active_version_id { return Err(LixError::invalid_self_merge(active_version_id)); } let (target_head, source_head) = { let reader = transaction.version_ref_reader(); let lifecycle = VersionLifecycle::new(&reader); let target_head = lifecycle .require_existing_commit_id( &active_version_id, VersionOperation::MergeVersion, VersionReferenceRole::Target, ) .await?; let source_head = lifecycle .require_existing_commit_id( &source_version_id, VersionOperation::MergeVersion, VersionReferenceRole::Source, ) .await?; (target_head, source_head) }; let merge_base = { let mut reader = transaction.commit_graph_reader(); reader.merge_base(&target_head, &source_head).await? }; let base_commit_id = merge_base.commit_id; let analysis = { let mut reader = transaction.tracked_state_reader(); analyze( &mut reader, MergeCommits { base_commit_id, target_commit_id: target_head, source_commit_id: source_head, }, ) .await? 
}; if analysis.outcome == MergeOutcome::AlreadyUpToDate { return Ok(MergeVersionReceipt { outcome: MergeVersionOutcome::AlreadyUpToDate, target_version_id: active_version_id, source_version_id, base_commit_id: analysis.commits.base_commit_id, target_head_after_commit_id: analysis.commits.target_commit_id.clone(), target_head_before_commit_id: analysis.commits.target_commit_id, source_head_before_commit_id: analysis.commits.source_commit_id, created_merge_commit_id: None, change_stats: merge_change_stats_from_analysis(&analysis.stats), }); } if analysis.outcome == MergeOutcome::FastForward { transaction .advance_version_ref(&active_version_id, &analysis.commits.source_commit_id) .await?; return Ok(MergeVersionReceipt { outcome: MergeVersionOutcome::FastForward, target_version_id: active_version_id, source_version_id, base_commit_id: analysis.commits.base_commit_id, target_head_before_commit_id: analysis.commits.target_commit_id, source_head_before_commit_id: analysis.commits.source_commit_id.clone(), target_head_after_commit_id: analysis.commits.source_commit_id, created_merge_commit_id: None, change_stats: merge_change_stats_from_analysis(&analysis.stats), }); } let merge_plan = analysis .merge_plan() .expect("merge analysis should include a plan for mergeCommitted"); if !analysis.conflicts.is_empty() { return Err(merge_conflict_error( &analysis .conflicts .iter() .map(merge_conflict_from_analysis) .collect::>(), )?); } let adopted_changes = adopted_changes_from_merge_plan(merge_plan, &active_version_id); if adopted_changes.is_empty() { let created_merge_commit_id = transaction.stage_empty_commit(active_version_id.clone())?; transaction.add_commit_parent( active_version_id.clone(), analysis.commits.source_commit_id.clone(), )?; return Ok(MergeVersionReceipt { outcome: MergeVersionOutcome::MergeCommitted, target_version_id: active_version_id, source_version_id, base_commit_id: analysis.commits.base_commit_id, target_head_after_commit_id: created_merge_commit_id.clone(), target_head_before_commit_id: analysis.commits.target_commit_id, source_head_before_commit_id: analysis.commits.source_commit_id, created_merge_commit_id: Some(created_merge_commit_id), change_stats: merge_change_stats_from_analysis(&analysis.stats), }); } transaction .stage_write(TransactionWrite::AdoptedChanges { changes: adopted_changes, }) .await?; let created_merge_commit_id = transaction .staged_commit_id(&active_version_id)? 
.ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "merge_version staged tracked rows without a commit id", ) })?; transaction.add_commit_parent( active_version_id.clone(), analysis.commits.source_commit_id.clone(), )?; Ok(MergeVersionReceipt { outcome: MergeVersionOutcome::MergeCommitted, target_version_id: active_version_id, source_version_id, base_commit_id: analysis.commits.base_commit_id, target_head_before_commit_id: analysis.commits.target_commit_id, source_head_before_commit_id: analysis.commits.source_commit_id, created_merge_commit_id: Some(created_merge_commit_id.clone()), target_head_after_commit_id: created_merge_commit_id, change_stats: merge_change_stats_from_analysis(&analysis.stats), }) }) }) .await } } fn preview_from_analysis( target_version_id: &str, source_version_id: &str, analysis: &super::analysis::MergeAnalysis, ) -> MergeVersionPreview { MergeVersionPreview { outcome: merge_version_outcome_from_analysis(analysis.outcome), target_version_id: target_version_id.to_string(), source_version_id: source_version_id.to_string(), base_commit_id: analysis.commits.base_commit_id.clone(), target_head_commit_id: analysis.commits.target_commit_id.clone(), source_head_commit_id: analysis.commits.source_commit_id.clone(), change_stats: merge_change_stats_from_analysis(&analysis.stats), conflicts: analysis .conflicts .iter() .map(merge_conflict_from_analysis) .collect(), } } fn merge_version_outcome_from_analysis(outcome: MergeOutcome) -> MergeVersionOutcome { match outcome { MergeOutcome::AlreadyUpToDate => MergeVersionOutcome::AlreadyUpToDate, MergeOutcome::FastForward => MergeVersionOutcome::FastForward, MergeOutcome::MergeCommitted => MergeVersionOutcome::MergeCommitted, } } fn merge_change_stats_from_analysis(stats: &MergeStats) -> MergeChangeStats { MergeChangeStats { total: stats.total, added: stats.added, modified: stats.modified, removed: stats.removed, } } fn merge_conflict_from_analysis(conflict: &AnalysisMergeConflict) -> MergeConflict { MergeConflict { kind: match conflict.kind { AnalysisMergeConflictKind::SameEntityChanged => MergeConflictKind::SameEntityChanged, }, schema_key: conflict.schema_key.clone(), entity_id: conflict.entity_id.clone(), file_id: conflict.file_id.clone(), target: merge_conflict_side_from_analysis(&conflict.target), source: merge_conflict_side_from_analysis(&conflict.source), } } fn merge_conflict_side_from_analysis(side: &AnalysisMergeConflictSide) -> MergeConflictSide { MergeConflictSide { kind: match side.kind { AnalysisMergeConflictChangeKind::Added => MergeConflictChangeKind::Added, AnalysisMergeConflictChangeKind::Modified => MergeConflictChangeKind::Modified, AnalysisMergeConflictChangeKind::Removed => MergeConflictChangeKind::Removed, }, before_change_id: side.before_change_id.clone(), after_change_id: side.after_change_id.clone(), } } fn merge_conflict_error(conflicts: &[MergeConflict]) -> Result { let conflict_count = conflicts.len(); Ok(LixError::new( LixError::CODE_MERGE_CONFLICT, format!("merge_version found {conflict_count} tracked-state conflict(s)"), ) .with_hint("Resolve the conflicting entities in the target version, then retry the merge.") .with_details(json!({ "conflicts": conflicts.iter() .map(merge_conflict_details) .collect::>(), }))) } fn merge_conflict_details(conflict: &MergeConflict) -> serde_json::Value { json!({ "kind": match conflict.kind { MergeConflictKind::SameEntityChanged => "sameEntityChanged", }, "schemaKey": conflict.schema_key, "entityId": conflict.entity_id, "fileId": conflict.file_id, "target": 
merge_conflict_side_details(&conflict.target), "source": merge_conflict_side_details(&conflict.source), }) } fn merge_conflict_side_details(side: &MergeConflictSide) -> serde_json::Value { json!({ "kind": match side.kind { MergeConflictChangeKind::Added => "added", MergeConflictChangeKind::Modified => "modified", MergeConflictChangeKind::Removed => "removed", }, "beforeChangeId": side.before_change_id, "afterChangeId": side.after_change_id, }) } ================================================ FILE: packages/engine/src/session/mod.rs ================================================ //! Engine session boundary. //! //! Transaction invariant: //! any engine operation that may write must enter through //! `SessionContext::with_write_transaction`. Reads that influence writes are //! only available from the transaction capability. Session APIs must not //! open `Transaction` directly or use session-level read helpers inside write //! flows. mod context; mod create_version; mod execute; mod merge; #[cfg(feature = "storage-benches")] pub mod optimization9_sql2_bench; mod switch_version; pub use context::SessionContext; pub(crate) use context::{SessionMode, WORKSPACE_VERSION_KEY}; pub use create_version::{CreateVersionOptions, CreateVersionReceipt}; pub use execute::{ExecuteResult, Row, RowRef, TryFromValue}; pub use merge::{ MergeChangeStats, MergeConflict, MergeConflictChangeKind, MergeConflictKind, MergeConflictSide, MergeVersionOptions, MergeVersionOutcome, MergeVersionPreview, MergeVersionPreviewOptions, MergeVersionReceipt, }; pub use switch_version::{SwitchVersionOptions, SwitchVersionReceipt}; ================================================ FILE: packages/engine/src/session/optimization9_sql2_bench.rs ================================================ use crate::functions::FunctionContext; use crate::session::context::{SessionContext, SessionSqlExecutionContext}; use crate::sql2::{self, SqlLogicalPlan}; use crate::storage::StorageReadScope; use crate::transaction::open_transaction; use crate::{LixError, SqlQueryResult, Value}; /// Opaque read plan used by the Optimization 9 SQL2 diagnostic benchmark. /// /// This module is gated behind `storage-benches` and exists only to split SQL2 /// planning cost from SQL2 execution cost without widening the normal session /// API. 
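//
// Editor's hedged sketch (not part of the original source): the intended
// split between planning and execution using the helpers in this module. The
// `session` binding and the SQL text are assumptions for the example only.
//
//     let prepared = prepare_read_plan(&session, "SELECT 1").await?;
//     // planning cost is measured above this line, execution cost below it
//     let result = execute_read_plan(prepared, &[]).await?;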
pub struct PreparedReadPlan { plan: SqlLogicalPlan, read_scope: StorageReadScope>, runtime_functions: FunctionContext, } pub async fn plan_read_only(session: &SessionContext, sql: &str) -> Result<(), LixError> { let prepared = prepare_read_plan(session, sql).await?; drop(prepared.plan); drop(prepared.runtime_functions); prepared.read_scope.rollback().await } pub async fn plan_write_only(session: &SessionContext, sql: &str) -> Result<(), LixError> { session.ensure_open()?; let opened = open_transaction( &session.mode, session.storage.clone(), std::sync::Arc::clone(&session.live_state), std::sync::Arc::clone(&session.tracked_state), std::sync::Arc::clone(&session.binary_cas), std::sync::Arc::clone(&session.commit_store), std::sync::Arc::clone(&session.version_ctx), std::sync::Arc::clone(&session.catalog_context), ) .await?; let mut transaction = opened.transaction; let runtime_functions = opened.runtime_functions; let plan = sql2::create_write_logical_plan(&mut transaction, sql).await?; drop(plan); drop(runtime_functions); transaction.rollback().await } pub async fn prepare_read_plan( session: &SessionContext, sql: &str, ) -> Result { session.ensure_open()?; let read_scope = StorageReadScope::new(session.storage.begin_read_transaction().await?); let mut read_store = read_scope.store(); let live_state: std::sync::Arc = std::sync::Arc::new(session.live_state.reader(read_store.clone())); let runtime_functions = FunctionContext::prepare(live_state.as_ref()).await?; let functions = runtime_functions.provider(); let active_version_id = session .active_version_id_from_reader(&mut read_store) .await?; let visible_schemas = session .catalog_context .schema_jsons_for_sql_read_planning(live_state.as_ref(), &active_version_id) .await?; let ctx = SessionSqlExecutionContext { active_version_id: &active_version_id, read_store, live_state: std::sync::Arc::clone(&session.live_state), binary_cas: std::sync::Arc::clone(&session.binary_cas), commit_store: std::sync::Arc::clone(&session.commit_store), version_ctx: std::sync::Arc::clone(&session.version_ctx), visible_schemas, functions: functions.clone(), }; let plan = sql2::create_logical_plan(&ctx, sql).await?; drop(ctx); drop(live_state); Ok(PreparedReadPlan { plan, read_scope, runtime_functions, }) } pub async fn execute_read_plan( prepared: PreparedReadPlan, params: &[Value], ) -> Result { let PreparedReadPlan { plan, read_scope, runtime_functions, } = prepared; let result = sql2::execute_logical_plan(plan, params).await; read_scope.rollback().await?; drop(runtime_functions); result } ================================================ FILE: packages/engine/src/session/switch_version.rs ================================================ use std::sync::Arc; use serde_json::json; use crate::transaction::types::{TransactionJson, TransactionWriteRow}; use crate::version::{VersionLifecycle, VersionOperation, VersionReferenceRole}; use crate::LixError; use crate::GLOBAL_VERSION_ID; use super::context::{SessionContext, SessionMode, WORKSPACE_VERSION_KEY}; const KEY_VALUE_SCHEMA_KEY: &str = "lix_key_value"; /// Options for switching a session to another version. #[derive(Debug, Clone, PartialEq, Eq)] pub struct SwitchVersionOptions { pub version_id: String, } /// Receipt returned after switching to another version. #[derive(Debug, Clone, PartialEq, Eq)] pub struct SwitchVersionReceipt { pub version_id: String, } impl SessionContext { /// Switches the session's active version selector. /// /// Pinned sessions switch in memory and return a new pinned session. 
/// Workspace sessions update the shared workspace selector so other /// workspace sessions observe the new active version on their next use. pub async fn switch_version( &self, options: SwitchVersionOptions, ) -> Result<(SessionContext, SwitchVersionReceipt), LixError> { let version_id = options.version_id; let receipt_version_id = version_id.clone(); let current_mode = self.mode.clone(); let next_mode = self .with_write_transaction(|transaction| { Box::pin(async move { { let reader = transaction.version_ref_reader(); VersionLifecycle::new(&reader) .require_existing_commit_id( &version_id, VersionOperation::SwitchVersion, VersionReferenceRole::Target, ) .await? }; match current_mode { SessionMode::Pinned { .. } => Ok(SessionMode::Pinned { version_id: version_id.clone(), }), SessionMode::Workspace => { transaction .stage_rows(vec![workspace_version_stage_row(&version_id)?]) .await?; Ok(SessionMode::Workspace) } } }) }) .await?; let session = SessionContext::new_with_closed( next_mode, self.storage.clone(), Arc::clone(&self.live_state), Arc::clone(&self.tracked_state), Arc::clone(&self.binary_cas), Arc::clone(&self.commit_store), Arc::clone(&self.version_ctx), Arc::clone(&self.catalog_context), self.closed_flag(), ); Ok(( session, SwitchVersionReceipt { version_id: receipt_version_id, }, )) } } fn workspace_version_stage_row(version_id: &str) -> Result { Ok(TransactionWriteRow { entity_id: Some(crate::entity_identity::EntityIdentity::single( WORKSPACE_VERSION_KEY, )), schema_key: KEY_VALUE_SCHEMA_KEY.to_string(), file_id: None, snapshot: Some(TransactionJson::from_value_unchecked(json!({ "key": WORKSPACE_VERSION_KEY, "value": version_id, }))), metadata: None, origin: None, created_at: None, updated_at: None, global: true, change_id: None, commit_id: None, untracked: true, version_id: GLOBAL_VERSION_ID.to_string(), }) } ================================================ FILE: packages/engine/src/sql2/change_provider.rs ================================================ use std::any::Any; use std::sync::Arc; use async_trait::async_trait; use datafusion::arrow::array::{ArrayRef, StringArray}; use datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef}; use datafusion::arrow::record_batch::RecordBatch; use datafusion::catalog::{Session, TableProvider}; use datafusion::common::{DataFusionError, Result}; use datafusion::datasource::TableType; use datafusion::execution::TaskContext; use datafusion::logical_expr::{Expr, TableProviderFilterPushDown}; use datafusion::physical_expr::EquivalenceProperties; use datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties}; use datafusion::physical_plan::stream::RecordBatchStreamAdapter; use datafusion::physical_plan::{ DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream, }; use futures_util::stream; use crate::commit_store::ChangeScanRequest; use crate::serialize_row_metadata; use crate::LixError; use super::record_batch::record_batch_with_row_count; use super::result_metadata::json_field; use super::SqlCommitStoreQuerySource; use crate::commit_store::{materialize_change, MaterializedChange}; pub(crate) async fn register_lix_change_provider( session: &datafusion::prelude::SessionContext, query_source: SqlCommitStoreQuerySource, ) -> Result<(), LixError> { session .register_table("lix_change", Arc::new(LixChangeProvider::new(query_source))) .map_err(datafusion_error_to_lix_error)?; Ok(()) } struct LixChangeProvider { schema: SchemaRef, query_source: SqlCommitStoreQuerySource, } impl 
std::fmt::Debug for LixChangeProvider { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixChangeProvider").finish() } } impl LixChangeProvider { fn new(query_source: SqlCommitStoreQuerySource) -> Self { Self { schema: lix_change_schema(), query_source, } } } #[async_trait] impl TableProvider for LixChangeProvider { fn as_any(&self) -> &dyn Any { self } fn schema(&self) -> SchemaRef { Arc::clone(&self.schema) } fn table_type(&self) -> TableType { TableType::Base } fn supports_filters_pushdown( &self, filters: &[&Expr], ) -> Result> { Ok(filters .iter() .map(|_| TableProviderFilterPushDown::Unsupported) .collect()) } async fn scan( &self, _state: &dyn Session, projection: Option<&Vec>, _filters: &[Expr], limit: Option, ) -> Result> { Ok(Arc::new(LixChangeScanExec::new( self.query_source.clone(), projected_schema(&self.schema, projection), projection.cloned(), limit, ))) } } struct LixChangeScanExec { query_source: SqlCommitStoreQuerySource, schema: SchemaRef, projection: Option>, limit: Option, properties: Arc, } impl std::fmt::Debug for LixChangeScanExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixChangeScanExec").finish() } } impl LixChangeScanExec { fn new( query_source: SqlCommitStoreQuerySource, schema: SchemaRef, projection: Option>, limit: Option, ) -> Self { let properties = PlanProperties::new( EquivalenceProperties::new(schema.clone()), Partitioning::UnknownPartitioning(1), EmissionType::Incremental, Boundedness::Bounded, ); Self { query_source, schema, projection, limit, properties: Arc::new(properties), } } } impl DisplayAs for LixChangeScanExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!(f, "LixChangeScanExec") } DisplayFormatType::TreeRender => write!(f, "LixChangeScanExec"), } } } impl ExecutionPlan for LixChangeScanExec { fn name(&self) -> &str { "LixChangeScanExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "LixChangeScanExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "LixChangeScanExec only exposes one partition, got {partition}" ))); } let query_source = self.query_source.clone(); let projection = change_projection_for_scan(self.projection.as_ref()); let limit = self.limit; let schema = Arc::clone(&self.schema); let stream = stream::once(async move { let mut json_reader = query_source.json_reader; let canonical_changes = query_source .commit_store_reader .scan_changes(&ChangeScanRequest { limit }) .await .map_err(lix_error_to_datafusion_error)?; let mut changes = Vec::with_capacity(canonical_changes.len()); for change in canonical_changes { changes.push( materialize_change(&mut json_reader, change) .await .map_err(lix_error_to_datafusion_error)?, ); } change_record_batch(&projection, &changes) }); Ok(Box::pin(RecordBatchStreamAdapter::new(schema, stream))) } } #[derive(Debug, Clone, Copy)] enum ChangeColumn { Id, EntityId, SchemaKey, FileId, Metadata, CreatedAt, SnapshotContent, } fn lix_change_schema() -> SchemaRef { Arc::new(Schema::new(vec![ Field::new("id", DataType::Utf8, false), 
json_field("entity_id", false), Field::new("schema_key", DataType::Utf8, false), Field::new("file_id", DataType::Utf8, true), json_field("metadata", true), Field::new("created_at", DataType::Utf8, false), json_field("snapshot_content", true), ])) } fn change_projection_for_scan(projection: Option<&Vec>) -> Vec { let all_columns = vec![ ChangeColumn::Id, ChangeColumn::EntityId, ChangeColumn::SchemaKey, ChangeColumn::FileId, ChangeColumn::Metadata, ChangeColumn::CreatedAt, ChangeColumn::SnapshotContent, ]; projection.map_or(all_columns.clone(), |indices| { indices .iter() .filter_map(|index| all_columns.get(*index).copied()) .collect() }) } fn projected_schema(schema: &SchemaRef, projection: Option<&Vec>) -> SchemaRef { match projection { Some(projection) => Arc::new(schema.project(projection).expect("projection is valid")), None => Arc::clone(schema), } } fn change_record_batch( projection: &[ChangeColumn], changes: &[MaterializedChange], ) -> Result { let arrays = projection .iter() .map(|column| match column { ChangeColumn::Id => string_array(changes.iter().map(|row| Some(row.id.as_str()))), ChangeColumn::EntityId => Arc::new(StringArray::from( changes .iter() .map(|row| { Some( row.entity_id .as_json_array_text() .expect("canonical change entity identity should project"), ) }) .collect::>(), )) as ArrayRef, ChangeColumn::SchemaKey => { string_array(changes.iter().map(|row| Some(row.schema_key.as_str()))) } ChangeColumn::FileId => string_array(changes.iter().map(|row| row.file_id.as_deref())), ChangeColumn::Metadata => Arc::new(StringArray::from( changes .iter() .map(|row| row.metadata.as_ref().map(serialize_row_metadata)) .collect::>(), )), ChangeColumn::CreatedAt => { string_array(changes.iter().map(|row| Some(row.created_at.as_str()))) } ChangeColumn::SnapshotContent => { string_array(changes.iter().map(|row| row.snapshot_content.as_deref())) } }) .collect::>(); record_batch_with_row_count(change_schema(projection), arrays, changes.len()).map_err(|error| { DataFusionError::Execution(format!("failed to build lix_change batch: {error}")) }) } fn change_schema(projection: &[ChangeColumn]) -> SchemaRef { Arc::new(Schema::new( projection .iter() .map(|column| match column { ChangeColumn::Id => Field::new("id", DataType::Utf8, false), ChangeColumn::EntityId => json_field("entity_id", false), ChangeColumn::SchemaKey => Field::new("schema_key", DataType::Utf8, false), ChangeColumn::FileId => Field::new("file_id", DataType::Utf8, true), ChangeColumn::Metadata => json_field("metadata", true), ChangeColumn::CreatedAt => Field::new("created_at", DataType::Utf8, false), ChangeColumn::SnapshotContent => json_field("snapshot_content", true), }) .collect::>(), )) } fn string_array<'a>(values: impl Iterator>) -> ArrayRef { Arc::new(StringArray::from(values.collect::>())) as ArrayRef } fn datafusion_error_to_lix_error(error: DataFusionError) -> LixError { super::error::datafusion_error_to_lix_error(error) } fn lix_error_to_datafusion_error(error: LixError) -> DataFusionError { super::error::lix_error_to_datafusion_error(error) } ================================================ FILE: packages/engine/src/sql2/classify.rs ================================================ use datafusion::sql::parser::Statement as DataFusionStatement; use datafusion::sql::sqlparser::ast::{ FromTable, ObjectName, Query, SetExpr, Statement as SqlStatement, TableFactor, TableObject, TableWithJoins, }; use datafusion::sql::sqlparser::dialect::GenericDialect; use datafusion::sql::sqlparser::parser::Parser; use crate::LixError; 
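// Editor's hedged sketch (not part of the original source): routing a single
// statement with the classifier below before choosing a read or write plan.
// The SQL text is illustrative only.
//
//     match classify_statement("SELECT * FROM lix_change")? {
//         SqlStatementKind::Read => { /* plan against a committed read scope */ }
//         SqlStatementKind::Write => { /* open a write transaction first */ }
//         SqlStatementKind::Other => { /* reject or handle outside sql2 */ }
//     }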
#[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(crate) enum SqlStatementKind { Read, Write, Other, } pub(crate) fn classify_statement(sql: &str) -> Result { let statements = parse_sql_statements(sql)?; let [statement] = statements.as_slice() else { return Ok(SqlStatementKind::Other); }; Ok(classify_ast_statement(statement)) } pub(crate) fn validate_supported_statement_ast(sql: &str) -> Result<(), LixError> { let statements = parse_sql_statements(sql)?; let [statement] = statements.as_slice() else { return Err(unsupported_sql_error( "Lix SQL only supports one statement per execute() call", )); }; validate_supported_ast_statement(statement) } pub(crate) fn validate_supported_datafusion_statement_ast( statement: &DataFusionStatement, ) -> Result<(), LixError> { match statement { DataFusionStatement::Statement(statement) => validate_supported_ast_statement(statement), DataFusionStatement::Explain(explain) => { validate_supported_datafusion_statement_ast(explain.statement.as_ref()) } _ => Err(unsupported_sql_error(format!( "SQL statement is not supported by Lix SQL: {statement}" ))), } } pub(crate) fn datafusion_statement_dml_target_table_names( statement: &DataFusionStatement, ) -> Vec { let mut targets = Vec::new(); collect_datafusion_statement_dml_target_table_names(statement, &mut targets); targets } fn parse_sql_statements(sql: &str) -> Result, LixError> { Parser::parse_sql(&GenericDialect {}, sql).map_err(|error| { LixError::new( LixError::CODE_PARSE_ERROR, format!("sql2 SQL parse error: {error}"), ) }) } fn collect_datafusion_statement_dml_target_table_names( statement: &DataFusionStatement, targets: &mut Vec, ) { match statement { DataFusionStatement::Statement(statement) => { collect_dml_target_table_names(statement, targets); } DataFusionStatement::Explain(explain) => { collect_datafusion_statement_dml_target_table_names( explain.statement.as_ref(), targets, ); } _ => {} } } fn collect_dml_target_table_names(statement: &SqlStatement, targets: &mut Vec) { match statement { SqlStatement::Insert(insert) => { if let TableObject::TableName(name) = &insert.table { if let Some(table_name) = object_name_table_part(name) { targets.push(table_name); } } } SqlStatement::Update(update) => { collect_table_with_joins_target(&update.table, targets); } SqlStatement::Delete(delete) => { let tables = match &delete.from { FromTable::WithFromKeyword(tables) | FromTable::WithoutKeyword(tables) => tables, }; for table in tables { collect_table_with_joins_target(table, targets); } } SqlStatement::Explain { statement, .. } => { collect_dml_target_table_names(statement.as_ref(), targets); } _ => {} } } fn collect_table_with_joins_target(table: &TableWithJoins, targets: &mut Vec) { if let TableFactor::Table { name, .. } = &table.relation { if let Some(table_name) = object_name_table_part(name) { targets.push(table_name); } } } fn object_name_table_part(name: &ObjectName) -> Option { name.0.last().and_then(|part| part.as_ident()).map(|ident| { if ident.quote_style.is_some() { ident.value.clone() } else { ident.value.to_ascii_lowercase() } }) } fn classify_ast_statement(statement: &SqlStatement) -> SqlStatementKind { match statement { SqlStatement::Insert(_) | SqlStatement::Update(_) | SqlStatement::Delete(_) => { SqlStatementKind::Write } SqlStatement::Query(_) => SqlStatementKind::Read, SqlStatement::Explain { statement, .. 
} => classify_ast_statement(statement.as_ref()), _ => SqlStatementKind::Other, } } fn validate_supported_ast_statement(statement: &SqlStatement) -> Result<(), LixError> { match statement { SqlStatement::Query(query) => validate_supported_query(query), SqlStatement::Insert(_) | SqlStatement::Update(_) | SqlStatement::Delete(_) => Ok(()), SqlStatement::Explain { statement, .. } => validate_supported_ast_statement(statement), _ => Err(unsupported_sql_error(format!( "SQL statement is not supported by Lix SQL: {statement}" ))), } } fn validate_supported_query(query: &Query) -> Result<(), LixError> { if query.with.as_ref().is_some_and(|with| with.recursive) { return Err( unsupported_sql_error("recursive CTEs are not supported by Lix SQL").with_hint( "Use explicit commit graph surfaces such as lix_commit, lix_commit_edge, and lix_state_history instead of WITH RECURSIVE.", ), ); } if let Some(with) = &query.with { for cte in &with.cte_tables { validate_supported_query(&cte.query)?; } } validate_supported_set_expr(&query.body) } fn validate_supported_set_expr(expr: &SetExpr) -> Result<(), LixError> { match expr { SetExpr::Query(query) => validate_supported_query(query), SetExpr::SetOperation { left, right, .. } => { validate_supported_set_expr(left)?; validate_supported_set_expr(right) } _ => Ok(()), } } fn unsupported_sql_error(message: impl Into) -> LixError { LixError::new(LixError::CODE_UNSUPPORTED_SQL, message) } ================================================ FILE: packages/engine/src/sql2/context.rs ================================================ use std::ptr::NonNull; use std::sync::Arc; use async_trait::async_trait; use serde_json::Value as JsonValue; use tokio::sync::Mutex; use crate::binary_cas::{BlobBytesBatch, BlobDataReader, BlobHash}; use crate::commit_graph::CommitGraphReader; use crate::commit_store::CommitStoreReader; use crate::functions::FunctionProviderHandle; use crate::json_store::JsonStoreReader; use crate::live_state::{ LiveStateFilter, LiveStateReader, LiveStateRowRequest, LiveStateScanRequest, MaterializedLiveStateRow, }; use crate::storage::{ScopedStorageReader, StorageReadTransaction}; use crate::transaction::types::{TransactionWrite, TransactionWriteOutcome}; use crate::version::{VersionHead, VersionRefReader}; use crate::LixError; pub(crate) type SqlReadStore = ScopedStorageReader>; pub(crate) type SqlCommitStoreQuerySource = CommitStoreQuerySource; pub(crate) type SqlJsonReader = JsonStoreReader>; #[derive(Clone)] pub(crate) struct CommitStoreQuerySource { pub(crate) commit_store_reader: Arc>>, pub(crate) json_reader: JsonStoreReader>, } /// Read-only execution boundary for `sql2::execute_sql(...)`. /// /// Session and transaction orchestration stay above `sql2`. They provide the /// execution-scoped committed read context for each call. /// /// This trait is for read SQL session construction. Write SQL should use /// `SqlWriteExecutionContext` so transaction-scoped reads and staging stay in /// the transaction capability instead of flowing through committed read /// sources. #[allow(dead_code)] pub(crate) trait SqlExecutionContext { fn active_version_id(&self) -> &str; fn live_state(&self) -> Arc; fn functions(&self) -> FunctionProviderHandle; fn commit_store_query_source(&self) -> SqlCommitStoreQuerySource; fn commit_graph(&self) -> Box; fn version_ref(&self) -> Arc; fn blob_reader(&self) -> Arc; fn list_visible_schemas(&self) -> Result, LixError>; } /// Write-capable SQL runtime boundary. 
/// /// Providers that mutate engine state should target this shape instead of /// reaching through session/backend escape hatches. The request and write /// payloads stay in the existing engine forms so this boundary centralizes /// authority without adding another translation layer. #[async_trait] #[allow(dead_code)] pub(crate) trait SqlWriteExecutionContext { fn active_version_id(&self) -> &str; fn functions(&self) -> FunctionProviderHandle; fn list_visible_schemas(&self) -> Result, LixError>; async fn load_bytes_many(&mut self, hashes: &[BlobHash]) -> Result; async fn scan_live_state( &mut self, request: &LiveStateScanRequest, ) -> Result, LixError>; async fn load_version_head(&mut self, version_id: &str) -> Result, LixError>; async fn stage_write( &mut self, write: TransactionWrite, ) -> Result; } #[derive(Clone)] pub(crate) struct SqlWriteContext { ptr: Arc, gate: Arc>, } struct SqlWriteContextPtr(NonNull); // DataFusion stores providers as owned Send + Sync trait objects. This context // is only constructed for one write execution and never outlives the borrowed // transaction context that owns it. unsafe impl Send for SqlWriteContextPtr {} unsafe impl Sync for SqlWriteContextPtr {} impl SqlWriteContext { pub(crate) fn new(ctx: &mut dyn SqlWriteExecutionContext) -> Self { let ptr = NonNull::from(ctx); let ptr = unsafe { std::mem::transmute::< NonNull, NonNull, >(ptr) }; Self { ptr: Arc::new(SqlWriteContextPtr(ptr)), gate: Arc::new(Mutex::new(())), } } pub(crate) fn functions(&self) -> FunctionProviderHandle { unsafe { self.ptr.0.as_ref().functions() } } pub(crate) fn blob_reader(&self) -> Arc { Arc::new(WriteContextBlobDataReader::new(self.clone())) } pub(crate) fn list_visible_schemas(&self) -> Result, LixError> { unsafe { self.ptr.0.as_ref().list_visible_schemas() } } pub(crate) fn active_version_id(&self) -> String { unsafe { self.ptr.0.as_ref().active_version_id().to_string() } } pub(crate) async fn scan_live_state( &self, request: &LiveStateScanRequest, ) -> Result, LixError> { let _guard = self.gate.lock().await; unsafe { self.ptr .0 .as_ptr() .as_mut() .unwrap() .scan_live_state(request) .await } } pub(crate) async fn load_bytes_many( &self, hashes: &[BlobHash], ) -> Result { let _guard = self.gate.lock().await; unsafe { self.ptr .0 .as_ptr() .as_mut() .unwrap() .load_bytes_many(hashes) .await } } pub(crate) async fn load_version_head( &self, version_id: &str, ) -> Result, LixError> { let _guard = self.gate.lock().await; unsafe { self.ptr .0 .as_ptr() .as_mut() .unwrap() .load_version_head(version_id) .await } } pub(crate) async fn stage_write( &self, write: TransactionWrite, ) -> Result { let _guard = self.gate.lock().await; unsafe { self.ptr .0 .as_ptr() .as_mut() .unwrap() .stage_write(write) .await } } } pub(crate) struct WriteContextBlobDataReader { ctx: SqlWriteContext, } impl WriteContextBlobDataReader { pub(crate) fn new(ctx: SqlWriteContext) -> Self { Self { ctx } } } #[async_trait] impl BlobDataReader for WriteContextBlobDataReader { async fn load_bytes_many(&self, hashes: &[BlobHash]) -> Result { self.ctx.load_bytes_many(hashes).await } } #[derive(Clone)] pub(crate) enum WriteAccess { ReadOnly, Write { ctx: SqlWriteContext }, } impl WriteAccess { pub(crate) fn read_only() -> Self { Self::ReadOnly } pub(crate) fn write(ctx: SqlWriteContext) -> Self { Self::Write { ctx } } pub(crate) fn require_write( &self, action: &str, ) -> Result { match self { Self::Write { ctx } => Ok(ctx.clone()), Self::ReadOnly => Err(datafusion::error::DataFusionError::Execution(format!( 
"{action} requires a write transaction" ))), } } pub(crate) fn is_write(&self) -> bool { matches!(self, Self::Write { .. }) } } pub(crate) struct WriteContextLiveStateReader { ctx: SqlWriteContext, } impl WriteContextLiveStateReader { pub(crate) fn new(ctx: SqlWriteContext) -> Self { Self { ctx } } } #[async_trait] impl LiveStateReader for WriteContextLiveStateReader { async fn scan_rows( &self, request: &LiveStateScanRequest, ) -> Result, LixError> { self.ctx.scan_live_state(request).await } async fn load_row( &self, request: &LiveStateRowRequest, ) -> Result, LixError> { let mut rows = self .ctx .scan_live_state(&LiveStateScanRequest { filter: LiveStateFilter { schema_keys: vec![request.schema_key.clone()], entity_ids: vec![request.entity_id.clone()], version_ids: vec![request.version_id.clone()], file_ids: vec![request.file_id.clone()], ..LiveStateFilter::default() }, projection: Default::default(), limit: Some(1), }) .await?; Ok(rows.pop()) } } pub(crate) struct WriteContextVersionRefReader { ctx: SqlWriteContext, } impl WriteContextVersionRefReader { pub(crate) fn new(ctx: SqlWriteContext) -> Self { Self { ctx } } } #[async_trait] impl VersionRefReader for WriteContextVersionRefReader { async fn load_head(&self, version_id: &str) -> Result, LixError> { Ok(self .ctx .load_version_head(version_id) .await? .map(|commit_id| VersionHead { version_id: version_id.to_string(), commit_id, })) } async fn scan_heads(&self) -> Result, LixError> { Err(LixError::new( "LIX_ERROR_UNKNOWN", "scan_heads is not available through sql2 write context", )) } } ================================================ FILE: packages/engine/src/sql2/directory_history_provider.rs ================================================ use std::any::Any; use std::collections::{BTreeMap, BTreeSet}; use std::sync::Arc; use async_trait::async_trait; use datafusion::arrow::array::{ArrayRef, BooleanArray, Int64Array, StringArray}; use datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef}; use datafusion::arrow::record_batch::{RecordBatch, RecordBatchOptions}; use datafusion::catalog::{Session, TableProvider}; use datafusion::common::{DataFusionError, Result}; use datafusion::datasource::TableType; use datafusion::execution::TaskContext; use datafusion::logical_expr::{Expr, TableProviderFilterPushDown}; use datafusion::physical_expr::EquivalenceProperties; use datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties}; use datafusion::physical_plan::stream::RecordBatchStreamAdapter; use datafusion::physical_plan::{ DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream, }; use futures_util::stream; use serde::Deserialize; use tokio::sync::Mutex; use crate::commit_graph::CommitGraphReader; use crate::serialize_row_metadata; use crate::LixError; use super::history_projection::{tombstone_identity_column_value, HistoryIdentityProjection}; use super::history_route::{ history_descriptor_event_matches, load_history_entries, parse_history_filter, HistoryColumnStyle, HistoryEntry, HistoryRoute, HistoryViewDescriptor, HISTORY_COL_CHANGE_ID, HISTORY_COL_COMMIT_CREATED_AT, HISTORY_COL_DEPTH, HISTORY_COL_ENTITY_ID, HISTORY_COL_FILE_ID, HISTORY_COL_METADATA, HISTORY_COL_OBSERVED_COMMIT_ID, HISTORY_COL_SCHEMA_KEY, HISTORY_COL_SNAPSHOT_CONTENT, HISTORY_COL_START_COMMIT_ID, }; use super::result_metadata::json_field; use super::SqlCommitStoreQuerySource; use crate::commit_store::MaterializedChange; const DIRECTORY_DESCRIPTOR_SCHEMA_KEY: &str = "lix_directory_descriptor"; 
pub(crate) async fn register_lix_directory_history_provider( session: &datafusion::prelude::SessionContext, commit_graph: Box, query_source: SqlCommitStoreQuerySource, ) -> Result<(), LixError> { session .register_table( "lix_directory_history", Arc::new(LixDirectoryHistoryProvider::new( Arc::new(Mutex::new(commit_graph)), query_source, )), ) .map_err(datafusion_error_to_lix_error)?; Ok(()) } struct LixDirectoryHistoryProvider { schema: SchemaRef, commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, } impl std::fmt::Debug for LixDirectoryHistoryProvider { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixDirectoryHistoryProvider").finish() } } impl LixDirectoryHistoryProvider { fn new( commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, ) -> Self { Self { schema: lix_directory_history_schema(), commit_graph, query_source, } } } #[async_trait] impl TableProvider for LixDirectoryHistoryProvider { fn as_any(&self) -> &dyn Any { self } fn schema(&self) -> SchemaRef { Arc::clone(&self.schema) } fn table_type(&self) -> TableType { TableType::View } fn supports_filters_pushdown( &self, filters: &[&Expr], ) -> Result> { Ok(filters .iter() .map(|filter| { if parse_history_filter(filter, HistoryColumnStyle::Prefixed).is_some() { TableProviderFilterPushDown::Exact } else { TableProviderFilterPushDown::Unsupported } }) .collect()) } async fn scan( &self, _state: &dyn Session, projection: Option<&Vec>, filters: &[Expr], limit: Option, ) -> Result> { Ok(Arc::new(LixDirectoryHistoryScanExec::new( Arc::clone(&self.commit_graph), self.query_source.clone(), projected_schema(&self.schema, projection)?, HistoryRoute::from_filters(filters, HistoryColumnStyle::Prefixed), limit, ))) } } struct LixDirectoryHistoryScanExec { commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, schema: SchemaRef, route: HistoryRoute, limit: Option, properties: Arc, } impl std::fmt::Debug for LixDirectoryHistoryScanExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixDirectoryHistoryScanExec") .field("route", &self.route) .field("limit", &self.limit) .finish() } } impl LixDirectoryHistoryScanExec { fn new( commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, schema: SchemaRef, route: HistoryRoute, limit: Option, ) -> Self { let properties = PlanProperties::new( EquivalenceProperties::new(Arc::clone(&schema)), Partitioning::UnknownPartitioning(1), EmissionType::Incremental, Boundedness::Bounded, ); Self { commit_graph, query_source, schema, route, limit, properties: Arc::new(properties), } } } impl DisplayAs for LixDirectoryHistoryScanExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => write!( f, "LixDirectoryHistoryScanExec(route={:?}, limit={:?})", self.route, self.limit ), DisplayFormatType::TreeRender => write!(f, "LixDirectoryHistoryScanExec"), } } } impl ExecutionPlan for LixDirectoryHistoryScanExec { fn name(&self) -> &str { "LixDirectoryHistoryScanExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "LixDirectoryHistoryScanExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return 
Err(DataFusionError::Execution(format!( "LixDirectoryHistoryScanExec only exposes one partition, got {partition}" ))); } let commit_graph = Arc::clone(&self.commit_graph); let query_source = self.query_source.clone(); let schema = Arc::clone(&self.schema); let stream_schema = Arc::clone(&schema); let route = self.route.clone(); let limit = self.limit; let fut = async move { let mut rows = load_directory_history_rows(commit_graph, query_source, &route) .await .map_err(lix_error_to_datafusion_error)?; if let Some(limit) = limit { rows.truncate(limit); } directory_history_record_batch(&stream_schema, &rows) .map_err(lix_error_to_datafusion_error) }; Ok(Box::pin(RecordBatchStreamAdapter::new( schema, stream::once(fut), ))) } } #[derive(Debug, Clone)] struct DirectoryHistoryRecord { id: String, parent_id: Option, name: Option, hidden: Option, entry: HistoryEntry, } #[derive(Debug, Clone)] struct DirectoryHistoryOutputRow { entity_id: String, id: String, path: Option, parent_id: Option, name: Option, hidden: Option, descriptor_change: MaterializedChange, event: DirectoryHistoryEvent, } #[derive(Debug, Clone)] struct DirectoryHistoryEvent { directory_id: String, start_commit_id: String, depth: u32, change: MaterializedChange, observed_commit_id: String, commit_created_at: String, } #[derive(Debug, Deserialize)] struct DirectoryDescriptorSnapshot { id: String, parent_id: Option, name: String, hidden: Option, } async fn load_directory_history_rows( commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, route: &HistoryRoute, ) -> Result, LixError> { let event_route = route.traversal_only(); let event_entries = load_history_entries( HistoryViewDescriptor { view_name: "lix_directory_history", start_commit_column: HISTORY_COL_START_COMMIT_ID, }, Arc::clone(&commit_graph), query_source.json_reader.clone(), &event_route, vec![DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string()], ) .await?; let context_route = route.starts_only(); let context_entries = load_history_entries( HistoryViewDescriptor { view_name: "lix_directory_history", start_commit_column: HISTORY_COL_START_COMMIT_ID, }, commit_graph, query_source.json_reader, &context_route, vec![DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string()], ) .await?; let event_descriptors = parse_directory_history_records(&event_entries)?; let descriptors = parse_directory_history_records(&context_entries)?; let mut output = Vec::new(); for descriptor in &event_descriptors { let event = directory_history_event_from_entry(&descriptor.id, &descriptor.entry); let Some(visible_descriptor) = nearest_directory_descriptor(&descriptors, &event) else { continue; }; let path = if visible_descriptor.name.is_some() { resolve_directory_history_path( &visible_descriptor.id, &event.start_commit_id, event.depth, &descriptors, &mut BTreeMap::new(), &mut BTreeSet::new(), ) } else { None }; let id = tombstone_identity_column_value( "id", &visible_descriptor.id, HistoryIdentityProjection::SingleColumn { column: "id" }, )? 
.and_then(|value| value.as_str().map(ToOwned::to_owned)) .unwrap_or_else(|| visible_descriptor.id.clone()); output.push(DirectoryHistoryOutputRow { entity_id: visible_descriptor.id.clone(), id, path, parent_id: visible_descriptor.parent_id.clone(), name: visible_descriptor.name.clone(), hidden: visible_descriptor.hidden, descriptor_change: visible_descriptor.entry.change.clone(), event, }); } output.retain(|row| { let entity_id = entity_id_json_array(&row.entity_id).ok(); route.matches_surface_row( DIRECTORY_DESCRIPTOR_SCHEMA_KEY, entity_id.as_deref().unwrap_or(&row.entity_id), None, row.event.depth, ) }); output.sort_by(|left, right| { left.entity_id .cmp(&right.entity_id) .then(left.event.start_commit_id.cmp(&right.event.start_commit_id)) .then(left.event.depth.cmp(&right.event.depth)) .then( left.event .observed_commit_id .cmp(&right.event.observed_commit_id), ) .then(left.event.change.id.cmp(&right.event.change.id)) }); Ok(output) } fn parse_directory_history_records( entries: &[HistoryEntry], ) -> Result, LixError> { entries .iter() .filter(|entry| entry.change.schema_key == DIRECTORY_DESCRIPTOR_SCHEMA_KEY) .map(|entry| { let Some(snapshot_content) = entry.change.snapshot_content.as_deref() else { return Ok(DirectoryHistoryRecord { id: entry.change.entity_id.as_single_string_owned()?, parent_id: None, name: None, hidden: None, entry: entry.clone(), }); }; let snapshot: DirectoryDescriptorSnapshot = serde_json::from_str(snapshot_content) .map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("invalid lix_directory_descriptor history snapshot JSON: {error}"), ) })?; Ok(DirectoryHistoryRecord { id: snapshot.id, parent_id: snapshot.parent_id, name: Some(snapshot.name), hidden: Some(snapshot.hidden.unwrap_or(false)), entry: entry.clone(), }) }) .collect() } fn directory_history_event_from_entry( directory_id: &str, entry: &HistoryEntry, ) -> DirectoryHistoryEvent { DirectoryHistoryEvent { directory_id: directory_id.to_string(), start_commit_id: entry.start_commit_id.clone(), depth: entry.depth, change: entry.change.clone(), observed_commit_id: entry.observed_commit_id.clone(), commit_created_at: entry.commit_created_at.clone(), } } fn nearest_directory_descriptor<'a>( descriptors: &'a [DirectoryHistoryRecord], event: &DirectoryHistoryEvent, ) -> Option<&'a DirectoryHistoryRecord> { descriptors .iter() .filter(|descriptor| { let exact_descriptor_event = history_descriptor_event_matches(&descriptor.entry, event.depth, &event.change.id); (exact_descriptor_event || descriptor.name.is_some()) && descriptor.id == event.directory_id && descriptor.entry.start_commit_id == event.start_commit_id && descriptor.entry.depth >= event.depth }) .min_by(|left, right| { left.entry .depth .cmp(&right.entry.depth) .then(left.entry.change.id.cmp(&right.entry.change.id)) }) } fn resolve_directory_history_path( directory_id: &str, start_commit_id: &str, target_depth: u32, directories: &[DirectoryHistoryRecord], cache: &mut BTreeMap>, visiting: &mut BTreeSet, ) -> Option { if let Some(path) = cache.get(directory_id) { return path.clone(); } if !visiting.insert(directory_id.to_string()) { cache.insert(directory_id.to_string(), None); return None; } let directory = directories .iter() .filter(|directory| { directory.name.is_some() && directory.id == directory_id && directory.entry.start_commit_id == start_commit_id && directory.entry.depth >= target_depth }) .min_by(|left, right| { left.entry .depth .cmp(&right.entry.depth) .then(left.entry.change.id.cmp(&right.entry.change.id)) })?; let name = 
directory.name.as_ref()?; let path = match directory.parent_id.as_deref() { Some(parent_id) => { let parent_path = resolve_directory_history_path( parent_id, start_commit_id, target_depth, directories, cache, visiting, )?; format!("{parent_path}{name}/") } None => format!("/{name}/"), }; visiting.remove(directory_id); cache.insert(directory_id.to_string(), Some(path.clone())); Some(path) } fn directory_history_record_batch( schema: &SchemaRef, rows: &[DirectoryHistoryOutputRow], ) -> Result { let columns = schema .fields() .iter() .map(|field| directory_history_column_array(field.name(), rows)) .collect::, _>>()?; let options = RecordBatchOptions::new().with_row_count(Some(rows.len())); RecordBatch::try_new_with_options(Arc::clone(schema), columns, &options).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("sql2 failed to build lix_directory_history record batch: {error}"), ) }) } fn directory_history_column_array( column_name: &str, rows: &[DirectoryHistoryOutputRow], ) -> Result { Ok(match column_name { "id" => string_array(rows.iter().map(|row| Some(row.id.as_str()))), "path" => string_array(rows.iter().map(|row| row.path.as_deref())), "parent_id" => string_array(rows.iter().map(|row| row.parent_id.as_deref())), "name" => string_array(rows.iter().map(|row| row.name.as_deref())), "hidden" => Arc::new(BooleanArray::from( rows.iter().map(|row| row.hidden).collect::>(), )) as ArrayRef, HISTORY_COL_ENTITY_ID => Arc::new(StringArray::from( rows.iter() .map(|row| entity_id_json_array(&row.entity_id).map(Some)) .collect::, _>>()?, )) as ArrayRef, HISTORY_COL_SCHEMA_KEY => { string_array(rows.iter().map(|_| Some(DIRECTORY_DESCRIPTOR_SCHEMA_KEY))) } HISTORY_COL_FILE_ID => string_array(rows.iter().map(|_| None)), HISTORY_COL_CHANGE_ID => { string_array(rows.iter().map(|row| Some(row.event.change.id.as_str()))) } HISTORY_COL_SNAPSHOT_CONTENT => string_array( rows.iter() .map(|row| row.descriptor_change.snapshot_content.as_deref()), ), HISTORY_COL_METADATA => Arc::new(StringArray::from( rows.iter() .map(|row| { row.descriptor_change .metadata .as_ref() .map(serialize_row_metadata) }) .collect::>(), )), HISTORY_COL_OBSERVED_COMMIT_ID => string_array( rows.iter() .map(|row| Some(row.event.observed_commit_id.as_str())), ), HISTORY_COL_COMMIT_CREATED_AT => string_array( rows.iter() .map(|row| Some(row.event.commit_created_at.as_str())), ), HISTORY_COL_START_COMMIT_ID => string_array( rows.iter() .map(|row| Some(row.event.start_commit_id.as_str())), ), HISTORY_COL_DEPTH => Arc::new(Int64Array::from( rows.iter() .map(|row| i64::from(row.event.depth)) .collect::>(), )) as ArrayRef, other => { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "sql2 lix_directory_history provider does not support projected column '{other}'" ), )) } }) } fn lix_directory_history_schema() -> SchemaRef { Arc::new(Schema::new(vec![ Field::new("id", DataType::Utf8, false), Field::new("path", DataType::Utf8, true), Field::new("parent_id", DataType::Utf8, true), Field::new("name", DataType::Utf8, true), Field::new("hidden", DataType::Boolean, true), json_field(HISTORY_COL_ENTITY_ID, false), Field::new(HISTORY_COL_SCHEMA_KEY, DataType::Utf8, false), Field::new(HISTORY_COL_FILE_ID, DataType::Utf8, true), json_field(HISTORY_COL_SNAPSHOT_CONTENT, true), Field::new(HISTORY_COL_CHANGE_ID, DataType::Utf8, false), json_field(HISTORY_COL_METADATA, true), Field::new(HISTORY_COL_OBSERVED_COMMIT_ID, DataType::Utf8, false), Field::new(HISTORY_COL_COMMIT_CREATED_AT, DataType::Utf8, false), 
Field::new(HISTORY_COL_START_COMMIT_ID, DataType::Utf8, false), Field::new(HISTORY_COL_DEPTH, DataType::Int64, false), ])) } fn projected_schema(base_schema: &SchemaRef, projection: Option<&Vec>) -> Result { let Some(projection) = projection else { return Ok(Arc::clone(base_schema)); }; Ok(Arc::new(base_schema.project(projection)?)) } fn string_array<'a>(values: impl Iterator>) -> ArrayRef { Arc::new(StringArray::from(values.collect::>())) as ArrayRef } fn datafusion_error_to_lix_error(error: DataFusionError) -> LixError { super::error::datafusion_error_to_lix_error(error) } fn entity_id_json_array(entity_id: &str) -> Result { serde_json::to_string(&[entity_id]).map_err(|error| { LixError::unknown(format!( "failed to encode history entity id as JSON: {error}" )) }) } fn lix_error_to_datafusion_error(error: LixError) -> DataFusionError { super::error::lix_error_to_datafusion_error(error) } ================================================ FILE: packages/engine/src/sql2/directory_provider.rs ================================================ use std::any::Any; use std::collections::{BTreeMap, BTreeSet}; use std::sync::Arc; use async_trait::async_trait; use datafusion::arrow::array::{ ArrayRef, BooleanArray, RecordBatchOptions, StringArray, UInt64Array, }; use datafusion::arrow::compute::{and, filter_record_batch}; use datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef}; use datafusion::arrow::record_batch::RecordBatch; use datafusion::catalog::{Session, TableProvider}; use datafusion::common::{not_impl_err, DFSchema, DataFusionError, Result, ScalarValue}; use datafusion::datasource::TableType; use datafusion::execution::TaskContext; use datafusion::logical_expr::dml::InsertOp; use datafusion::logical_expr::{Expr, TableProviderFilterPushDown}; use datafusion::physical_expr::{create_physical_expr, EquivalenceProperties, PhysicalExpr}; use datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties}; use datafusion::physical_plan::stream::RecordBatchStreamAdapter; use datafusion::physical_plan::{ DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream, }; use datafusion::prelude::SessionContext; use futures_util::{stream, TryStreamExt}; use serde::Deserialize; use crate::functions::FunctionProviderHandle; use crate::live_state::MaterializedLiveStateRow; use crate::live_state::{ LiveStateFilter, LiveStateProjection, LiveStateReader, LiveStateScanRequest, }; use crate::sql2::dml::{InsertExec, InsertSink}; use crate::sql2::filesystem_predicates::{ canonicalize_filesystem_path_filters, FilesystemPathKind, }; use crate::sql2::predicate_typecheck::validate_json_predicate_filters; use crate::sql2::version_scope::{ explicit_version_ids_from_dml_filters, resolve_provider_version_ids, resolve_write_version_scope, VersionBinding, }; use crate::sql2::write_normalization::{InsertCell, SqlCell, UpdateAssignmentValues}; use crate::transaction::types::{ LogicalPrimaryKey, TransactionJson, TransactionWriteOperation, TransactionWriteOrigin, TransactionWriteRow, }; use crate::version::VersionRefReader; use crate::{parse_row_metadata_value, serialize_row_metadata, LixError}; use super::filesystem_planner::{ directory_descriptor_write_row, directory_path_resolvers_from_state_rows, filesystem_storage_scope_key, plan_recursive_directory_delete, DirectoryDescriptorWriteIntent, DirectoryPathResolver, FilesystemDeletePlan, FilesystemRowContext, }; use super::filesystem_visibility::VisibleFilesystem; use super::result_metadata::json_field; use 
crate::sql2::{ SqlWriteContext, WriteAccess, WriteContextLiveStateReader, WriteContextVersionRefReader, }; use crate::transaction::types::{TransactionWrite, TransactionWriteMode}; const DIRECTORY_SCHEMA_KEY: &str = "lix_directory_descriptor"; const FILE_DESCRIPTOR_SCHEMA_KEY: &str = "lix_file_descriptor"; pub(crate) async fn register_lix_directory_providers( session: &SessionContext, active_version_id: &str, live_state: Arc, version_ref: Arc, functions: FunctionProviderHandle, ) -> Result<(), LixError> { session .register_table( "lix_directory_by_version", Arc::new(LixDirectoryProvider::by_version( Arc::clone(&live_state), Arc::clone(&version_ref), functions.clone(), )), ) .map_err(datafusion_error_to_lix_error)?; session .register_table( "lix_directory", Arc::new(LixDirectoryProvider::active_version( active_version_id, live_state, version_ref, functions, )), ) .map_err(datafusion_error_to_lix_error)?; Ok(()) } pub(crate) async fn register_lix_directory_write_providers( session: &SessionContext, write_ctx: SqlWriteContext, ) -> Result<(), LixError> { session .register_table( "lix_directory_by_version", Arc::new(LixDirectoryProvider::by_version_with_write( write_ctx.clone(), )), ) .map_err(datafusion_error_to_lix_error)?; session .register_table( "lix_directory", Arc::new(LixDirectoryProvider::active_version_with_write(write_ctx)), ) .map_err(datafusion_error_to_lix_error)?; Ok(()) } pub(crate) struct LixDirectoryProvider { schema: SchemaRef, live_state: Arc, version_ref: Arc, write_access: WriteAccess, functions: FunctionProviderHandle, version_binding: VersionBinding, } impl std::fmt::Debug for LixDirectoryProvider { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixDirectoryProvider").finish() } } impl LixDirectoryProvider { fn active_version( active_version_id: impl Into, live_state: Arc, version_ref: Arc, functions: FunctionProviderHandle, ) -> Self { Self { schema: lix_directory_schema(), live_state, version_ref, write_access: WriteAccess::read_only(), functions, version_binding: VersionBinding::active(active_version_id), } } fn active_version_with_write(write_ctx: SqlWriteContext) -> Self { let active_version_id = write_ctx.active_version_id(); let functions = write_ctx.functions(); let live_state = Arc::new(WriteContextLiveStateReader::new(write_ctx.clone())); let version_ref = Arc::new(WriteContextVersionRefReader::new(write_ctx.clone())); Self { schema: lix_directory_schema(), live_state, version_ref, write_access: WriteAccess::write(write_ctx), functions, version_binding: VersionBinding::active(active_version_id), } } fn by_version( live_state: Arc, version_ref: Arc, functions: FunctionProviderHandle, ) -> Self { Self { schema: lix_directory_by_version_schema(), live_state, version_ref, write_access: WriteAccess::read_only(), functions, version_binding: VersionBinding::explicit(), } } fn by_version_with_write(write_ctx: SqlWriteContext) -> Self { let functions = write_ctx.functions(); let live_state = Arc::new(WriteContextLiveStateReader::new(write_ctx.clone())); let version_ref = Arc::new(WriteContextVersionRefReader::new(write_ctx.clone())); Self { schema: lix_directory_by_version_schema(), live_state, version_ref, write_access: WriteAccess::write(write_ctx), functions, version_binding: VersionBinding::explicit(), } } } #[async_trait] impl TableProvider for LixDirectoryProvider { fn as_any(&self) -> &dyn Any { self } fn schema(&self) -> SchemaRef { Arc::clone(&self.schema) } fn table_type(&self) -> TableType { TableType::Base } fn 
supports_filters_pushdown( &self, filters: &[&Expr], ) -> Result> { Ok(filters .iter() .map(|_| TableProviderFilterPushDown::Exact) .collect()) } async fn scan( &self, _state: &dyn Session, projection: Option<&Vec>, filters: &[Expr], limit: Option, ) -> Result> { let projected_schema = projected_schema(&self.schema, projection)?; let scan_limit = if filters.is_empty() { limit } else { None }; let mut request = lix_directory_scan_request( self.version_binding.active_version_id(), Some(projected_schema.as_ref()), scan_limit, ); if self.write_access.is_write() && matches!(self.version_binding, VersionBinding::Explicit) { request.filter.version_ids = explicit_version_ids_from_dml_filters(filters); if request.filter.version_ids.is_empty() { return Err(DataFusionError::Plan( "DELETE FROM lix_directory_by_version requires an explicit lixcol_version_id predicate" .to_string(), )); } } request.filter.version_ids = resolve_provider_version_ids( self.version_ref.as_ref(), &self.version_binding, request.filter.version_ids, ) .await .map_err(lix_error_to_datafusion_error)?; let filters = canonicalize_filesystem_path_filters(filters, FilesystemPathKind::Directory)?; let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?; validate_json_predicate_filters(self.schema.as_ref(), &filters)?; let physical_filters = filters .iter() .map(|expr| create_physical_expr(expr, &df_schema, _state.execution_props())) .collect::>>()?; Ok(Arc::new(LixDirectoryScanExec::new( Arc::clone(&self.live_state), Arc::clone(&self.schema), projected_schema, projection.cloned(), request, physical_filters, limit, ))) } async fn insert_into( &self, _state: &dyn Session, input: Arc, insert_op: InsertOp, ) -> Result> { if insert_op != InsertOp::Append { return not_impl_err!("{insert_op} not implemented for lix_directory yet"); } let write_ctx = self .write_access .require_write("INSERT into lix_directory")?; let sink = LixDirectoryInsertSink::new( input.schema(), write_ctx.clone(), self.functions.clone(), self.version_binding.clone(), ); Ok(Arc::new(InsertExec::new(input, Arc::new(sink)))) } async fn delete_from( &self, state: &dyn Session, filters: Vec, ) -> Result> { let write_ctx = self .write_access .require_write("DELETE FROM lix_directory")?; let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?; let filters = canonicalize_filesystem_path_filters(&filters, FilesystemPathKind::Directory)?; validate_json_predicate_filters(self.schema.as_ref(), &filters)?; let physical_filters = filters .iter() .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props())) .collect::>>()?; let mut request = lix_directory_scan_request(self.version_binding.active_version_id(), None, None); if matches!(self.version_binding, VersionBinding::Explicit) { request.filter.version_ids = explicit_version_ids_from_dml_filters(&filters); if request.filter.version_ids.is_empty() { return Err(DataFusionError::Plan( "DELETE FROM lix_directory_by_version requires an explicit lixcol_version_id predicate" .to_string(), )); } } Ok(Arc::new(LixDirectoryDeleteExec::new( write_ctx.clone(), Arc::clone(&self.schema), self.version_binding.clone(), request, physical_filters, ))) } async fn update( &self, state: &dyn Session, assignments: Vec<(String, Expr)>, filters: Vec, ) -> Result> { let write_ctx = self.write_access.require_write("UPDATE lix_directory")?; validate_lix_directory_update_assignments(&self.schema, &assignments)?; let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?; let physical_assignments = assignments .iter() 
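// Lower each SET assignment to a physical expression so it can be evaluated against the matched directory rows when the update write rows are staged.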
.map(|(column_name, expr)| { Ok(( column_name.clone(), create_physical_expr(expr, &df_schema, state.execution_props())?, )) }) .collect::>>()?; let filters = canonicalize_filesystem_path_filters(&filters, FilesystemPathKind::Directory)?; validate_json_predicate_filters(self.schema.as_ref(), &filters)?; let physical_filters = filters .iter() .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props())) .collect::>>()?; let request = lix_directory_scan_request(self.version_binding.active_version_id(), None, None); Ok(Arc::new(LixDirectoryUpdateExec::new( write_ctx.clone(), Arc::clone(&self.schema), self.version_binding.clone(), request, physical_assignments, physical_filters, ))) } } struct LixDirectoryInsertSink { write_ctx: SqlWriteContext, functions: FunctionProviderHandle, version_binding: VersionBinding, surface_name: &'static str, } impl std::fmt::Debug for LixDirectoryInsertSink { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixDirectoryInsertSink").finish() } } impl LixDirectoryInsertSink { fn new( _schema: SchemaRef, write_ctx: SqlWriteContext, functions: FunctionProviderHandle, version_binding: VersionBinding, ) -> Self { let surface_name = lix_directory_surface_name(&version_binding); Self { write_ctx, functions, version_binding, surface_name, } } } impl DisplayAs for LixDirectoryInsertSink { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!(f, "LixDirectoryInsertSink") } DisplayFormatType::TreeRender => write!(f, "LixDirectoryInsertSink"), } } } #[async_trait] impl InsertSink for LixDirectoryInsertSink { async fn write_batches( &self, batches: Vec, _context: &Arc, ) -> Result { let mut path_resolvers = None; let mut rows = Vec::new(); let mut count = 0_u64; for batch in batches { if path_resolvers.is_none() { path_resolvers = Some( directory_path_resolvers_from_live_state( Arc::new(WriteContextLiveStateReader::new(self.write_ctx.clone())), self.version_binding.active_version_id(), ) .await .map_err(lix_error_to_datafusion_error)?, ); } count = count .checked_add(u64::try_from(batch.num_rows()).map_err(|_| { DataFusionError::Execution("lix_directory INSERT row count overflow".into()) })?) .ok_or_else(|| { DataFusionError::Execution("lix_directory INSERT row count overflow".into()) })?; if record_batch_has_non_null_column(&batch, "path")? { rows.extend(lix_directory_write_rows_from_batch_with_path_resolvers( &batch, self.version_binding.active_version_id(), self.surface_name, path_resolvers .as_mut() .expect("path resolver should be initialized"), &mut || self.functions.call_uuid_v7(), )?); } else { rows.extend( lix_directory_write_rows_from_batch_with_options_and_path_resolvers( &batch, self.version_binding.active_version_id(), self.surface_name, true, path_resolvers.as_mut(), None, )?, ); } } self.write_ctx .stage_write(TransactionWrite::Rows { mode: TransactionWriteMode::Insert, rows, }) .await .map_err(lix_error_to_datafusion_error)?; Ok(count) } } fn lix_directory_surface_name(version_binding: &VersionBinding) -> &'static str { match version_binding { VersionBinding::Active { .. 
} => "lix_directory", VersionBinding::Explicit => "lix_directory_by_version", } } #[allow(dead_code)] struct LixDirectoryDeleteExec { write_ctx: SqlWriteContext, table_schema: SchemaRef, version_binding: VersionBinding, request: LiveStateScanRequest, filters: Vec>, result_schema: SchemaRef, properties: Arc, } impl std::fmt::Debug for LixDirectoryDeleteExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixDirectoryDeleteExec").finish() } } impl LixDirectoryDeleteExec { fn new( write_ctx: SqlWriteContext, table_schema: SchemaRef, version_binding: VersionBinding, request: LiveStateScanRequest, filters: Vec>, ) -> Self { let result_schema = dml_count_schema(); let properties = PlanProperties::new( EquivalenceProperties::new(Arc::clone(&result_schema)), Partitioning::UnknownPartitioning(1), EmissionType::Final, Boundedness::Bounded, ); Self { write_ctx, table_schema, version_binding, request, filters, result_schema, properties: Arc::new(properties), } } } impl DisplayAs for LixDirectoryDeleteExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!(f, "LixDirectoryDeleteExec(filters={})", self.filters.len()) } DisplayFormatType::TreeRender => write!(f, "LixDirectoryDeleteExec"), } } } impl ExecutionPlan for LixDirectoryDeleteExec { fn name(&self) -> &str { "LixDirectoryDeleteExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "LixDirectoryDeleteExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "LixDirectoryDeleteExec only exposes one partition, got {partition}" ))); } let write_ctx = self.write_ctx.clone(); let table_schema = Arc::clone(&self.table_schema); let version_binding = self.version_binding.clone(); let request = self.request.clone(); let filters = self.filters.clone(); let result_schema = Arc::clone(&self.result_schema); let stream_schema = Arc::clone(&result_schema); let stream = stream::once(async move { let rows = write_ctx .scan_live_state(&request) .await .map_err(lix_error_to_datafusion_error)?; let source_batch = lix_directory_record_batch(&table_schema, rows) .map_err(lix_error_to_datafusion_error)?; let matched_batch = filter_lix_directory_batch(source_batch, &filters)?; let version_ids = directory_version_ids_from_batch( &matched_batch, version_binding.active_version_id(), )?; let mut visible_filesystems = BTreeMap::new(); for version_id in version_ids { visible_filesystems.insert( version_id.clone(), VisibleFilesystem::load( Arc::new(WriteContextLiveStateReader::new(write_ctx.clone())), &version_id, ) .await .map_err(lix_error_to_datafusion_error)?, ); } let (write_rows, count) = lix_directory_recursive_delete_rows_from_batch( &matched_batch, version_binding.active_version_id(), &visible_filesystems, )?; if count > 0 { write_ctx .stage_write(TransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: write_rows, }) .await .map_err(lix_error_to_datafusion_error)?; } Ok::<_, DataFusionError>(stream::iter(vec![Ok::( dml_count_batch(Arc::clone(&stream_schema), count)?, )])) }) .try_flatten(); Ok(Box::pin(RecordBatchStreamAdapter::new( result_schema, 
stream, ))) } } #[allow(dead_code)] struct LixDirectoryUpdateExec { write_ctx: SqlWriteContext, table_schema: SchemaRef, version_binding: VersionBinding, request: LiveStateScanRequest, assignments: Vec<(String, Arc)>, filters: Vec>, result_schema: SchemaRef, properties: Arc, } impl std::fmt::Debug for LixDirectoryUpdateExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixDirectoryUpdateExec").finish() } } impl LixDirectoryUpdateExec { fn new( write_ctx: SqlWriteContext, table_schema: SchemaRef, version_binding: VersionBinding, request: LiveStateScanRequest, assignments: Vec<(String, Arc)>, filters: Vec>, ) -> Self { let result_schema = dml_count_schema(); let properties = PlanProperties::new( EquivalenceProperties::new(Arc::clone(&result_schema)), Partitioning::UnknownPartitioning(1), EmissionType::Final, Boundedness::Bounded, ); Self { write_ctx, table_schema, version_binding, request, assignments, filters, result_schema, properties: Arc::new(properties), } } } impl DisplayAs for LixDirectoryUpdateExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!( f, "LixDirectoryUpdateExec(assignments={}, filters={})", self.assignments.len(), self.filters.len() ) } DisplayFormatType::TreeRender => write!(f, "LixDirectoryUpdateExec"), } } } impl ExecutionPlan for LixDirectoryUpdateExec { fn name(&self) -> &str { "LixDirectoryUpdateExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "LixDirectoryUpdateExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "LixDirectoryUpdateExec only exposes one partition, got {partition}" ))); } let write_ctx = self.write_ctx.clone(); let table_schema = Arc::clone(&self.table_schema); let version_binding = self.version_binding.clone(); let request = self.request.clone(); let assignments = self.assignments.clone(); let filters = self.filters.clone(); let result_schema = Arc::clone(&self.result_schema); let stream_schema = Arc::clone(&result_schema); let stream = stream::once(async move { let rows = write_ctx .scan_live_state(&request) .await .map_err(lix_error_to_datafusion_error)?; let source_batch = lix_directory_record_batch(&table_schema, rows) .map_err(lix_error_to_datafusion_error)?; let matched_batch = filter_lix_directory_batch(source_batch, &filters)?; let mut path_resolvers = directory_path_resolvers_from_live_state( Arc::new(WriteContextLiveStateReader::new(write_ctx.clone())), version_binding.active_version_id(), ) .await .map_err(lix_error_to_datafusion_error)?; let write_rows = lix_directory_update_write_rows_from_batch( &matched_batch, &assignments, version_binding.active_version_id(), &mut path_resolvers, )?; let count = u64::try_from(write_rows.len()).map_err(|_| { DataFusionError::Execution("lix_directory UPDATE row count overflow".into()) })?; if count > 0 { write_ctx .stage_write(TransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: write_rows, }) .await .map_err(lix_error_to_datafusion_error)?; } Ok::<_, DataFusionError>(stream::iter(vec![Ok::( dml_count_batch(Arc::clone(&stream_schema), count)?, )])) }) 
.try_flatten(); Ok(Box::pin(RecordBatchStreamAdapter::new( result_schema, stream, ))) } } struct LixDirectoryScanExec { live_state: Arc, batch_schema: SchemaRef, output_schema: SchemaRef, projection: Option>, request: LiveStateScanRequest, filters: Vec>, limit: Option, properties: Arc, } impl std::fmt::Debug for LixDirectoryScanExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixDirectoryScanExec").finish() } } impl LixDirectoryScanExec { fn new( live_state: Arc, batch_schema: SchemaRef, output_schema: SchemaRef, projection: Option>, request: LiveStateScanRequest, filters: Vec>, limit: Option, ) -> Self { let properties = PlanProperties::new( EquivalenceProperties::new(Arc::clone(&output_schema)), Partitioning::UnknownPartitioning(1), EmissionType::Incremental, Boundedness::Bounded, ); Self { live_state, batch_schema, output_schema, projection, request, filters, limit, properties: Arc::new(properties), } } } impl DisplayAs for LixDirectoryScanExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!(f, "LixDirectoryScanExec(limit={:?})", self.limit) } DisplayFormatType::TreeRender => write!(f, "LixDirectoryScanExec"), } } } impl ExecutionPlan for LixDirectoryScanExec { fn name(&self) -> &str { "LixDirectoryScanExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "LixDirectoryScanExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "LixDirectoryScanExec only supports partition 0, got {partition}" ))); } let live_state = Arc::clone(&self.live_state); let request = self.request.clone(); let filters = self.filters.clone(); let limit = self.limit; let output_schema = Arc::clone(&self.output_schema); let batch_schema = Arc::clone(&self.batch_schema); let projection = self.projection.clone(); let fut = async move { let rows = live_state.scan_rows(&request).await.map_err(|error| { DataFusionError::Execution(format!("sql2 lix_directory scan failed: {error}")) })?; let batch = lix_directory_record_batch(&batch_schema, rows).map_err(|error| { DataFusionError::Execution(format!( "sql2 lix_directory batch build failed: {error}" )) })?; let filtered = filter_lix_directory_batch(batch, &filters)?; let projected = match projection { Some(indices) => filtered.project(&indices).map_err(DataFusionError::from), None => Ok(filtered), }?; match limit { Some(limit) => Ok(projected.slice(0, limit.min(projected.num_rows()))), None => Ok(projected), } }; Ok(Box::pin(RecordBatchStreamAdapter::new( output_schema, stream::once(fut).map_ok(|batch| batch), ))) } } #[derive(Debug, Clone)] struct DirectoryDescriptorRecord { id: String, parent_id: Option, name: String, hidden: bool, live: MaterializedLiveStateRow, } #[derive(Debug, Deserialize)] struct DirectoryDescriptorSnapshot { id: String, parent_id: Option, name: String, hidden: Option, } #[cfg(test)] fn lix_directory_write_rows_from_batch( batch: &RecordBatch, version_binding: Option<&str>, ) -> Result> { lix_directory_write_rows_from_batch_with_options(batch, version_binding, "lix_directory", true) } fn 
lix_directory_write_rows_from_batch_with_path_resolvers( batch: &RecordBatch, version_binding: Option<&str>, surface_name: &str, path_resolvers: &mut BTreeMap, generate_directory_id: &mut dyn FnMut() -> String, ) -> Result> { lix_directory_write_rows_from_batch_with_options_and_path_resolvers( batch, version_binding, surface_name, true, Some(path_resolvers), Some(generate_directory_id), ) } fn lix_directory_update_write_rows_from_batch( batch: &RecordBatch, assignments: &[(String, Arc)], version_binding: Option<&str>, path_resolvers: &mut BTreeMap, ) -> Result> { let assignment_values = UpdateAssignmentValues::evaluate(batch, assignments)?; let mut rows = Vec::new(); for row_index in 0..batch.num_rows() { let id = optional_string_value(batch, row_index, "id")?; let context = directory_row_context_from_update( batch, &assignment_values, row_index, version_binding, )?; let parent_id = update_optional_string_value(batch, &assignment_values, row_index, "parent_id")?; let name = update_required_string_value(batch, &assignment_values, row_index, "name")?; if let Some(directory_id) = id.as_ref() { let resolver = path_resolvers .entry(directory_path_resolver_key(&context)) .or_insert_with(DirectoryPathResolver::default); resolver .reserve_directory(parent_id.clone(), name.clone(), directory_id.clone()) .map_err(lix_error_to_datafusion_error)?; } rows.push(directory_descriptor_write_row( DirectoryDescriptorWriteIntent { id, parent_id, name, hidden: update_optional_bool_value(batch, &assignment_values, row_index, "hidden")?, context, }, )); } Ok(rows) } fn directory_version_ids_from_batch( batch: &RecordBatch, version_binding: Option<&str>, ) -> Result> { let mut version_ids = BTreeSet::new(); for row_index in 0..batch.num_rows() { version_ids.insert( directory_row_context_from_batch(batch, row_index, version_binding)?.version_id, ); } Ok(version_ids) } fn lix_directory_recursive_delete_rows_from_batch( batch: &RecordBatch, version_binding: Option<&str>, visible_filesystems: &BTreeMap, ) -> Result<(Vec, u64)> { let mut rows = Vec::new(); let mut seen = BTreeSet::new(); let mut count = 0u64; for row_index in 0..batch.num_rows() { let directory_id = required_string_value(batch, row_index, "id")?; let context = directory_row_context_from_batch(batch, row_index, version_binding)?; let visible_filesystem = visible_filesystems .get(&context.version_id) .ok_or_else(|| { DataFusionError::Execution(format!( "DELETE FROM lix_directory missing visible filesystem for version '{}'", context.version_id )) })?; append_deduped_delete_plan( &mut rows, &mut seen, plan_recursive_directory_delete(&directory_id, visible_filesystem, context), &mut count, ); } Ok((rows, count)) } fn append_deduped_delete_plan( rows: &mut Vec, seen: &mut BTreeSet, plan: FilesystemDeletePlan, count: &mut u64, ) { for row in plan.rows { if seen.insert(StateRowDedupeKey::from(&row)) { if is_user_visible_filesystem_delete_row(&row) { *count += 1; } rows.push(row); } } } fn is_user_visible_filesystem_delete_row(row: &TransactionWriteRow) -> bool { matches!( row.schema_key.as_str(), "lix_directory_descriptor" | "lix_file_descriptor" ) } #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)] struct StateRowDedupeKey { entity_id: String, schema_key: String, file_id: Option, version_id: String, global: bool, untracked: bool, } impl From<&TransactionWriteRow> for StateRowDedupeKey { fn from(row: &TransactionWriteRow) -> Self { Self { entity_id: row .entity_id .as_ref() .expect("directory provider staged row should carry entity_id") 
.as_single_string_owned() .expect("directory provider staged row entity identity should project"), schema_key: row.schema_key.clone(), file_id: row.file_id.clone(), version_id: row.version_id.clone(), global: row.global, untracked: row.untracked, } } } #[cfg(test)] fn lix_directory_write_rows_from_batch_with_options( batch: &RecordBatch, version_binding: Option<&str>, surface_name: &str, reject_read_only_fields: bool, ) -> Result> { lix_directory_write_rows_from_batch_with_options_and_path_resolvers( batch, version_binding, surface_name, reject_read_only_fields, None, None, ) } fn lix_directory_write_rows_from_batch_with_options_and_path_resolvers( batch: &RecordBatch, version_binding: Option<&str>, surface_name: &str, reject_read_only_fields: bool, mut path_resolvers: Option<&mut BTreeMap>, mut generate_directory_id: Option<&mut dyn FnMut() -> String>, ) -> Result> { let mut rows = Vec::new(); for row_index in 0..batch.num_rows() { if reject_read_only_fields { reject_read_only_lix_directory_insert_field(batch, row_index, "lixcol_entity_id")?; reject_read_only_lix_directory_insert_field(batch, row_index, "lixcol_schema_key")?; reject_read_only_lix_directory_insert_field(batch, row_index, "lixcol_change_id")?; reject_read_only_lix_directory_insert_field(batch, row_index, "lixcol_created_at")?; reject_read_only_lix_directory_insert_field(batch, row_index, "lixcol_updated_at")?; reject_read_only_lix_directory_insert_field(batch, row_index, "lixcol_commit_id")?; } let path = optional_string_value(batch, row_index, "path")?; let id = optional_string_value(batch, row_index, "id")?; let hidden = optional_bool_value(batch, row_index, "hidden")?; let context = directory_row_context_from_batch(batch, row_index, version_binding)?; if let Some(path) = path.filter(|_| reject_read_only_fields) { reject_read_only_lix_directory_insert_field(batch, row_index, "parent_id")?; reject_read_only_lix_directory_insert_field(batch, row_index, "name")?; let Some(path_resolvers) = path_resolvers.as_deref_mut() else { return Err(DataFusionError::Execution( "INSERT into lix_directory with path requires directory path resolver" .to_string(), )); }; let resolver = path_resolvers .entry(directory_path_resolver_key(&context)) .or_insert_with(DirectoryPathResolver::default); let Some(generate_directory_id) = generate_directory_id.as_deref_mut() else { return Err(DataFusionError::Execution( "INSERT into lix_directory with path requires directory id generator" .to_string(), )); }; let directory_id = id.unwrap_or_else(|| generate_directory_id()); let mut planned_rows = resolver .create_directory_path_with_leaf_id( &path, Some(directory_id.clone()), context, hidden.unwrap_or(false), generate_directory_id, ) .map_err(lix_error_to_datafusion_error)?; attach_lix_directory_insert_origin(&mut planned_rows, surface_name, &directory_id); rows.extend(planned_rows); continue; } let parent_id = optional_string_value(batch, row_index, "parent_id")?; let name = required_string_value(batch, row_index, "name")?; if let Some(path_resolvers) = path_resolvers.as_deref_mut() { if let Some(directory_id) = id.as_ref() { let resolver = path_resolvers .entry(directory_path_resolver_key(&context)) .or_insert_with(DirectoryPathResolver::default); resolver .reserve_directory(parent_id.clone(), name.clone(), directory_id.clone()) .map_err(lix_error_to_datafusion_error)?; } } let mut row = directory_descriptor_write_row(DirectoryDescriptorWriteIntent { id: id.clone(), parent_id, name, hidden, context, }); if let Some(directory_id) = id.as_ref() { 
row.origin = Some(lix_directory_insert_origin(surface_name, directory_id)); } rows.push(row); } Ok(rows) } fn attach_lix_directory_insert_origin( rows: &mut [TransactionWriteRow], surface_name: &str, directory_id: &str, ) { let origin = lix_directory_insert_origin(surface_name, directory_id); for row in rows { if row.schema_key != DIRECTORY_SCHEMA_KEY { continue; } let Some(entity_id) = row .entity_id .as_ref() .and_then(|entity_id| entity_id.as_single_string_owned().ok()) else { continue; }; if entity_id == directory_id { row.origin = Some(origin.clone()); } } } fn lix_directory_insert_origin(surface_name: &str, directory_id: &str) -> TransactionWriteOrigin { TransactionWriteOrigin { surface: surface_name.to_string(), operation: TransactionWriteOperation::Insert, primary_key: Some(LogicalPrimaryKey { columns: vec!["id".to_string()], values: vec![directory_id.to_string()], }), } } fn directory_row_context_from_batch( batch: &RecordBatch, row_index: usize, version_binding: Option<&str>, ) -> Result { let scope = resolve_write_version_scope( optional_bool_value(batch, row_index, "lixcol_global")?, optional_string_value(batch, row_index, "lixcol_version_id")?, version_binding, "INSERT into lix_directory_by_version", "lix_directory", )?; Ok(FilesystemRowContext { version_id: scope.version_id, global: scope.global, untracked: optional_bool_value(batch, row_index, "lixcol_untracked")?.unwrap_or(false), file_id: optional_string_value(batch, row_index, "lixcol_file_id")?, metadata: optional_metadata_value(batch, row_index, "lixcol_metadata", "lix_directory")?, }) } fn directory_row_context_from_update( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, row_index: usize, version_binding: Option<&str>, ) -> Result { let scope = resolve_write_version_scope( optional_bool_value(batch, row_index, "lixcol_global")?, optional_string_value(batch, row_index, "lixcol_version_id")?, version_binding, "UPDATE into lix_directory_by_version", "lix_directory", )?; Ok(FilesystemRowContext { version_id: scope.version_id, global: scope.global, untracked: optional_bool_value(batch, row_index, "lixcol_untracked")?.unwrap_or(false), file_id: optional_string_value(batch, row_index, "lixcol_file_id")?, metadata: update_optional_metadata_value( batch, assignment_values, row_index, "lixcol_metadata", "lix_directory", )?, }) } fn directory_path_resolver_key(context: &FilesystemRowContext) -> String { filesystem_storage_scope_key( &context.version_id, context.global, context.untracked, context.file_id.as_deref(), ) } async fn directory_path_resolvers_from_live_state( live_state: Arc, version_binding: Option<&str>, ) -> std::result::Result, LixError> { let rows = live_state .scan_rows(&LiveStateScanRequest { filter: LiveStateFilter { schema_keys: vec![ DIRECTORY_SCHEMA_KEY.to_string(), FILE_DESCRIPTOR_SCHEMA_KEY.to_string(), ], version_ids: version_binding .map(|version_id| vec![version_id.to_string()]) .unwrap_or_default(), ..Default::default() }, ..Default::default() }) .await?; let mut resolvers = directory_path_resolvers_from_state_rows(rows)?; if let Some(version_id) = version_binding { let key = filesystem_storage_scope_key(version_id, false, false, None); resolvers .entry(key) .or_insert_with(DirectoryPathResolver::default); } Ok(resolvers) } fn lix_directory_record_batch( schema: &SchemaRef, rows: Vec, ) -> Result { let mut directory_rows = Vec::::new(); for row in rows { if row.schema_key != DIRECTORY_SCHEMA_KEY { continue; } let Some(snapshot_content) = row.snapshot_content.as_deref() else { continue; 
}; let snapshot: DirectoryDescriptorSnapshot = serde_json::from_str(snapshot_content) .map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("invalid lix_directory_descriptor snapshot JSON: {error}"), ) })?; directory_rows.push(DirectoryDescriptorRecord { id: snapshot.id, parent_id: snapshot.parent_id, name: snapshot.name, hidden: snapshot.hidden.unwrap_or(false), live: row, }); } let directory_paths = derive_directory_paths(&directory_rows)?; let mut ids = Vec::new(); let mut paths = Vec::new(); let mut parent_ids = Vec::new(); let mut names = Vec::new(); let mut hiddens = Vec::new(); let mut entity_ids = Vec::new(); let mut schema_keys = Vec::new(); let mut file_ids = Vec::new(); let mut globals = Vec::new(); let mut change_ids = Vec::new(); let mut created_ats = Vec::new(); let mut updated_ats = Vec::new(); let mut commit_ids = Vec::new(); let mut untracked_values = Vec::new(); let mut metadata_values = Vec::new(); let mut version_ids = Vec::new(); for directory in directory_rows { ids.push(Some(directory.id.clone())); paths.push( directory_paths .get(&(directory.live.version_id.clone(), directory.id.clone())) .cloned(), ); parent_ids.push(directory.parent_id); names.push(Some(directory.name)); hiddens.push(Some(directory.hidden)); entity_ids.push(Some(directory.live.entity_id.as_json_array_text()?)); schema_keys.push(Some(directory.live.schema_key)); file_ids.push(directory.live.file_id); globals.push(Some(directory.live.global)); change_ids.push(directory.live.change_id); created_ats.push(directory.live.created_at); updated_ats.push(directory.live.updated_at); commit_ids.push(directory.live.commit_id); untracked_values.push(Some(directory.live.untracked)); metadata_values.push(directory.live.metadata.as_ref().map(serialize_row_metadata)); version_ids.push(Some(directory.live.version_id)); } let mut columns = Vec::::with_capacity(schema.fields().len()); for field in schema.fields() { let array: ArrayRef = match field.name().as_str() { "id" => Arc::new(StringArray::from(ids.clone())), "path" => Arc::new(StringArray::from(paths.clone())), "parent_id" => Arc::new(StringArray::from(parent_ids.clone())), "name" => Arc::new(StringArray::from(names.clone())), "hidden" => Arc::new(BooleanArray::from(hiddens.clone())), "lixcol_entity_id" => Arc::new(StringArray::from(entity_ids.clone())), "lixcol_schema_key" => Arc::new(StringArray::from(schema_keys.clone())), "lixcol_file_id" => Arc::new(StringArray::from(file_ids.clone())), "lixcol_global" => Arc::new(BooleanArray::from(globals.clone())), "lixcol_change_id" => Arc::new(StringArray::from(change_ids.clone())), "lixcol_created_at" => Arc::new(StringArray::from(created_ats.clone())), "lixcol_updated_at" => Arc::new(StringArray::from(updated_ats.clone())), "lixcol_commit_id" => Arc::new(StringArray::from(commit_ids.clone())), "lixcol_untracked" => Arc::new(BooleanArray::from(untracked_values.clone())), "lixcol_metadata" => Arc::new(StringArray::from(metadata_values.clone())), "lixcol_version_id" => Arc::new(StringArray::from(version_ids.clone())), other => { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "sql2 lix_directory provider does not support projected column '{other}'" ), )) } }; columns.push(array); } let options = RecordBatchOptions::new().with_row_count(Some(ids.len())); RecordBatch::try_new_with_options(Arc::clone(schema), columns, &options).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("sql2 failed to build lix_directory record batch: {error}"), ) }) } fn derive_directory_paths( rows: 
&[DirectoryDescriptorRecord], ) -> std::result::Result, LixError> { let mut by_version = BTreeMap::>::new(); for row in rows { by_version .entry(row.live.version_id.clone()) .or_default() .insert(row.id.clone(), row); } let mut paths = BTreeMap::<(String, String), String>::new(); for (version_id, records) in by_version { for directory_id in records.keys() { derive_directory_path_for( &version_id, directory_id, &records, &mut paths, &mut BTreeSet::new(), )?; } } Ok(paths) } fn derive_directory_path_for( version_id: &str, directory_id: &str, records: &BTreeMap, paths: &mut BTreeMap<(String, String), String>, visiting: &mut BTreeSet, ) -> std::result::Result, LixError> { if let Some(path) = paths.get(&(version_id.to_string(), directory_id.to_string())) { return Ok(Some(path.clone())); } if !visiting.insert(directory_id.to_string()) { return Err(directory_parent_cycle_error(version_id, directory_id)); } let Some(row) = records.get(directory_id) else { visiting.remove(directory_id); return Ok(None); }; let path = match row.parent_id.as_deref() { Some(parent_id) => { let Some(parent_path) = derive_directory_path_for(version_id, parent_id, records, paths, visiting)? else { visiting.remove(directory_id); return Ok(None); }; format!("{parent_path}{}/", row.name) } None => format!("/{}/", row.name), }; visiting.remove(directory_id); paths.insert( (version_id.to_string(), directory_id.to_string()), path.clone(), ); Ok(Some(path)) } fn directory_parent_cycle_error(version_id: &str, directory_id: &str) -> LixError { LixError::new( LixError::CODE_CONSTRAINT_VIOLATION, format!( "lix_directory_descriptor parent_id cycle in version '{version_id}' while resolving directory '{directory_id}'" ), ) } fn projected_schema(base_schema: &SchemaRef, projection: Option<&Vec>) -> Result { let fields = match projection { Some(indices) => indices .iter() .map(|index| base_schema.field(*index).as_ref().clone()) .collect::>(), None => base_schema .fields() .iter() .map(|field| field.as_ref().clone()) .collect::>(), }; Ok(Arc::new(Schema::new(fields))) } fn lix_directory_scan_request( version_binding: Option<&str>, projected_schema: Option<&Schema>, limit: Option, ) -> LiveStateScanRequest { LiveStateScanRequest { filter: LiveStateFilter { schema_keys: vec![DIRECTORY_SCHEMA_KEY.to_string()], version_ids: version_binding .map(|version_id| vec![version_id.to_string()]) .unwrap_or_default(), ..LiveStateFilter::default() }, projection: lix_directory_live_state_projection(projected_schema), limit, } } fn lix_directory_live_state_projection(projected_schema: Option<&Schema>) -> LiveStateProjection { let Some(schema) = projected_schema else { return LiveStateProjection::default(); }; let mut columns = Vec::new(); let needs_snapshot = schema .fields() .iter() .any(|field| matches!(field.name().as_str(), "parent_id" | "name" | "hidden")); if needs_snapshot { columns.push("snapshot_content".to_string()); } if schema .fields() .iter() .any(|field| field.name() == "lixcol_metadata") { columns.push("metadata".to_string()); } LiveStateProjection { columns } } fn validate_lix_directory_update_assignments( schema: &SchemaRef, assignments: &[(String, Expr)], ) -> Result<()> { for (column_name, _) in assignments { schema.field_with_name(column_name).map_err(|_| { DataFusionError::Plan(format!( "UPDATE lix_directory failed: column '{column_name}' does not exist" )) })?; if !matches!( column_name.as_str(), "parent_id" | "name" | "hidden" | "lixcol_metadata" ) { return Err(DataFusionError::Execution(format!( "UPDATE lix_directory cannot stage 
read-only column '{column_name}'" ))); } } Ok(()) } fn filter_lix_directory_batch( batch: RecordBatch, filters: &[Arc], ) -> Result { let Some(mask) = evaluate_lix_directory_filters(&batch, filters)? else { return Ok(batch); }; Ok(filter_record_batch(&batch, &mask)?) } fn evaluate_lix_directory_filters( batch: &RecordBatch, filters: &[Arc], ) -> Result> { if filters.is_empty() { return Ok(None); } let mut combined_mask: Option = None; for filter in filters { let result = filter.evaluate(batch)?; let array = result.into_array(batch.num_rows())?; let bool_array = array .as_any() .downcast_ref::() .ok_or_else(|| { DataFusionError::Execution("lix_directory filter was not boolean".to_string()) })?; let normalized = bool_array .iter() .map(|value| Some(value == Some(true))) .collect::(); combined_mask = Some(match combined_mask { Some(existing) => and(&existing, &normalized)?, None => normalized, }); } Ok(combined_mask) } fn dml_count_schema() -> SchemaRef { Arc::new(Schema::new(vec![Field::new( "count", DataType::UInt64, false, )])) } fn dml_count_batch(schema: SchemaRef, count: u64) -> Result { RecordBatch::try_new( schema, vec![Arc::new(UInt64Array::from(vec![count])) as ArrayRef], ) .map_err(DataFusionError::from) } fn record_batch_has_non_null_column(batch: &RecordBatch, column_name: &str) -> Result { for row_index in 0..batch.num_rows() { if optional_scalar_value(batch, row_index, column_name)? .is_some_and(|value| !value.is_null()) { return Ok(true); } } Ok(false) } fn reject_read_only_lix_directory_insert_field( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result<()> { if optional_scalar_value(batch, row_index, column_name)?.is_some_and(|value| !value.is_null()) { return Err(DataFusionError::Execution(format!( "INSERT into lix_directory cannot stage read-only column '{column_name}'" ))); } Ok(()) } fn required_string_value( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result { optional_string_value(batch, row_index, column_name)?.ok_or_else(|| { DataFusionError::Execution(format!( "INSERT into lix_directory requires non-null text column '{column_name}'" )) }) } fn update_required_string_value( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, row_index: usize, column_name: &str, ) -> Result { update_optional_string_value(batch, assignment_values, row_index, column_name)?.ok_or_else( || { DataFusionError::Execution(format!( "UPDATE lix_directory requires non-null text column '{column_name}'" )) }, ) } fn update_optional_string_value( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, row_index: usize, column_name: &str, ) -> Result> { match assignment_values.assigned_or_existing_cell(batch, row_index, column_name)? { InsertCell::Omitted | InsertCell::Provided(SqlCell::Null) => Ok(None), InsertCell::Provided(SqlCell::Value( ScalarValue::Utf8(Some(value)) | ScalarValue::Utf8View(Some(value)) | ScalarValue::LargeUtf8(Some(value)), )) => Ok(Some(value)), InsertCell::Provided(SqlCell::Value(other)) => Err(DataFusionError::Execution(format!( "UPDATE lix_directory expected text-compatible column '{column_name}', got {other:?}" ))), } } fn update_optional_metadata_value( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, row_index: usize, column_name: &str, context: &str, ) -> Result> { update_optional_string_value(batch, assignment_values, row_index, column_name)? 
.map(|value| { let metadata = parse_row_metadata_value(&value, context) .map_err(super::error::lix_error_to_datafusion_error)?; TransactionJson::from_value(metadata, &format!("{context} metadata")) .map_err(super::error::lix_error_to_datafusion_error) }) .transpose() } fn update_optional_bool_value( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, row_index: usize, column_name: &str, ) -> Result> { match assignment_values.assigned_or_existing_cell(batch, row_index, column_name)? { InsertCell::Omitted | InsertCell::Provided(SqlCell::Null) => Ok(None), InsertCell::Provided(SqlCell::Value(ScalarValue::Boolean(Some(value)))) => Ok(Some(value)), InsertCell::Provided(SqlCell::Value(other)) => Err(DataFusionError::Execution(format!( "UPDATE lix_directory expected boolean column '{column_name}', got {other:?}" ))), } } fn optional_string_value( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result> { match optional_scalar_value(batch, row_index, column_name)? { None | Some(ScalarValue::Null) | Some(ScalarValue::Utf8(None)) | Some(ScalarValue::Utf8View(None)) | Some(ScalarValue::LargeUtf8(None)) => Ok(None), Some(ScalarValue::Utf8(Some(value))) | Some(ScalarValue::Utf8View(Some(value))) | Some(ScalarValue::LargeUtf8(Some(value))) => Ok(Some(value)), Some(other) => Err(DataFusionError::Execution(format!( "INSERT into lix_directory expected text-compatible column '{column_name}', got {other:?}" ))), } } fn optional_metadata_value( batch: &RecordBatch, row_index: usize, column_name: &str, context: &str, ) -> Result> { optional_string_value(batch, row_index, column_name)? .map(|value| { let metadata = parse_row_metadata_value(&value, context) .map_err(super::error::lix_error_to_datafusion_error)?; TransactionJson::from_value(metadata, &format!("{context} metadata")) .map_err(super::error::lix_error_to_datafusion_error) }) .transpose() } fn optional_bool_value( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result> { match optional_scalar_value(batch, row_index, column_name)? 
{ None | Some(ScalarValue::Null) | Some(ScalarValue::Boolean(None)) => Ok(None), Some(ScalarValue::Boolean(Some(value))) => Ok(Some(value)), Some(other) => Err(DataFusionError::Execution(format!( "INSERT into lix_directory expected boolean column '{column_name}', got {other:?}" ))), } } fn optional_scalar_value( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result> { let schema = batch.schema(); let column_index = match schema.index_of(column_name) { Ok(column_index) => column_index, Err(_) => return Ok(None), }; if row_index >= batch.num_rows() { return Err(DataFusionError::Execution(format!( "row index {row_index} out of bounds for lix_directory batch with {} rows", batch.num_rows() ))); } ScalarValue::try_from_array(batch.column(column_index).as_ref(), row_index) .map(Some) .map_err(|error| { DataFusionError::Execution(format!( "failed to decode lix_directory column '{column_name}' at row {row_index}: {error}" )) }) } fn lix_directory_schema() -> SchemaRef { Arc::new(Schema::new(vec![ Field::new("id", DataType::Utf8, true), Field::new("path", DataType::Utf8, true), Field::new("parent_id", DataType::Utf8, true), Field::new("name", DataType::Utf8, false), Field::new("hidden", DataType::Boolean, true), json_field("lixcol_entity_id", false), Field::new("lixcol_schema_key", DataType::Utf8, false), Field::new("lixcol_file_id", DataType::Utf8, true), Field::new("lixcol_global", DataType::Boolean, true), Field::new("lixcol_change_id", DataType::Utf8, true), Field::new("lixcol_created_at", DataType::Utf8, true), Field::new("lixcol_updated_at", DataType::Utf8, true), Field::new("lixcol_commit_id", DataType::Utf8, true), Field::new("lixcol_untracked", DataType::Boolean, true), json_field("lixcol_metadata", true), ])) } fn lix_directory_by_version_schema() -> SchemaRef { let mut fields = lix_directory_schema() .fields() .iter() .map(|field| field.as_ref().clone()) .collect::>(); fields.push(Field::new("lixcol_version_id", DataType::Utf8, false)); Arc::new(Schema::new(fields)) } fn datafusion_error_to_lix_error(error: DataFusionError) -> LixError { super::error::datafusion_error_to_lix_error(error) } fn lix_error_to_datafusion_error(error: LixError) -> DataFusionError { super::error::lix_error_to_datafusion_error(error) } #[cfg(test)] mod tests { use std::collections::{BTreeMap, BTreeSet}; use std::sync::Arc; use async_trait::async_trait; use datafusion::arrow::array::{ArrayRef, BooleanArray, StringArray}; use datafusion::arrow::datatypes::{DataType, Field, Schema}; use datafusion::arrow::record_batch::RecordBatch; use datafusion::execution::TaskContext; use serde_json::json; use crate::binary_cas::BlobDataReader; use crate::functions::{ FunctionProvider, FunctionProviderHandle, SharedFunctionProvider, SystemFunctionProvider, }; use crate::live_state::{ LiveStateReader, LiveStateRowRequest, LiveStateScanRequest, MaterializedLiveStateRow, }; use crate::sql2::dml::InsertSink; use crate::sql2::{SqlWriteContext, SqlWriteExecutionContext}; use crate::transaction::types::{ TransactionJson, TransactionWrite, TransactionWriteMode, TransactionWriteOutcome, TransactionWriteRow, }; use crate::LixError; use super::{ derive_directory_path_for, directory_path_resolvers_from_state_rows, lix_directory_by_version_schema, lix_directory_insert_origin, lix_directory_record_batch, lix_directory_recursive_delete_rows_from_batch, lix_directory_write_rows_from_batch, lix_directory_write_rows_from_batch_with_path_resolvers, DirectoryDescriptorRecord, LixDirectoryInsertSink, VersionBinding, }; use 
crate::sql2::filesystem_visibility::VisibleFilesystem; fn test_id_generator(ids: &'static [&'static str]) -> impl FnMut() -> String { let mut ids = ids.iter(); move || ids.next().expect("test id should exist").to_string() } fn test_functions() -> FunctionProviderHandle { SharedFunctionProvider::new( Box::new(SystemFunctionProvider) as Box ) } #[derive(Default)] struct CapturingWriteContext { rows: Vec, writes: Vec, } #[async_trait] impl BlobDataReader for CapturingWriteContext { async fn load_bytes_many( &self, hashes: &[crate::binary_cas::BlobHash], ) -> Result { Ok(crate::binary_cas::BlobBytesBatch::new(vec![ None; hashes.len() ])) } } #[async_trait] impl SqlWriteExecutionContext for CapturingWriteContext { fn active_version_id(&self) -> &str { "version-a" } fn functions(&self) -> FunctionProviderHandle { test_functions() } fn list_visible_schemas(&self) -> Result, LixError> { Ok(Vec::new()) } async fn load_bytes_many( &mut self, hashes: &[crate::binary_cas::BlobHash], ) -> Result { BlobDataReader::load_bytes_many(self, hashes).await } async fn scan_live_state( &mut self, _request: &LiveStateScanRequest, ) -> Result, LixError> { Ok(self.rows.clone()) } async fn load_version_head( &mut self, version_id: &str, ) -> Result, LixError> { if version_id == "ghost-version" { return Ok(None); } Ok(Some(format!("commit-{version_id}"))) } async fn stage_write( &mut self, write: TransactionWrite, ) -> Result { self.writes.push(write); Ok(TransactionWriteOutcome { count: 0 }) } } #[derive(Default)] #[allow(dead_code)] struct RowsLiveStateReader { rows: Vec, } #[async_trait] impl LiveStateReader for RowsLiveStateReader { async fn scan_rows( &self, _request: &LiveStateScanRequest, ) -> Result, LixError> { Ok(self.rows.clone()) } async fn load_row( &self, _request: &LiveStateRowRequest, ) -> Result, LixError> { Ok(None) } } fn live_row( entity_id: &str, version_id: &str, snapshot_content: &str, ) -> MaterializedLiveStateRow { live_filesystem_row( entity_id, super::DIRECTORY_SCHEMA_KEY, None, version_id, snapshot_content, ) } fn live_filesystem_row( entity_id: &str, schema_key: &str, file_id: Option<&str>, version_id: &str, snapshot_content: &str, ) -> MaterializedLiveStateRow { MaterializedLiveStateRow { entity_id: crate::entity_identity::EntityIdentity::single(entity_id), schema_key: schema_key.to_string(), file_id: file_id.map(ToOwned::to_owned), snapshot_content: Some(snapshot_content.to_string()), metadata: Some(json!({"source": "test"}).to_string()), deleted: false, version_id: version_id.to_string(), change_id: Some(format!("change-{entity_id}")), commit_id: Some(format!("commit-{entity_id}")), global: false, untracked: false, created_at: "2026-04-23T00:00:00Z".to_string(), updated_at: "2026-04-23T01:00:00Z".to_string(), } } fn filesystem_rows() -> Vec { vec![ live_filesystem_row( "dir-docs", "lix_directory_descriptor", None, "version-a", r#"{"id":"dir-docs","parent_id":null,"name":"docs","hidden":false}"#, ), live_filesystem_row( "dir-guides", "lix_directory_descriptor", None, "version-a", r#"{"id":"dir-guides","parent_id":"dir-docs","name":"guides","hidden":false}"#, ), live_filesystem_row( "file-index", "lix_file_descriptor", None, "version-a", r#"{"id":"file-index","directory_id":"dir-docs","name":"index.md","hidden":false}"#, ), live_filesystem_row( "file-readme", "lix_file_descriptor", None, "version-a", r#"{"id":"file-readme","directory_id":"dir-guides","name":"readme.md","hidden":false}"#, ), live_filesystem_row( "file-readme", "lix_binary_blob_ref", Some("file-readme"), "version-a", 
r#"{"id":"file-readme","blob_hash":"abc123","size_bytes":5}"#, ), ] } fn string_column(values: Vec>) -> ArrayRef { Arc::new(StringArray::from(values)) as ArrayRef } fn directory_insert_batch(include_version: bool, global: bool) -> RecordBatch { let mut fields = vec![ Field::new("id", DataType::Utf8, false), Field::new("parent_id", DataType::Utf8, true), Field::new("name", DataType::Utf8, false), Field::new("hidden", DataType::Boolean, false), Field::new("lixcol_global", DataType::Boolean, false), Field::new("lixcol_metadata", DataType::Utf8, true), ]; let mut columns = vec![ string_column(vec![Some("dir-docs")]), string_column(vec![None]), string_column(vec![Some("docs")]), Arc::new(BooleanArray::from(vec![false])) as ArrayRef, Arc::new(BooleanArray::from(vec![global])) as ArrayRef, string_column(vec![Some("{\"source\":\"directory\"}")]), ]; if include_version { fields.push(Field::new("lixcol_version_id", DataType::Utf8, false)); columns.push(string_column(vec![Some("version-a")])); } RecordBatch::try_new(Arc::new(Schema::new(fields)), columns) .expect("directory insert batch should build") } fn directory_path_insert_batch(path: &str) -> RecordBatch { RecordBatch::try_new( Arc::new(Schema::new(vec![ Field::new("id", DataType::Utf8, false), Field::new("path", DataType::Utf8, true), Field::new("hidden", DataType::Boolean, false), Field::new("lixcol_version_id", DataType::Utf8, false), ])), vec![ string_column(vec![Some("dir-nested")]), string_column(vec![Some(path)]), Arc::new(BooleanArray::from(vec![false])) as ArrayRef, string_column(vec![Some("version-a")]), ], ) .expect("directory path insert batch should build") } fn directory_delete_batch(ids: &[&str]) -> RecordBatch { RecordBatch::try_new( Arc::new(Schema::new(vec![ Field::new("id", DataType::Utf8, false), Field::new("lixcol_version_id", DataType::Utf8, false), ])), vec![ string_column(ids.iter().copied().map(Some).collect::>()), string_column(vec![Some("version-a"); ids.len()]), ], ) .expect("directory delete batch should build") } #[test] fn derives_nested_directory_paths() { let root = DirectoryDescriptorRecord { id: "dir-docs".to_string(), parent_id: None, name: "docs".to_string(), hidden: false, live: live_row( "dir-docs", "version-a", "{\"id\":\"dir-docs\",\"parent_id\":null,\"name\":\"docs\",\"hidden\":false}", ), }; let child = DirectoryDescriptorRecord { id: "dir-guides".to_string(), parent_id: Some("dir-docs".to_string()), name: "guides".to_string(), hidden: false, live: live_row( "dir-guides", "version-a", "{\"id\":\"dir-guides\",\"parent_id\":\"dir-docs\",\"name\":\"guides\",\"hidden\":false}", ), }; let mut records = BTreeMap::new(); records.insert(root.id.clone(), &root); records.insert(child.id.clone(), &child); let mut paths = BTreeMap::new(); assert_eq!( derive_directory_path_for( "version-a", "dir-guides", &records, &mut paths, &mut BTreeSet::new() ) .expect("path derivation should succeed"), Some("/docs/guides/".to_string()) ); } #[test] fn record_batch_projects_directory_columns() { let rows = vec![ live_row( "dir-docs", "version-a", "{\"id\":\"dir-docs\",\"parent_id\":null,\"name\":\"docs\",\"hidden\":false}", ), live_row( "dir-guides", "version-a", "{\"id\":\"dir-guides\",\"parent_id\":\"dir-docs\",\"name\":\"guides\",\"hidden\":true}", ), ]; let batch = lix_directory_record_batch(&lix_directory_by_version_schema(), rows) .expect("directory batch should build"); assert_eq!(batch.num_rows(), 2); assert_eq!( batch .column_by_name("path") .expect("path column") .as_any() .downcast_ref::() .expect("path is string") 
.value(1), "/docs/guides/" ); assert_eq!( batch .column_by_name("lixcol_version_id") .expect("version column") .as_any() .downcast_ref::() .expect("version is string") .value(1), "version-a" ); } #[test] fn decodes_directory_insert_into_lix_state_write_row() { let rows = lix_directory_write_rows_from_batch(&directory_insert_batch(true, false), None) .expect("directory batch should decode"); assert_eq!( rows, vec![TransactionWriteRow { entity_id: Some(crate::entity_identity::EntityIdentity::single("dir-docs")), schema_key: super::DIRECTORY_SCHEMA_KEY.to_string(), file_id: None, snapshot: Some(TransactionJson::from_value_for_test( json!({"hidden":false,"id":"dir-docs","name":"docs","parent_id":null}) )), metadata: Some(TransactionJson::from_value_for_test( json!({"source": "directory"}) )), origin: Some(lix_directory_insert_origin("lix_directory", "dir-docs")), created_at: None, updated_at: None, global: false, change_id: None, commit_id: None, untracked: false, version_id: "version-a".to_string(), }] ); } #[test] fn active_directory_insert_defaults_version_id() { let rows = lix_directory_write_rows_from_batch( &directory_insert_batch(false, false), Some("version-active"), ) .expect("active directory batch should decode"); assert_eq!(rows[0].version_id, "version-active"); } #[test] fn by_version_directory_insert_requires_version_id_for_non_global_rows() { let error = lix_directory_write_rows_from_batch(&directory_insert_batch(false, false), None) .expect_err("by-version insert should require version id"); assert!( error.to_string().contains("requires lixcol_version_id"), "unexpected error: {error}" ); } #[test] fn directory_insert_rejects_global_with_non_global_version_id() { let error = lix_directory_write_rows_from_batch(&directory_insert_batch(true, true), None) .expect_err("global directory write should reject conflicting version id"); assert!( error .to_string() .contains("cannot set lixcol_global=true with non-global lixcol_version_id"), "unexpected error: {error}" ); } #[test] fn directory_path_insert_reuses_existing_parent_descriptor() { let existing_rows = vec![live_row( "dir-docs", "version-a", "{\"id\":\"dir-docs\",\"parent_id\":null,\"name\":\"docs\",\"hidden\":false}", )]; let mut resolvers = directory_path_resolvers_from_state_rows(existing_rows) .expect("existing directory rows should seed paths"); let rows = lix_directory_write_rows_from_batch_with_path_resolvers( &directory_path_insert_batch("/docs/nested/"), None, "lix_directory", &mut resolvers, &mut test_id_generator(&["should-not-be-used"]), ) .expect("directory path batch should decode"); assert_eq!(rows.len(), 1); let snapshot = rows[0].snapshot.as_ref().unwrap(); assert_eq!(snapshot["id"], "dir-nested"); assert_eq!(snapshot["parent_id"], "dir-docs"); assert_eq!(snapshot["name"], "nested"); } #[test] fn recursive_directory_delete_deletes_nested_dirs_files_and_blob_refs() { let visible_filesystem = VisibleFilesystem::from_live_rows(filesystem_rows()) .expect("visible filesystem should build"); let mut visible_filesystems = BTreeMap::new(); visible_filesystems.insert("version-a".to_string(), visible_filesystem); let (rows, count) = lix_directory_recursive_delete_rows_from_batch( &directory_delete_batch(&["dir-docs"]), None, &visible_filesystems, ) .expect("recursive directory delete should plan"); assert_eq!(count, 4); assert_eq!( rows.iter() .map(|row| { ( row.schema_key.as_str(), row.entity_id .as_ref() .expect("planned delete row should carry entity_id") .as_single_string_owned() .expect("planned delete row should 
project entity_id"), ) }) .collect::>(), vec![ ("lix_file_descriptor", "file-readme".to_string()), ("lix_binary_blob_ref", "file-readme".to_string()), ("lix_directory_descriptor", "dir-guides".to_string()), ("lix_file_descriptor", "file-index".to_string()), ("lix_directory_descriptor", "dir-docs".to_string()), ] ); assert!(rows.iter().all(|row| row.snapshot.is_none())); } #[test] fn recursive_directory_delete_dedupes_overlapping_parent_and_child() { let visible_filesystem = VisibleFilesystem::from_live_rows(filesystem_rows()) .expect("visible filesystem should build"); let mut visible_filesystems = BTreeMap::new(); visible_filesystems.insert("version-a".to_string(), visible_filesystem); let (rows, count) = lix_directory_recursive_delete_rows_from_batch( &directory_delete_batch(&["dir-docs", "dir-guides"]), None, &visible_filesystems, ) .expect("recursive directory delete should plan"); assert_eq!(count, 4); let identities = rows .iter() .map(|row| { ( row.schema_key.clone(), row.entity_id.clone(), row.file_id.clone(), row.version_id.clone(), ) }) .collect::>(); assert_eq!(identities.len(), rows.len()); assert_eq!(rows.len(), 5); } #[tokio::test] async fn directory_insert_sink_stages_decoded_lix_state_rows() { let mut write_context = CapturingWriteContext::default(); let write_ctx = SqlWriteContext::new(&mut write_context); let batch = directory_insert_batch(true, false); let sink = LixDirectoryInsertSink::new( batch.schema(), write_ctx, test_functions(), VersionBinding::explicit(), ); let count = sink .write_batches(vec![batch], &Arc::new(TaskContext::default())) .await .expect("directory sink should stage write"); assert_eq!(count, 1); assert_eq!( write_context.writes.as_slice(), &[TransactionWrite::Rows { mode: TransactionWriteMode::Insert, rows: vec![TransactionWriteRow { entity_id: Some(crate::entity_identity::EntityIdentity::single("dir-docs")), schema_key: super::DIRECTORY_SCHEMA_KEY.to_string(), file_id: None, snapshot: Some(TransactionJson::from_value_for_test( json!({"hidden":false,"id":"dir-docs","name":"docs","parent_id":null}) )), metadata: Some(TransactionJson::from_value_for_test( json!({"source": "directory"}) )), origin: Some(lix_directory_insert_origin( "lix_directory_by_version", "dir-docs" )), created_at: None, updated_at: None, global: false, change_id: None, commit_id: None, untracked: false, version_id: "version-a".to_string(), }] }] ); } #[tokio::test] async fn directory_insert_sink_seeds_path_resolver_from_live_state() { let mut write_context = CapturingWriteContext { rows: vec![live_row( "dir-docs", "version-a", "{\"id\":\"dir-docs\",\"parent_id\":null,\"name\":\"docs\",\"hidden\":false}", )], writes: Vec::new(), }; let write_ctx = SqlWriteContext::new(&mut write_context); let batch = directory_path_insert_batch("/docs/nested/"); let sink = LixDirectoryInsertSink::new( batch.schema(), write_ctx, test_functions(), VersionBinding::explicit(), ); let count = sink .write_batches(vec![batch], &Arc::new(TaskContext::default())) .await .expect("directory sink should stage path write"); assert_eq!(count, 1); let [TransactionWrite::Rows { rows, .. 
}] = write_context.writes.as_slice() else { panic!("expected one directory staged write"); }; assert_eq!(rows.len(), 1); let snapshot = rows[0].snapshot.as_ref().unwrap(); assert_eq!(snapshot["id"], "dir-nested"); assert_eq!(snapshot["parent_id"], "dir-docs"); assert_eq!(snapshot["name"], "nested"); } } ================================================ FILE: packages/engine/src/sql2/dml.rs ================================================ use std::any::Any; use std::fmt::Debug; use std::sync::Arc; use async_trait::async_trait; use datafusion::arrow::array::{ArrayRef, UInt64Array}; use datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef}; use datafusion::arrow::record_batch::RecordBatch; use datafusion::common::{DataFusionError, Result}; use datafusion::execution::TaskContext; use datafusion::physical_expr::EquivalenceProperties; use datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties}; use datafusion::physical_plan::stream::RecordBatchStreamAdapter; use datafusion::physical_plan::{ DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream, }; use futures_util::stream; use super::runtime; #[async_trait] pub(crate) trait InsertSink: Debug + DisplayAs + Send + Sync { async fn write_batches( &self, batches: Vec, context: &Arc, ) -> Result; } pub(crate) struct InsertExec { input: Arc, sink: Arc, result_schema: SchemaRef, properties: Arc, } impl InsertExec { pub(crate) fn new(input: Arc, sink: Arc) -> Self { let result_schema = dml_count_schema(); let properties = PlanProperties::new( EquivalenceProperties::new(Arc::clone(&result_schema)), Partitioning::UnknownPartitioning(1), EmissionType::Final, Boundedness::Bounded, ); Self { input, sink, result_schema, properties: Arc::new(properties), } } } impl Debug for InsertExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("InsertExec").finish() } } impl DisplayAs for InsertExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!(f, "InsertExec: sink=")?; self.sink.fmt_as(t, f) } DisplayFormatType::TreeRender => write!(f, "InsertExec"), } } } impl ExecutionPlan for InsertExec { fn name(&self) -> &str { "InsertExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { vec![&self.input] } fn with_new_children( self: Arc, mut children: Vec>, ) -> Result> { if children.len() != 1 { return Err(DataFusionError::Execution(format!( "InsertExec expects one input child, got {}", children.len() ))); } Ok(Arc::new(Self::new( children.swap_remove(0), Arc::clone(&self.sink), ))) } fn execute( &self, partition: usize, context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "InsertExec only exposes one partition, got {partition}" ))); } let input = Arc::clone(&self.input); let sink = Arc::clone(&self.sink); let stream_schema = Arc::clone(&self.result_schema); let result_schema = Arc::clone(&self.result_schema); let stream = stream::once(async move { let batches = runtime::collect_input_plan(input, Arc::clone(&context)).await?; let count = sink.write_batches(batches, &context).await?; dml_count_batch(stream_schema, count) }); Ok(Box::pin(RecordBatchStreamAdapter::new( result_schema, stream, ))) } } fn dml_count_schema() -> SchemaRef { Arc::new(Schema::new(vec![Field::new( "count", DataType::UInt64, false, )])) } fn 
dml_count_batch(schema: SchemaRef, count: u64) -> Result<RecordBatch> {
    RecordBatch::try_new(
        schema,
        vec![Arc::new(UInt64Array::from(vec![count])) as ArrayRef],
    )
    .map_err(DataFusionError::from)
}


================================================
FILE: packages/engine/src/sql2/entity_history_provider.rs
================================================
use std::any::Any;
use std::sync::Arc;

use async_trait::async_trait;
use datafusion::arrow::array::{ArrayRef, BooleanArray, Float64Array, Int64Array, StringArray};
use datafusion::arrow::datatypes::SchemaRef;
use datafusion::arrow::record_batch::{RecordBatch, RecordBatchOptions};
use datafusion::catalog::{Session, TableProvider};
use datafusion::common::{DataFusionError, Result};
use datafusion::datasource::TableType;
use datafusion::execution::TaskContext;
use datafusion::logical_expr::{Expr, TableProviderFilterPushDown};
use datafusion::physical_expr::EquivalenceProperties;
use datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties};
use datafusion::physical_plan::stream::RecordBatchStreamAdapter;
use datafusion::physical_plan::{
    DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream,
};
use futures_util::stream;
use serde_json::Value as JsonValue;
use tokio::sync::Mutex;

use crate::commit_graph::CommitGraphReader;
use crate::serialize_row_metadata;
use crate::LixError;

use super::entity_provider::{
    entity_f64_value, entity_i64_value, entity_json_text_value, entity_surface_schema,
    parse_snapshot, string_array, EntityColumnType, EntityProviderVariant, EntitySurfaceSpec,
};
use super::history_projection::{tombstone_identity_column_value, HistoryIdentityProjection};
use super::history_route::{
    load_history_entries, parse_history_filter, HistoryColumnStyle, HistoryRoute,
    HistoryViewDescriptor, HISTORY_COL_START_COMMIT_ID,
};
use super::SqlCommitStoreQuerySource;
use crate::commit_store::MaterializedChange;

/// Schema-specific history surface backed directly by the commit graph.
///
/// The provider does not query `lix_state_history` through SQL. It uses the same
/// commit graph primitive as the generic history surface, then shapes canonical
/// changes into the typed entity columns for one registered schema.
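///
/// Illustrative query shape only: the `todo_item` schema key and its `id` /
/// `title` columns are hypothetical stand-ins for whatever schema is
/// registered. The `lixcol_*` columns (`lixcol_depth`,
/// `lixcol_commit_created_at`, `lixcol_start_commit_id`, ...) are the history
/// system columns this provider projects.
///
/// ```sql
/// SELECT id, title, lixcol_depth, lixcol_commit_created_at
/// FROM todo_item_history
/// WHERE lixcol_start_commit_id = '<some commit id>';
/// ```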
pub(crate) struct EntityHistoryProvider { spec: Arc, schema: SchemaRef, commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, } impl std::fmt::Debug for EntityHistoryProvider { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("EntityHistoryProvider") .field("schema_key", &self.spec.schema_key) .finish() } } impl EntityHistoryProvider { pub(crate) fn new( spec: Arc, commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, ) -> Self { Self { schema: entity_surface_schema(&spec, EntityProviderVariant::History), spec, commit_graph, query_source, } } } #[async_trait] impl TableProvider for EntityHistoryProvider { fn as_any(&self) -> &dyn Any { self } fn schema(&self) -> SchemaRef { Arc::clone(&self.schema) } fn table_type(&self) -> TableType { TableType::View } fn supports_filters_pushdown( &self, filters: &[&Expr], ) -> Result> { Ok(filters .iter() .map(|filter| { if parse_history_filter(filter, HistoryColumnStyle::Prefixed).is_some() { TableProviderFilterPushDown::Exact } else { TableProviderFilterPushDown::Unsupported } }) .collect()) } async fn scan( &self, _state: &dyn Session, projection: Option<&Vec>, filters: &[Expr], limit: Option, ) -> Result> { let route = HistoryRoute::from_filters(filters, HistoryColumnStyle::Prefixed); let schema = projected_schema(&self.schema, projection)?; Ok(Arc::new(EntityHistoryScanExec::new( Arc::clone(&self.spec), Arc::clone(&self.commit_graph), self.query_source.clone(), schema, route, limit, ))) } } struct EntityHistoryScanExec { spec: Arc, commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, schema: SchemaRef, route: HistoryRoute, limit: Option, properties: Arc, } impl std::fmt::Debug for EntityHistoryScanExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("EntityHistoryScanExec") .field("schema_key", &self.spec.schema_key) .field("route", &self.route) .field("limit", &self.limit) .finish() } } impl EntityHistoryScanExec { fn new( spec: Arc, commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, schema: SchemaRef, route: HistoryRoute, limit: Option, ) -> Self { let properties = PlanProperties::new( EquivalenceProperties::new(Arc::clone(&schema)), Partitioning::UnknownPartitioning(1), EmissionType::Incremental, Boundedness::Bounded, ); Self { spec, commit_graph, query_source, schema, route, limit, properties: Arc::new(properties), } } } impl DisplayAs for EntityHistoryScanExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => write!( f, "EntityHistoryScanExec(schema_key={}, route={:?}, limit={:?})", self.spec.schema_key, self.route, self.limit ), DisplayFormatType::TreeRender => write!(f, "EntityHistoryScanExec"), } } } impl ExecutionPlan for EntityHistoryScanExec { fn name(&self) -> &str { "EntityHistoryScanExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Internal( "EntityHistoryScanExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "EntityHistoryScanExec only exposes one partition, got {partition}" ))); } let spec = Arc::clone(&self.spec); let commit_graph = Arc::clone(&self.commit_graph); 
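        // Everything the scan needs is cloned here so the `async move` block below
        // can own it. That future loads the matching history rows through the commit
        // graph, projects them into a single RecordBatch with the entity's typed
        // columns, and is exposed as a one-element stream via RecordBatchStreamAdapter.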
let query_source = self.query_source.clone(); let schema = Arc::clone(&self.schema); let route = self.route.clone(); let limit = self.limit; let stream_schema = Arc::clone(&schema); let fut = async move { let rows = load_entity_history_rows(&spec, commit_graph, query_source, &route, limit) .await .map_err(lix_error_to_datafusion_error)?; entity_history_record_batch(&stream_schema, &spec, &rows) }; Ok(Box::pin(RecordBatchStreamAdapter::new( schema, stream::once(fut), ))) } } #[derive(Debug, Clone)] struct EntityHistoryRow { change: MaterializedChange, observed_commit_id: String, commit_created_at: String, start_commit_id: String, depth: u32, } async fn load_entity_history_rows( spec: &EntitySurfaceSpec, commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, route: &HistoryRoute, limit: Option, ) -> Result, LixError> { let history_view_name = format!("{}_history", spec.schema_key); let entries = load_history_entries( HistoryViewDescriptor { view_name: history_view_name.as_str(), start_commit_column: HISTORY_COL_START_COMMIT_ID, }, commit_graph, query_source.json_reader, route, vec![spec.schema_key.clone()], ) .await?; let mut rows = entries .into_iter() .map(|entry| EntityHistoryRow { change: entry.change, observed_commit_id: entry.observed_commit_id, commit_created_at: entry.commit_created_at, start_commit_id: entry.start_commit_id, depth: entry.depth, }) .collect::>(); if let Some(limit) = limit { rows.truncate(limit); } Ok(rows) } fn entity_history_record_batch( schema: &SchemaRef, spec: &EntitySurfaceSpec, rows: &[EntityHistoryRow], ) -> Result { let columns = schema .fields() .iter() .map(|field| entity_history_column_array(field.name(), spec, rows)) .collect::>>()?; Ok(RecordBatch::try_new_with_options( Arc::clone(schema), columns, &RecordBatchOptions::new().with_row_count(Some(rows.len())), )?) } fn entity_history_column_array( column_name: &str, spec: &EntitySurfaceSpec, rows: &[EntityHistoryRow], ) -> Result { if let Some(system_column) = column_name.strip_prefix("lixcol_") { return entity_history_system_column_array(system_column, rows); } let column_type = spec .visible_column(column_name) .ok_or_else(|| { DataFusionError::Execution(format!( "sql2 entity history provider '{}' does not expose column '{}'", spec.schema_key, column_name )) })? 
.column_type; let projected_values = rows .iter() .map(|row| entity_history_column_value(row, spec, column_name)) .collect::>>()?; Ok(match column_type { EntityColumnType::String | EntityColumnType::Json => Arc::new(StringArray::from( projected_values .iter() .map(|snapshot| entity_json_text_value(snapshot.as_ref(), column_type)) .collect::>>()?, )) as ArrayRef, EntityColumnType::Integer => Arc::new(Int64Array::from( projected_values .iter() .map(|snapshot| entity_i64_value(snapshot.as_ref())) .collect::>(), )) as ArrayRef, EntityColumnType::Number => Arc::new(Float64Array::from( projected_values .iter() .map(|snapshot| entity_f64_value(snapshot.as_ref())) .collect::>(), )) as ArrayRef, EntityColumnType::Boolean => Arc::new(BooleanArray::from( projected_values .iter() .map(|snapshot| snapshot.as_ref().and_then(JsonValue::as_bool)) .collect::>(), )) as ArrayRef, }) } fn entity_history_column_value( row: &EntityHistoryRow, spec: &EntitySurfaceSpec, column_name: &str, ) -> Result> { let snapshot = parse_snapshot(row.change.snapshot_content.as_deref())?; if let Some(snapshot) = snapshot { return Ok(snapshot.get(column_name).cloned()); } let entity_id = row.change.entity_id.as_json_array_text().map_err(|error| { DataFusionError::Execution(format!( "sql2 entity history provider failed to project entity id: {error}" )) })?; tombstone_identity_column_value( column_name, &entity_id, HistoryIdentityProjection::PrimaryKeyPaths(&spec.primary_key_paths), ) .map_err(|error| DataFusionError::Execution(error.to_string())) } fn entity_history_system_column_array( column_name: &str, rows: &[EntityHistoryRow], ) -> Result { Ok(match column_name { "entity_id" => Arc::new(StringArray::from( rows.iter() .map(|row| { Some( row.change .entity_id .as_json_array_text() .expect("canonical change entity identity should project"), ) }) .collect::>(), )) as ArrayRef, "schema_key" => string_array(rows.iter().map(|row| Some(row.change.schema_key.as_str()))), "file_id" => string_array(rows.iter().map(|row| row.change.file_id.as_deref())), "snapshot_content" => string_array( rows.iter() .map(|row| row.change.snapshot_content.as_deref()), ), "metadata" => Arc::new(StringArray::from( rows.iter() .map(|row| row.change.metadata.as_ref().map(serialize_row_metadata)) .collect::>(), )) as ArrayRef, "change_id" => string_array(rows.iter().map(|row| Some(row.change.id.as_str()))), "observed_commit_id" => { string_array(rows.iter().map(|row| Some(row.observed_commit_id.as_str()))) } "commit_created_at" => { string_array(rows.iter().map(|row| Some(row.commit_created_at.as_str()))) } "start_commit_id" => { string_array(rows.iter().map(|row| Some(row.start_commit_id.as_str()))) } "depth" => Arc::new(Int64Array::from( rows.iter() .map(|row| i64::from(row.depth)) .collect::>(), )) as ArrayRef, other => { return Err(DataFusionError::Execution(format!( "sql2 entity history provider does not support system column 'lixcol_{other}'" ))) } }) } fn projected_schema(schema: &SchemaRef, projection: Option<&Vec>) -> Result { let Some(projection) = projection else { return Ok(Arc::clone(schema)); }; Ok(Arc::new(schema.project(projection)?)) } fn lix_error_to_datafusion_error(error: LixError) -> DataFusionError { super::error::lix_error_to_datafusion_error(error) } ================================================ FILE: packages/engine/src/sql2/entity_provider.rs ================================================ use std::any::Any; use std::collections::{BTreeMap, BTreeSet}; use std::sync::Arc; use async_trait::async_trait; use 
datafusion::arrow::array::{ ArrayRef, BooleanArray, Float64Array, Int64Array, StringArray, UInt64Array, }; use datafusion::arrow::compute::{and, filter_record_batch}; use datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef}; use datafusion::arrow::record_batch::{RecordBatch, RecordBatchOptions}; use datafusion::catalog::{Session, TableProvider}; use datafusion::common::{not_impl_err, DFSchema, DataFusionError, Result, ScalarValue}; use datafusion::datasource::TableType; use datafusion::execution::TaskContext; use datafusion::logical_expr::dml::InsertOp; use datafusion::logical_expr::expr::InList; use datafusion::logical_expr::{BinaryExpr, Expr, Operator, TableProviderFilterPushDown}; use datafusion::physical_expr::{create_physical_expr, EquivalenceProperties, PhysicalExpr}; use datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties}; use datafusion::physical_plan::stream::RecordBatchStreamAdapter; use datafusion::physical_plan::{ DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream, }; use datafusion::prelude::SessionContext; use futures_util::{stream, TryStreamExt}; use serde_json::Value as JsonValue; use crate::commit_graph::CommitGraphReader; use crate::entity_identity::EntityIdentity; use crate::live_state::MaterializedLiveStateRow; use crate::live_state::{ LiveStateFilter, LiveStateProjection, LiveStateReader, LiveStateScanRequest, }; use crate::sql2::dml::{InsertExec, InsertSink}; use crate::sql2::predicate_typecheck::validate_json_predicate_filters; use crate::sql2::read_only::reject_read_only_entity_surface; use crate::sql2::version_scope::{ explicit_version_ids_from_dml_filters, resolve_provider_version_ids, resolve_write_version_scope, VersionBinding, }; use crate::sql2::write_normalization::{ InsertCell, InsertColumnIntents, SqlCell, UpdateAssignmentValues, UpdateCell, }; use crate::transaction::types::{TransactionJson, TransactionWriteRow}; use crate::version::VersionRefReader; use crate::{parse_row_metadata_value, serialize_row_metadata, LixError}; use super::entity_history_provider::EntityHistoryProvider; use super::history_route::{ HISTORY_COL_CHANGE_ID, HISTORY_COL_COMMIT_CREATED_AT, HISTORY_COL_DEPTH, HISTORY_COL_ENTITY_ID, HISTORY_COL_FILE_ID, HISTORY_COL_METADATA, HISTORY_COL_OBSERVED_COMMIT_ID, HISTORY_COL_SCHEMA_KEY, HISTORY_COL_SNAPSHOT_CONTENT, HISTORY_COL_START_COMMIT_ID, }; use super::result_metadata::{json_field, mark_json_field}; use crate::sql2::{ SqlCommitStoreQuerySource, SqlWriteContext, WriteAccess, WriteContextLiveStateReader, WriteContextVersionRefReader, }; use crate::transaction::types::{TransactionWrite, TransactionWriteMode}; pub(crate) async fn register_entity_providers( ctx: &SessionContext, active_version_id: &str, live_state: Arc, version_ref: Arc, commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, schema_definitions: &[JsonValue], ) -> Result<(), LixError> { for schema in schema_definitions { let spec = match derive_entity_surface_spec_from_schema(schema) { Ok(spec) => Arc::new(spec), Err(_) => continue, }; if !schema_exposed_as_entity_surface(&spec.schema_key) { continue; } let by_version_name = format!("{}_by_version", spec.schema_key); ctx.register_table( &by_version_name, Arc::new(EntityProvider::by_version( Arc::clone(&spec), Arc::clone(&live_state), Arc::clone(&version_ref), )), ) .map_err(datafusion_error_to_lix_error)?; ctx.register_table( &spec.schema_key, Arc::new(EntityProvider::active( Arc::clone(&spec), Arc::clone(&live_state), 
Arc::clone(&version_ref), active_version_id.to_string(), )), ) .map_err(datafusion_error_to_lix_error)?; if schema_exposed_as_entity_history_surface(&spec.schema_key) { let history_name = format!("{}_history", spec.schema_key); ctx.register_table( &history_name, Arc::new(EntityHistoryProvider::new( Arc::clone(&spec), Arc::clone(&commit_graph), query_source.clone(), )), ) .map_err(datafusion_error_to_lix_error)?; } } Ok(()) } pub(crate) async fn register_entity_write_providers( ctx: &SessionContext, write_ctx: SqlWriteContext, schema_definitions: &[JsonValue], ) -> Result<(), LixError> { for schema in schema_definitions { let spec = match derive_entity_surface_spec_from_schema(schema) { Ok(spec) => Arc::new(spec), Err(_) => continue, }; if !schema_exposed_as_entity_surface(&spec.schema_key) { continue; } let by_version_name = format!("{}_by_version", spec.schema_key); ctx.register_table( &by_version_name, Arc::new(EntityProvider::by_version_with_write( Arc::clone(&spec), write_ctx.clone(), )), ) .map_err(datafusion_error_to_lix_error)?; ctx.register_table( &spec.schema_key, Arc::new(EntityProvider::active_with_write( Arc::clone(&spec), write_ctx.clone(), )), ) .map_err(datafusion_error_to_lix_error)?; } Ok(()) } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(super) enum EntityProviderVariant { Active, ByVersion, History, } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(super) enum EntityColumnType { String, Json, Integer, Number, Boolean, } #[derive(Debug, Clone, PartialEq, Eq)] pub(super) struct EntitySurfaceColumn { pub(super) name: String, pub(super) column_type: EntityColumnType, } #[derive(Debug, Clone, PartialEq, Eq)] pub(super) struct EntitySurfaceSpec { pub(super) schema_key: String, pub(super) primary_key_paths: Vec>, pub(super) columns: Vec, } impl EntitySurfaceSpec { #[cfg(test)] fn visible_column_names(&self) -> impl Iterator { self.columns.iter().map(|column| column.name.as_str()) } pub(super) fn visible_column(&self, column_name: &str) -> Option<&EntitySurfaceColumn> { self.columns .iter() .find(|column| column.name == column_name) } fn is_visible_column(&self, column_name: &str) -> bool { self.visible_column(column_name).is_some() } } pub(crate) struct EntityProvider { spec: Arc, live_state: Arc, version_ref: Arc, write_access: WriteAccess, schema: SchemaRef, variant: EntityProviderVariant, version_binding: VersionBinding, } impl std::fmt::Debug for EntityProvider { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("EntityProvider") .field("schema_key", &self.spec.schema_key) .field("variant", &self.variant) .finish() } } impl EntityProvider { fn active( spec: Arc, live_state: Arc, version_ref: Arc, active_version_id: String, ) -> Self { Self { schema: entity_surface_schema(&spec, EntityProviderVariant::Active), spec, live_state, version_ref, write_access: WriteAccess::read_only(), variant: EntityProviderVariant::Active, version_binding: VersionBinding::active(active_version_id), } } fn active_with_write(spec: Arc, write_ctx: SqlWriteContext) -> Self { let active_version_id = write_ctx.active_version_id(); let live_state = Arc::new(WriteContextLiveStateReader::new(write_ctx.clone())); let version_ref = Arc::new(WriteContextVersionRefReader::new(write_ctx.clone())); Self { schema: entity_surface_schema(&spec, EntityProviderVariant::Active), spec, live_state, version_ref, write_access: WriteAccess::write(write_ctx), variant: EntityProviderVariant::Active, version_binding: VersionBinding::active(active_version_id), } } fn by_version( spec: 
Arc, live_state: Arc, version_ref: Arc, ) -> Self { Self { schema: entity_surface_schema(&spec, EntityProviderVariant::ByVersion), spec, live_state, version_ref, write_access: WriteAccess::read_only(), variant: EntityProviderVariant::ByVersion, version_binding: VersionBinding::explicit(), } } fn by_version_with_write(spec: Arc, write_ctx: SqlWriteContext) -> Self { let live_state = Arc::new(WriteContextLiveStateReader::new(write_ctx.clone())); let version_ref = Arc::new(WriteContextVersionRefReader::new(write_ctx.clone())); Self { schema: entity_surface_schema(&spec, EntityProviderVariant::ByVersion), spec, live_state, version_ref, write_access: WriteAccess::write(write_ctx), variant: EntityProviderVariant::ByVersion, version_binding: VersionBinding::explicit(), } } } #[async_trait] impl TableProvider for EntityProvider { fn as_any(&self) -> &dyn Any { self } fn schema(&self) -> SchemaRef { Arc::clone(&self.schema) } fn table_type(&self) -> TableType { TableType::Base } fn supports_filters_pushdown( &self, filters: &[&Expr], ) -> Result> { let analyzer = EntityPrimaryKeyFilterAnalyzer::new(&self.spec); Ok(filters .iter() .map(|filter| { if ExactVersionIdFilterAnalyzer.supports(filter) || analyzer.supports(filter) { TableProviderFilterPushDown::Exact } else { TableProviderFilterPushDown::Unsupported } }) .collect()) } async fn scan( &self, _state: &dyn Session, projection: Option<&Vec>, filters: &[Expr], limit: Option, ) -> Result> { let projected_schema = projected_schema(&self.schema, projection)?; let mut request = entity_live_state_scan_request( &self.spec.schema_key, self.version_binding.active_version_id(), Some(projected_schema.as_ref()), limit, ); if self.write_access.is_write() && matches!(self.version_binding, VersionBinding::Explicit) { request.filter.version_ids = explicit_version_ids_from_dml_filters(filters); if request.filter.version_ids.is_empty() { return Err(DataFusionError::Plan(format!( "DELETE FROM {}_by_version requires an explicit lixcol_version_id predicate", self.spec.schema_key ))); } } request.filter.version_ids = resolve_provider_version_ids( self.version_ref.as_ref(), &self.version_binding, request.filter.version_ids, ) .await .map_err(lix_error_to_datafusion_error)?; apply_exact_version_id_filter(&mut request, exact_version_ids_from_filters(filters)?); apply_exact_entity_id_filters(&mut request, &self.spec, filters)?; Ok(Arc::new(EntityScanExec::new( Arc::clone(&self.spec), Arc::clone(&self.live_state), projected_schema, request, ))) } async fn insert_into( &self, _state: &dyn Session, input: Arc, insert_op: InsertOp, ) -> Result> { if insert_op != InsertOp::Append { return not_impl_err!("{insert_op} not implemented for entity surfaces yet"); } reject_read_only_entity_surface(&self.spec.schema_key, "INSERT")?; let write_ctx = self.write_access.require_write(&format!( "INSERT into {} entity surface", self.spec.schema_key ))?; let insert_version_binding = match self.variant { EntityProviderVariant::Active => self.version_binding.clone(), EntityProviderVariant::ByVersion => VersionBinding::explicit(), EntityProviderVariant::History => { return not_impl_err!("INSERT is not implemented for entity history surfaces"); } }; let sink = EntityInsertSink::new( Arc::clone(&self.spec), input.schema(), InsertColumnIntents::from_input(&input), write_ctx.clone(), insert_version_binding, ); Ok(Arc::new(InsertExec::new(input, Arc::new(sink)))) } async fn delete_from( &self, state: &dyn Session, filters: Vec, ) -> Result> { reject_read_only_entity_surface(&self.spec.schema_key, 
"DELETE")?; let write_ctx = self.write_access.require_write(&format!( "DELETE FROM {} entity surface", self.spec.schema_key ))?; let version_binding = match self.variant { EntityProviderVariant::Active => self.version_binding.clone(), EntityProviderVariant::ByVersion => VersionBinding::explicit(), EntityProviderVariant::History => { return not_impl_err!("DELETE is not implemented for entity history surfaces"); } }; let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?; validate_json_predicate_filters(self.schema.as_ref(), &filters)?; let physical_filters = filters .iter() .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props())) .collect::>>()?; let mut request = entity_live_state_scan_request( &self.spec.schema_key, version_binding.active_version_id(), None, None, ); if matches!(version_binding, VersionBinding::Explicit) { let exact_version_ids = exact_version_ids_from_filters(&filters)?; if exact_version_ids.is_none() { return Err(DataFusionError::Plan(format!( "DELETE FROM {}_by_version requires an explicit lixcol_version_id predicate", self.spec.schema_key ))); } apply_exact_version_id_filter(&mut request, exact_version_ids); } apply_exact_entity_id_filters(&mut request, &self.spec, &filters)?; Ok(Arc::new(EntityDeleteExec::new( Arc::clone(&self.spec), write_ctx.clone(), Arc::clone(&self.schema), version_binding, request, physical_filters, ))) } async fn update( &self, state: &dyn Session, assignments: Vec<(String, Expr)>, filters: Vec, ) -> Result> { reject_read_only_entity_surface(&self.spec.schema_key, "UPDATE")?; let write_ctx = self .write_access .require_write(&format!("UPDATE {} entity surface", self.spec.schema_key))?; validate_entity_update_assignments(&self.spec, &self.schema, &assignments)?; let version_binding = match self.variant { EntityProviderVariant::Active => self.version_binding.clone(), EntityProviderVariant::ByVersion => VersionBinding::explicit(), EntityProviderVariant::History => { return not_impl_err!("UPDATE is not implemented for entity history surfaces"); } }; let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?; validate_json_predicate_filters(self.schema.as_ref(), &filters)?; let physical_assignments = assignments .iter() .map(|(column_name, expr)| { Ok(( column_name.clone(), create_physical_expr(expr, &df_schema, state.execution_props())?, )) }) .collect::>>()?; let physical_filters = filters .iter() .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props())) .collect::>>()?; let mut request = entity_live_state_scan_request( &self.spec.schema_key, version_binding.active_version_id(), None, None, ); apply_exact_entity_id_filters(&mut request, &self.spec, &filters)?; Ok(Arc::new(EntityUpdateExec::new( Arc::clone(&self.spec), write_ctx.clone(), Arc::clone(&self.schema), version_binding, request, physical_assignments, physical_filters, ))) } } fn entity_ids_from_primary_key_filters( spec: &EntitySurfaceSpec, filters: &[Expr], ) -> Result>> { let analyzer = EntityPrimaryKeyFilterAnalyzer::new(spec); let mut entity_ids: Option> = None; for filter in filters { let Some(filter_ids) = analyzer.analyze(filter)? 
else { continue; }; entity_ids = Some(match entity_ids { Some(existing_ids) => existing_ids.intersection(&filter_ids).cloned().collect(), None => filter_ids, }); } Ok(entity_ids.map(|ids| ids.into_iter().collect())) } fn apply_exact_entity_id_filters( request: &mut LiveStateScanRequest, spec: &EntitySurfaceSpec, filters: &[Expr], ) -> Result<()> { if let Some(entity_ids) = entity_ids_from_primary_key_filters(spec, filters)? { if entity_ids.is_empty() { request.limit = Some(0); } request.filter.entity_ids = entity_ids; } Ok(()) } fn exact_version_ids_from_filters(filters: &[Expr]) -> Result>> { let analyzer = ExactVersionIdFilterAnalyzer; let mut version_ids: Option> = None; for filter in filters { let Some(filter_ids) = analyzer.analyze(filter)? else { continue; }; version_ids = Some(match version_ids { Some(existing_ids) => existing_ids.intersection(&filter_ids).cloned().collect(), None => filter_ids, }); } Ok(version_ids.map(|ids| ids.into_iter().collect())) } fn apply_exact_version_id_filter( request: &mut LiveStateScanRequest, version_ids: Option>, ) { if let Some(version_ids) = version_ids { if version_ids.is_empty() { request.limit = Some(0); } request.filter.version_ids = version_ids; } } struct EntityPrimaryKeyFilterAnalyzer<'a> { primary_key_columns: Vec<&'a str>, } struct ExactVersionIdFilterAnalyzer; impl ExactVersionIdFilterAnalyzer { fn supports(&self, expr: &Expr) -> bool { self.analyze(expr) .is_ok_and(|constraint| constraint.is_some()) } fn analyze(&self, expr: &Expr) -> Result>> { match expr { Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::And => { let Some(left) = self.analyze(&binary_expr.left)? else { return Ok(None); }; let Some(right) = self.analyze(&binary_expr.right)? else { return Ok(None); }; Ok(Some(left.intersection(&right).cloned().collect())) } Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::Or => { let Some(mut left) = self.analyze(&binary_expr.left)? else { return Ok(None); }; let Some(right) = self.analyze(&binary_expr.right)? 
else { return Ok(None); }; left.extend(right); Ok(Some(left)) } Expr::BinaryExpr(binary_expr) => { Ok(version_id_from_binary_filter(binary_expr).map(|value| BTreeSet::from([value]))) } Expr::InList(in_list) => { Ok(version_ids_from_in_list_filter(in_list) .map(|values| values.into_iter().collect())) } _ => Ok(None), } } } fn version_id_from_binary_filter(binary_expr: &BinaryExpr) -> Option { if binary_expr.op != Operator::Eq { return None; } version_id_from_column_literal_filter(&binary_expr.left, &binary_expr.right) .or_else(|| version_id_from_column_literal_filter(&binary_expr.right, &binary_expr.left)) } fn version_ids_from_in_list_filter(in_list: &InList) -> Option> { if in_list.negated { return None; } let Expr::Column(column) = in_list.expr.as_ref() else { return None; }; if column.name != "lixcol_version_id" { return None; } let values = in_list .list .iter() .map(string_expr_literal) .collect::>>()?; if values.is_empty() { return None; } Some(values) } fn version_id_from_column_literal_filter( column_expr: &Expr, literal_expr: &Expr, ) -> Option { let Expr::Column(column) = column_expr else { return None; }; if column.name != "lixcol_version_id" { return None; } string_expr_literal(literal_expr) } impl<'a> EntityPrimaryKeyFilterAnalyzer<'a> { fn new(spec: &'a EntitySurfaceSpec) -> Self { Self { primary_key_columns: string_primary_key_columns(spec), } } fn supports(&self, expr: &Expr) -> bool { self.analyze(expr) .is_ok_and(|constraint| constraint.is_some()) } fn analyze(&self, expr: &Expr) -> Result>> { if self.primary_key_columns.is_empty() { return Ok(None); }; let Some(constraint) = self.analyze_constraint(expr)? else { return Ok(None); }; Ok(constraint.into_entity_ids(&self.primary_key_columns)) } fn analyze_constraint(&self, expr: &Expr) -> Result> { match expr { Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::And => { let Some(left) = self.analyze_constraint(&binary_expr.left)? else { return Ok(None); }; let Some(right) = self.analyze_constraint(&binary_expr.right)? else { return Ok(None); }; Ok(Some(left.intersect(right, &self.primary_key_columns))) } Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::Or => { let Some(left) = self.analyze_constraint(&binary_expr.left)? else { return Ok(None); }; let Some(right) = self.analyze_constraint(&binary_expr.right)? 
else { return Ok(None); }; let Some(left_ids) = left.into_entity_ids(&self.primary_key_columns) else { return Ok(None); }; let Some(mut right_ids) = right.into_entity_ids(&self.primary_key_columns) else { return Ok(None); }; right_ids.extend(left_ids); Ok(Some(EntityIdentityConstraint::Full(right_ids))) } Expr::BinaryExpr(binary_expr) => Ok(entity_identity_constraint_from_binary_filter( binary_expr, &self.primary_key_columns, )), Expr::InList(in_list) => Ok(entity_identity_constraint_from_in_list_filter( in_list, &self.primary_key_columns, )), _ => Ok(None), } } } #[derive(Debug, Clone, PartialEq, Eq)] enum EntityIdentityConstraint { Full(BTreeSet), Parts(BTreeMap>), } impl EntityIdentityConstraint { fn intersect(self, other: Self, primary_key_columns: &[&str]) -> Self { match (self, other) { (Self::Full(left), Self::Full(right)) => { Self::Full(left.intersection(&right).cloned().collect()) } (Self::Full(ids), Self::Parts(parts)) | (Self::Parts(parts), Self::Full(ids)) => { Self::Full( ids.into_iter() .filter(|identity| { identity_matches_parts(identity, primary_key_columns, &parts) }) .collect(), ) } (Self::Parts(mut left), Self::Parts(right)) => { for (column, right_values) in right { left.entry(column) .and_modify(|left_values| { *left_values = left_values.intersection(&right_values).cloned().collect(); }) .or_insert(right_values); } Self::Parts(left) } } } fn into_entity_ids(self, primary_key_columns: &[&str]) -> Option> { match self { Self::Full(ids) => Some(ids), Self::Parts(parts) => entity_ids_from_primary_key_parts(primary_key_columns, parts), } } } fn string_primary_key_columns(spec: &EntitySurfaceSpec) -> Vec<&str> { spec.primary_key_paths .iter() .map(|path| { let [column_name] = path.as_slice() else { return None; }; let column = spec.visible_column(column_name)?; (column.column_type == EntityColumnType::String).then_some(column.name.as_str()) }) .collect::>>() .unwrap_or_default() } fn entity_identity_constraint_from_binary_filter( binary_expr: &BinaryExpr, primary_key_columns: &[&str], ) -> Option { if binary_expr.op != Operator::Eq { return None; } entity_identity_constraint_from_column_literal_filter( &binary_expr.left, &binary_expr.right, primary_key_columns, ) .or_else(|| { entity_identity_constraint_from_column_literal_filter( &binary_expr.right, &binary_expr.left, primary_key_columns, ) }) } fn entity_identity_constraint_from_in_list_filter( in_list: &InList, primary_key_columns: &[&str], ) -> Option { if in_list.negated { return None; } let Expr::Column(column) = in_list.expr.as_ref() else { return None; }; let values = in_list .list .iter() .map(string_expr_literal) .collect::>>()?; if values.is_empty() { return None; } match column.name.as_str() { "lixcol_entity_id" => values .into_iter() .map(|value| EntityIdentity::from_json_array_text(&value).ok()) .collect::>>() .map(EntityIdentityConstraint::Full), column_name if primary_key_columns.contains(&column_name) => { Some(EntityIdentityConstraint::Parts(BTreeMap::from([( column_name.to_string(), values.into_iter().collect(), )]))) } _ => None, } } fn entity_identity_constraint_from_column_literal_filter( column_expr: &Expr, literal_expr: &Expr, primary_key_columns: &[&str], ) -> Option { let Expr::Column(column) = column_expr else { return None; }; let value = string_expr_literal(literal_expr)?; match column.name.as_str() { "lixcol_entity_id" => EntityIdentity::from_json_array_text(&value) .ok() .map(|identity| EntityIdentityConstraint::Full(BTreeSet::from([identity]))), column_name if 
primary_key_columns.contains(&column_name) => { Some(EntityIdentityConstraint::Parts(BTreeMap::from([( column_name.to_string(), BTreeSet::from([value]), )]))) } _ => None, } } fn entity_ids_from_primary_key_parts( primary_key_columns: &[&str], parts: BTreeMap>, ) -> Option> { if primary_key_columns .iter() .any(|column| !parts.contains_key(*column)) { return None; } let mut identities = BTreeSet::from([Vec::::new()]); for column in primary_key_columns { let values = parts.get(*column)?; identities = identities .into_iter() .flat_map(|prefix| { values.iter().map(move |value| { let mut parts = prefix.clone(); parts.push(value.clone()); parts }) }) .collect(); } Some( identities .into_iter() .map(|parts| EntityIdentity { parts }) .collect(), ) } fn identity_matches_parts( identity: &EntityIdentity, primary_key_columns: &[&str], parts: &BTreeMap>, ) -> bool { let identity_parts = identity.parts.as_slice(); primary_key_columns .iter() .zip(identity_parts.iter()) .all(|(column, value)| { parts .get(*column) .is_none_or(|values| values.contains(value)) }) } fn string_expr_literal(expr: &Expr) -> Option { let Expr::Literal(literal, _) = expr else { return None; }; match literal { ScalarValue::Utf8(Some(value)) | ScalarValue::Utf8View(Some(value)) | ScalarValue::LargeUtf8(Some(value)) => Some(value.clone()), _ => None, } } struct EntityInsertSink { spec: Arc, insert_column_intents: InsertColumnIntents, write_ctx: SqlWriteContext, version_binding: VersionBinding, } impl std::fmt::Debug for EntityInsertSink { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("EntityInsertSink") .field("schema_key", &self.spec.schema_key) .finish() } } impl EntityInsertSink { fn new( spec: Arc, _schema: SchemaRef, insert_column_intents: InsertColumnIntents, write_ctx: SqlWriteContext, version_binding: VersionBinding, ) -> Self { Self { spec, insert_column_intents, write_ctx, version_binding, } } } impl DisplayAs for EntityInsertSink { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!(f, "EntityInsertSink(schema_key={})", self.spec.schema_key) } DisplayFormatType::TreeRender => write!(f, "EntityInsertSink"), } } } #[async_trait] impl InsertSink for EntityInsertSink { async fn write_batches( &self, batches: Vec, _context: &Arc, ) -> Result { let mut rows = Vec::new(); for batch in batches { rows.extend(entity_lix_state_write_rows_from_batch( &self.spec, &batch, &self.insert_column_intents, self.version_binding.active_version_id(), )?); } let count = u64::try_from(rows.len()) .map_err(|_| DataFusionError::Execution("entity INSERT row count overflow".into()))?; self.write_ctx .stage_write(TransactionWrite::Rows { mode: TransactionWriteMode::Insert, rows, }) .await .map_err(lix_error_to_datafusion_error)?; Ok(count) } } #[allow(dead_code)] struct EntityDeleteExec { spec: Arc, write_ctx: SqlWriteContext, table_schema: SchemaRef, version_binding: VersionBinding, request: LiveStateScanRequest, filters: Vec>, result_schema: SchemaRef, properties: Arc, } impl std::fmt::Debug for EntityDeleteExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("EntityDeleteExec") .field("schema_key", &self.spec.schema_key) .finish() } } impl EntityDeleteExec { fn new( spec: Arc, write_ctx: SqlWriteContext, table_schema: SchemaRef, version_binding: VersionBinding, request: LiveStateScanRequest, filters: Vec>, ) -> Self { let result_schema = 
dml_count_schema(); let properties = PlanProperties::new( EquivalenceProperties::new(Arc::clone(&result_schema)), Partitioning::UnknownPartitioning(1), EmissionType::Final, Boundedness::Bounded, ); Self { spec, write_ctx, table_schema, version_binding, request, filters, result_schema, properties: Arc::new(properties), } } } impl DisplayAs for EntityDeleteExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!(f, "EntityDeleteExec(schema_key={})", self.spec.schema_key) } DisplayFormatType::TreeRender => write!(f, "EntityDeleteExec"), } } } impl ExecutionPlan for EntityDeleteExec { fn name(&self) -> &str { "EntityDeleteExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "EntityDeleteExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "EntityDeleteExec only exposes one partition, got {partition}" ))); } let spec = Arc::clone(&self.spec); let write_ctx = self.write_ctx.clone(); let table_schema = Arc::clone(&self.table_schema); let version_binding = self.version_binding.clone(); let request = self.request.clone(); let filters = self.filters.clone(); let result_schema = Arc::clone(&self.result_schema); let stream_schema = Arc::clone(&result_schema); let stream = stream::once(async move { let rows = if request.limit == Some(0) { Vec::new() } else { write_ctx .scan_live_state(&request) .await .map_err(lix_error_to_datafusion_error)? 
}; let source_batch = entity_record_batch(&spec, Arc::clone(&table_schema), &rows)?; let matched_batch = filter_entity_batch(source_batch, &filters)?; let mut write_rows = entity_existing_lix_state_write_rows_from_batch( &spec, &matched_batch, version_binding.active_version_id(), )?; for row in &mut write_rows { row.snapshot = None; } let count = u64::try_from(write_rows.len()).map_err(|_| { DataFusionError::Execution("entity DELETE row count overflow".to_string()) })?; if count > 0 { write_ctx .stage_write(TransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: write_rows, }) .await .map_err(lix_error_to_datafusion_error)?; } Ok::<_, DataFusionError>(stream::iter(vec![Ok::( dml_count_batch(Arc::clone(&stream_schema), count)?, )])) }) .try_flatten(); Ok(Box::pin(RecordBatchStreamAdapter::new( result_schema, stream, ))) } } #[allow(dead_code)] struct EntityUpdateExec { spec: Arc, write_ctx: SqlWriteContext, table_schema: SchemaRef, version_binding: VersionBinding, request: LiveStateScanRequest, assignments: Vec<(String, Arc)>, filters: Vec>, result_schema: SchemaRef, properties: Arc, } impl std::fmt::Debug for EntityUpdateExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("EntityUpdateExec") .field("schema_key", &self.spec.schema_key) .finish() } } impl EntityUpdateExec { fn new( spec: Arc, write_ctx: SqlWriteContext, table_schema: SchemaRef, version_binding: VersionBinding, request: LiveStateScanRequest, assignments: Vec<(String, Arc)>, filters: Vec>, ) -> Self { let result_schema = dml_count_schema(); let properties = PlanProperties::new( EquivalenceProperties::new(Arc::clone(&result_schema)), Partitioning::UnknownPartitioning(1), EmissionType::Final, Boundedness::Bounded, ); Self { spec, write_ctx, table_schema, version_binding, request, assignments, filters, result_schema, properties: Arc::new(properties), } } } impl DisplayAs for EntityUpdateExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!( f, "EntityUpdateExec(schema_key={}, assignments={})", self.spec.schema_key, self.assignments.len() ) } DisplayFormatType::TreeRender => write!(f, "EntityUpdateExec"), } } } impl ExecutionPlan for EntityUpdateExec { fn name(&self) -> &str { "EntityUpdateExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "EntityUpdateExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "EntityUpdateExec only exposes one partition, got {partition}" ))); } let spec = Arc::clone(&self.spec); let write_ctx = self.write_ctx.clone(); let table_schema = Arc::clone(&self.table_schema); let version_binding = self.version_binding.clone(); let request = self.request.clone(); let assignments = self.assignments.clone(); let filters = self.filters.clone(); let result_schema = Arc::clone(&self.result_schema); let stream_schema = Arc::clone(&result_schema); let stream = stream::once(async move { let rows = if request.limit == Some(0) { Vec::new() } else { write_ctx .scan_live_state(&request) .await .map_err(lix_error_to_datafusion_error)? 
}; let source_batch = entity_record_batch(&spec, Arc::clone(&table_schema), &rows)?; let matched_batch = filter_entity_batch(source_batch, &filters)?; let write_rows = entity_update_write_rows_from_batch( &spec, &matched_batch, &assignments, version_binding.active_version_id(), )?; let count = u64::try_from(write_rows.len()).map_err(|_| { DataFusionError::Execution("entity UPDATE row count overflow".to_string()) })?; if count > 0 { write_ctx .stage_write(TransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: write_rows, }) .await .map_err(lix_error_to_datafusion_error)?; } Ok::<_, DataFusionError>(stream::iter(vec![Ok::( dml_count_batch(Arc::clone(&stream_schema), count)?, )])) }) .try_flatten(); Ok(Box::pin(RecordBatchStreamAdapter::new( result_schema, stream, ))) } } fn validate_entity_update_assignments( spec: &EntitySurfaceSpec, schema: &SchemaRef, assignments: &[(String, Expr)], ) -> Result<()> { for (column_name, _) in assignments { schema.field_with_name(column_name).map_err(|_| { DataFusionError::Plan(format!( "UPDATE entity surface '{}' failed: column '{column_name}' does not exist", spec.schema_key )) })?; if !spec.is_visible_column(column_name) && column_name != "lixcol_metadata" { return Err(DataFusionError::Execution(format!( "UPDATE entity surface '{}' cannot stage read-only column '{column_name}'", spec.schema_key ))); } } Ok(()) } fn filter_entity_batch( batch: RecordBatch, filters: &[Arc], ) -> Result { let Some(mask) = evaluate_entity_filters(&batch, filters)? else { return Ok(batch); }; Ok(filter_record_batch(&batch, &mask)?) } fn evaluate_entity_filters( batch: &RecordBatch, filters: &[Arc], ) -> Result> { if filters.is_empty() { return Ok(None); } let mut combined_mask: Option = None; for filter in filters { let result = filter.evaluate(batch)?; let array = result.into_array(batch.num_rows())?; let bool_array = array .as_any() .downcast_ref::() .ok_or_else(|| { DataFusionError::Execution("entity surface filter was not boolean".to_string()) })?; let normalized = bool_array .iter() .map(|value| Some(value == Some(true))) .collect::(); combined_mask = Some(match combined_mask { Some(existing) => and(&existing, &normalized)?, None => normalized, }); } Ok(combined_mask) } fn entity_update_write_rows_from_batch( spec: &EntitySurfaceSpec, batch: &RecordBatch, assignments: &[(String, Arc)], version_binding: Option<&str>, ) -> Result> { let assignment_values = UpdateAssignmentValues::evaluate(batch, assignments)?; (0..batch.num_rows()) .map(|row_index| { let scope = resolve_write_version_scope( optional_bool_value(batch, row_index, "lixcol_global")?, optional_string_value(batch, row_index, "lixcol_version_id")?, version_binding, &format!("UPDATE into {}_by_version", spec.schema_key), &spec.schema_key, )?; Ok(TransactionWriteRow { entity_id: optional_string_value(batch, row_index, "lixcol_entity_id")? 
.map(|entity_id| { EntityIdentity::from_json_array_text(&entity_id).map_err(|error| { DataFusionError::Execution(format!( "UPDATE entity surface '{}' has invalid lixcol_entity_id: {error}", spec.schema_key )) }) }) .transpose()?, schema_key: spec.schema_key.clone(), file_id: optional_string_value(batch, row_index, "lixcol_file_id")?, snapshot: Some( TransactionJson::from_value( entity_update_snapshot_content_from_batch( spec, batch, &assignment_values, row_index, )?, &format!("{} update snapshot_content", spec.schema_key), ) .map_err(super::error::lix_error_to_datafusion_error)?, ), metadata: entity_update_optional_metadata_value( batch, &assignment_values, row_index, "lixcol_metadata", &spec.schema_key, )?, origin: None, created_at: None, updated_at: None, global: scope.global, change_id: None, commit_id: None, untracked: optional_bool_value(batch, row_index, "lixcol_untracked")? .unwrap_or(false), version_id: scope.version_id, }) }) .collect() } fn entity_update_snapshot_content_from_batch( spec: &EntitySurfaceSpec, batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, row_index: usize, ) -> Result { let snapshot_content = optional_string_value(batch, row_index, "lixcol_snapshot_content")? .ok_or_else(|| { DataFusionError::Execution(format!( "UPDATE entity surface '{}' requires existing lixcol_snapshot_content", spec.schema_key )) })?; let mut object = match serde_json::from_str::(&snapshot_content).map_err(|error| { DataFusionError::Execution(format!( "UPDATE entity surface '{}' expected existing snapshot_content to be valid JSON: {error}", spec.schema_key )) })? { JsonValue::Object(object) => object, other => { return Err(DataFusionError::Execution(format!( "UPDATE entity surface '{}' expected existing snapshot_content to be a JSON object, got {other}", spec.schema_key ))) } }; for column in &spec.columns { let value = match entity_update_json_value( assignment_values, row_index, &column.name, column.column_type, )? { Some(value) => value, None => continue, }; object.insert(column.name.clone(), value); } Ok(JsonValue::Object(object)) } fn entity_update_optional_string_value( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, row_index: usize, column_name: &str, ) -> Result> { match assignment_values.assigned_or_existing_cell(batch, row_index, column_name)? { InsertCell::Omitted | InsertCell::Provided(SqlCell::Null) => Ok(None), InsertCell::Provided(SqlCell::Value( ScalarValue::Utf8(Some(value)) | ScalarValue::Utf8View(Some(value)) | ScalarValue::LargeUtf8(Some(value)), )) => Ok(Some(value)), InsertCell::Provided(SqlCell::Value(other)) => Err(DataFusionError::Execution(format!( "UPDATE entity surface expected text-compatible column '{column_name}', got {other:?}" ))), } } fn entity_update_optional_metadata_value( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, row_index: usize, column_name: &str, context: &str, ) -> Result> { entity_update_optional_string_value(batch, assignment_values, row_index, column_name)? .map(|value| { let metadata = parse_row_metadata_value(&value, context) .map_err(super::error::lix_error_to_datafusion_error)?; TransactionJson::from_value(metadata, &format!("{context} metadata")) .map_err(super::error::lix_error_to_datafusion_error) }) .transpose() } fn entity_update_json_value( assignment_values: &UpdateAssignmentValues, row_index: usize, column_name: &str, column_type: EntityColumnType, ) -> Result> { match assignment_values.assigned_cell(row_index, column_name)? 
{ UpdateCell::Unassigned => Ok(None), UpdateCell::Assigned(SqlCell::Null) => Ok(Some(JsonValue::Null)), UpdateCell::Assigned(SqlCell::Value(value)) => { entity_json_value_from_scalar(Some(value), column_type).map(Some) } } } fn dml_count_schema() -> SchemaRef { Arc::new(Schema::new(vec![Field::new( "count", DataType::UInt64, false, )])) } fn dml_count_batch(schema: SchemaRef, count: u64) -> Result { RecordBatch::try_new( schema, vec![Arc::new(UInt64Array::from(vec![count])) as ArrayRef], ) .map_err(DataFusionError::from) } fn entity_lix_state_write_rows_from_batch( spec: &EntitySurfaceSpec, batch: &RecordBatch, insert_column_intents: &InsertColumnIntents, version_binding: Option<&str>, ) -> Result> { entity_lix_state_write_rows_from_batch_with_options( spec, batch, insert_column_intents, version_binding, true, ) } fn entity_existing_lix_state_write_rows_from_batch( spec: &EntitySurfaceSpec, batch: &RecordBatch, version_binding: Option<&str>, ) -> Result> { entity_lix_state_write_rows_from_batch_with_options( spec, batch, &InsertColumnIntents::all_explicit(), version_binding, false, ) } fn entity_lix_state_write_rows_from_batch_with_options( spec: &EntitySurfaceSpec, batch: &RecordBatch, insert_column_intents: &InsertColumnIntents, version_binding: Option<&str>, reject_read_only_fields: bool, ) -> Result> { (0..batch.num_rows()) .map(|row_index| { let scope = resolve_write_version_scope( optional_bool_value(batch, row_index, "lixcol_global")?, optional_string_value(batch, row_index, "lixcol_version_id")?, version_binding, &format!( "INSERT into {}_by_version", spec.schema_key ), &spec.schema_key, )?; if let Some(schema_key) = optional_string_value(batch, row_index, "lixcol_schema_key")? { if schema_key != spec.schema_key { return Err(DataFusionError::Execution(format!( "INSERT into entity surface '{}' cannot set lixcol_schema_key to '{}'", spec.schema_key, schema_key ))); } } if reject_read_only_fields { reject_present_entity_insert_field(batch, row_index, "lixcol_snapshot_content")?; reject_present_entity_insert_field(batch, row_index, "lixcol_created_at")?; reject_present_entity_insert_field(batch, row_index, "lixcol_updated_at")?; reject_present_entity_insert_field(batch, row_index, "lixcol_change_id")?; reject_present_entity_insert_field(batch, row_index, "lixcol_commit_id")?; } let snapshot_content = entity_snapshot_content_from_batch(spec, batch, insert_column_intents, row_index)?; let explicit_entity_id = optional_string_value(batch, row_index, "lixcol_entity_id")?; let entity_id = if spec.primary_key_paths.is_empty() { let entity_id = explicit_entity_id.ok_or_else(|| { DataFusionError::Execution(format!( "INSERT into entity surface '{}' requires lixcol_entity_id because the schema has no x-lix-primary-key", spec.schema_key )) })?; Some(EntityIdentity::from_json_array_text(&entity_id).map_err(|error| { DataFusionError::Execution(format!( "INSERT into entity surface '{}' has invalid lixcol_entity_id: {error}", spec.schema_key )) })?) } else { explicit_entity_id .map(|entity_id| { EntityIdentity::from_json_array_text(&entity_id).map_err(|error| { DataFusionError::Execution(format!( "INSERT into entity surface '{}' has invalid lixcol_entity_id: {error}", spec.schema_key )) }) }) .transpose()? 
}; Ok(TransactionWriteRow { entity_id, schema_key: spec.schema_key.clone(), file_id: optional_string_value(batch, row_index, "lixcol_file_id")?, snapshot: Some(TransactionJson::from_value( snapshot_content, &format!("{} insert snapshot_content", spec.schema_key), ) .map_err(super::error::lix_error_to_datafusion_error)?), metadata: optional_metadata_value( batch, row_index, "lixcol_metadata", &spec.schema_key, )?, origin: None, created_at: None, updated_at: None, global: scope.global, change_id: None, commit_id: None, untracked: optional_bool_value(batch, row_index, "lixcol_untracked")? .unwrap_or(false), version_id: scope.version_id, }) }) .collect() } fn entity_snapshot_content_from_batch( spec: &EntitySurfaceSpec, batch: &RecordBatch, insert_column_intents: &InsertColumnIntents, row_index: usize, ) -> Result<JsonValue> { let mut object = serde_json::Map::new(); for column in &spec.columns { let value = match insert_column_intents.cell(batch, row_index, &column.name)? { InsertCell::Omitted => { continue; } InsertCell::Provided(SqlCell::Null) => JsonValue::Null, InsertCell::Provided(SqlCell::Value(value)) => { entity_json_value_from_scalar(Some(value), column.column_type)? } }; object.insert(column.name.clone(), value); } Ok(JsonValue::Object(object)) } fn entity_json_value_from_scalar( value: Option<ScalarValue>, column_type: EntityColumnType, ) -> Result<JsonValue> { let Some(value) = value else { return Ok(JsonValue::Null); }; match value { ScalarValue::Null | ScalarValue::Utf8(None) | ScalarValue::Utf8View(None) | ScalarValue::LargeUtf8(None) | ScalarValue::Boolean(None) | ScalarValue::Int64(None) | ScalarValue::Int32(None) | ScalarValue::UInt64(None) | ScalarValue::UInt32(None) | ScalarValue::Float64(None) | ScalarValue::Float32(None) => Ok(JsonValue::Null), ScalarValue::Utf8(Some(value)) | ScalarValue::Utf8View(Some(value)) | ScalarValue::LargeUtf8(Some(value)) => match column_type { EntityColumnType::Json => { // JSON surface columns accept SQL strings as JSON string values, // while still allowing callers to pass serialized JSON text for // objects, arrays, numbers, booleans, and null.
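// Illustrative examples of the parse-then-fallback rule implemented below:
//   '{"x":1}' -> object {"x":1}     '[1,2]' -> array [1,2]
//   '7'       -> number 7           'true'  -> boolean true
//   'hello'   -> string "hello" (not valid JSON text, so the fallback applies)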
Ok(serde_json::from_str(&value).unwrap_or(JsonValue::String(value))) } EntityColumnType::Integer => { value.parse::().map(JsonValue::from).map_err(|error| { DataFusionError::Execution(format!( "entity integer column expected integer text, got error: {error}" )) }) } EntityColumnType::Number => value .parse::() .map_err(|error| { DataFusionError::Execution(format!( "entity number column expected number text, got error: {error}" )) }) .and_then(json_number_from_f64), EntityColumnType::Boolean => { value.parse::().map(JsonValue::from).map_err(|error| { DataFusionError::Execution(format!( "entity boolean column expected boolean text, got error: {error}" )) }) } EntityColumnType::String => Ok(JsonValue::String(value)), }, ScalarValue::Boolean(Some(value)) => Ok(JsonValue::Bool(value)), ScalarValue::Int64(Some(value)) => Ok(JsonValue::from(value)), ScalarValue::Int32(Some(value)) => Ok(JsonValue::from(value)), ScalarValue::UInt64(Some(value)) => Ok(JsonValue::from(value)), ScalarValue::UInt32(Some(value)) => Ok(JsonValue::from(value)), ScalarValue::Float64(Some(value)) => json_number_from_f64(value), ScalarValue::Float32(Some(value)) => json_number_from_f64(value as f64), ScalarValue::Binary(Some(_)) | ScalarValue::LargeBinary(Some(_)) | ScalarValue::FixedSizeBinary(_, Some(_)) => Err(lix_error_to_datafusion_error( LixError::new( LixError::CODE_TYPE_MISMATCH, "entity JSON columns cannot store blob values directly", ) .with_hint( "Encode bytes explicitly as JSON text/object, or store raw bytes in a blob-native surface such as lix_file.data.", ), )), ScalarValue::Binary(None) | ScalarValue::LargeBinary(None) | ScalarValue::FixedSizeBinary(_, None) => Ok(JsonValue::Null), other => Err(DataFusionError::Execution(format!( "entity insert does not support scalar value {other:?}" ))), } } fn json_number_from_f64(value: f64) -> Result { serde_json::Number::from_f64(value) .map(JsonValue::Number) .ok_or_else(|| { DataFusionError::Execution(format!("entity number column cannot store {value}")) }) } fn reject_present_entity_insert_field( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result<()> { if optional_scalar_value(batch, row_index, column_name)?.is_some_and(|value| !value.is_null()) { return Err(DataFusionError::Execution(format!( "INSERT into entity surface cannot stage read-only column '{column_name}'" ))); } Ok(()) } fn optional_string_value( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result> { match optional_scalar_value(batch, row_index, column_name)? { None | Some(ScalarValue::Null) | Some(ScalarValue::Utf8(None)) | Some(ScalarValue::Utf8View(None)) | Some(ScalarValue::LargeUtf8(None)) => Ok(None), Some(ScalarValue::Utf8(Some(value))) | Some(ScalarValue::Utf8View(Some(value))) | Some(ScalarValue::LargeUtf8(Some(value))) => Ok(Some(value)), Some(other) => Err(DataFusionError::Execution(format!( "INSERT into entity surface expected text-compatible column '{column_name}', got {other:?}" ))), } } fn optional_metadata_value( batch: &RecordBatch, row_index: usize, column_name: &str, context: &str, ) -> Result> { optional_string_value(batch, row_index, column_name)? 
.map(|value| { let metadata = parse_row_metadata_value(&value, context) .map_err(super::error::lix_error_to_datafusion_error)?; TransactionJson::from_value(metadata, &format!("{context} metadata")) .map_err(super::error::lix_error_to_datafusion_error) }) .transpose() } fn optional_bool_value( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result> { match optional_scalar_value(batch, row_index, column_name)? { None | Some(ScalarValue::Null) | Some(ScalarValue::Boolean(None)) => Ok(None), Some(ScalarValue::Boolean(Some(value))) => Ok(Some(value)), Some(other) => Err(DataFusionError::Execution(format!( "INSERT into entity surface expected boolean column '{column_name}', got {other:?}" ))), } } fn optional_scalar_value( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result> { let schema = batch.schema(); let column_index = match schema.index_of(column_name) { Ok(column_index) => column_index, Err(_) => return Ok(None), }; if row_index >= batch.num_rows() { return Err(DataFusionError::Execution(format!( "row index {row_index} out of bounds for entity batch with {} rows", batch.num_rows() ))); } ScalarValue::try_from_array(batch.column(column_index).as_ref(), row_index) .map(Some) .map_err(|error| { DataFusionError::Execution(format!( "failed to decode entity column '{column_name}' at row {row_index}: {error}" )) }) } struct EntityScanExec { spec: Arc, live_state: Arc, schema: SchemaRef, request: LiveStateScanRequest, properties: Arc, } impl std::fmt::Debug for EntityScanExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("EntityScanExec") .field("schema_key", &self.spec.schema_key) .finish() } } impl EntityScanExec { fn new( spec: Arc, live_state: Arc, schema: SchemaRef, request: LiveStateScanRequest, ) -> Self { let properties = PlanProperties::new( EquivalenceProperties::new(Arc::clone(&schema)), Partitioning::UnknownPartitioning(1), EmissionType::Incremental, Boundedness::Bounded, ); Self { spec, live_state, schema, request, properties: Arc::new(properties), } } } impl DisplayAs for EntityScanExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!( f, "EntityScanExec(schema_key={}, limit={:?})", self.spec.schema_key, self.request.limit ) } DisplayFormatType::TreeRender => write!(f, "EntityScanExec"), } } } impl ExecutionPlan for EntityScanExec { fn name(&self) -> &str { "EntityScanExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "EntityScanExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "EntityScanExec only exposes one partition, got {partition}" ))); } let spec = Arc::clone(&self.spec); let live_state = Arc::clone(&self.live_state); let schema = Arc::clone(&self.schema); let request = self.request.clone(); let stream_schema = Arc::clone(&schema); let stream = stream::once(async move { let rows = if request.limit == Some(0) { Vec::new() } else { live_state .scan_rows(&request) .await .map_err(lix_error_to_datafusion_error)? 
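// Rows are materialized eagerly at this point; the scan declares a single
// partition and emits exactly one RecordBatch per query, assembled by
// entity_record_batch below.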
}; let batch = entity_record_batch(&spec, Arc::clone(&stream_schema), &rows)?; Ok::<_, DataFusionError>(stream::iter(vec![Ok::( batch, )])) }) .try_flatten(); Ok(Box::pin(RecordBatchStreamAdapter::new(schema, stream))) } } fn entity_live_state_scan_request( schema_key: &str, active_version_id: Option<&str>, projected_schema: Option<&Schema>, limit: Option, ) -> LiveStateScanRequest { LiveStateScanRequest { filter: LiveStateFilter { schema_keys: vec![schema_key.to_string()], version_ids: active_version_id .map(|version_id| vec![version_id.to_string()]) .unwrap_or_default(), ..LiveStateFilter::default() }, projection: entity_live_state_projection(projected_schema), limit, } } fn entity_live_state_projection(projected_schema: Option<&Schema>) -> LiveStateProjection { let Some(schema) = projected_schema else { return LiveStateProjection::default(); }; let mut columns = projection_column_names(schema); if schema .fields() .iter() .any(|field| !field.name().starts_with("lixcol_")) && !columns.iter().any(|column| column == "snapshot_content") { columns.push("snapshot_content".to_string()); } LiveStateProjection { columns } } fn projection_column_names(schema: &Schema) -> Vec { schema .fields() .iter() .filter_map(|field| field.name().strip_prefix("lixcol_")) .map(str::to_string) .collect() } fn entity_record_batch( spec: &EntitySurfaceSpec, schema: SchemaRef, rows: &[MaterializedLiveStateRow], ) -> Result { if schema.fields().is_empty() { let options = RecordBatchOptions::new().with_row_count(Some(rows.len())); return RecordBatch::try_new_with_options(schema, vec![], &options) .map_err(DataFusionError::from); } let snapshots = rows .iter() .map(|row| parse_snapshot(row.snapshot_content.as_deref())) .collect::>>()?; let columns = schema .fields() .iter() .map(|field| entity_column_array(spec, field.name(), rows, &snapshots)) .collect::>>()?; RecordBatch::try_new(schema, columns).map_err(DataFusionError::from) } fn entity_column_array( spec: &EntitySurfaceSpec, column_name: &str, rows: &[MaterializedLiveStateRow], snapshots: &[Option], ) -> Result { if let Some(property_name) = column_name.strip_prefix("lixcol_") { return entity_system_column_array(property_name, rows); } let column_type = spec .visible_column(column_name) .ok_or_else(|| { DataFusionError::Execution(format!( "sql2 entity provider '{}' does not expose column '{}'", spec.schema_key, column_name )) })? 
.column_type; let values = snapshots .iter() .map(|snapshot| snapshot.as_ref().and_then(|value| value.get(column_name))) .collect::>(); Ok(match column_type { EntityColumnType::String | EntityColumnType::Json => Arc::new(StringArray::from( values .iter() .map(|value| entity_json_text_value(*value, column_type)) .collect::>>()?, )) as ArrayRef, EntityColumnType::Integer => Arc::new(Int64Array::from( values .iter() .map(|value| entity_i64_value(*value)) .collect::>(), )) as ArrayRef, EntityColumnType::Number => Arc::new(Float64Array::from( values .iter() .map(|value| entity_f64_value(*value)) .collect::>(), )) as ArrayRef, EntityColumnType::Boolean => Arc::new(BooleanArray::from( values .iter() .map(|value| value.and_then(JsonValue::as_bool)) .collect::>(), )) as ArrayRef, }) } fn entity_system_column_array( column_name: &str, rows: &[MaterializedLiveStateRow], ) -> Result { Ok(match column_name { "entity_id" => Arc::new(StringArray::from( rows.iter() .map(|row| { row.entity_id .as_json_array_text() .map(Some) .map_err(lix_error_to_datafusion_error) }) .collect::>>()?, )) as ArrayRef, "schema_key" => string_array(rows.iter().map(|row| Some(row.schema_key.as_str()))), "file_id" => string_array(rows.iter().map(|row| row.file_id.as_deref())), "snapshot_content" => string_array(rows.iter().map(|row| row.snapshot_content.as_deref())), "metadata" => Arc::new(StringArray::from( rows.iter() .map(|row| row.metadata.as_ref().map(serialize_row_metadata)) .collect::>(), )) as ArrayRef, "created_at" => string_array(rows.iter().map(|row| Some(row.created_at.as_str()))), "updated_at" => string_array(rows.iter().map(|row| Some(row.updated_at.as_str()))), "global" => Arc::new(BooleanArray::from( rows.iter().map(|row| row.global).collect::>(), )) as ArrayRef, "change_id" => string_array(rows.iter().map(|row| row.change_id.as_deref())), "commit_id" => string_array(rows.iter().map(|row| row.commit_id.as_deref())), "untracked" => Arc::new(BooleanArray::from( rows.iter().map(|row| row.untracked).collect::>(), )) as ArrayRef, "version_id" => string_array(rows.iter().map(|row| Some(row.version_id.as_str()))), other => { return Err(DataFusionError::Execution(format!( "sql2 entity provider does not support system column 'lixcol_{other}'" ))) } }) } pub(super) fn parse_snapshot(snapshot_content: Option<&str>) -> Result> { snapshot_content .map(|snapshot| { serde_json::from_str::(snapshot).map_err(|error| { DataFusionError::Execution(format!( "sql2 entity provider expected valid snapshot_content JSON: {error}" )) }) }) .transpose() } pub(super) fn entity_json_text_value( value: Option<&JsonValue>, column_type: EntityColumnType, ) -> Result> { Ok(match (column_type, value) { (_, None) | (_, Some(JsonValue::Null)) => None, (EntityColumnType::String, Some(JsonValue::Bool(value))) => Some(if *value { "true".to_string() } else { "false".to_string() }), (EntityColumnType::String, Some(JsonValue::String(value))) => Some(value.clone()), (EntityColumnType::String, Some(other)) => Some(json_to_string(other)?), (EntityColumnType::Json, Some(other)) => Some(json_to_string(other)?), _ => None, }) } pub(super) fn entity_i64_value(value: Option<&JsonValue>) -> Option { match value { Some(JsonValue::Number(number)) => number.as_i64(), Some(JsonValue::String(value)) => value.parse::().ok(), _ => None, } } pub(super) fn entity_f64_value(value: Option<&JsonValue>) -> Option { match value { Some(JsonValue::Number(number)) => number.as_f64(), Some(JsonValue::String(value)) => value.parse::().ok(), _ => None, } } fn json_to_string(value: 
&JsonValue) -> Result { serde_json::to_string(value).map_err(|error| { DataFusionError::Execution(format!("failed to render JSON value: {error}")) }) } pub(super) fn string_array<'a>(values: impl Iterator>) -> ArrayRef { let values = values .map(|value| value.map(ToOwned::to_owned)) .collect::>(); Arc::new(StringArray::from(values)) as ArrayRef } pub(super) fn entity_surface_schema( spec: &EntitySurfaceSpec, variant: EntityProviderVariant, ) -> SchemaRef { let mut fields = spec .columns .iter() .map(|column| { let field = Field::new( &column.name, arrow_data_type_for_entity_column_type(column.column_type), true, ); if column.column_type == EntityColumnType::Json { mark_json_field(field) } else { field } }) .collect::>(); fields.extend(entity_system_fields(variant)); Arc::new(Schema::new(fields)) } fn arrow_data_type_for_entity_column_type(column_type: EntityColumnType) -> DataType { match column_type { EntityColumnType::String | EntityColumnType::Json => DataType::Utf8, EntityColumnType::Integer => DataType::Int64, EntityColumnType::Number => DataType::Float64, EntityColumnType::Boolean => DataType::Boolean, } } pub(super) fn entity_system_fields(variant: EntityProviderVariant) -> Vec { if variant == EntityProviderVariant::History { return vec![ json_field(HISTORY_COL_ENTITY_ID, false), Field::new(HISTORY_COL_SCHEMA_KEY, DataType::Utf8, false), Field::new(HISTORY_COL_FILE_ID, DataType::Utf8, true), json_field(HISTORY_COL_SNAPSHOT_CONTENT, true), json_field(HISTORY_COL_METADATA, true), Field::new(HISTORY_COL_CHANGE_ID, DataType::Utf8, false), Field::new(HISTORY_COL_OBSERVED_COMMIT_ID, DataType::Utf8, false), Field::new(HISTORY_COL_COMMIT_CREATED_AT, DataType::Utf8, false), Field::new(HISTORY_COL_START_COMMIT_ID, DataType::Utf8, false), Field::new(HISTORY_COL_DEPTH, DataType::Int64, false), ]; } let mut fields = vec![ json_field("lixcol_entity_id", true), Field::new("lixcol_schema_key", DataType::Utf8, false), Field::new("lixcol_file_id", DataType::Utf8, true), json_field("lixcol_snapshot_content", true), json_field("lixcol_metadata", true), Field::new("lixcol_created_at", DataType::Utf8, true), Field::new("lixcol_updated_at", DataType::Utf8, true), Field::new("lixcol_global", DataType::Boolean, true), Field::new("lixcol_change_id", DataType::Utf8, true), Field::new("lixcol_commit_id", DataType::Utf8, true), Field::new("lixcol_untracked", DataType::Boolean, true), ]; if variant == EntityProviderVariant::ByVersion { fields.push(Field::new("lixcol_version_id", DataType::Utf8, false)); } fields } fn projected_schema(schema: &SchemaRef, projection: Option<&Vec>) -> Result { let Some(projection) = projection else { return Ok(Arc::clone(schema)); }; Ok(Arc::new(schema.project(projection)?)) } fn derive_entity_surface_spec_from_schema( schema: &JsonValue, ) -> std::result::Result { let schema_key = schema .get("x-lix-key") .and_then(JsonValue::as_str) .ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "schema is missing string x-lix-key".to_string(), ) })?; let properties = schema .get("properties") .and_then(JsonValue::as_object) .ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!("schema '{schema_key}' must define object properties"), ) })?; let mut columns = properties .iter() .filter(|(key, _)| !key.starts_with("lixcol_")) .map(|(key, property_schema)| { let column_type = entity_column_type_from_schema(property_schema).ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "schema '{schema_key}' property '/{key}' must declare a SQL-projectable JSON 
Schema type" ), ) .with_hint("Use an explicit type such as string, number, integer, boolean, object, array, or a supported union of those types.") })?; Ok(EntitySurfaceColumn { name: key.clone(), column_type, }) }) .collect::, LixError>>()?; columns.sort_by(|left, right| left.name.cmp(&right.name)); let primary_key_paths = parse_primary_key_paths(schema)?; Ok(EntitySurfaceSpec { schema_key: schema_key.to_string(), primary_key_paths, columns, }) } fn parse_primary_key_paths(schema: &JsonValue) -> std::result::Result>, LixError> { let Some(primary_key) = schema.get("x-lix-primary-key") else { return Ok(Vec::new()); }; let primary_key = primary_key.as_array().ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "schema x-lix-primary-key must be an array of JSON Pointers".to_string(), ) })?; primary_key .iter() .enumerate() .map(|(index, pointer)| { let pointer = pointer.as_str().ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", format!("schema x-lix-primary-key entry at index {index} must be a string"), ) })?; parse_json_pointer(pointer) }) .collect() } // TODO(engine): share JSON Pointer parsing with schema/canonical validation once // those helpers have a clean module boundary for SQL providers. fn parse_json_pointer(pointer: &str) -> std::result::Result, LixError> { if pointer.is_empty() { return Ok(Vec::new()); } if !pointer.starts_with('/') { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("invalid JSON pointer '{pointer}'"), )); } pointer[1..] .split('/') .map(decode_json_pointer_segment) .collect() } fn decode_json_pointer_segment(segment: &str) -> std::result::Result { let mut out = String::new(); let mut chars = segment.chars(); while let Some(ch) = chars.next() { if ch == '~' { match chars.next() { Some('0') => out.push('~'), Some('1') => out.push('/'), _ => { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("invalid JSON pointer segment '{segment}'"), )) } } } else { out.push(ch); } } Ok(out) } fn schema_exposed_as_entity_surface(schema_key: &str) -> bool { !matches!(schema_key, "lix_active_account" | "lix_change") } fn schema_exposed_as_entity_history_surface(schema_key: &str) -> bool { !matches!(schema_key, "lix_commit" | "lix_commit_edge") } fn entity_column_type_from_schema(schema: &JsonValue) -> Option { let mut kinds = BTreeSet::new(); collect_entity_type_kinds(schema, &mut kinds); kinds.remove("null"); if kinds.is_empty() { return None; } if kinds.len() == 1 { return match kinds.into_iter().next() { Some("boolean") => Some(EntityColumnType::Boolean), Some("integer") => Some(EntityColumnType::Integer), Some("number") => Some(EntityColumnType::Number), Some("string") => Some(EntityColumnType::String), Some("object" | "array") => Some(EntityColumnType::Json), _ => None, }; } Some(EntityColumnType::Json) } fn collect_entity_type_kinds<'a>(schema: &'a JsonValue, out: &mut BTreeSet<&'a str>) { match schema.get("type") { Some(JsonValue::String(kind)) => { out.insert(kind.as_str()); } Some(JsonValue::Array(kinds)) => { for kind in kinds.iter().filter_map(JsonValue::as_str) { out.insert(kind); } } _ => {} } for keyword in ["anyOf", "oneOf", "allOf"] { if let Some(JsonValue::Array(branches)) = schema.get(keyword) { for branch in branches { collect_entity_type_kinds(branch, out); } } } } fn datafusion_error_to_lix_error(error: DataFusionError) -> LixError { super::error::datafusion_error_to_lix_error(error) } fn lix_error_to_datafusion_error(error: LixError) -> DataFusionError { DataFusionError::External(Box::new(error)) } #[cfg(test)] mod tests { use std::sync::Arc; use 
async_trait::async_trait; use datafusion::arrow::array::{ArrayRef, BooleanArray, Float64Array, Int64Array, StringArray}; use datafusion::arrow::datatypes::{DataType, Field, Schema}; use datafusion::arrow::record_batch::RecordBatch; use datafusion::common::{Column, ScalarValue}; use datafusion::execution::TaskContext; use datafusion::logical_expr::expr::InList; use datafusion::logical_expr::{BinaryExpr, Expr, Operator}; use serde_json::json; use super::{ derive_entity_surface_spec_from_schema, entity_lix_state_write_rows_from_batch, entity_record_batch, entity_surface_schema, schema_exposed_as_entity_surface, EntityColumnType, EntityInsertSink, EntityProviderVariant, }; use crate::binary_cas::BlobDataReader; use crate::functions::{ FunctionProvider, FunctionProviderHandle, SharedFunctionProvider, SystemFunctionProvider, }; use crate::live_state::{ LiveStateReader, LiveStateRowRequest, LiveStateScanRequest, MaterializedLiveStateRow, }; use crate::sql2::dml::InsertSink; use crate::sql2::write_normalization::InsertColumnIntents; use crate::sql2::{SqlWriteContext, SqlWriteExecutionContext}; use crate::transaction::types::{ TransactionJson, TransactionWrite, TransactionWriteMode, TransactionWriteOutcome, TransactionWriteRow, }; use crate::version::{VersionHead, VersionRefReader}; use crate::LixError; struct EmptyLiveStateReader; struct EmptyVersionRefReader; #[derive(Default)] struct CapturingWriteContext { rows: Vec, writes: Vec, } #[async_trait] impl LiveStateReader for EmptyLiveStateReader { async fn scan_rows( &self, _request: &LiveStateScanRequest, ) -> Result, LixError> { Ok(vec![]) } async fn load_row( &self, _request: &LiveStateRowRequest, ) -> Result, LixError> { Ok(None) } } #[async_trait] impl VersionRefReader for EmptyVersionRefReader { async fn load_head(&self, _version_id: &str) -> Result, LixError> { Ok(None) } async fn scan_heads(&self) -> Result, LixError> { Ok(Vec::new()) } } fn empty_version_ref() -> Arc { Arc::new(EmptyVersionRefReader) } fn test_functions() -> FunctionProviderHandle { SharedFunctionProvider::new( Box::new(SystemFunctionProvider) as Box ) } #[async_trait] impl BlobDataReader for CapturingWriteContext { async fn load_bytes_many( &self, hashes: &[crate::binary_cas::BlobHash], ) -> Result { Ok(crate::binary_cas::BlobBytesBatch::new(vec![ None; hashes.len() ])) } } #[async_trait] impl SqlWriteExecutionContext for CapturingWriteContext { fn active_version_id(&self) -> &str { "version-a" } fn functions(&self) -> FunctionProviderHandle { test_functions() } fn list_visible_schemas(&self) -> Result, LixError> { Ok(Vec::new()) } async fn load_bytes_many( &mut self, hashes: &[crate::binary_cas::BlobHash], ) -> Result { BlobDataReader::load_bytes_many(self, hashes).await } async fn scan_live_state( &mut self, _request: &LiveStateScanRequest, ) -> Result, LixError> { Ok(self.rows.clone()) } async fn load_version_head( &mut self, version_id: &str, ) -> Result, LixError> { if version_id == "ghost-version" { return Ok(None); } Ok(Some(format!("commit-{version_id}"))) } async fn stage_write( &mut self, write: TransactionWrite, ) -> Result { self.writes.push(write); Ok(TransactionWriteOutcome { count: 0 }) } } fn live_row() -> MaterializedLiveStateRow { MaterializedLiveStateRow { entity_id: crate::entity_identity::EntityIdentity::single("entity-1"), schema_key: "project_message".to_string(), file_id: None, snapshot_content: Some( "{\"body\":\"hello\",\"rating\":4.5,\"count\":7,\"enabled\":true,\"meta\":{\"x\":1}}" .to_string(), ), metadata: Some(json!({"source": 
"test"}).to_string()), deleted: false, version_id: "version-a".to_string(), change_id: Some("change-a".to_string()), commit_id: Some("commit-a".to_string()), global: false, untracked: false, created_at: "2026-04-23T00:00:00Z".to_string(), updated_at: "2026-04-23T01:00:00Z".to_string(), } } fn entity_insert_spec() -> Arc { Arc::new( derive_entity_surface_spec_from_schema(&json!({ "x-lix-key": "project_message", "type": "object", "properties": { "body": { "type": "string" }, "count": { "type": "integer" }, "enabled": { "type": "boolean" }, "meta": { "type": "object" }, "rating": { "type": "number" } } })) .expect("schema should derive entity surface spec"), ) } fn entity_insert_spec_with_primary_key() -> Arc { Arc::new( derive_entity_surface_spec_from_schema(&json!({ "x-lix-key": "project_message", "x-lix-primary-key": ["/id"], "type": "object", "properties": { "id": { "type": "string" }, "body": { "type": "string" } }, "required": ["id", "body"] })) .expect("schema should derive entity surface spec"), ) } fn string_column(values: Vec>) -> ArrayRef { Arc::new(StringArray::from(values)) as ArrayRef } fn string_literal(value: &str) -> Expr { Expr::Literal(ScalarValue::Utf8(Some(value.to_string())), None) } fn column(name: &str) -> Expr { Expr::Column(Column::from_name(name)) } fn eq_filter(column_name: &str, value: &str) -> Expr { Expr::BinaryExpr(BinaryExpr::new( Box::new(column(column_name)), Operator::Eq, Box::new(string_literal(value)), )) } fn entity_insert_batch(include_version: bool, global: bool) -> RecordBatch { let mut fields = vec![ Field::new("body", DataType::Utf8, true), Field::new("count", DataType::Int64, true), Field::new("enabled", DataType::Boolean, true), Field::new("meta", DataType::Utf8, true), Field::new("rating", DataType::Float64, true), Field::new("lixcol_entity_id", DataType::Utf8, false), Field::new("lixcol_metadata", DataType::Utf8, true), Field::new("lixcol_global", DataType::Boolean, false), Field::new("lixcol_untracked", DataType::Boolean, false), ]; let mut columns = vec![ string_column(vec![Some("hello")]), Arc::new(Int64Array::from(vec![7])) as ArrayRef, Arc::new(BooleanArray::from(vec![true])) as ArrayRef, string_column(vec![Some("{\"x\":1}")]), Arc::new(Float64Array::from(vec![4.5])) as ArrayRef, string_column(vec![Some("[\"entity-1\"]")]), string_column(vec![Some("{\"source\":\"entity\"}")]), Arc::new(BooleanArray::from(vec![global])) as ArrayRef, Arc::new(BooleanArray::from(vec![false])) as ArrayRef, ]; if include_version { fields.push(Field::new("lixcol_version_id", DataType::Utf8, false)); columns.push(string_column(vec![Some("version-a")])); } RecordBatch::try_new(Arc::new(Schema::new(fields)), columns) .expect("entity insert batch should build") } fn primary_key_entity_insert_batch(include_entity_id: bool) -> RecordBatch { let mut fields = vec![ Field::new("id", DataType::Utf8, false), Field::new("body", DataType::Utf8, true), Field::new("lixcol_version_id", DataType::Utf8, false), ]; let mut columns = vec![ string_column(vec![Some("message-1")]), string_column(vec![Some("hello")]), string_column(vec![Some("version-a")]), ]; if include_entity_id { fields.push(Field::new("lixcol_entity_id", DataType::Utf8, false)); columns.push(string_column(vec![Some("[\"message-1\"]")])); } RecordBatch::try_new(Arc::new(Schema::new(fields)), columns) .expect("primary-key entity insert batch should build") } #[test] fn excludes_non_entity_builtin_session_surfaces() { assert!(!schema_exposed_as_entity_surface("lix_active_account")); 
assert!(schema_exposed_as_entity_surface("project_message")); } #[test] fn derives_entity_surface_spec_from_schema_definition() { let spec = derive_entity_surface_spec_from_schema(&json!({ "x-lix-key": "project_message", "type": "object", "properties": { "body": { "type": "string" }, "rating": { "type": "number" }, "meta": { "type": "object" }, "lixcol_entity_id": { "type": "string" } } })) .expect("schema should derive entity surface spec"); assert_eq!(spec.schema_key, "project_message"); assert_eq!( spec.visible_column_names().collect::>(), vec!["body", "meta", "rating"] ); assert_eq!( spec.visible_column("body").map(|column| column.column_type), Some(EntityColumnType::String) ); assert_eq!( spec.visible_column("rating") .map(|column| column.column_type), Some(EntityColumnType::Number) ); assert_eq!( spec.visible_column("meta").map(|column| column.column_type), Some(EntityColumnType::Json) ); assert!(!spec.is_visible_column("lixcol_entity_id")); } #[test] fn entity_surface_spec_rejects_properties_without_projection_type() { let error = derive_entity_surface_spec_from_schema(&json!({ "x-lix-key": "project_message", "x-lix-primary-key": ["/id"], "type": "object", "properties": { "id": { "type": "string" }, "kind": {} }, "required": ["id", "kind"], "additionalProperties": false })) .expect_err("unprojectable property should be rejected"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); assert!( error.message.contains("property '/kind'"), "error should identify the property: {error:?}" ); } #[test] fn by_version_schema_includes_version_system_column() { let spec = derive_entity_surface_spec_from_schema(&json!({ "x-lix-key": "project_message", "type": "object", "properties": { "body": { "type": "string" } } })) .expect("schema should derive entity surface spec"); let schema = entity_surface_schema(&spec, EntityProviderVariant::ByVersion); assert!(schema.field_with_name("body").is_ok()); assert!(schema.field_with_name("lixcol_entity_id").is_ok()); assert!(schema.field_with_name("lixcol_version_id").is_ok()); } #[test] fn active_schema_excludes_version_system_column() { let spec = derive_entity_surface_spec_from_schema(&json!({ "x-lix-key": "project_message", "type": "object", "properties": { "body": { "type": "string" } } })) .expect("schema should derive entity surface spec"); let schema = entity_surface_schema(&spec, EntityProviderVariant::Active); assert!(schema.field_with_name("body").is_ok()); assert!(schema.field_with_name("lixcol_entity_id").is_ok()); assert!(schema.field_with_name("lixcol_version_id").is_err()); } #[test] fn insert_schema_allows_defaulted_identity_columns_to_be_omitted() { let spec = derive_entity_surface_spec_from_schema(&json!({ "x-lix-key": "project_message", "x-lix-primary-key": ["/id"], "type": "object", "properties": { "id": { "type": "string", "x-lix-default": "lix_uuid_v7()" }, "body": { "type": "string" } } })) .expect("schema should derive entity surface spec"); let schema = entity_surface_schema(&spec, EntityProviderVariant::Active); assert!( schema .field_with_name("id") .expect("id field") .is_nullable(), "defaulted primary-key property should be nullable at SQL input" ); assert!( schema .field_with_name("lixcol_entity_id") .expect("entity id field") .is_nullable(), "opaque identity projection should be nullable for normal primary-key inserts" ); } #[test] fn record_batch_projects_payload_and_system_columns() { let spec = Arc::new( derive_entity_surface_spec_from_schema(&json!({ "x-lix-key": "project_message", "type": "object", "properties": { 
"body": { "type": "string" }, "rating": { "type": "number" }, "count": { "type": "integer" }, "enabled": { "type": "boolean" }, "meta": { "type": "object" } } })) .expect("schema should derive entity surface spec"), ); let schema = entity_surface_schema(&spec, EntityProviderVariant::ByVersion); let batch = entity_record_batch(&spec, schema, &[live_row()]).expect("entity batch should build"); assert_eq!(batch.num_rows(), 1); assert_eq!( batch .column_by_name("body") .expect("body column") .as_any() .downcast_ref::() .expect("body is string") .value(0), "hello" ); assert_eq!( batch .column_by_name("rating") .expect("rating column") .as_any() .downcast_ref::() .expect("rating is f64") .value(0), 4.5 ); assert_eq!( batch .column_by_name("count") .expect("count column") .as_any() .downcast_ref::() .expect("count is i64") .value(0), 7 ); assert_eq!( batch .column_by_name("lixcol_entity_id") .expect("entity id column") .as_any() .downcast_ref::() .expect("entity id is string") .value(0), "[\"entity-1\"]" ); assert_eq!( batch .column_by_name("lixcol_version_id") .expect("version id column") .as_any() .downcast_ref::() .expect("version id is string") .value(0), "version-a" ); } #[tokio::test] async fn provider_registers_as_table_provider() { let spec = Arc::new( derive_entity_surface_spec_from_schema(&json!({ "x-lix-key": "project_message", "type": "object", "properties": { "body": { "type": "string" } } })) .expect("schema should derive entity surface spec"), ); let provider = super::EntityProvider::by_version( spec, Arc::new(EmptyLiveStateReader) as Arc, empty_version_ref(), ); assert!(provider.schema.field_with_name("lixcol_version_id").is_ok()); } #[test] fn primary_key_filters_route_entity_ids_for_string_primary_key() { let spec = entity_insert_spec_with_primary_key(); let filters = vec![ eq_filter("id", "entity-a"), Expr::InList(InList::new( Box::new(column("id")), vec![string_literal("entity-b"), string_literal("entity-a")], false, )), ]; let entity_ids = super::entity_ids_from_primary_key_filters(&spec, &filters) .expect("primary-key filters should analyze") .expect("primary-key filters should produce a constraint"); assert_eq!( entity_ids, vec![crate::entity_identity::EntityIdentity::single("entity-a")] ); } #[test] fn primary_key_filter_analyzer_models_boolean_predicates() { let spec = entity_insert_spec_with_primary_key(); let analyzer = super::EntityPrimaryKeyFilterAnalyzer::new(&spec); let disjunction = Expr::BinaryExpr(BinaryExpr::new( Box::new(eq_filter("id", "entity-a")), Operator::Or, Box::new(eq_filter("id", "entity-b")), )); let contradiction = Expr::BinaryExpr(BinaryExpr::new( Box::new(eq_filter("id", "entity-a")), Operator::And, Box::new(eq_filter("id", "entity-b")), )); let disjunction_ids = analyzer .analyze(&disjunction) .expect("OR should analyze") .expect("OR should produce an entity-id set"); let contradiction_ids = analyzer .analyze(&contradiction) .expect("AND should analyze") .expect("AND should produce an entity-id set"); assert_eq!( disjunction_ids.into_iter().collect::>(), vec![ crate::entity_identity::EntityIdentity::single("entity-a"), crate::entity_identity::EntityIdentity::single("entity-b"), ] ); assert!(contradiction_ids.is_empty()); } #[test] fn primary_key_filters_ignore_non_key_and_negated_predicates() { let spec = entity_insert_spec_with_primary_key(); let filters = vec![ eq_filter("body", "hello"), Expr::InList(InList::new( Box::new(column("id")), vec![string_literal("entity-a")], true, )), ]; assert!(super::entity_ids_from_primary_key_filters(&spec, 
&filters) .expect("ignored filters should analyze") .unwrap_or_default() .is_empty()); } #[test] fn decodes_by_version_entity_insert_into_lix_state_write_row() { let spec = entity_insert_spec(); let rows = entity_lix_state_write_rows_from_batch( &spec, &entity_insert_batch(true, false), &InsertColumnIntents::all_explicit(), None, ) .expect("entity batch should decode"); assert_eq!(rows.len(), 1); assert_eq!( rows[0].entity_id.as_ref(), Some(&crate::entity_identity::EntityIdentity::single("entity-1")) ); assert_eq!(rows[0].schema_key, "project_message"); assert_eq!(rows[0].version_id, "version-a"); assert_eq!( rows[0].metadata.as_ref(), Some(&TransactionJson::from_value_for_test( json!({"source": "entity"}) )) ); assert!(!rows[0].global); assert_eq!( rows[0].snapshot.as_ref().expect("snapshot_content"), &json!({ "body": "hello", "count": 7, "enabled": true, "meta": {"x": 1}, "rating": 4.5 }) ); } #[test] fn primary_key_entity_insert_stages_partial_row_for_normalization() { let spec = entity_insert_spec_with_primary_key(); let rows = entity_lix_state_write_rows_from_batch( &spec, &primary_key_entity_insert_batch(false), &InsertColumnIntents::all_explicit(), None, ) .expect("entity batch should decode"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].entity_id, None); assert_eq!( rows[0].snapshot.as_ref().expect("snapshot_content"), &json!({ "body": "hello", "id": "message-1" }) ); } #[test] fn primary_key_entity_insert_preserves_explicit_opaque_projection_for_normalization() { let spec = entity_insert_spec_with_primary_key(); let rows = entity_lix_state_write_rows_from_batch( &spec, &primary_key_entity_insert_batch(true), &InsertColumnIntents::all_explicit(), None, ) .expect("primary-key entity insert should stage explicit lixcol_entity_id"); assert_eq!(rows.len(), 1); assert_eq!( rows[0].entity_id.as_ref(), Some(&crate::entity_identity::EntityIdentity::single("message-1")) ); } #[test] fn active_entity_insert_defaults_version_id() { let spec = entity_insert_spec(); let rows = entity_lix_state_write_rows_from_batch( &spec, &entity_insert_batch(false, false), &InsertColumnIntents::all_explicit(), Some("version-active"), ) .expect("active entity batch should decode"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].version_id, "version-active"); assert!(!rows[0].global); } #[test] fn by_version_entity_insert_requires_version_id_for_non_global_rows() { let spec = entity_insert_spec(); let error = entity_lix_state_write_rows_from_batch( &spec, &entity_insert_batch(false, false), &InsertColumnIntents::all_explicit(), None, ) .expect_err("by-version entity insert should require version id"); assert!( error.to_string().contains("requires lixcol_version_id"), "unexpected error: {error}" ); } #[test] fn by_version_entity_insert_global_row_uses_global_version() { let spec = entity_insert_spec(); let rows = entity_lix_state_write_rows_from_batch( &spec, &entity_insert_batch(false, true), &InsertColumnIntents::all_explicit(), None, ) .expect("global entity batch should decode"); assert_eq!(rows.len(), 1); assert!(rows[0].global); assert_eq!(rows[0].version_id, crate::GLOBAL_VERSION_ID); } #[test] fn entity_insert_rejects_global_with_non_global_version_id() { let spec = entity_insert_spec(); let error = entity_lix_state_write_rows_from_batch( &spec, &entity_insert_batch(true, true), &InsertColumnIntents::all_explicit(), None, ) .expect_err("global entity write should reject conflicting version id"); assert!( error .to_string() .contains("cannot set lixcol_global=true with non-global lixcol_version_id"), 
"unexpected error: {error}" ); } #[tokio::test] async fn entity_insert_sink_stages_decoded_lix_state_rows() { let spec = entity_insert_spec(); let mut write_context = CapturingWriteContext::default(); let write_ctx = SqlWriteContext::new(&mut write_context); let batch = entity_insert_batch(true, false); let sink = EntityInsertSink::new( Arc::clone(&spec), batch.schema(), InsertColumnIntents::all_explicit(), write_ctx, super::VersionBinding::explicit(), ); let count = sink .write_batches(vec![batch], &Arc::new(TaskContext::default())) .await .expect("entity sink should stage write"); assert_eq!(count, 1); assert_eq!( write_context.writes.as_slice(), &[TransactionWrite::Rows { mode: TransactionWriteMode::Insert, rows: vec![TransactionWriteRow { entity_id: Some(crate::entity_identity::EntityIdentity::single("entity-1")), schema_key: "project_message".to_string(), file_id: None, snapshot: Some(TransactionJson::from_value_for_test( json!({"body":"hello","count":7,"enabled":true,"meta":{"x":1},"rating":4.5}) )), metadata: Some(TransactionJson::from_value_for_test( json!({"source": "entity"}) )), origin: None, created_at: None, updated_at: None, global: false, change_id: None, commit_id: None, untracked: false, version_id: "version-a".to_string(), }] }] ); } } ================================================ FILE: packages/engine/src/sql2/error.rs ================================================ use datafusion::error::DataFusionError; use crate::LixError; pub(crate) fn datafusion_error_to_lix_error(error: DataFusionError) -> LixError { if let Some(error) = lix_error_from_datafusion_error(&error) { return error; } classify_datafusion_error(&error) } pub(crate) fn lix_error_to_datafusion_error(error: LixError) -> DataFusionError { DataFusionError::External(Box::new(error)) } fn lix_error_from_datafusion_error(error: &DataFusionError) -> Option { match error { DataFusionError::External(error) => error .downcast_ref::() .cloned() .map(normalize_external_sql_error), DataFusionError::Context(_, error) | DataFusionError::Diagnostic(_, error) => { lix_error_from_datafusion_error(error) } DataFusionError::Shared(error) => lix_error_from_datafusion_error(error), DataFusionError::Collection(errors) => { errors.iter().find_map(lix_error_from_datafusion_error) } _ => None, } } fn normalize_external_sql_error(error: LixError) -> LixError { let lower = error.message.to_ascii_lowercase(); if (error.code.starts_with("LIX_ERROR_PATH_") && error.code != "LIX_ERROR_PATH_INVALID_SEGMENT_CODE_POINT") || error.code == LixError::CODE_INVALID_JSON_PATH || (error.code == LixError::CODE_TYPE_MISMATCH && lower.contains("cannot store blob values directly")) || (error.code == LixError::CODE_SCHEMA_DEFINITION && lower.contains("system schema")) { return LixError { code: LixError::CODE_INVALID_PARAM.to_string(), ..error }; } error } fn classify_datafusion_error(error: &DataFusionError) -> LixError { let message = format!("sql2 DataFusion error: {error}"); let lower = message.to_ascii_lowercase(); if looks_like_json_udf_miss(&lower) { return LixError::new(LixError::CODE_UDF_NOT_FOUND, message) .with_hint("Use lix_json_get(json, key_or_index, ...) for JSON values or lix_json_get_text(json, key_or_index, ...) for text."); } if looks_like_unsupported_dialect(&lower) { return LixError::new(LixError::CODE_DIALECT_UNSUPPORTED, message) .with_hint("Lix SQL uses DataFusion syntax. Use lix_json_get(...) or lix_json_get_text(...) 
for JSON access, and numbered placeholders like $1, $2, ..."); } if looks_like_unsupported_runtime_plan(&lower) { return LixError::new(LixError::CODE_UNSUPPORTED_SQL_RUNTIME_PLAN, message) .with_hint("This SQL feature currently plans to a physical operator that is not supported by this engine runtime. Rewrite the query to avoid the unsupported operator, or run it on a runtime that supports the full physical plan."); } if lower.contains("uses variadic path segments") { return LixError::new(LixError::CODE_INVALID_JSON_PATH, message) .with_hint("Pass path segments as separate arguments, for example lix_json_get_text(document, 'user', 'name'), not '$.user.name' or '/user/name'."); } if lower.contains("failed to parse placeholder id") || lower.contains("placeholder") || lower.contains("bind") { return LixError::new(LixError::CODE_PARSE_ERROR, message).with_hint( "Use numbered placeholders like $1, $2, ...; '?' placeholders are not supported.", ); } if lower.contains("requires start_commit_id") || lower.contains("history filter") || lower.contains("history table") { return LixError::new(LixError::CODE_HISTORY_FILTER_REQUIRED, message) .with_hint("Add a commit/version range predicate before querying history tables."); } if lower.contains("table not found") || (lower.contains("table") && lower.contains("not found")) || lower.contains("no table named") || lower.contains("failed to resolve table") || lower.contains("could not find table") || (lower.contains("relation") && lower.contains("not found")) { return LixError::new(LixError::CODE_TABLE_NOT_FOUND, message) .with_hint("Use information_schema.tables to inspect available Lix SQL tables."); } if (lower.contains("column") || lower.contains("field")) && (lower.contains("not found") || lower.contains("does not exist") || lower.contains("no field named")) { return LixError::new(LixError::CODE_COLUMN_NOT_FOUND, message); } if lower.contains("schema validation") { return LixError::new(LixError::CODE_SCHEMA_VALIDATION, message); } if lower.contains("schema definition") { return LixError::new(LixError::CODE_SCHEMA_DEFINITION, message); } if lower.contains("unsupported sql type json") { return LixError::new(LixError::CODE_DIALECT_UNSUPPORTED, message) .with_hint("Declare JSON/object columns through lix.registerSchema(...) or lix_registered_schema; SQL type JSON is not supported."); } if looks_like_type_mismatch(&lower) { if lower.contains("encountered non utf-8 data") { return LixError::new( LixError::CODE_TYPE_MISMATCH, "Lix SQL string functions require valid UTF-8 text; blob data could not be decoded as UTF-8", ) .with_hint( "Pass text to string functions. Raw blob parameters stay binary and are not implicitly decoded as UTF-8.", ); } return LixError::new(LixError::CODE_TYPE_MISMATCH, message) .with_hint("Check the SQL function argument types. JSON text can be converted with lix_json(...); JSON fields can be read with lix_json_get(...) 
or lix_json_get_text(...)."); } if matches!( error, DataFusionError::Plan(_) | DataFusionError::SchemaError(_, _) ) { return LixError::new(LixError::CODE_PARSE_ERROR, message); } if lower.contains("constraint") || lower.contains("not null") || lower.contains("non-nullable") || lower.contains("unique") || lower.contains("duplicate") || lower.contains("primary key") || lower.contains("foreign key") { return LixError::new(LixError::CODE_CONSTRAINT_VIOLATION, message); } match error { DataFusionError::SQL(_, _) => LixError::new(LixError::CODE_PARSE_ERROR, message), DataFusionError::NotImplemented(_) => { LixError::new(LixError::CODE_DIALECT_UNSUPPORTED, message) } DataFusionError::Plan(_) | DataFusionError::SchemaError(_, _) => { LixError::new(LixError::CODE_PARSE_ERROR, message) } DataFusionError::IoError(_) | DataFusionError::ObjectStore(_) => { LixError::new(LixError::CODE_STORAGE_ERROR, message) } DataFusionError::Internal(_) => LixError::new(LixError::CODE_INTERNAL_ERROR, message), _ => LixError::new(LixError::CODE_UNKNOWN, message), } } fn looks_like_json_udf_miss(lower: &str) -> bool { let json_function_guess = [ "json_extract", "json_get", "json_get_string", "json_get_text", "json_extract_string", "json_extract_text", ] .iter() .any(|name| lower.contains(name)); json_function_guess && (lower.contains("function") || lower.contains("udf") || lower.contains("not found") || lower.contains("does not exist") || lower.contains("did you mean")) } fn looks_like_unsupported_dialect(lower: &str) -> bool { lower.contains("->>") || lower.contains("operator does not exist") || lower.contains("unsupported sql type json") || lower.contains("sqlite_master") || lower.contains("returning") } fn looks_like_unsupported_runtime_plan(lower: &str) -> bool { lower.contains("sql physical operator") && lower.contains("is not supported by the webassembly runtime yet") } fn looks_like_type_mismatch(lower: &str) -> bool { (lower.contains("type") || lower.contains("signature") || lower.contains("coerc") || lower.contains("argument") || lower.contains("convert")) && (lower.contains("mismatch") || lower.contains("incompatible") || lower.contains("expected") || lower.contains("cannot") || lower.contains("invalid")) } ================================================ FILE: packages/engine/src/sql2/execute.rs ================================================ use datafusion::arrow::datatypes::Field; use datafusion::arrow::record_batch::RecordBatch; use datafusion::common::metadata::{FieldMetadata, ScalarAndMetadata}; use datafusion::common::{ParamValues, ScalarValue}; use datafusion::logical_expr::{Expr, LogicalPlan, WriteOp}; use datafusion::prelude::SessionContext; use datafusion::sql::parser::Statement as DataFusionStatement; use serde_json::{json, Value as JsonValue}; use std::collections::{BTreeMap, BTreeSet, HashSet}; use crate::schema::schema_key_from_definition; use crate::{LixError, LixNotice, SqlQueryResult, Value}; use super::predicate_typecheck::validate_json_predicate_expr_with_dfschema; use super::result_metadata::{field_is_json, LIX_VALUE_TYPE_JSON, LIX_VALUE_TYPE_METADATA_KEY}; use super::session::{build_read_session, build_write_session, new_sql_session_context}; use super::write_normalization::{ is_binary_type, lix_file_data_type_lix_error, logical_expr_is_binary_or_null, }; use super::{SqlExecutionContext, SqlStatementKind, SqlWriteExecutionContext}; #[allow(dead_code)] pub(crate) struct SqlLogicalPlan { session: SessionContext, plan: LogicalPlan, kind: SqlStatementKind, notices: Vec, 
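// 1-indexed placeholder positions that must bind blob parameters. They are
// collected for lix_file.data writes and enforced by
// validate_strict_binary_params before execution. For example (column names
// taken from the surrounding checks, illustrative only), a write shaped like
//   INSERT INTO lix_file (path, data) VALUES ($1, $2)
// would record position 2 here, so $2 must be bound as a Blob.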
strict_binary_params: BTreeSet, } impl SqlLogicalPlan { #[allow(dead_code)] pub(crate) fn kind(&self) -> SqlStatementKind { self.kind } #[allow(dead_code)] pub(crate) fn is_write(&self) -> bool { self.kind == SqlStatementKind::Write } } /// Minimal top-level sql2 entrypoint. /// /// The final implementation will build the DataFusion session from the /// execution context and source rows from `live_state()`. /// /// `catalog()` is intentionally omitted from the MVP boundary for now. #[allow(dead_code)] pub(crate) async fn execute_sql( ctx: &dyn SqlExecutionContext, sql: &str, params: &[Value], ) -> Result { let plan = create_logical_plan(ctx, sql).await?; execute_logical_plan(plan, params).await } pub(crate) async fn create_logical_plan( ctx: &dyn SqlExecutionContext, sql: &str, ) -> Result { super::validate_supported_statement_ast(sql)?; super::udfs::validate_public_udf_calls(sql)?; validate_public_read_sql_surface(sql)?; let session = build_read_session(ctx).await?; let plan = session .state() .create_logical_plan(sql) .await .map_err(datafusion_error_to_lix_error)?; validate_supported_logical_plan(&plan)?; validate_json_predicates_in_logical_plan(&plan)?; let kind = classify_logical_plan(&plan); let notices = history_filter_notices(&plan); Ok(SqlLogicalPlan { session, plan, kind, notices, strict_binary_params: BTreeSet::new(), }) } #[allow(dead_code)] pub(crate) async fn create_write_logical_plan( ctx: &mut dyn SqlWriteExecutionContext, sql: &str, ) -> Result { super::udfs::validate_public_udf_calls(sql)?; let visible_schemas = ctx.list_visible_schemas()?; super::public_bind::validate_public_dml_sql(sql, &visible_schemas)?; let statement = parse_datafusion_statement(sql)?; super::validate_supported_datafusion_statement_ast(&statement)?; reject_read_only_history_view_dml_from_statement(&statement, &visible_schemas)?; let session = build_write_session(ctx).await?; let plan = create_logical_plan_from_statement(&session, statement).await?; validate_supported_logical_plan(&plan)?; super::public_bind::validate_public_dml_plan(&plan, &visible_schemas)?; validate_json_predicates_in_logical_plan(&plan)?; let strict_binary_params = validate_strict_lix_file_data_writes(&plan)?; let kind = classify_logical_plan(&plan); Ok(SqlLogicalPlan { session, plan, kind, notices: Vec::new(), strict_binary_params, }) } fn validate_public_read_sql_surface(sql: &str) -> Result<(), LixError> { let normalized = sql.to_ascii_lowercase(); if normalized.contains("lower(path)") { return Err(LixError::new( LixError::CODE_UNSUPPORTED_SQL, "public column 'path' must be compared directly to a literal or parameter", )); } if normalized.contains("lixcol_version_id") && (normalized.contains("= lower(") || normalized.contains(" in (lower(")) { return Err(LixError::new( LixError::CODE_UNSUPPORTED_SQL, "public column 'lixcol_version_id' must be compared directly to a literal or parameter", )); } Ok(()) } fn parse_datafusion_statement(sql: &str) -> Result { let session = new_sql_session_context(); let dialect = session.state().config_options().sql_parser.dialect; session .state() .sql_to_statement(sql, &dialect) .map_err(datafusion_error_to_lix_error) } async fn create_logical_plan_from_statement( session: &SessionContext, statement: DataFusionStatement, ) -> Result { session .state() .statement_to_plan(statement) .await .map_err(datafusion_error_to_lix_error) } fn validate_json_predicates_in_logical_plan(plan: &LogicalPlan) -> Result<(), LixError> { match plan { LogicalPlan::Filter(filter) => { 
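// Filter predicates and pushed-down TableScan filters are both checked;
// the walk then recurses into every plan input so nested subqueries and
// joins receive the same JSON-predicate validation.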
validate_json_predicate_expr_with_dfschema(filter.input.schema(), &filter.predicate)?; } LogicalPlan::TableScan(scan) => { for filter in &scan.filters { validate_json_predicate_expr_with_dfschema(scan.projected_schema.as_ref(), filter)?; } } _ => {} } for input in plan.inputs() { validate_json_predicates_in_logical_plan(input)?; } Ok(()) } fn validate_strict_lix_file_data_writes(plan: &LogicalPlan) -> Result, LixError> { let mut strict_binary_params = BTreeSet::new(); let LogicalPlan::Dml(dml) = plan else { return Ok(strict_binary_params); }; if dml.table_name.table() != "lix_file" || !matches!(dml.op, WriteOp::Insert(_) | WriteOp::Update) { return Ok(strict_binary_params); } reject_non_binary_lix_file_data_write(&dml.input, &mut strict_binary_params)?; Ok(strict_binary_params) } fn reject_non_binary_lix_file_data_write( input: &LogicalPlan, strict_binary_params: &mut BTreeSet, ) -> Result<(), LixError> { let LogicalPlan::Projection(projection) = input else { return Ok(()); }; let Some(data_expr) = projection.expr.iter().find_map(|expr| match expr { Expr::Alias(alias) if alias.name == "data" => Some(alias.expr.as_ref()), _ => None, }) else { return Ok(()); }; validate_lix_file_data_expr(data_expr, strict_binary_params)?; let Expr::Column(column) = data_expr else { return Ok(()); }; let LogicalPlan::Values(values) = projection.input.as_ref() else { return Ok(()); }; let Ok(column_index) = values.schema.index_of_column(column) else { return Ok(()); }; for row in &values.values { if let Some(value_expr) = row.get(column_index) { validate_lix_file_data_expr(value_expr, strict_binary_params)?; } } Ok(()) } fn validate_lix_file_data_expr( expr: &Expr, strict_binary_params: &mut BTreeSet, ) -> Result<(), LixError> { match expr { Expr::Cast(cast) if is_binary_type(&cast.data_type) => { if collect_placeholder_param(&cast.expr, strict_binary_params)? 
{ return Ok(()); } if !logical_expr_is_binary_or_null(&cast.expr) { return Err(lix_file_data_type_lix_error()); } } Expr::Placeholder(_) => { collect_placeholder_param(expr, strict_binary_params)?; } Expr::Alias(alias) => validate_lix_file_data_expr(&alias.expr, strict_binary_params)?, _ => {} } Ok(()) } fn collect_placeholder_param( expr: &Expr, strict_binary_params: &mut BTreeSet, ) -> Result { match expr { Expr::Placeholder(placeholder) => { let index = placeholder_index(&placeholder.id)?; strict_binary_params.insert(index); Ok(true) } Expr::Alias(alias) => collect_placeholder_param(&alias.expr, strict_binary_params), _ => Ok(false), } } fn placeholder_index(id: &str) -> Result { id.strip_prefix('$') .and_then(|raw| raw.parse::().ok()) .filter(|index| *index > 0) .ok_or_else(|| { LixError::new( LixError::CODE_PARSE_ERROR, format!("unsupported SQL parameter placeholder '{id}'"), ) .with_hint("Use numbered placeholders like $1, $2, ...") }) } pub(crate) async fn execute_logical_plan( plan: SqlLogicalPlan, params: &[Value], ) -> Result { let SqlLogicalPlan { session, plan, kind: _, notices, strict_binary_params, } = plan; validate_parameter_count(&plan, params.len())?; validate_strict_binary_params(&strict_binary_params, params)?; let mut dataframe = session .execute_logical_plan(plan) .await .map_err(datafusion_error_to_lix_error)?; if !params.is_empty() { dataframe = dataframe .with_param_values(ParamValues::List( params.iter().map(scalar_value_from_lix_value).collect(), )) .map_err(datafusion_error_to_lix_error)?; } let result_fields = dataframe .schema() .fields() .iter() .map(|field| field.as_ref().clone()) .collect::>(); let batches = super::runtime::collect_dataframe(dataframe) .await .map_err(datafusion_error_to_lix_error)?; let mut result = query_result_from_batches(&result_fields, &batches)?; result.notices = notices; Ok(result) } fn validate_strict_binary_params( strict_binary_params: &BTreeSet, params: &[Value], ) -> Result<(), LixError> { for index in strict_binary_params { let Some(value) = params.get(index - 1) else { continue; }; if !matches!(value, Value::Blob(_)) { return Err(lix_file_data_type_lix_error()); } } Ok(()) } fn validate_parameter_count(plan: &LogicalPlan, param_count: usize) -> Result<(), LixError> { let parameter_names = plan .get_parameter_names() .map_err(datafusion_error_to_lix_error)?; let expected_count = expected_positional_parameter_count(¶meter_names)?; if param_count == expected_count { return Ok(()); } Err(LixError::new( LixError::CODE_INVALID_PARAM, format!( "SQL expected {expected_count} parameter(s), but {param_count} parameter(s) were provided" ), ) .with_details(json!({ "operation": "execute", "expected_param_count": expected_count, "provided_param_count": param_count, "placeholders": sorted_parameter_names(¶meter_names), }))) } fn expected_positional_parameter_count( parameter_names: &HashSet, ) -> Result { let mut max_index = 0usize; for name in parameter_names { let Some(index) = name .strip_prefix('$') .and_then(|raw| raw.parse::().ok()) else { return Err(LixError::new( LixError::CODE_PARSE_ERROR, format!("unsupported SQL parameter placeholder '{name}'"), ) .with_hint("Use numbered placeholders like $1, $2, ...") .with_details(json!({ "operation": "execute", "placeholder": name, }))); }; if index == 0 { return Err(LixError::new( LixError::CODE_PARSE_ERROR, "SQL parameter placeholders are 1-indexed", ) .with_hint("Use numbered placeholders like $1, $2, ...") .with_details(json!({ "operation": "execute", "placeholder": name, }))); } 
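// Example: placeholders {"$1", "$3"} yield an expected count of 3, because
// binding is positional and the highest index wins; non-numbered placeholders
// such as '?' are rejected above with a parse error.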
fn expected_positional_parameter_count(
    parameter_names: &HashSet<String>,
) -> Result<usize, LixError> {
    let mut max_index = 0usize;
    for name in parameter_names {
        let Some(index) = name
            .strip_prefix('$')
            .and_then(|raw| raw.parse::<usize>().ok())
        else {
            return Err(LixError::new(
                LixError::CODE_PARSE_ERROR,
                format!("unsupported SQL parameter placeholder '{name}'"),
            )
            .with_hint("Use numbered placeholders like $1, $2, ...")
            .with_details(json!({
                "operation": "execute",
                "placeholder": name,
            })));
        };
        if index == 0 {
            return Err(LixError::new(
                LixError::CODE_PARSE_ERROR,
                "SQL parameter placeholders are 1-indexed",
            )
            .with_hint("Use numbered placeholders like $1, $2, ...")
            .with_details(json!({
                "operation": "execute",
                "placeholder": name,
            })));
        }
        max_index = max_index.max(index);
    }
    Ok(max_index)
}

fn sorted_parameter_names(parameter_names: &HashSet<String>) -> Vec<String> {
    let mut names = parameter_names.iter().cloned().collect::<Vec<_>>();
    names.sort();
    names
}

fn reject_read_only_history_view_dml_from_statement(
    statement: &DataFusionStatement,
    visible_schemas: &[JsonValue],
) -> Result<(), LixError> {
    let target_names = super::datafusion_statement_dml_target_table_names(statement);
    for target_name in target_names {
        if is_history_view_name(&target_name, visible_schemas)? {
            return Err(read_only_history_view_error(&target_name));
        }
    }
    Ok(())
}

fn is_history_view_name(table_name: &str, visible_schemas: &[JsonValue]) -> Result<bool, LixError> {
    if matches!(
        table_name,
        "lix_state_history" | "lix_file_history" | "lix_directory_history"
    ) {
        return Ok(true);
    }
    for schema in visible_schemas {
        let schema_key = schema_key_from_definition(schema)?;
        if table_name == format!("{}_history", schema_key.schema_key) {
            return Ok(true);
        }
    }
    Ok(false)
}

fn read_only_history_view_error(view_name: &str) -> LixError {
    LixError::new(
        LixError::CODE_READ_ONLY,
        format!("DML cannot write read-only history view '{view_name}'"),
    )
    .with_hint(
        "History views are query-only; write to the live surface such as lix_state, lix_file, lix_directory, or the typed entity table.",
    )
}

fn classify_logical_plan(plan: &LogicalPlan) -> SqlStatementKind {
    match plan {
        LogicalPlan::Dml(_) => SqlStatementKind::Write,
        LogicalPlan::Ddl(_) | LogicalPlan::Statement(_) | LogicalPlan::Copy(_) => {
            SqlStatementKind::Other
        }
        _ => SqlStatementKind::Read,
    }
}

fn validate_supported_logical_plan(plan: &LogicalPlan) -> Result<(), LixError> {
    match plan {
        LogicalPlan::Ddl(_) => {
            return Err(LixError::new(
                LixError::CODE_UNSUPPORTED_SQL,
                "DDL statements are not supported by Lix SQL",
            )
            .with_hint(
                "Use Lix entity surfaces such as lix_registered_schema, lix_version, lix_file, and lix_key_value instead of CREATE/DROP statements.",
            ));
        }
        LogicalPlan::Statement(_) => {
            return Err(LixError::new(
                LixError::CODE_UNSUPPORTED_SQL,
                "SQL utility statements are not supported by Lix SQL",
            ));
        }
        LogicalPlan::Copy(_) => {
            return Err(LixError::new(
                LixError::CODE_UNSUPPORTED_SQL,
                "COPY statements are not supported by Lix SQL",
            ));
        }
        LogicalPlan::RecursiveQuery(_) => {
            return Err(LixError::new(
                LixError::CODE_UNSUPPORTED_SQL,
                "recursive CTEs are not supported by Lix SQL",
            )
            .with_hint(
                "Use explicit commit graph surfaces such as lix_commit, lix_commit_edge, and lix_state_history instead of WITH RECURSIVE.",
            ));
        }
        _ => {}
    }
    for input in plan.inputs() {
        validate_supported_logical_plan(input)?;
    }
    Ok(())
}

fn scalar_value_from_lix_value(value: &Value) -> ScalarAndMetadata {
    match value {
        Value::Null => ScalarValue::Null.into(),
        Value::Boolean(value) => ScalarValue::Boolean(Some(*value)).into(),
        Value::Integer(value) => ScalarValue::Int64(Some(*value)).into(),
        Value::Real(value) => ScalarValue::Float64(Some(*value)).into(),
        Value::Text(value) => ScalarValue::Utf8(Some(value.clone())).into(),
        Value::Json(value) => ScalarAndMetadata::new(
            ScalarValue::Utf8(Some(value.to_string())),
            Some(json_field_metadata()),
        ),
        Value::Blob(value) => ScalarValue::Binary(Some(value.clone())).into(),
    }
}

fn json_field_metadata() -> FieldMetadata {
    FieldMetadata::new(BTreeMap::from([(
        LIX_VALUE_TYPE_METADATA_KEY.to_string(),
        LIX_VALUE_TYPE_JSON.to_string(),
    )]))
}

fn datafusion_error_to_lix_error(error: datafusion::error::DataFusionError) -> LixError {
    super::error::datafusion_error_to_lix_error(error)
}

fn query_result_from_batches(
    result_fields: &[Field],
    batches: &[RecordBatch],
) -> Result<SqlQueryResult, LixError> {
    let result_columns = result_fields
        .iter()
        .map(|field| field.name().to_string())
        .collect::<Vec<_>>();
    let mut rows = Vec::<Vec<Value>>::new();
    for batch in batches {
        for row_index in 0..batch.num_rows() {
            let mut row = Vec::<Value>::with_capacity(batch.num_columns());
            for (column_index, array) in batch.columns().iter().enumerate() {
                let scalar = ScalarValue::try_from_array(array.as_ref(), row_index)
                    .map_err(datafusion_error_to_lix_error)?;
                let field = result_fields.get(column_index);
                row.push(scalar_value_to_lix_value(&scalar, field)?);
            }
            rows.push(row);
        }
    }
    Ok(SqlQueryResult {
        rows,
        columns: result_columns.to_vec(),
        notices: Vec::new(),
    })
}
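// Soft notices: history views filtered only on payload columns (path, name, data, ...)
// and never on an identity column (id / lixcol_entity_id) silently miss tombstones and
// renamed history, so the rules below attach a LIX_HISTORY_NON_IDENTITY_FILTER notice
// instead of rejecting the query.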
fn history_filter_notices(plan: &LogicalPlan) -> Vec<LixNotice> {
    let mut observations = Vec::new();
    collect_notice_observations(plan, &Vec::new(), &mut observations);
    let mut notices = Vec::new();
    let mut emitted_codes = HashSet::<String>::new();
    for observation in observations {
        for rule in HISTORY_NOTICE_RULES {
            if observation.table_name != rule.table_name {
                continue;
            }
            if !observation.references_any(rule.payload_columns)
                || observation.references_any(rule.identity_columns)
            {
                continue;
            }
            let code = format!("LIX_HISTORY_NON_IDENTITY_FILTER:{}", rule.table_name);
            if emitted_codes.insert(code) {
                notices.push(history_non_identity_filter_notice(rule.table_name));
            }
        }
    }
    notices
}

#[derive(Debug)]
struct NoticeObservation {
    table_name: String,
    filter_columns: HashSet<String>,
}

impl NoticeObservation {
    fn references_any(&self, columns: &[&str]) -> bool {
        columns
            .iter()
            .any(|column| self.filter_columns.contains(*column))
    }
}

struct HistoryNoticeRule {
    table_name: &'static str,
    payload_columns: &'static [&'static str],
    identity_columns: &'static [&'static str],
}

const HISTORY_NOTICE_RULES: &[HistoryNoticeRule] = &[
    HistoryNoticeRule {
        table_name: "lix_file_history",
        payload_columns: &["path", "directory_id", "name", "hidden", "data"],
        identity_columns: &["id", "lixcol_entity_id"],
    },
    HistoryNoticeRule {
        table_name: "lix_directory_history",
        payload_columns: &["path", "parent_id", "name", "hidden"],
        identity_columns: &["id", "lixcol_entity_id"],
    },
];

fn collect_notice_observations(
    plan: &LogicalPlan,
    active_filter_columns: &Vec<HashSet<String>>,
    observations: &mut Vec<NoticeObservation>,
) {
    match plan {
        LogicalPlan::Filter(filter) => {
            let mut next_filters = active_filter_columns.clone();
            next_filters.push(expr_column_names(&filter.predicate));
            collect_notice_observations(&filter.input, &next_filters, observations);
        }
        LogicalPlan::TableScan(scan) => {
            let mut filter_columns = HashSet::new();
            for columns in active_filter_columns {
                filter_columns.extend(columns.iter().cloned());
            }
            for filter in &scan.filters {
                filter_columns.extend(expr_column_names(filter));
            }
            if !filter_columns.is_empty() {
                observations.push(NoticeObservation {
                    table_name: table_reference_name(&scan.table_name),
                    filter_columns,
                });
            }
        }
        other => {
            for input in other.inputs() {
                collect_notice_observations(input, active_filter_columns, observations);
            }
        }
    }
}

fn expr_column_names(expr: &Expr) -> HashSet<String> {
    expr.column_refs()
        .iter()
        .map(|column| column.name.clone())
        .collect()
}

fn table_reference_name(table: &datafusion::common::TableReference) -> String {
    match table {
        datafusion::common::TableReference::Bare { table } => table.to_string(),
        datafusion::common::TableReference::Partial { table, .. } => table.to_string(),
        datafusion::common::TableReference::Full { table, .. } => table.to_string(),
    }
}
fn history_non_identity_filter_notice(view_name: &str) -> LixNotice {
    LixNotice {
        code: "LIX_HISTORY_NON_IDENTITY_FILTER".to_string(),
        message: format!("{view_name} was filtered without an identity predicate."),
        hint: Some(
            "Filter by id or lixcol_entity_id to include tombstones and renamed history."
                .to_string(),
        ),
    }
}

fn scalar_value_to_lix_value(
    value: &ScalarValue,
    field: Option<&Field>,
) -> Result<Value, LixError> {
    match value {
        ScalarValue::Null => Ok(Value::Null),
        ScalarValue::Boolean(Some(value)) => Ok(Value::Boolean(*value)),
        ScalarValue::Boolean(None) => Ok(Value::Null),
        ScalarValue::Int8(Some(value)) => Ok(Value::Integer(i64::from(*value))),
        ScalarValue::Int8(None) => Ok(Value::Null),
        ScalarValue::Int16(Some(value)) => Ok(Value::Integer(i64::from(*value))),
        ScalarValue::Int16(None) => Ok(Value::Null),
        ScalarValue::Int32(Some(value)) => Ok(Value::Integer(i64::from(*value))),
        ScalarValue::Int32(None) => Ok(Value::Null),
        ScalarValue::Int64(Some(value)) => Ok(Value::Integer(*value)),
        ScalarValue::Int64(None) => Ok(Value::Null),
        ScalarValue::UInt8(Some(value)) => Ok(Value::Integer(i64::from(*value))),
        ScalarValue::UInt8(None) => Ok(Value::Null),
        ScalarValue::UInt16(Some(value)) => Ok(Value::Integer(i64::from(*value))),
        ScalarValue::UInt16(None) => Ok(Value::Null),
        ScalarValue::UInt32(Some(value)) => Ok(Value::Integer(i64::from(*value))),
        ScalarValue::UInt32(None) => Ok(Value::Null),
        ScalarValue::UInt64(Some(value)) => match i64::try_from(*value) {
            Ok(value) => Ok(Value::Integer(value)),
            Err(_) => Ok(Value::Text(value.to_string())),
        },
        ScalarValue::UInt64(None) => Ok(Value::Null),
        ScalarValue::Float32(Some(value)) => Ok(Value::Real(f64::from(*value))),
        ScalarValue::Float32(None) => Ok(Value::Null),
        ScalarValue::Float64(Some(value)) => Ok(Value::Real(*value)),
        ScalarValue::Float64(None) => Ok(Value::Null),
        ScalarValue::Utf8(Some(value))
        | ScalarValue::Utf8View(Some(value))
        | ScalarValue::LargeUtf8(Some(value)) => string_scalar_to_lix_value(value, field),
        ScalarValue::Utf8(None) | ScalarValue::Utf8View(None) | ScalarValue::LargeUtf8(None) => {
            Ok(Value::Null)
        }
        ScalarValue::Binary(Some(value)) | ScalarValue::LargeBinary(Some(value)) => {
            Ok(Value::Blob(value.clone()))
        }
        ScalarValue::Binary(None) | ScalarValue::LargeBinary(None) => Ok(Value::Null),
        other => Ok(Value::Text(other.to_string())),
    }
}

fn string_scalar_to_lix_value(value: &str, field: Option<&Field>) -> Result<Value, LixError> {
    if field.is_some_and(field_is_json) {
        return serde_json::from_str::<JsonValue>(value)
            .map(Value::Json)
            .map_err(|error| {
                LixError::new(
                    "LIX_ERROR_INVALID_JSON",
                    format!(
                        "column '{}' is marked as JSON but contains invalid JSON: {error}",
                        field
                            .map(|field| field.name().as_str())
                            .unwrap_or("")
                    ),
                )
            });
    }
    Ok(Value::Text(value.to_string()))
}

#[cfg(test)]
mod tests {
    use std::sync::{Arc, Mutex};

    use async_trait::async_trait;
    use serde_json::json;
    use serde_json::Value as JsonValue;

    use super::{
        create_write_logical_plan, execute_logical_plan, execute_sql, SqlExecutionContext,
        SqlWriteExecutionContext,
    };
    use crate::binary_cas::BlobDataReader;
    use crate::commit_graph::{
        CommitGraphChangeHistoryEntry, CommitGraphChangeHistoryRequest, CommitGraphCommit,
        CommitGraphEdge, CommitGraphReader, ReachableCommitGraphCommit,
    };
    use crate::commit_store::CommitStoreContext;
    use crate::functions::{
        FunctionProvider, FunctionProviderHandle, SharedFunctionProvider, SystemFunctionProvider,
    };
    use crate::json_store::JsonStoreContext;
    use crate::live_state::{
        LiveStateContext, LiveStateReader, LiveStateRowRequest,
LiveStateScanRequest, MaterializedLiveStateRow, }; use crate::sql2::{CommitStoreQuerySource, SqlCommitStoreQuerySource}; use crate::storage::{ KvEntryPage, KvExistsBatch, KvGetRequest, KvKeyPage, KvScanRequest, KvValueBatch, KvValuePage, StorageContext, StorageReadScope, StorageReadTransaction, StorageReader, StorageWriteSet, }; use crate::tracked_state::TrackedStateContext; use crate::transaction::prepare_version_ref_row; use crate::transaction::types::{ TransactionWrite, TransactionWriteOutcome, TransactionWriteRow, }; use crate::untracked_state::UntrackedStateContext; use crate::version::VersionRefReader; use crate::{Engine, ExecuteResult, SessionContext}; use crate::{LixError, Value}; struct DummyBlobReader; struct DummyLiveStateReader; struct RowsLiveStateReader { rows: Vec, } struct BackendBlobReader(StorageContext); struct DummyCommitGraphReader; struct DummyVersionRefReader; struct TestReadTransaction(StorageContext); fn test_read_scope( storage: StorageContext, ) -> StorageReadScope> { StorageReadScope::new(Box::new(TestReadTransaction(storage))) } #[async_trait] impl StorageReader for TestReadTransaction { async fn get_values(&mut self, request: KvGetRequest) -> Result { self.0.get_values(request).await } async fn exists_many(&mut self, request: KvGetRequest) -> Result { self.0.exists_many(request).await } async fn scan_keys(&mut self, request: KvScanRequest) -> Result { self.0.scan_keys(request).await } async fn scan_values(&mut self, request: KvScanRequest) -> Result { self.0.scan_values(request).await } async fn scan_entries(&mut self, request: KvScanRequest) -> Result { self.0.scan_entries(request).await } } #[async_trait] impl StorageReadTransaction for TestReadTransaction { async fn rollback(self: Box) -> Result<(), LixError> { Ok(()) } } #[allow(dead_code)] fn test_functions() -> FunctionProviderHandle { SharedFunctionProvider::new( Box::new(SystemFunctionProvider) as Box ) } #[derive(Default)] struct CapturingStagedWrites { deltas: Vec, } #[derive(Clone)] struct CapturedStageWrite { rows: Vec, } impl CapturedStageWrite { fn pending_write_overlay(&self) -> Result { Ok(CapturedStageOverlay { rows: self.rows.clone(), }) } } struct CapturedStageOverlay { rows: Vec, } impl CapturedStageOverlay { fn visible_semantic_rows( &self, include_tombstones: bool, schema_key: &str, ) -> Vec { self.visible_all_semantic_rows() .into_iter() .filter(|row| row.schema_key == schema_key) .filter(|row| include_tombstones || !row.tombstone) .collect() } fn visible_all_semantic_rows(&self) -> Vec { self.rows .iter() .cloned() .map(CapturedStageRow::from) .collect() } } struct CapturedStageRow { entity_id: String, schema_key: String, version_id: String, file_id: Option, snapshot_content: Option, metadata: Option, global: bool, untracked: bool, tombstone: bool, } impl From for CapturedStageRow { fn from(row: TransactionWriteRow) -> Self { Self { entity_id: row .entity_id .expect("captured staged row should carry entity_id") .as_json_array_text() .expect("captured staged row should project entity_id"), schema_key: row.schema_key, version_id: row.version_id, file_id: row.file_id, global: row.global, untracked: row.untracked, tombstone: row.snapshot.is_none(), snapshot_content: row.snapshot.map(|snapshot| snapshot.to_string()), metadata: row.metadata.map(|metadata| metadata.to_string()), } } } struct DummySqlExecutionContext<'a> { active_version_id: &'a str, blob_reader: Arc, live_state: Arc, schema_definitions: Vec, } impl<'a> SqlExecutionContext for DummySqlExecutionContext<'a> { fn 
active_version_id(&self) -> &str { self.active_version_id } fn live_state(&self) -> Arc { Arc::clone(&self.live_state) } fn functions(&self) -> FunctionProviderHandle { test_functions() } fn blob_reader(&self) -> Arc { Arc::clone(&self.blob_reader) } fn commit_store_query_source(&self) -> SqlCommitStoreQuerySource { let base_scope = test_read_scope(StorageContext::new(Arc::new( crate::backend::testing::UnitTestBackend::new(), ))); let read_scope = StorageReadScope::new(base_scope.store()); CommitStoreQuerySource { commit_store_reader: Arc::new(CommitStoreContext::new().reader(read_scope.store())), json_reader: JsonStoreContext::new().reader(read_scope.store()), } } fn commit_graph(&self) -> Box { Box::new(DummyCommitGraphReader) } fn version_ref(&self) -> Arc { Arc::new(DummyVersionRefReader) } fn list_visible_schemas(&self) -> Result, LixError> { Ok(self.schema_definitions.clone()) } } struct DummySqlWriteExecutionContext<'a> { active_version_id: &'a str, blob_reader: Arc, live_state: Arc, staged_writes: Arc>, schema_definitions: Vec, } #[async_trait] impl SqlWriteExecutionContext for DummySqlWriteExecutionContext<'_> { fn active_version_id(&self) -> &str { self.active_version_id } fn functions(&self) -> FunctionProviderHandle { test_functions() } fn list_visible_schemas(&self) -> Result, LixError> { Ok(self.schema_definitions.clone()) } async fn load_bytes_many( &mut self, hashes: &[crate::binary_cas::BlobHash], ) -> Result { self.blob_reader.load_bytes_many(hashes).await } async fn scan_live_state( &mut self, request: &LiveStateScanRequest, ) -> Result, LixError> { self.live_state.scan_rows(request).await } async fn load_version_head( &mut self, version_id: &str, ) -> Result, LixError> { Ok(Some(format!("commit-{version_id}"))) } async fn stage_write( &mut self, write: TransactionWrite, ) -> Result { let count = match &write { TransactionWrite::Rows { rows, .. } => rows.len() as u64, TransactionWrite::RowsWithFileData { count, .. } => *count, TransactionWrite::AdoptedChanges { changes } => changes.len() as u64, }; let rows = match write { TransactionWrite::Rows { rows, .. } => rows, TransactionWrite::RowsWithFileData { rows, .. } => rows, TransactionWrite::AdoptedChanges { .. 
} => Vec::new(), }; self.staged_writes .lock() .expect("staged writes lock") .deltas .push(CapturedStageWrite { rows }); Ok(TransactionWriteOutcome { count }) } } async fn execute_write_sql( ctx: &mut dyn SqlWriteExecutionContext, sql: &str, params: &[Value], ) -> Result { let plan = create_write_logical_plan(ctx, sql).await?; execute_logical_plan(plan, params).await } #[async_trait] impl VersionRefReader for DummyVersionRefReader { async fn load_head( &self, _version_id: &str, ) -> Result, LixError> { Ok(None) } async fn scan_heads(&self) -> Result, LixError> { Ok(Vec::new()) } } #[async_trait] impl CommitGraphReader for DummyCommitGraphReader { async fn load_commit( &mut self, _commit_id: &str, ) -> Result, LixError> { Ok(None) } async fn all_commits(&mut self) -> Result, LixError> { Ok(Vec::new()) } async fn reachable_commits( &mut self, _head_commit_id: &str, ) -> Result, LixError> { Ok(Vec::new()) } async fn best_common_ancestors( &mut self, _left_commit_id: &str, _right_commit_id: &str, ) -> Result, LixError> { Ok(Vec::new()) } async fn merge_base( &mut self, _left_commit_id: &str, _right_commit_id: &str, ) -> Result { Err(LixError::new( "LIX_ERROR_UNKNOWN", "dummy commit graph reader cannot resolve merge base", )) } fn commit_edges(&self, _commits: &[CommitGraphCommit]) -> Vec { Vec::new() } async fn change_history_from_commit( &mut self, _start_commit_id: &str, _request: &CommitGraphChangeHistoryRequest, ) -> Result, LixError> { Ok(Vec::new()) } } #[async_trait] impl LiveStateReader for DummyLiveStateReader { async fn scan_rows( &self, _request: &LiveStateScanRequest, ) -> Result, LixError> { Ok(vec![]) } async fn load_row( &self, _request: &LiveStateRowRequest, ) -> Result, LixError> { Ok(None) } } #[async_trait] impl LiveStateReader for RowsLiveStateReader { async fn scan_rows( &self, _request: &LiveStateScanRequest, ) -> Result, LixError> { Ok(self.rows.clone()) } async fn load_row( &self, _request: &LiveStateRowRequest, ) -> Result, LixError> { Ok(None) } } #[async_trait] impl BlobDataReader for DummyBlobReader { async fn load_bytes_many( &self, hashes: &[crate::binary_cas::BlobHash], ) -> Result { Ok(crate::binary_cas::BlobBytesBatch::new(vec![ None; hashes.len() ])) } } #[async_trait] impl BlobDataReader for BackendBlobReader { async fn load_bytes_many( &self, hashes: &[crate::binary_cas::BlobHash], ) -> Result { let binary_cas = crate::binary_cas::BinaryCasContext::new(); let reader = binary_cas.reader(self.0.clone()); reader.load_bytes_many(hashes).await } } fn live_lix_state_row(entity_id: &str, metadata: Option<&str>) -> MaterializedLiveStateRow { MaterializedLiveStateRow { entity_id: crate::entity_identity::EntityIdentity::single(entity_id), schema_key: "lix_key_value".to_string(), file_id: None, snapshot_content: Some("{\"key\":\"hello\",\"value\":\"world\"}".to_string()), metadata: metadata.map(str::to_string), deleted: false, version_id: "version-a".to_string(), change_id: Some(format!("change-{entity_id}")), commit_id: Some(format!("commit-{entity_id}")), global: false, untracked: false, created_at: "2026-04-23T00:00:00Z".to_string(), updated_at: "2026-04-23T01:00:00Z".to_string(), } } fn live_entity_row(entity_id: &str, version_id: &str, value: &str) -> MaterializedLiveStateRow { MaterializedLiveStateRow { entity_id: crate::entity_identity::EntityIdentity::single(entity_id), schema_key: "test_state_schema".to_string(), file_id: None, snapshot_content: Some(format!("{{\"value\":\"{value}\"}}")), metadata: Some(json!({ "source": entity_id }).to_string()), deleted: 
false, version_id: version_id.to_string(), change_id: Some(format!("change-{entity_id}")), commit_id: Some(format!("commit-{entity_id}")), global: false, untracked: false, created_at: "2026-04-23T00:00:00Z".to_string(), updated_at: "2026-04-23T01:00:00Z".to_string(), } } fn live_directory_row( entity_id: &str, version_id: &str, parent_id: Option<&str>, name: &str, hidden: bool, ) -> MaterializedLiveStateRow { MaterializedLiveStateRow { entity_id: crate::entity_identity::EntityIdentity::single(entity_id), schema_key: "lix_directory_descriptor".to_string(), file_id: None, snapshot_content: Some( json!({ "id": entity_id, "parent_id": parent_id, "name": name, "hidden": hidden }) .to_string(), ), metadata: Some(json!({ "source": entity_id }).to_string()), deleted: false, version_id: version_id.to_string(), change_id: Some(format!("change-{entity_id}")), commit_id: Some(format!("commit-{entity_id}")), global: false, untracked: false, created_at: "2026-04-23T00:00:00Z".to_string(), updated_at: "2026-04-23T01:00:00Z".to_string(), } } fn live_file_row( entity_id: &str, version_id: &str, directory_id: Option<&str>, name: &str, hidden: bool, ) -> MaterializedLiveStateRow { MaterializedLiveStateRow { entity_id: crate::entity_identity::EntityIdentity::single(entity_id), schema_key: "lix_file_descriptor".to_string(), file_id: None, snapshot_content: Some( json!({ "id": entity_id, "directory_id": directory_id, "name": name, "hidden": hidden }) .to_string(), ), metadata: Some(json!({ "source": entity_id }).to_string()), deleted: false, version_id: version_id.to_string(), change_id: Some(format!("change-{entity_id}")), commit_id: Some(format!("commit-{entity_id}")), global: false, untracked: false, created_at: "2026-04-23T00:00:00Z".to_string(), updated_at: "2026-04-23T01:00:00Z".to_string(), } } #[tokio::test] async fn sql_execution_context_exposes_live_state_and_blob_reader() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(DummyLiveStateReader); let ctx = DummySqlExecutionContext { active_version_id: "version-a", blob_reader: Arc::clone(&blob_reader), live_state: Arc::clone(&live_state) as Arc, schema_definitions: vec![], }; let actual = ctx.live_state(); let expected = live_state as Arc; assert_eq!(ctx.active_version_id(), "version-a"); assert!(Arc::ptr_eq(&actual, &expected)); assert!(Arc::ptr_eq(&ctx.blob_reader(), &blob_reader)); } #[tokio::test] async fn execute_sql_uses_execution_context_boundary() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(DummyLiveStateReader); let ctx = DummySqlExecutionContext { active_version_id: "version-a", blob_reader, live_state, schema_definitions: vec![], }; let result = execute_sql(&ctx, "SELECT 1", &[]) .await .expect("sql2 execute should support literal-only queries"); assert_eq!(result.rows, vec![vec![Value::Integer(1)]]); } #[tokio::test] async fn execute_sql_collects_union_all_partitions() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(DummyLiveStateReader); let ctx = DummySqlExecutionContext { active_version_id: "version-a", blob_reader, live_state, schema_definitions: vec![], }; let result = execute_sql(&ctx, "SELECT 1 UNION ALL SELECT 2", &[]) .await .expect("sql2 execute should collect UNION ALL partitions"); assert_eq!( result.rows, vec![vec![Value::Integer(1)], vec![Value::Integer(2)]] ); } #[tokio::test] async fn execute_sql_rejects_extra_parameters() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(DummyLiveStateReader); let ctx = 
DummySqlExecutionContext { active_version_id: "version-a", blob_reader, live_state, schema_definitions: vec![], }; let error = execute_sql( &ctx, "SELECT $1 AS value", &[Value::Integer(1), Value::Integer(2)], ) .await .expect_err("extra params should fail instead of being ignored"); assert_eq!(error.code, LixError::CODE_INVALID_PARAM); assert_eq!( error.message, "SQL expected 1 parameter(s), but 2 parameter(s) were provided" ); assert_eq!( error.details, Some(json!({ "operation": "execute", "expected_param_count": 1, "provided_param_count": 2, "placeholders": ["$1"], })) ); } #[tokio::test] async fn execute_sql_exposes_datafusion_information_schema() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(DummyLiveStateReader); let ctx = DummySqlExecutionContext { active_version_id: "version-a", blob_reader, live_state, schema_definitions: vec![], }; let information_schema_result = execute_sql( &ctx, "SELECT table_name FROM information_schema.tables WHERE table_name = 'lix_state'", &[], ) .await .expect("information_schema.tables should be enabled"); assert_eq!( information_schema_result.rows, vec![vec![Value::Text("lix_state".to_string())]] ); let tables_result = execute_sql( &ctx, "SELECT table_name FROM information_schema.tables", &[], ) .await .expect("information_schema.tables should list registered tables"); assert!(tables_result.rows.iter().any(|row| { row.iter() .any(|value| matches!(value, Value::Text(value) if value == "lix_state")) })); } async fn setup_engine_history_fixture() -> Result<(SessionContext, String), LixError> { let backend = crate::backend::testing::UnitTestBackend::new(); let init_receipt = Engine::initialize(Box::new(backend.clone())).await?; let engine = Engine::new(Box::new(backend)).await?; let session = engine.open_session(init_receipt.main_version_id).await?; session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"test_state_schema\",\"type\":\"object\",\"properties\":{\"value\":{\"type\":\"string\"},\"count\":{\"type\":\"integer\"}},\"required\":[\"value\",\"count\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await?; session .execute( "INSERT INTO test_state_schema \ (lixcol_entity_id, value, count, lixcol_metadata, lixcol_untracked) \ VALUES (lix_json('[\"entity-history\"]'), 'A', 7, '{\"source\":\"history\"}', false)", &[], ) .await?; session .execute( "INSERT INTO lix_directory (id, path, hidden) \ VALUES ('dir-docs', '/docs/', false)", &[], ) .await?; session .execute( "INSERT INTO lix_file (id, path, data, hidden) \ VALUES ('file-a', '/docs/readme.md', X'68656C6C6F', false)", &[], ) .await?; let active_version_id = session.active_version_id().await?; let head_commit_id = engine .load_version_head_commit_id(&active_version_id) .await? 
.ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "history fixture expected the session version to have a head commit", ) })?; Ok((session, head_commit_id)) } #[tokio::test] async fn lix_file_path_predicates_canonicalize_bound_values_like_writes() { let backend = crate::backend::testing::UnitTestBackend::new(); let init_receipt = Engine::initialize(Box::new(backend.clone())) .await .expect("engine should initialize"); let engine = Engine::new(Box::new(backend)) .await .expect("engine should open"); let session = engine .open_session(init_receipt.main_version_id) .await .expect("session should open"); session .execute( "INSERT INTO lix_file (id, path, data) VALUES ('file-nfc', $1, X'41')", &[Value::Text("/Cafe\u{301}.txt".to_string())], ) .await .expect("NFD path insert should canonicalize"); let nfd_result = session .execute( "SELECT id FROM lix_file WHERE path = $1", &[Value::Text("/Cafe\u{301}.txt".to_string())], ) .await .expect("NFD path predicate should canonicalize"); assert_eq!( rows_from_execute_result(nfd_result).1, vec![vec![Value::Text("file-nfc".to_string())]] ); let percent_result = session .execute( "SELECT id FROM lix_file WHERE path = '/%43afe%CC%81.txt'", &[], ) .await .expect("percent-encoded path predicate should canonicalize"); assert_eq!( rows_from_execute_result(percent_result).1, vec![vec![Value::Text("file-nfc".to_string())]] ); let reversed_result = session .execute( "SELECT id FROM lix_file WHERE $1 = path", &[Value::Text("/Cafe\u{301}.txt".to_string())], ) .await .expect("reversed path predicate should canonicalize"); assert_eq!( rows_from_execute_result(reversed_result).1, vec![vec![Value::Text("file-nfc".to_string())]] ); let or_result = session .execute( "SELECT id FROM lix_file WHERE path = $1 OR id = 'missing'", &[Value::Text("/Cafe\u{301}.txt".to_string())], ) .await .expect("OR path predicate should canonicalize"); assert_eq!( rows_from_execute_result(or_result).1, vec![vec![Value::Text("file-nfc".to_string())]] ); let not_result = session .execute( "SELECT id FROM lix_file WHERE NOT (path = $1)", &[Value::Text("/Cafe\u{301}.txt".to_string())], ) .await .expect("NOT path predicate should canonicalize"); assert!(rows_from_execute_result(not_result).1.is_empty()); let not_in_result = session .execute( "SELECT id FROM lix_file WHERE path NOT IN ($1)", &[Value::Text("/%43afe%CC%81.txt".to_string())], ) .await .expect("NOT IN path predicate should canonicalize"); assert!(rows_from_execute_result(not_in_result).1.is_empty()); let update_result = session .execute( "UPDATE lix_file SET hidden = true WHERE path = $1 OR id = 'missing'", &[Value::Text("/Cafe\u{301}.txt".to_string())], ) .await .expect("update predicate should canonicalize through OR"); assert_eq!(update_result.rows_affected(), 1); let delete_result = session .execute( "DELETE FROM lix_file WHERE path = $1", &[Value::Text("/%43afe%CC%81.txt".to_string())], ) .await .expect("delete predicate should canonicalize"); assert_eq!(delete_result.rows_affected(), 1); } #[tokio::test] async fn lix_file_path_predicates_reject_non_literal_path_values() { let backend = crate::backend::testing::UnitTestBackend::new(); let init_receipt = Engine::initialize(Box::new(backend.clone())) .await .expect("engine should initialize"); let engine = Engine::new(Box::new(backend)) .await .expect("engine should open"); let session = engine .open_session(init_receipt.main_version_id) .await .expect("session should open"); session .execute( "INSERT INTO lix_file (id, path, data) VALUES ('file-nfc', $1, X'41')", 
&[Value::Text("/Cafe\u{301}.txt".to_string())], ) .await .expect("NFD path insert should canonicalize"); let error = session .execute("SELECT id FROM lix_file WHERE path = id", &[]) .await .expect_err("computed path predicate values should be rejected"); assert_eq!(error.code, LixError::CODE_UNSUPPORTED_SQL); assert!( error .message .contains("filesystem path predicates only support literal path values"), "{error:?}" ); } #[tokio::test] async fn lix_directory_path_predicates_canonicalize_bound_values_like_writes() { let backend = crate::backend::testing::UnitTestBackend::new(); let init_receipt = Engine::initialize(Box::new(backend.clone())) .await .expect("engine should initialize"); let engine = Engine::new(Box::new(backend)) .await .expect("engine should open"); let session = engine .open_session(init_receipt.main_version_id) .await .expect("session should open"); session .execute( "INSERT INTO lix_directory (id, path) VALUES ('dir-nfc', $1)", &[Value::Text("/Cafe\u{301}/".to_string())], ) .await .expect("NFD directory path insert should canonicalize"); let result = session .execute( "SELECT id FROM lix_directory WHERE path IN ($1)", &[Value::Text("/%43afe%CC%81/".to_string())], ) .await .expect("directory path predicate should canonicalize"); assert_eq!( rows_from_execute_result(result).1, vec![vec![Value::Text("dir-nfc".to_string())]] ); let or_result = session .execute( "SELECT id FROM lix_directory WHERE id = 'missing' OR path = $1", &[Value::Text("/Cafe\u{301}/".to_string())], ) .await .expect("directory OR path predicate should canonicalize"); assert_eq!( rows_from_execute_result(or_result).1, vec![vec![Value::Text("dir-nfc".to_string())]] ); let not_in_result = session .execute( "SELECT id FROM lix_directory WHERE path NOT IN ($1)", &[Value::Text("/%43afe%CC%81/".to_string())], ) .await .expect("directory NOT IN path predicate should canonicalize"); assert!(rows_from_execute_result(not_in_result).1.is_empty()); } #[tokio::test] async fn lix_directory_path_predicates_reject_non_literal_path_values() { let backend = crate::backend::testing::UnitTestBackend::new(); let init_receipt = Engine::initialize(Box::new(backend.clone())) .await .expect("engine should initialize"); let engine = Engine::new(Box::new(backend)) .await .expect("engine should open"); let session = engine .open_session(init_receipt.main_version_id) .await .expect("session should open"); session .execute( "INSERT INTO lix_directory (id, path) VALUES ('dir-nfc', $1)", &[Value::Text("/Cafe\u{301}/".to_string())], ) .await .expect("NFD directory path insert should canonicalize"); let error = session .execute("SELECT id FROM lix_directory WHERE path IN (id)", &[]) .await .expect_err("computed directory path predicate values should be rejected"); assert_eq!(error.code, LixError::CODE_UNSUPPORTED_SQL); assert!( error .message .contains("filesystem path predicates only support literal path values"), "{error:?}" ); } fn rows_from_execute_result(result: ExecuteResult) -> (Vec, Vec>) { let rows = result; ( rows.columns().to_vec(), rows.rows() .iter() .map(|row| row.values().to_vec()) .collect(), ) } #[tokio::test] async fn execute_sql_reads_lix_state_history_from_history_context() { let (session, head_commit_id) = setup_engine_history_fixture() .await .expect("history fixture should initialize"); let result = session .execute( &format!( "SELECT entity_id, snapshot_content, metadata, depth, start_commit_id \ FROM lix_state_history \ WHERE schema_key = 'test_state_schema' \ AND entity_id = lix_json('[\"entity-history\"]') \ AND 
start_commit_id = '{head_commit_id}' \ AND depth >= 0" ), &[], ) .await .expect("sql2 execute should read lix_state_history through real engine context"); let (columns, rows) = rows_from_execute_result(result); assert_eq!( columns, vec![ "entity_id", "snapshot_content", "metadata", "depth", "start_commit_id" ] ); assert_eq!(rows.len(), 1); assert_eq!(rows[0][0], Value::Json(json!(["entity-history"]))); assert_eq!(rows[0][1], Value::Json(json!({"count": 7, "value": "A"}))); assert_eq!(rows[0][2], Value::Json(json!({"source": "history"}))); assert!(matches!(rows[0][3], Value::Integer(_))); assert_eq!(rows[0][4], Value::Text(head_commit_id.clone())); } #[tokio::test] async fn execute_sql_reads_entity_history_view_from_history_context() { let (session, head_commit_id) = setup_engine_history_fixture() .await .expect("history fixture should initialize"); let result = session .execute( &format!( "SELECT value, count, lixcol_entity_id, lixcol_start_commit_id, lixcol_depth \ FROM test_state_schema_history \ WHERE lixcol_start_commit_id = '{head_commit_id}' \ AND lixcol_entity_id = lix_json('[\"entity-history\"]')" ), &[], ) .await .expect("sql2 execute should read entity history through real engine context"); let (columns, rows) = rows_from_execute_result(result); assert_eq!( columns, vec![ "value", "count", "lixcol_entity_id", "lixcol_start_commit_id", "lixcol_depth", ] ); assert_eq!(rows.len(), 1); assert_eq!(rows[0][0], Value::Text("A".to_string())); assert_eq!(rows[0][1], Value::Integer(7)); assert_eq!(rows[0][2], Value::Json(json!(["entity-history"]))); assert_eq!(rows[0][3], Value::Text(head_commit_id)); assert!(matches!(rows[0][4], Value::Integer(_))); } #[tokio::test] async fn execute_sql_reads_directory_history_view_from_history_context() { let (session, head_commit_id) = setup_engine_history_fixture() .await .expect("history fixture should initialize"); let result = session .execute( &format!( "SELECT id, parent_id, name, path, hidden, lixcol_start_commit_id, lixcol_depth \ FROM lix_directory_history \ WHERE id = 'dir-docs' AND lixcol_start_commit_id = '{head_commit_id}'" ), &[], ) .await .expect("sql2 execute should read directory history through real engine context"); assert!( result.notices().is_empty(), "identity-filtered directory history should not emit soft notices" ); let (columns, rows) = rows_from_execute_result(result); assert_eq!( columns, vec![ "id", "parent_id", "name", "path", "hidden", "lixcol_start_commit_id", "lixcol_depth", ] ); assert_eq!(rows.len(), 1); assert_eq!(rows[0][0], Value::Text("dir-docs".to_string())); assert_eq!(rows[0][1], Value::Null); assert_eq!(rows[0][2], Value::Text("docs".to_string())); assert_eq!(rows[0][3], Value::Text("/docs/".to_string())); assert_eq!(rows[0][4], Value::Boolean(false)); assert_eq!(rows[0][5], Value::Text(head_commit_id.clone())); assert!(matches!(rows[0][6], Value::Integer(_))); let name_filtered_result = session .execute( &format!( "SELECT id \ FROM lix_directory_history \ WHERE name = 'docs' \ AND lixcol_start_commit_id = '{head_commit_id}'" ), &[], ) .await .expect("sql2 execute should attach notices to name-filtered directory history reads"); assert_eq!(name_filtered_result.notices().len(), 1); assert_eq!( name_filtered_result.notices()[0].code, "LIX_HISTORY_NON_IDENTITY_FILTER" ); } #[tokio::test] async fn execute_sql_reads_file_history_view_from_history_context() { let (session, head_commit_id) = setup_engine_history_fixture() .await .expect("history fixture should initialize"); let result = session .execute( &format!( 
"SELECT id, path, data, hidden, lixcol_start_commit_id, lixcol_depth \ FROM lix_file_history \ WHERE id = 'file-a' \ AND lixcol_start_commit_id = '{head_commit_id}' \ AND data IS NOT NULL \ ORDER BY lixcol_depth", ), &[], ) .await .expect("sql2 execute should read file history through real engine context"); assert!( result.notices().is_empty(), "identity-filtered file history should not emit soft notices" ); let (columns, rows) = rows_from_execute_result(result); assert_eq!( columns, vec![ "id", "path", "data", "hidden", "lixcol_start_commit_id", "lixcol_depth", ] ); assert_eq!(rows.len(), 1); assert_eq!(rows[0][0], Value::Text("file-a".to_string())); assert_eq!(rows[0][1], Value::Text("/docs/readme.md".to_string())); assert_eq!(rows[0][2], Value::Blob(b"hello".to_vec())); assert_eq!(rows[0][3], Value::Boolean(false)); assert_eq!(rows[0][4], Value::Text(head_commit_id.clone())); assert!(matches!(rows[0][5], Value::Integer(_))); let path_filtered_result = session .execute( &format!( "SELECT id \ FROM lix_file_history \ WHERE path = '/docs/readme.md' \ AND lixcol_start_commit_id = '{head_commit_id}'" ), &[], ) .await .expect("sql2 execute should attach notices to path-filtered file history reads"); assert_eq!(path_filtered_result.notices().len(), 1); assert_eq!( path_filtered_result.notices()[0].code, "LIX_HISTORY_NON_IDENTITY_FILTER" ); } #[tokio::test] async fn execute_sql_rejects_writes_to_history_views_before_planning() { for sql in [ "DELETE FROM lix_state_history", "DELETE FROM LIX_STATE_HISTORY", "DELETE FROM main.LIX_STATE_HISTORY", "EXPLAIN DELETE FROM lix_state_history", ] { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(DummyLiveStateReader); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes, schema_definitions: vec![], }; let error = execute_write_sql(&mut ctx, sql, &[]) .await .expect_err("history views are read-only"); assert_eq!(error.code, LixError::CODE_READ_ONLY, "{sql}"); assert_eq!( error.message, "DML cannot write read-only history view 'lix_state_history'", "{sql}" ); } } #[tokio::test] async fn execute_sql_insert_into_lix_state_values_stages_write() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(DummyLiveStateReader); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![], }; let result = execute_write_sql( &mut ctx, "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, metadata, global, untracked\ ) VALUES (\ lix_json('[\"entity-1\"]'), 'lix_key_value', NULL, '{\"key\":\"hello\",\"value\":\"world\"}', '{\"source\":\"sql\"}', false, false\ )", &[], ) .await .expect("INSERT INTO lix_state VALUES should stage write"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, vec![vec![Value::Integer(1)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() .expect("staged delta should expose pending overlay"); let rows = overlay.visible_semantic_rows(false, "lix_key_value"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].entity_id, "[\"entity-1\"]"); assert_eq!(rows[0].version_id, "version-a"); assert!(!rows[0].global); 
assert!(!rows[0].untracked); assert_eq!( rows[0].snapshot_content.as_deref(), Some("{\"key\":\"hello\",\"value\":\"world\"}") ); assert_eq!(rows[0].metadata.as_deref(), Some("{\"source\":\"sql\"}")); } #[tokio::test] async fn execute_sql_insert_into_lix_state_defaults_global_and_untracked_to_false() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(DummyLiveStateReader); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![], }; let result = execute_write_sql( &mut ctx, "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, metadata\ ) VALUES (\ lix_json('[\"entity-defaults\"]'), 'lix_key_value', NULL, '{\"key\":\"hello\",\"value\":\"defaults\"}', NULL\ )", &[], ) .await .expect("INSERT INTO lix_state should default bookkeeping flags"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, vec![vec![Value::Integer(1)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() .expect("staged delta should expose pending overlay"); let rows = overlay.visible_semantic_rows(false, "lix_key_value"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].entity_id, "[\"entity-defaults\"]"); assert_eq!(rows[0].version_id, "version-a"); assert!(!rows[0].global); assert!(!rows[0].untracked); } #[tokio::test] async fn execute_sql_insert_into_lix_state_select_stages_write() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(DummyLiveStateReader); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![], }; let result = execute_write_sql( &mut ctx, "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, metadata, global, untracked\ ) \ SELECT \ lix_json('[\"entity-from-select\"]') AS entity_id, \ 'lix_key_value' AS schema_key, \ NULL AS file_id, \ '{\"key\":\"hello\",\"value\":\"from-select\"}' AS snapshot_content, \ '{\"source\":\"select\"}' AS metadata, \ false AS global, \ false AS untracked", &[], ) .await .expect("INSERT INTO lix_state SELECT should stage write"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, vec![vec![Value::Integer(1)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() .expect("staged delta should expose pending overlay"); let rows = overlay.visible_semantic_rows(false, "lix_key_value"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].entity_id, "[\"entity-from-select\"]"); assert_eq!(rows[0].version_id, "version-a"); assert_eq!( rows[0].snapshot_content.as_deref(), Some("{\"key\":\"hello\",\"value\":\"from-select\"}") ); assert_eq!(rows[0].metadata.as_deref(), Some("{\"source\":\"select\"}")); } #[tokio::test] async fn execute_sql_insert_into_entity_by_version_stages_write() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(DummyLiveStateReader); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", 
blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![json!({ "x-lix-key": "test_state_schema", "type": "object", "properties": { "value": { "type": "string" } } })], }; let result = execute_write_sql( &mut ctx, "INSERT INTO test_state_schema_by_version (\ lixcol_entity_id, lixcol_version_id, value\ ) VALUES (lix_json('[\"entity-c\"]'), 'version-b', 'C')", &[], ) .await .expect("INSERT INTO entity by-version surface should stage write"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, vec![vec![Value::Integer(1)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() .expect("staged delta should expose pending overlay"); let rows = overlay.visible_semantic_rows(false, "test_state_schema"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].entity_id, "[\"entity-c\"]"); assert_eq!(rows[0].version_id, "version-b"); assert!(!rows[0].global); assert!(!rows[0].untracked); assert_eq!( rows[0].snapshot_content.as_deref(), Some("{\"value\":\"C\"}") ); } #[tokio::test] async fn execute_sql_insert_into_active_entity_defaults_active_version() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(DummyLiveStateReader); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![json!({ "x-lix-key": "test_state_schema", "type": "object", "properties": { "value": { "type": "string" } } })], }; let result = execute_write_sql( &mut ctx, "INSERT INTO test_state_schema (lixcol_entity_id, value) \ VALUES (lix_json('[\"entity-c\"]'), 'C')", &[], ) .await .expect("INSERT INTO active entity surface should stage write"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, vec![vec![Value::Integer(1)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() .expect("staged delta should expose pending overlay"); let rows = overlay.visible_semantic_rows(false, "test_state_schema"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].entity_id, "[\"entity-c\"]"); assert_eq!(rows[0].version_id, "version-a"); assert!(!rows[0].global); assert!(!rows[0].untracked); assert_eq!( rows[0].snapshot_content.as_deref(), Some("{\"value\":\"C\"}") ); } #[tokio::test] async fn execute_sql_insert_into_directory_by_version_stages_write() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(DummyLiveStateReader); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![], }; let result = execute_write_sql( &mut ctx, "INSERT INTO lix_directory_by_version (\ id, parent_id, name, hidden, lixcol_version_id\ ) VALUES ('dir-docs', NULL, 'docs', false, 'version-b')", &[], ) .await .expect("INSERT INTO lix_directory_by_version should stage write"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, vec![vec![Value::Integer(1)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() 
.expect("staged delta should expose pending overlay"); let rows = overlay.visible_semantic_rows(false, "lix_directory_descriptor"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].entity_id, "[\"dir-docs\"]"); assert_eq!(rows[0].version_id, "version-b"); assert!(!rows[0].global); assert!(!rows[0].untracked); assert_eq!( rows[0].snapshot_content.as_deref(), Some("{\"hidden\":false,\"id\":\"dir-docs\",\"name\":\"docs\",\"parent_id\":null}") ); } #[tokio::test] async fn execute_sql_insert_into_active_directory_defaults_active_version() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(DummyLiveStateReader); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![], }; let result = execute_write_sql( &mut ctx, "INSERT INTO lix_directory (id, parent_id, name, hidden) \ VALUES ('dir-docs', NULL, 'docs', false)", &[], ) .await .expect("INSERT INTO lix_directory should stage write"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, vec![vec![Value::Integer(1)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() .expect("staged delta should expose pending overlay"); let rows = overlay.visible_semantic_rows(false, "lix_directory_descriptor"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].entity_id, "[\"dir-docs\"]"); assert_eq!(rows[0].version_id, "version-a"); assert!(!rows[0].global); assert!(!rows[0].untracked); } #[tokio::test] async fn execute_sql_update_directory_stages_rewritten_descriptor() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(RowsLiveStateReader { rows: vec![ live_directory_row("dir-docs", "version-a", None, "docs", false), live_directory_row("dir-guides", "version-a", Some("dir-docs"), "guides", false), ], }); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![], }; let result = execute_write_sql( &mut ctx, "UPDATE lix_directory \ SET hidden = true, lixcol_metadata = '{\"source\":\"directory-update\"}' \ WHERE id = 'dir-docs'", &[], ) .await .expect("UPDATE lix_directory should stage rewritten descriptor"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, vec![vec![Value::Integer(1)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() .expect("staged delta should expose pending overlay"); let rows = overlay.visible_semantic_rows(false, "lix_directory_descriptor"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].entity_id, "[\"dir-docs\"]"); assert_eq!(rows[0].version_id, "version-a"); assert_eq!( rows[0].snapshot_content.as_deref(), Some("{\"hidden\":true,\"id\":\"dir-docs\",\"name\":\"docs\",\"parent_id\":null}") ); assert_eq!( rows[0].metadata.as_deref(), Some("{\"source\":\"directory-update\"}") ); } #[tokio::test] async fn execute_sql_update_directory_rejects_path_assignment() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(RowsLiveStateReader { rows: vec![live_directory_row( "dir-docs", "version-a", None, "docs", false, )], 
}); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![], }; let error = execute_write_sql( &mut ctx, "UPDATE lix_directory SET path = '/renamed/' WHERE id = 'dir-docs'", &[], ) .await .expect_err("path should remain read-only"); assert!( error.message.contains("read-only column 'path'"), "unexpected error: {error:?}" ); assert!(staged_writes .lock() .expect("staged writes lock") .deltas .is_empty()); } #[tokio::test] async fn execute_sql_delete_directory_by_version_stages_tombstone() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(RowsLiveStateReader { rows: vec![ live_directory_row("dir-docs", "version-a", None, "docs", false), live_directory_row("dir-guides", "version-b", Some("dir-docs"), "guides", false), ], }); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![], }; let result = execute_write_sql( &mut ctx, "DELETE FROM lix_directory_by_version \ WHERE id = 'dir-guides' AND lixcol_version_id = 'version-b'", &[], ) .await .expect("DELETE lix_directory_by_version should stage tombstone"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, vec![vec![Value::Integer(1)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() .expect("staged delta should expose pending overlay"); let rows = overlay.visible_all_semantic_rows(); assert_eq!(rows.len(), 1); assert_eq!(rows[0].entity_id, "[\"dir-guides\"]"); assert_eq!(rows[0].version_id, "version-b"); assert!(rows[0].tombstone); assert_eq!(rows[0].snapshot_content, None); } #[tokio::test] async fn execute_sql_insert_into_file_by_version_stages_descriptor_write() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(DummyLiveStateReader); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![], }; let result = execute_write_sql( &mut ctx, "INSERT INTO lix_file_by_version (\ id, directory_id, name, hidden, lixcol_version_id\ ) VALUES ('file-readme', 'dir-docs', 'readme.md', false, 'version-b')", &[], ) .await .expect("INSERT INTO lix_file_by_version should stage descriptor write"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, vec![vec![Value::Integer(1)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() .expect("staged delta should expose pending overlay"); let rows = overlay.visible_semantic_rows(false, "lix_file_descriptor"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].entity_id, "[\"file-readme\"]"); assert_eq!(rows[0].version_id, "version-b"); assert!(!rows[0].global); assert!(!rows[0].untracked); let snapshot: JsonValue = serde_json::from_str(rows[0].snapshot_content.as_deref().unwrap()) .expect("descriptor snapshot JSON"); assert_eq!(snapshot["id"], "file-readme"); assert_eq!(snapshot["directory_id"], "dir-docs"); 
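        // The remaining descriptor fields should round-trip unchanged from the INSERT values.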
assert_eq!(snapshot["name"], "readme.md"); assert_eq!(snapshot["hidden"], false); } #[tokio::test] async fn execute_sql_insert_into_active_file_defaults_active_version() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(DummyLiveStateReader); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![], }; let result = execute_write_sql( &mut ctx, "INSERT INTO lix_file (id, directory_id, name, hidden) \ VALUES ('file-readme', 'dir-docs', 'readme.md', false)", &[], ) .await .expect("INSERT INTO lix_file should stage descriptor write"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, vec![vec![Value::Integer(1)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() .expect("staged delta should expose pending overlay"); let rows = overlay.visible_semantic_rows(false, "lix_file_descriptor"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].entity_id, "[\"file-readme\"]"); assert_eq!(rows[0].version_id, "version-a"); assert!(!rows[0].global); assert!(!rows[0].untracked); } #[tokio::test] async fn execute_sql_insert_into_file_with_data_stages_blob_ref() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(DummyLiveStateReader); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![], }; let result = execute_write_sql( &mut ctx, "INSERT INTO lix_file_by_version (\ id, directory_id, name, hidden, data, lixcol_version_id\ ) VALUES ('file-readme', 'dir-docs', 'readme.md', false, X'4142', 'version-b')", &[], ) .await .expect("INSERT INTO lix_file_by_version should stage descriptor and data writes"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, vec![vec![Value::Integer(1)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() .expect("staged delta should expose pending overlay"); let descriptor_rows = overlay.visible_semantic_rows(false, "lix_file_descriptor"); assert_eq!(descriptor_rows.len(), 1); assert_eq!(descriptor_rows[0].entity_id, "[\"file-readme\"]"); let blob_ref_rows = overlay.visible_semantic_rows(false, "lix_binary_blob_ref"); assert_eq!(blob_ref_rows.len(), 1); assert_eq!(blob_ref_rows[0].entity_id, "[\"file-readme\"]"); assert_eq!(blob_ref_rows[0].file_id.as_deref(), Some("file-readme")); assert_eq!(blob_ref_rows[0].version_id, "version-b"); let snapshot: JsonValue = serde_json::from_str(blob_ref_rows[0].snapshot_content.as_deref().unwrap()) .expect("blob ref snapshot JSON"); assert_eq!(snapshot["id"], "file-readme"); assert_eq!(snapshot["size_bytes"], 2); assert!(snapshot["blob_hash"] .as_str() .is_some_and(|value| !value.is_empty())); } #[tokio::test] async fn execute_sql_update_file_stages_rewritten_descriptor() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(RowsLiveStateReader { rows: vec![ live_directory_row("dir-docs", "version-a", None, "docs", false), live_file_row( "file-readme", "version-a", Some("dir-docs"), "readme.md", false, ), 
live_file_row( "file-guide", "version-a", Some("dir-docs"), "guide.md", false, ), ], }); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![], }; let result = execute_write_sql( &mut ctx, "UPDATE lix_file \ SET name = 'readme-updated.txt', hidden = true, lixcol_metadata = '{\"source\":\"file-update\"}' \ WHERE id = 'file-readme'", &[], ) .await .expect("UPDATE lix_file should stage rewritten descriptor"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, vec![vec![Value::Integer(1)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() .expect("staged delta should expose pending overlay"); let rows = overlay.visible_semantic_rows(false, "lix_file_descriptor"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].entity_id, "[\"file-readme\"]"); assert_eq!(rows[0].version_id, "version-a"); let snapshot: JsonValue = serde_json::from_str(rows[0].snapshot_content.as_deref().unwrap()) .expect("descriptor snapshot JSON"); assert_eq!(snapshot["id"], "file-readme"); assert_eq!(snapshot["directory_id"], "dir-docs"); assert_eq!(snapshot["name"], "readme-updated.txt"); assert_eq!(snapshot["hidden"], true); assert_eq!( rows[0].metadata.as_deref(), Some("{\"source\":\"file-update\"}") ); } #[tokio::test] async fn execute_sql_update_file_stages_data_blob_ref() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(RowsLiveStateReader { rows: vec![ live_directory_row("dir-docs", "version-a", None, "docs", false), live_file_row( "file-readme", "version-a", Some("dir-docs"), "readme.md", false, ), ], }); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![], }; let result = execute_write_sql( &mut ctx, "UPDATE lix_file SET data = X'4142' WHERE id = 'file-readme'", &[], ) .await .expect("UPDATE lix_file should stage data write"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, vec![vec![Value::Integer(1)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() .expect("staged delta should expose pending overlay"); assert!(overlay .visible_semantic_rows(false, "lix_file_descriptor") .is_empty()); let blob_ref_rows = overlay.visible_semantic_rows(false, "lix_binary_blob_ref"); assert_eq!(blob_ref_rows.len(), 1); assert_eq!(blob_ref_rows[0].entity_id, "[\"file-readme\"]"); let snapshot: JsonValue = serde_json::from_str(blob_ref_rows[0].snapshot_content.as_deref().unwrap()) .expect("blob ref snapshot JSON"); assert_eq!(snapshot["id"], "file-readme"); assert_eq!(snapshot["size_bytes"], 2); } #[tokio::test] async fn execute_sql_update_file_stages_path_assignment() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(RowsLiveStateReader { rows: vec![ live_directory_row("dir-docs", "version-a", None, "docs", false), live_file_row( "file-readme", "version-a", Some("dir-docs"), "readme.md", false, ), ], }); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = 
DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![], }; let result = execute_write_sql( &mut ctx, "UPDATE lix_file SET path = '/docs/renamed.md' WHERE id = 'file-readme'", &[], ) .await .expect("path update should stage descriptor rewrite"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, vec![vec![Value::Integer(1)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() .expect("staged delta should expose pending overlay"); let rows = overlay.visible_semantic_rows(false, "lix_file_descriptor"); assert_eq!(rows.len(), 1); let snapshot: JsonValue = serde_json::from_str(rows[0].snapshot_content.as_deref().unwrap()) .expect("descriptor snapshot JSON"); assert_eq!(snapshot["directory_id"], "dir-docs"); assert_eq!(snapshot["name"], "renamed.md"); } #[tokio::test] async fn execute_sql_delete_file_by_version_stages_descriptor_tombstone() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(RowsLiveStateReader { rows: vec![ live_directory_row("dir-docs", "version-a", None, "docs", false), live_directory_row("dir-docs", "version-b", None, "docs", false), live_file_row( "file-readme", "version-a", Some("dir-docs"), "readme.md", false, ), live_file_row( "file-guide", "version-b", Some("dir-docs"), "guide.md", false, ), ], }); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![], }; let result = execute_write_sql( &mut ctx, "DELETE FROM lix_file_by_version \ WHERE id = 'file-guide' AND lixcol_version_id = 'version-b'", &[], ) .await .expect("DELETE lix_file_by_version should stage descriptor tombstone"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, vec![vec![Value::Integer(1)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() .expect("staged delta should expose pending overlay"); let rows = overlay.visible_all_semantic_rows(); assert_eq!(rows.len(), 1); assert_eq!(rows[0].entity_id, "[\"file-guide\"]"); assert_eq!(rows[0].version_id, "version-b"); assert!(rows[0].tombstone); assert_eq!(rows[0].snapshot_content, None); } #[tokio::test] async fn execute_sql_update_entity_surface_stages_rewritten_snapshot() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(RowsLiveStateReader { rows: vec![ live_entity_row("entity-a", "version-a", "A"), live_entity_row("entity-b", "version-a", "B"), ], }); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![json!({ "x-lix-key": "test_state_schema", "type": "object", "properties": { "value": { "type": "string" } } })], }; let result = execute_write_sql( &mut ctx, "UPDATE test_state_schema \ SET value = 'updated', lixcol_metadata = '{\"source\":\"entity-update\"}' \ WHERE value = 'A'", &[], ) .await .expect("UPDATE entity surface should stage rewritten row"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, 
vec![vec![Value::Integer(1)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() .expect("staged delta should expose pending overlay"); let rows = overlay.visible_semantic_rows(false, "test_state_schema"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].entity_id, "[\"entity-a\"]"); assert_eq!(rows[0].version_id, "version-a"); assert_eq!( rows[0].snapshot_content.as_deref(), Some("{\"value\":\"updated\"}") ); assert_eq!( rows[0].metadata.as_deref(), Some("{\"source\":\"entity-update\"}") ); } #[tokio::test] async fn execute_sql_delete_entity_by_version_stages_tombstone() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(RowsLiveStateReader { rows: vec![ live_entity_row("entity-a", "version-a", "A"), live_entity_row("entity-b", "version-b", "B"), ], }); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![json!({ "x-lix-key": "test_state_schema", "type": "object", "properties": { "value": { "type": "string" } } })], }; let result = execute_write_sql( &mut ctx, "DELETE FROM test_state_schema_by_version \ WHERE lixcol_version_id = 'version-b'", &[], ) .await .expect("DELETE entity by-version surface should stage tombstone"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, vec![vec![Value::Integer(1)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() .expect("staged delta should expose pending overlay"); let rows = overlay.visible_all_semantic_rows(); assert_eq!(rows.len(), 1); assert_eq!(rows[0].entity_id, "[\"entity-b\"]"); assert_eq!(rows[0].version_id, "version-b"); assert!(rows[0].tombstone); assert_eq!(rows[0].snapshot_content, None); } #[tokio::test] async fn execute_sql_update_lix_state_stages_rewritten_rows() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(RowsLiveStateReader { rows: vec![ live_lix_state_row("entity-1", Some("{\"source\":\"match\"}")), live_lix_state_row("entity-2", Some("{\"source\":\"skip\"}")), ], }); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![], }; let result = execute_write_sql( &mut ctx, "UPDATE lix_state \ SET snapshot_content = '{\"key\":\"hello\",\"value\":\"updated\"}', \ metadata = '{\"schema_key\":\"lix_key_value\"}' \ WHERE metadata = lix_json('{\"source\":\"match\"}')", &[], ) .await .expect("UPDATE lix_state should stage rewritten rows"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, vec![vec![Value::Integer(1)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() .expect("staged delta should expose pending overlay"); let rows = overlay.visible_semantic_rows(false, "lix_key_value"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].entity_id, "[\"entity-1\"]"); assert_eq!(rows[0].version_id, "version-a"); assert_eq!( rows[0].snapshot_content.as_deref(), 
Some("{\"key\":\"hello\",\"value\":\"updated\"}") ); assert_eq!( rows[0].metadata.as_deref(), Some("{\"schema_key\":\"lix_key_value\"}") ); } #[tokio::test] async fn execute_sql_delete_lix_state_without_where_stages_all_rows() { let blob_reader: Arc = Arc::new(DummyBlobReader); let live_state = Arc::new(RowsLiveStateReader { rows: vec![ live_lix_state_row("entity-1", Some("{\"source\":\"one\"}")), live_lix_state_row("entity-2", Some("{\"source\":\"two\"}")), ], }); let staged_writes = Arc::new(Mutex::new(CapturingStagedWrites::default())); let mut ctx = DummySqlWriteExecutionContext { active_version_id: "version-a", blob_reader, live_state, staged_writes: Arc::clone(&staged_writes), schema_definitions: vec![], }; let result = execute_write_sql(&mut ctx, "DELETE FROM lix_state", &[]) .await .expect("DELETE FROM lix_state should follow DataFusion delete-all semantics"); assert_eq!(result.columns, vec!["count"]); assert_eq!(result.rows, vec![vec![Value::Integer(2)]]); let staged_writes = staged_writes.lock().expect("staged writes lock"); assert_eq!(staged_writes.deltas.len(), 1); let overlay = staged_writes.deltas[0] .pending_write_overlay() .expect("staged delta should expose pending overlay"); let rows = overlay.visible_all_semantic_rows(); assert_eq!(rows.len(), 2); assert!(rows.iter().all(|row| row.tombstone)); assert!(rows.iter().all(|row| row.snapshot_content.is_none())); assert!(rows.iter().any(|row| row.entity_id == "[\"entity-1\"]")); assert!(rows.iter().any(|row| row.entity_id == "[\"entity-2\"]")); } struct BackendSqlExecutionContext<'a> { active_version_id: &'a str, storage: StorageContext, blob_reader: Arc, live_state: Arc, schema_definitions: Vec, } impl SqlExecutionContext for BackendSqlExecutionContext<'_> { fn active_version_id(&self) -> &str { self.active_version_id } fn live_state(&self) -> Arc { Arc::clone(&self.live_state) } fn functions(&self) -> FunctionProviderHandle { test_functions() } fn blob_reader(&self) -> Arc { Arc::clone(&self.blob_reader) } fn commit_store_query_source(&self) -> SqlCommitStoreQuerySource { let base_scope = test_read_scope(self.storage.clone()); let read_scope = StorageReadScope::new(base_scope.store()); CommitStoreQuerySource { commit_store_reader: Arc::new(CommitStoreContext::new().reader(read_scope.store())), json_reader: JsonStoreContext::new().reader(read_scope.store()), } } fn commit_graph(&self) -> Box { Box::new(DummyCommitGraphReader) } fn version_ref(&self) -> Arc { Arc::new( crate::version::VersionContext::new(Arc::new(UntrackedStateContext::new())) .ref_reader(self.storage.clone()), ) } fn list_visible_schemas(&self) -> Result, LixError> { Ok(self.schema_definitions.clone()) } } async fn setup_sql2_state_fixture( ) -> Result<(crate::backend::testing::UnitTestBackend, JsonValue), crate::LixError> { let backend = crate::backend::testing::UnitTestBackend::new(); let init_receipt = Engine::initialize(Box::new(backend.clone())).await?; let storage = crate::storage::StorageContext::new(std::sync::Arc::new(backend.clone())); { let mut transaction = storage.begin_write_transaction().await?; let version_ctx = crate::version::VersionContext::new(Arc::new( crate::untracked_state::UntrackedStateContext::new(), )); let mut writes = StorageWriteSet::new(); let canonical_rows = vec![ prepare_version_ref_row( "version-a", &init_receipt.initial_commit_id, "1970-01-01T00:00:00.000Z", )?, prepare_version_ref_row( "version-b", &init_receipt.initial_commit_id, "1970-01-01T00:00:00.000Z", )?, ]; let rows = canonical_rows .into_iter() .map(|prepared| 
prepared.row) .collect::>(); version_ctx.stage_canonical_ref_rows(&mut writes, &rows)?; writes.apply(&mut transaction.as_mut()).await?; transaction.commit().await?; } let engine = Engine::new(Box::new(backend.clone())).await?; let session_a = engine.open_session("version-a").await?; let session_b = engine.open_session("version-b").await?; let schema_definition = json!({ "x-lix-key": "test_state_schema", "type": "object", "properties": { "value": { "type": "string" } }, "required": ["value"], "additionalProperties": false }); session_a .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"test_state_schema\",\"type\":\"object\",\"properties\":{\"value\":{\"type\":\"string\"}},\"required\":[\"value\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await?; session_b .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"test_state_schema\",\"type\":\"object\",\"properties\":{\"value\":{\"type\":\"string\"}},\"required\":[\"value\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await?; session_a .execute( "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, global, untracked\ ) VALUES (\ lix_json('[\"entity-a\"]'), 'test_state_schema', NULL, '{\"value\":\"A\"}', false, false\ )", &[], ) .await?; session_b .execute( "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, global, untracked\ ) VALUES (\ lix_json('[\"entity-b\"]'), 'test_state_schema', NULL, '{\"value\":\"B\"}', false, false\ )", &[], ) .await?; session_a .execute( "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, global, untracked\ ) VALUES (\ lix_json('[\"dir-docs\"]'), 'lix_directory_descriptor', NULL, '{\"id\":\"dir-docs\",\"parent_id\":null,\"name\":\"docs\",\"hidden\":false}', false, false\ )", &[], ) .await?; session_a .execute( "INSERT INTO lix_file (id, path, data) \ VALUES ('file-a', '/docs/readme.md', X'4142')", &[], ) .await?; Ok((backend, schema_definition)) } fn test_live_state_context() -> LiveStateContext { LiveStateContext::new( TrackedStateContext::new(), UntrackedStateContext::new(), crate::commit_graph::CommitGraphContext::new(), ) } fn run_async_test_with_large_stack( test: impl FnOnce() -> futures_util::future::LocalBoxFuture<'static, ()> + Send + 'static, ) { std::thread::Builder::new() .name("sql2-execute-test".to_string()) .stack_size(32 * 1024 * 1024) .spawn(move || { tokio::runtime::Builder::new_current_thread() .enable_all() .build() .expect("test runtime should build") .block_on(test()); }) .expect("test thread should spawn") .join() .expect("test thread should join"); } #[test] fn execute_sql_reads_lix_state_by_version() { run_async_test_with_large_stack(|| { Box::pin(async move { let (backend, schema_definition) = setup_sql2_state_fixture() .await .expect("fixture should initialize"); let backend = Arc::new(backend); let backend_ref: Arc = backend; let storage = StorageContext::new(Arc::clone(&backend_ref)); let blob_reader: Arc = Arc::new(BackendBlobReader(storage.clone())); let ctx = BackendSqlExecutionContext { active_version_id: "version-a", storage: storage.clone(), blob_reader: Arc::clone(&blob_reader), live_state: Arc::new(test_live_state_context().reader(storage.clone())), schema_definitions: vec![schema_definition], }; let result = execute_sql( &ctx, "SELECT entity_id, version_id, snapshot_content, commit_id \ FROM lix_state_by_version \ WHERE 
version_id = 'version-b' AND schema_key = 'test_state_schema'", &[], ) .await .expect("sql2 execute should read lix_state_by_version"); assert_eq!( result.columns, vec!["entity_id", "version_id", "snapshot_content", "commit_id"] ); assert_eq!(result.rows.len(), 1); assert_eq!(result.rows[0][0], Value::Json(json!(["entity-b"]))); assert_eq!(result.rows[0][1], Value::Text("version-b".to_string())); assert_eq!(result.rows[0][2], Value::Json(json!({"value": "B"}))); match &result.rows[0][3] { Value::Text(commit_id) => assert!(!commit_id.is_empty()), other => panic!("expected non-null commit_id text, got {other:?}"), } }) }); } #[test] fn execute_sql_supports_broad_lix_state_by_version_reads() { run_async_test_with_large_stack(|| { Box::pin(async move { let (backend, schema_definition) = setup_sql2_state_fixture() .await .expect("fixture should initialize"); let backend = Arc::new(backend); let backend_ref: Arc = backend; let storage = StorageContext::new(Arc::clone(&backend_ref)); let blob_reader: Arc = Arc::new(BackendBlobReader(storage.clone())); let ctx = BackendSqlExecutionContext { active_version_id: "version-a", storage: storage.clone(), blob_reader: Arc::clone(&blob_reader), live_state: Arc::new(test_live_state_context().reader(storage.clone())), schema_definitions: vec![schema_definition], }; let result = execute_sql( &ctx, "SELECT entity_id FROM lix_state_by_version WHERE schema_key = 'test_state_schema'", &[], ) .await .expect("broad by-version read should succeed"); assert!( result.rows.iter().any(|row| row[0] == Value::Json(json!(["entity-a"]))) && result.rows.iter().any(|row| row[0] == Value::Json(json!(["entity-b"]))), "expected broad by-version read to include rows from multiple visible versions: {:?}", result.rows ); }) }); } #[test] fn execute_sql_reads_lix_state_from_active_version() { run_async_test_with_large_stack(|| { Box::pin(async move { let (backend, schema_definition) = setup_sql2_state_fixture() .await .expect("fixture should initialize"); let backend = Arc::new(backend); let backend_ref: Arc = backend; let storage = StorageContext::new(Arc::clone(&backend_ref)); let blob_reader: Arc = Arc::new(BackendBlobReader(storage.clone())); let ctx = BackendSqlExecutionContext { active_version_id: "version-a", storage: storage.clone(), blob_reader: Arc::clone(&blob_reader), live_state: Arc::new(test_live_state_context().reader(storage.clone())), schema_definitions: vec![schema_definition], }; let result = execute_sql( &ctx, "SELECT entity_id, snapshot_content \ FROM lix_state \ WHERE schema_key = 'test_state_schema'", &[], ) .await .expect("sql2 execute should read lix_state"); assert_eq!(result.columns, vec!["entity_id", "snapshot_content"]); assert_eq!(result.rows.len(), 1); assert_eq!(result.rows[0][0], Value::Json(json!(["entity-a"]))); assert_eq!(result.rows[0][1], Value::Json(json!({"value": "A"}))); }) }); } #[test] fn execute_sql_reads_entity_view_from_active_version() { run_async_test_with_large_stack(|| { Box::pin(async move { let (backend, schema_definition) = setup_sql2_state_fixture() .await .expect("fixture should initialize"); let backend = Arc::new(backend); let backend_ref: Arc = backend; let storage = StorageContext::new(Arc::clone(&backend_ref)); let blob_reader: Arc = Arc::new(BackendBlobReader(storage.clone())); let ctx = BackendSqlExecutionContext { active_version_id: "version-a", storage: storage.clone(), blob_reader: Arc::clone(&blob_reader), live_state: Arc::new(test_live_state_context().reader(storage.clone())), schema_definitions: 
vec![schema_definition], }; let result = execute_sql( &ctx, "SELECT value, lixcol_entity_id \ FROM test_state_schema", &[], ) .await .expect("sql2 execute should read entity view"); assert_eq!(result.columns, vec!["value", "lixcol_entity_id"]); assert_eq!(result.rows.len(), 1); assert_eq!(result.rows[0][0], Value::Text("A".to_string())); assert_eq!(result.rows[0][1], Value::Json(json!(["entity-a"]))); }) }); } #[test] fn execute_sql_reads_entity_by_version_view() { run_async_test_with_large_stack(|| { Box::pin(async move { let (backend, schema_definition) = setup_sql2_state_fixture() .await .expect("fixture should initialize"); let backend = Arc::new(backend); let backend_ref: Arc = backend; let storage = StorageContext::new(Arc::clone(&backend_ref)); let blob_reader: Arc = Arc::new(BackendBlobReader(storage.clone())); let ctx = BackendSqlExecutionContext { active_version_id: "version-a", storage: storage.clone(), blob_reader: Arc::clone(&blob_reader), live_state: Arc::new(test_live_state_context().reader(storage.clone())), schema_definitions: vec![schema_definition], }; let result = execute_sql( &ctx, "SELECT value, lixcol_version_id \ FROM test_state_schema_by_version \ WHERE lixcol_version_id = 'version-b'", &[], ) .await .expect("sql2 execute should read entity by-version view"); assert_eq!(result.columns, vec!["value", "lixcol_version_id"]); assert_eq!(result.rows.len(), 1); assert_eq!(result.rows[0][0], Value::Text("B".to_string())); assert_eq!(result.rows[0][1], Value::Text("version-b".to_string())); }) }); } #[test] fn execute_sql_reads_lix_directory_by_version_view() { run_async_test_with_large_stack(|| { Box::pin(async move { let (backend, schema_definition) = setup_sql2_state_fixture() .await .expect("fixture should initialize"); let backend = Arc::new(backend); let backend_ref: Arc = backend; let storage = StorageContext::new(Arc::clone(&backend_ref)); let blob_reader: Arc = Arc::new(BackendBlobReader(storage.clone())); let ctx = BackendSqlExecutionContext { active_version_id: "version-a", storage: storage.clone(), blob_reader: Arc::clone(&blob_reader), live_state: Arc::new(test_live_state_context().reader(storage.clone())), schema_definitions: vec![schema_definition], }; let result = execute_sql( &ctx, "SELECT path, name, lixcol_version_id \ FROM lix_directory_by_version \ WHERE id = 'dir-docs' AND lixcol_version_id = 'version-a'", &[], ) .await .expect("sql2 execute should read lix_directory_by_version"); assert_eq!(result.columns, vec!["path", "name", "lixcol_version_id"]); assert_eq!(result.rows.len(), 1); assert_eq!(result.rows[0][0], Value::Text("/docs/".to_string())); assert_eq!(result.rows[0][1], Value::Text("docs".to_string())); assert_eq!(result.rows[0][2], Value::Text("version-a".to_string())); }) }); } #[test] fn execute_sql_reads_lix_directory_from_active_version() { run_async_test_with_large_stack(|| { Box::pin(async move { let (backend, schema_definition) = setup_sql2_state_fixture() .await .expect("fixture should initialize"); let backend = Arc::new(backend); let backend_ref: Arc = backend; let storage = StorageContext::new(Arc::clone(&backend_ref)); let blob_reader: Arc = Arc::new(BackendBlobReader(storage.clone())); let ctx = BackendSqlExecutionContext { active_version_id: "version-a", storage: storage.clone(), blob_reader: Arc::clone(&blob_reader), live_state: Arc::new(test_live_state_context().reader(storage.clone())), schema_definitions: vec![schema_definition], }; let result = execute_sql( &ctx, "SELECT path, name \ FROM lix_directory \ WHERE id = 
'dir-docs'", &[], ) .await .expect("sql2 execute should read lix_directory"); assert_eq!(result.columns, vec!["path", "name"]); assert_eq!(result.rows.len(), 1); assert_eq!(result.rows[0][0], Value::Text("/docs/".to_string())); assert_eq!(result.rows[0][1], Value::Text("docs".to_string())); }) }); } #[test] fn execute_sql_reads_lix_file_by_version_view() { run_async_test_with_large_stack(|| { Box::pin(async move { let (backend, schema_definition) = setup_sql2_state_fixture() .await .expect("fixture should initialize"); let backend = Arc::new(backend); let backend_ref: Arc = backend; let storage = StorageContext::new(Arc::clone(&backend_ref)); let blob_reader: Arc = Arc::new(BackendBlobReader(storage.clone())); let ctx = BackendSqlExecutionContext { active_version_id: "version-a", storage: storage.clone(), blob_reader: Arc::clone(&blob_reader), live_state: Arc::new(test_live_state_context().reader(storage.clone())), schema_definitions: vec![schema_definition], }; let result = execute_sql( &ctx, "SELECT path, name, data, lixcol_version_id \ FROM lix_file_by_version \ WHERE id = 'file-a' AND lixcol_version_id = 'version-a'", &[], ) .await .expect("sql2 execute should read lix_file_by_version"); assert_eq!( result.columns, vec!["path", "name", "data", "lixcol_version_id"] ); assert_eq!(result.rows.len(), 1); assert_eq!( result.rows[0][0], Value::Text("/docs/readme.md".to_string()) ); assert_eq!(result.rows[0][1], Value::Text("readme.md".to_string())); assert_eq!(result.rows[0][2], Value::Blob(vec![0x41, 0x42])); assert_eq!(result.rows[0][3], Value::Text("version-a".to_string())); }) }); } #[test] fn execute_sql_reads_lix_file_from_active_version() { run_async_test_with_large_stack(|| { Box::pin(async move { let (backend, schema_definition) = setup_sql2_state_fixture() .await .expect("fixture should initialize"); let backend = Arc::new(backend); let backend_ref: Arc = backend; let storage = StorageContext::new(Arc::clone(&backend_ref)); let blob_reader: Arc = Arc::new(BackendBlobReader(storage.clone())); let ctx = BackendSqlExecutionContext { active_version_id: "version-a", storage: storage.clone(), blob_reader: Arc::clone(&blob_reader), live_state: Arc::new(test_live_state_context().reader(storage.clone())), schema_definitions: vec![schema_definition], }; let result = execute_sql( &ctx, "SELECT path, name, data \ FROM lix_file \ WHERE id = 'file-a'", &[], ) .await .expect("sql2 execute should read lix_file"); assert_eq!(result.columns, vec!["path", "name", "data"]); assert_eq!(result.rows.len(), 1); assert_eq!( result.rows[0][0], Value::Text("/docs/readme.md".to_string()) ); assert_eq!(result.rows[0][1], Value::Text("readme.md".to_string())); assert_eq!(result.rows[0][2], Value::Blob(vec![0x41, 0x42])); }) }); } } ================================================ FILE: packages/engine/src/sql2/file_history_provider.rs ================================================ use std::any::Any; use std::collections::{BTreeMap, BTreeSet}; use std::sync::Arc; use async_trait::async_trait; use datafusion::arrow::array::{ArrayRef, BinaryArray, BooleanArray, Int64Array, StringArray}; use datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef}; use datafusion::arrow::record_batch::{RecordBatch, RecordBatchOptions}; use datafusion::catalog::{Session, TableProvider}; use datafusion::common::{DataFusionError, Result}; use datafusion::datasource::TableType; use datafusion::execution::TaskContext; use datafusion::logical_expr::{Expr, TableProviderFilterPushDown}; use 
datafusion::physical_expr::EquivalenceProperties; use datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties}; use datafusion::physical_plan::stream::RecordBatchStreamAdapter; use datafusion::physical_plan::{ DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream, }; use futures_util::stream; use serde::Deserialize; use tokio::sync::Mutex; use crate::binary_cas::{BlobDataReader, BlobHash}; use crate::commit_graph::CommitGraphReader; use crate::serialize_row_metadata; use crate::LixError; use super::history_projection::{tombstone_identity_column_value, HistoryIdentityProjection}; use super::history_route::{ history_descriptor_event_matches, load_history_entries, parse_history_filter, HistoryColumnStyle, HistoryEntry, HistoryRoute, HistoryViewDescriptor, HISTORY_COL_CHANGE_ID, HISTORY_COL_COMMIT_CREATED_AT, HISTORY_COL_DEPTH, HISTORY_COL_ENTITY_ID, HISTORY_COL_FILE_ID, HISTORY_COL_METADATA, HISTORY_COL_OBSERVED_COMMIT_ID, HISTORY_COL_SCHEMA_KEY, HISTORY_COL_SNAPSHOT_CONTENT, HISTORY_COL_START_COMMIT_ID, }; use super::result_metadata::json_field; use super::SqlCommitStoreQuerySource; use crate::commit_store::MaterializedChange; const FILE_DESCRIPTOR_SCHEMA_KEY: &str = "lix_file_descriptor"; const DIRECTORY_DESCRIPTOR_SCHEMA_KEY: &str = "lix_directory_descriptor"; const BLOB_REF_SCHEMA_KEY: &str = "lix_binary_blob_ref"; pub(crate) async fn register_lix_file_history_provider( session: &datafusion::prelude::SessionContext, commit_graph: Box, query_source: SqlCommitStoreQuerySource, blob_reader: Arc, ) -> Result<(), LixError> { session .register_table( "lix_file_history", Arc::new(LixFileHistoryProvider::new( Arc::new(Mutex::new(commit_graph)), query_source, blob_reader, )), ) .map_err(datafusion_error_to_lix_error)?; Ok(()) } struct LixFileHistoryProvider { schema: SchemaRef, commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, blob_reader: Arc, } impl std::fmt::Debug for LixFileHistoryProvider { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixFileHistoryProvider").finish() } } impl LixFileHistoryProvider { fn new( commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, blob_reader: Arc, ) -> Self { Self { schema: lix_file_history_schema(), commit_graph, query_source, blob_reader, } } } #[async_trait] impl TableProvider for LixFileHistoryProvider { fn as_any(&self) -> &dyn Any { self } fn schema(&self) -> SchemaRef { Arc::clone(&self.schema) } fn table_type(&self) -> TableType { TableType::View } fn supports_filters_pushdown( &self, filters: &[&Expr], ) -> Result> { Ok(filters .iter() .map(|filter| { if parse_history_filter(filter, HistoryColumnStyle::Prefixed).is_some() { TableProviderFilterPushDown::Exact } else { TableProviderFilterPushDown::Unsupported } }) .collect()) } async fn scan( &self, _state: &dyn Session, projection: Option<&Vec>, filters: &[Expr], limit: Option, ) -> Result> { let schema = projected_schema(&self.schema, projection)?; let needs_data = projection.is_none_or(|projection| { projection.iter().any(|index| { self.schema .field(*index) .name() .as_str() .eq_ignore_ascii_case("data") }) }); Ok(Arc::new(LixFileHistoryScanExec::new( Arc::clone(&self.commit_graph), self.query_source.clone(), Arc::clone(&self.blob_reader), schema, needs_data, HistoryRoute::from_filters(filters, HistoryColumnStyle::Prefixed), limit, ))) } } struct LixFileHistoryScanExec { commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, blob_reader: Arc, schema: SchemaRef, needs_data: 
bool, route: HistoryRoute, limit: Option, properties: Arc, } impl std::fmt::Debug for LixFileHistoryScanExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixFileHistoryScanExec") .field("route", &self.route) .field("limit", &self.limit) .finish() } } impl LixFileHistoryScanExec { fn new( commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, blob_reader: Arc, schema: SchemaRef, needs_data: bool, route: HistoryRoute, limit: Option, ) -> Self { let properties = PlanProperties::new( EquivalenceProperties::new(Arc::clone(&schema)), Partitioning::UnknownPartitioning(1), EmissionType::Incremental, Boundedness::Bounded, ); Self { commit_graph, query_source, blob_reader, schema, needs_data, route, limit, properties: Arc::new(properties), } } } impl DisplayAs for LixFileHistoryScanExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => write!( f, "LixFileHistoryScanExec(route={:?}, limit={:?})", self.route, self.limit ), DisplayFormatType::TreeRender => write!(f, "LixFileHistoryScanExec"), } } } impl ExecutionPlan for LixFileHistoryScanExec { fn name(&self) -> &str { "LixFileHistoryScanExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "LixFileHistoryScanExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "LixFileHistoryScanExec only exposes one partition, got {partition}" ))); } let commit_graph = Arc::clone(&self.commit_graph); let query_source = self.query_source.clone(); let blob_reader = Arc::clone(&self.blob_reader); let schema = Arc::clone(&self.schema); let stream_schema = Arc::clone(&schema); let route = self.route.clone(); let limit = self.limit; let needs_data = self.needs_data; let fut = async move { let mut rows = load_file_history_rows( commit_graph, query_source, &blob_reader, &route, needs_data, ) .await .map_err(lix_error_to_datafusion_error)?; if let Some(limit) = limit { rows.truncate(limit); } file_history_record_batch(&stream_schema, &rows).map_err(lix_error_to_datafusion_error) }; Ok(Box::pin(RecordBatchStreamAdapter::new( schema, stream::once(fut), ))) } } #[derive(Debug, Clone)] struct FileHistoryDescriptorRecord { id: String, directory_id: Option, name: Option, hidden: Option, entry: HistoryEntry, } #[derive(Debug, Clone)] struct FileHistoryDirectoryRecord { id: String, parent_id: Option, name: String, entry: HistoryEntry, } #[derive(Debug, Clone)] struct FileHistoryBlobRecord { file_id: String, blob_hash: Option, entry: HistoryEntry, } #[derive(Debug, Clone)] struct FileHistoryEvent { file_id: String, start_commit_id: String, depth: u32, priority: u8, change: MaterializedChange, observed_commit_id: String, commit_created_at: String, } #[derive(Debug, Clone)] struct FileHistoryOutputRow { entity_id: String, id: String, path: Option, directory_id: Option, name: Option, hidden: Option, data: Option>, descriptor_change: MaterializedChange, event: FileHistoryEvent, } #[derive(Debug, Deserialize)] struct FileDescriptorSnapshot { id: String, directory_id: Option, name: String, hidden: bool, } #[derive(Debug, Deserialize)] struct DirectoryDescriptorSnapshot { id: 
String, parent_id: Option, name: String, } #[derive(Debug, Deserialize)] struct BlobRefSnapshot { id: String, blob_hash: String, } async fn load_file_history_rows( commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, blob_reader: &Arc, route: &HistoryRoute, needs_data: bool, ) -> Result, LixError> { let event_route = route.traversal_only(); let event_entries = load_history_entries( HistoryViewDescriptor { view_name: "lix_file_history", start_commit_column: HISTORY_COL_START_COMMIT_ID, }, Arc::clone(&commit_graph), query_source.json_reader.clone(), &event_route, vec![ FILE_DESCRIPTOR_SCHEMA_KEY.to_string(), DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string(), BLOB_REF_SCHEMA_KEY.to_string(), ], ) .await?; let context_route = route.starts_only(); let context_entries = load_history_entries( HistoryViewDescriptor { view_name: "lix_file_history", start_commit_column: HISTORY_COL_START_COMMIT_ID, }, commit_graph, query_source.json_reader, &context_route, vec![ FILE_DESCRIPTOR_SCHEMA_KEY.to_string(), DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string(), BLOB_REF_SCHEMA_KEY.to_string(), ], ) .await?; let event_descriptors = parse_file_history_descriptors(&event_entries)?; let event_directories = parse_file_history_directories(&event_entries)?; let event_blobs = parse_file_history_blobs(&event_entries)?; let descriptors = parse_file_history_descriptors(&context_entries)?; let directories = parse_file_history_directories(&context_entries)?; let blobs = parse_file_history_blobs(&context_entries)?; let events = file_history_events( &event_descriptors, &event_directories, &event_blobs, &descriptors, ); let mut output = Vec::new(); for event in events { let Some(descriptor) = nearest_file_descriptor(&descriptors, &event) else { continue; }; let blob = nearest_blob_ref(&blobs, &event); let data = if needs_data { match blob.and_then(|blob| blob.blob_hash.as_deref()) { Some(blob_hash) => load_single_blob_bytes(blob_reader, blob_hash).await?, None => None, } } else { None }; let path = resolve_file_history_path(descriptor, &directories, event.depth); let id = tombstone_identity_column_value( "id", &descriptor.id, HistoryIdentityProjection::SingleColumn { column: "id" }, )? .and_then(|value| value.as_str().map(ToOwned::to_owned)) .unwrap_or_else(|| descriptor.id.clone()); output.push(FileHistoryOutputRow { entity_id: descriptor.id.clone(), id, path, directory_id: descriptor.directory_id.clone(), name: descriptor.name.clone(), hidden: descriptor.hidden, data, descriptor_change: descriptor.entry.change.clone(), event, }); } output.retain(|row| { let entity_id = entity_id_json_array(&row.entity_id).ok(); route.matches_surface_row( FILE_DESCRIPTOR_SCHEMA_KEY, entity_id.as_deref().unwrap_or(&row.entity_id), Some(&row.entity_id), row.event.depth, ) }); output.sort_by(|left, right| { left.entity_id .cmp(&right.entity_id) .then(left.event.start_commit_id.cmp(&right.event.start_commit_id)) .then(left.event.depth.cmp(&right.event.depth)) .then( left.event .observed_commit_id .cmp(&right.event.observed_commit_id), ) .then(left.event.change.id.cmp(&right.event.change.id)) }); Ok(output) } async fn load_single_blob_bytes( blob_reader: &Arc, blob_hash: &str, ) -> Result>, LixError> { let hash = BlobHash::from_hex(blob_hash)?; Ok(blob_reader .load_bytes_many(&[hash]) .await? 
.into_vec() .into_iter() .next() .flatten()) } fn file_history_events( event_descriptors: &[FileHistoryDescriptorRecord], event_directories: &[FileHistoryDirectoryRecord], event_blobs: &[FileHistoryBlobRecord], context_descriptors: &[FileHistoryDescriptorRecord], ) -> Vec { let mut descriptor_ids_by_start = BTreeSet::<(String, String)>::new(); let mut directory_ids_by_file_start = BTreeMap::<(String, String), BTreeSet>::new(); for descriptor in context_descriptors { let key = ( descriptor.id.clone(), descriptor.entry.start_commit_id.clone(), ); descriptor_ids_by_start.insert(key.clone()); if let Some(directory_id) = &descriptor.directory_id { directory_ids_by_file_start .entry(key) .or_default() .insert(directory_id.clone()); } } let mut candidates = Vec::new(); for descriptor in event_descriptors { candidates.push(file_history_event_from_entry( descriptor.id.clone(), &descriptor.entry, 1, )); } for directory in event_directories { for ((file_id, start_commit_id), directory_ids) in &directory_ids_by_file_start { if start_commit_id == &directory.entry.start_commit_id && directory_ids.contains(&directory.id) { candidates.push(file_history_event_from_entry( file_id.clone(), &directory.entry, 2, )); } } } for blob in event_blobs { if descriptor_ids_by_start .contains(&(blob.file_id.clone(), blob.entry.start_commit_id.clone())) { candidates.push(file_history_event_from_entry( blob.file_id.clone(), &blob.entry, 3, )); } } candidates.sort_by(|left, right| { left.file_id .cmp(&right.file_id) .then(left.start_commit_id.cmp(&right.start_commit_id)) .then(left.depth.cmp(&right.depth)) .then(left.priority.cmp(&right.priority)) .then(left.change.id.cmp(&right.change.id)) }); candidates.dedup_by(|left, right| { left.file_id == right.file_id && left.start_commit_id == right.start_commit_id && left.depth == right.depth }); candidates } fn file_history_event_from_entry( file_id: String, entry: &HistoryEntry, priority: u8, ) -> FileHistoryEvent { FileHistoryEvent { file_id, start_commit_id: entry.start_commit_id.clone(), depth: entry.depth, priority, change: entry.change.clone(), observed_commit_id: entry.observed_commit_id.clone(), commit_created_at: entry.commit_created_at.clone(), } } fn parse_file_history_descriptors( entries: &[HistoryEntry], ) -> Result, LixError> { entries .iter() .filter(|entry| entry.change.schema_key == FILE_DESCRIPTOR_SCHEMA_KEY) .map(|entry| { let Some(snapshot_content) = entry.change.snapshot_content.as_deref() else { return Ok(FileHistoryDescriptorRecord { id: entry.change.entity_id.as_single_string_owned()?, directory_id: None, name: None, hidden: None, entry: entry.clone(), }); }; let snapshot: FileDescriptorSnapshot = serde_json::from_str(snapshot_content).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("invalid lix_file_descriptor history snapshot JSON: {error}"), ) })?; Ok(FileHistoryDescriptorRecord { id: snapshot.id, directory_id: snapshot.directory_id, name: Some(snapshot.name), hidden: Some(snapshot.hidden), entry: entry.clone(), }) }) .collect() } fn parse_file_history_directories( entries: &[HistoryEntry], ) -> Result, LixError> { entries .iter() .filter(|entry| entry.change.schema_key == DIRECTORY_DESCRIPTOR_SCHEMA_KEY) .filter_map(|entry| { let snapshot_content = entry.change.snapshot_content.clone()?; Some((entry, snapshot_content)) }) .map(|(entry, snapshot_content)| { let snapshot: DirectoryDescriptorSnapshot = serde_json::from_str(&snapshot_content) .map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("invalid lix_directory_descriptor 
history snapshot JSON: {error}"), ) })?; Ok(FileHistoryDirectoryRecord { id: snapshot.id, parent_id: snapshot.parent_id, name: snapshot.name, entry: entry.clone(), }) }) .collect() } fn parse_file_history_blobs( entries: &[HistoryEntry], ) -> Result, LixError> { entries .iter() .filter(|entry| entry.change.schema_key == BLOB_REF_SCHEMA_KEY) .map(|entry| { let Some(snapshot_content) = entry.change.snapshot_content.as_deref() else { return Ok(FileHistoryBlobRecord { file_id: entry.change.file_id.clone().unwrap_or_else(|| { entry .change .entity_id .as_single_string_owned() .expect("canonical change entity identity should project") }), blob_hash: None, entry: entry.clone(), }); }; let snapshot: BlobRefSnapshot = serde_json::from_str(snapshot_content).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("invalid lix_binary_blob_ref history snapshot JSON: {error}"), ) })?; Ok(FileHistoryBlobRecord { file_id: entry.change.file_id.clone().unwrap_or(snapshot.id), blob_hash: Some(snapshot.blob_hash), entry: entry.clone(), }) }) .collect() } fn nearest_file_descriptor<'a>( descriptors: &'a [FileHistoryDescriptorRecord], event: &FileHistoryEvent, ) -> Option<&'a FileHistoryDescriptorRecord> { descriptors .iter() .filter(|descriptor| { let exact_descriptor_event = history_descriptor_event_matches(&descriptor.entry, event.depth, &event.change.id); (exact_descriptor_event || descriptor.name.is_some()) && descriptor.id == event.file_id && descriptor.entry.start_commit_id == event.start_commit_id && descriptor.entry.depth >= event.depth }) .min_by(|left, right| { left.entry .depth .cmp(&right.entry.depth) .then(left.entry.change.id.cmp(&right.entry.change.id)) }) } fn nearest_blob_ref<'a>( blobs: &'a [FileHistoryBlobRecord], event: &FileHistoryEvent, ) -> Option<&'a FileHistoryBlobRecord> { blobs .iter() .filter(|blob| { blob.file_id == event.file_id && blob.entry.start_commit_id == event.start_commit_id && blob.entry.depth >= event.depth }) .min_by(|left, right| { left.entry .depth .cmp(&right.entry.depth) .then(left.entry.change.id.cmp(&right.entry.change.id)) }) } fn resolve_file_history_path( descriptor: &FileHistoryDescriptorRecord, directories: &[FileHistoryDirectoryRecord], target_depth: u32, ) -> Option { let name = descriptor.name.as_ref()?; let Some(directory_id) = descriptor.directory_id.as_deref() else { return Some(format!("/{name}")); }; let directory_path = resolve_directory_history_path( directory_id, &descriptor.entry.start_commit_id, target_depth, directories, &mut BTreeMap::new(), &mut BTreeSet::new(), )?; Some(format!("{directory_path}{name}")) } fn resolve_directory_history_path( directory_id: &str, start_commit_id: &str, target_depth: u32, directories: &[FileHistoryDirectoryRecord], cache: &mut BTreeMap>, visiting: &mut BTreeSet, ) -> Option { if let Some(path) = cache.get(directory_id) { return path.clone(); } if !visiting.insert(directory_id.to_string()) { cache.insert(directory_id.to_string(), None); return None; } let directory = directories .iter() .filter(|directory| { directory.id == directory_id && directory.entry.start_commit_id == start_commit_id && directory.entry.depth >= target_depth }) .min_by(|left, right| { left.entry .depth .cmp(&right.entry.depth) .then(left.entry.change.id.cmp(&right.entry.change.id)) })?; let path = match directory.parent_id.as_deref() { Some(parent_id) => { let parent_path = resolve_directory_history_path( parent_id, start_commit_id, target_depth, directories, cache, visiting, )?; format!("{parent_path}{}/", directory.name) } None => 
format!("/{}/", directory.name), }; visiting.remove(directory_id); cache.insert(directory_id.to_string(), Some(path.clone())); Some(path) } fn file_history_record_batch( schema: &SchemaRef, rows: &[FileHistoryOutputRow], ) -> Result { let columns = schema .fields() .iter() .map(|field| file_history_column_array(field.name(), rows)) .collect::, _>>()?; let options = RecordBatchOptions::new().with_row_count(Some(rows.len())); RecordBatch::try_new_with_options(Arc::clone(schema), columns, &options).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("sql2 failed to build lix_file_history record batch: {error}"), ) }) } fn file_history_column_array( column_name: &str, rows: &[FileHistoryOutputRow], ) -> Result { Ok(match column_name { "id" => string_array(rows.iter().map(|row| Some(row.id.as_str()))), "path" => string_array(rows.iter().map(|row| row.path.as_deref())), "directory_id" => string_array(rows.iter().map(|row| row.directory_id.as_deref())), "name" => string_array(rows.iter().map(|row| row.name.as_deref())), "hidden" => Arc::new(BooleanArray::from( rows.iter().map(|row| row.hidden).collect::>(), )) as ArrayRef, "data" => Arc::new(BinaryArray::from( rows.iter() .map(|row| row.data.as_deref()) .collect::>(), )) as ArrayRef, HISTORY_COL_ENTITY_ID => Arc::new(StringArray::from( rows.iter() .map(|row| entity_id_json_array(&row.entity_id).map(Some)) .collect::, _>>()?, )) as ArrayRef, HISTORY_COL_SCHEMA_KEY => { string_array(rows.iter().map(|_| Some(FILE_DESCRIPTOR_SCHEMA_KEY))) } HISTORY_COL_FILE_ID => string_array(rows.iter().map(|row| Some(row.entity_id.as_str()))), HISTORY_COL_CHANGE_ID => { string_array(rows.iter().map(|row| Some(row.event.change.id.as_str()))) } HISTORY_COL_SNAPSHOT_CONTENT => string_array( rows.iter() .map(|row| row.descriptor_change.snapshot_content.as_deref()), ), HISTORY_COL_METADATA => Arc::new(StringArray::from( rows.iter() .map(|row| { row.descriptor_change .metadata .as_ref() .map(serialize_row_metadata) }) .collect::>(), )), HISTORY_COL_OBSERVED_COMMIT_ID => string_array( rows.iter() .map(|row| Some(row.event.observed_commit_id.as_str())), ), HISTORY_COL_COMMIT_CREATED_AT => string_array( rows.iter() .map(|row| Some(row.event.commit_created_at.as_str())), ), HISTORY_COL_START_COMMIT_ID => string_array( rows.iter() .map(|row| Some(row.event.start_commit_id.as_str())), ), HISTORY_COL_DEPTH => Arc::new(Int64Array::from( rows.iter() .map(|row| i64::from(row.event.depth)) .collect::>(), )) as ArrayRef, other => { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "sql2 lix_file_history provider does not support projected column '{other}'" ), )) } }) } fn lix_file_history_schema() -> SchemaRef { Arc::new(Schema::new(vec![ Field::new("id", DataType::Utf8, false), Field::new("path", DataType::Utf8, true), Field::new("directory_id", DataType::Utf8, true), Field::new("name", DataType::Utf8, true), Field::new("hidden", DataType::Boolean, true), Field::new("data", DataType::Binary, true), json_field(HISTORY_COL_ENTITY_ID, false), Field::new(HISTORY_COL_SCHEMA_KEY, DataType::Utf8, false), Field::new(HISTORY_COL_FILE_ID, DataType::Utf8, true), json_field(HISTORY_COL_SNAPSHOT_CONTENT, true), Field::new(HISTORY_COL_CHANGE_ID, DataType::Utf8, false), json_field(HISTORY_COL_METADATA, true), Field::new(HISTORY_COL_OBSERVED_COMMIT_ID, DataType::Utf8, false), Field::new(HISTORY_COL_COMMIT_CREATED_AT, DataType::Utf8, false), Field::new(HISTORY_COL_START_COMMIT_ID, DataType::Utf8, false), Field::new(HISTORY_COL_DEPTH, DataType::Int64, false), ])) } fn 
projected_schema(
    base_schema: &SchemaRef,
    projection: Option<&Vec<usize>>,
) -> Result<SchemaRef> {
    let Some(projection) = projection else {
        return Ok(Arc::clone(base_schema));
    };
    Ok(Arc::new(base_schema.project(projection)?))
}

fn string_array<'a>(values: impl Iterator<Item = Option<&'a str>>) -> ArrayRef {
    Arc::new(StringArray::from(values.collect::<Vec<_>>())) as ArrayRef
}

fn datafusion_error_to_lix_error(error: DataFusionError) -> LixError {
    super::error::datafusion_error_to_lix_error(error)
}

fn entity_id_json_array(entity_id: &str) -> Result<String, LixError> {
    serde_json::to_string(&[entity_id]).map_err(|error| {
        LixError::unknown(format!(
            "failed to encode history entity id as JSON: {error}"
        ))
    })
}

fn lix_error_to_datafusion_error(error: LixError) -> DataFusionError {
    super::error::lix_error_to_datafusion_error(error)
}


================================================
FILE: packages/engine/src/sql2/file_provider.rs
================================================
use std::any::Any;
use std::collections::{BTreeMap, BTreeSet};
use std::sync::Arc;

use async_trait::async_trait;
use datafusion::arrow::array::{
    ArrayRef, BinaryArray, BooleanArray, RecordBatchOptions, StringArray, UInt64Array,
};
use datafusion::arrow::compute::{and, filter_record_batch};
use datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef};
use datafusion::arrow::record_batch::RecordBatch;
use datafusion::catalog::{Session, TableProvider};
use datafusion::common::{not_impl_err, DFSchema, DataFusionError, Result, ScalarValue};
use datafusion::datasource::TableType;
use datafusion::execution::TaskContext;
use datafusion::logical_expr::dml::InsertOp;
use datafusion::logical_expr::expr::InList;
use datafusion::logical_expr::{BinaryExpr, Expr, Operator, TableProviderFilterPushDown};
use datafusion::physical_expr::{create_physical_expr, EquivalenceProperties, PhysicalExpr};
use datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties};
use datafusion::physical_plan::stream::RecordBatchStreamAdapter;
use datafusion::physical_plan::{
    DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream,
};
use datafusion::prelude::SessionContext;
use futures_util::{stream, TryStreamExt};
use serde::Deserialize;

use crate::binary_cas::{BlobDataReader, BlobHash};
use crate::entity_identity::EntityIdentity;
use crate::functions::FunctionProviderHandle;
use crate::live_state::MaterializedLiveStateRow;
use crate::live_state::{
    LiveStateFilter, LiveStateProjection, LiveStateReader, LiveStateScanRequest,
};
use crate::sql2::dml::{InsertExec, InsertSink};
use crate::sql2::filesystem_predicates::{
    canonicalize_filesystem_path_filters, FilesystemPathKind,
};
use crate::sql2::predicate_typecheck::validate_json_predicate_filters;
use crate::sql2::version_scope::{
    explicit_version_ids_from_dml_filters, resolve_provider_version_ids,
    resolve_write_version_scope, VersionBinding,
};
use crate::sql2::write_normalization::{
    is_binary_type, lix_file_data_type_error, lix_file_data_type_error_with_value,
    logical_expr_is_binary_or_null, reject_non_binary_casts_for_insert_column,
    scalar_is_binary_or_null, InsertCell, InsertColumnIntents, SqlCell,
    UpdateAssignmentValues, UpdateCell,
};
use crate::transaction::types::{TransactionJson, TransactionWriteRow};
use crate::version::VersionRefReader;
use crate::{parse_row_metadata_value, serialize_row_metadata, LixError};

const FILE_DESCRIPTOR_SCHEMA_KEY: &str = "lix_file_descriptor";
const BLOB_REF_SCHEMA_KEY: &str = "lix_binary_blob_ref";
const DIRECTORY_DESCRIPTOR_SCHEMA_KEY: &str = "lix_directory_descriptor";
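// Illustrative SQL shapes served by these providers, drawn from the sql2 execute
// tests in this crate (a sketch, not an exhaustive contract):
//
//   INSERT INTO lix_file (id, directory_id, name, hidden)
//     VALUES ('file-readme', 'dir-docs', 'readme.md', false);
//   UPDATE lix_file SET data = X'4142' WHERE id = 'file-readme';
//   DELETE FROM lix_file_by_version
//     WHERE id = 'file-guide' AND lixcol_version_id = 'version-b';
//
// lix_file resolves against the active version, while DELETE FROM
// lix_file_by_version requires an explicit lixcol_version_id predicate.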
use super::filesystem_planner::{ blob_ref_row, directory_path_resolvers_from_state_rows, file_descriptor_row, file_descriptor_write_row, filesystem_storage_scope_key, plan_file_delete, plan_file_path_update, BlobRefRowInput, DirectoryPathResolver, FileDeleteInput, FileDescriptorRowInput, FileDescriptorWriteIntent, FilePathWriteInput, FilesystemDeletePlan, FilesystemRowContext, }; use super::result_metadata::json_field; use crate::sql2::{ SqlWriteContext, WriteAccess, WriteContextLiveStateReader, WriteContextVersionRefReader, }; use crate::transaction::types::{ LogicalPrimaryKey, TransactionFileData, TransactionWrite, TransactionWriteMode, TransactionWriteOperation, TransactionWriteOrigin, }; pub(crate) async fn register_lix_file_providers( session: &SessionContext, active_version_id: &str, live_state: Arc, version_ref: Arc, blob_reader: Arc, functions: FunctionProviderHandle, ) -> Result<(), LixError> { session .register_table( "lix_file_by_version", Arc::new(LixFileProvider::by_version( Arc::clone(&live_state), Arc::clone(&version_ref), Arc::clone(&blob_reader), functions.clone(), )), ) .map_err(datafusion_error_to_lix_error)?; session .register_table( "lix_file", Arc::new(LixFileProvider::active_version( active_version_id, live_state, version_ref, Arc::clone(&blob_reader), functions, )), ) .map_err(datafusion_error_to_lix_error)?; Ok(()) } pub(crate) async fn register_lix_file_write_providers( session: &SessionContext, write_ctx: SqlWriteContext, ) -> Result<(), LixError> { session .register_table( "lix_file_by_version", Arc::new(LixFileProvider::by_version_with_write(write_ctx.clone())), ) .map_err(datafusion_error_to_lix_error)?; session .register_table( "lix_file", Arc::new(LixFileProvider::active_version_with_write(write_ctx)), ) .map_err(datafusion_error_to_lix_error)?; Ok(()) } pub(crate) struct LixFileProvider { schema: SchemaRef, live_state: Arc, version_ref: Arc, blob_reader: Arc, write_access: WriteAccess, functions: FunctionProviderHandle, version_binding: VersionBinding, } impl std::fmt::Debug for LixFileProvider { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixFileProvider").finish() } } impl LixFileProvider { pub(crate) fn active_version( active_version_id: impl Into, live_state: Arc, version_ref: Arc, blob_reader: Arc, functions: FunctionProviderHandle, ) -> Self { Self { schema: lix_file_schema(), live_state, version_ref, blob_reader, write_access: WriteAccess::read_only(), functions, version_binding: VersionBinding::active(active_version_id), } } pub(crate) fn active_version_with_write(write_ctx: SqlWriteContext) -> Self { let active_version_id = write_ctx.active_version_id(); let functions = write_ctx.functions(); let live_state = Arc::new(WriteContextLiveStateReader::new(write_ctx.clone())); let version_ref = Arc::new(WriteContextVersionRefReader::new(write_ctx.clone())); let blob_reader = write_ctx.blob_reader(); Self { schema: lix_file_schema(), live_state, version_ref, blob_reader, write_access: WriteAccess::write(write_ctx), functions, version_binding: VersionBinding::active(active_version_id), } } pub(crate) fn by_version( live_state: Arc, version_ref: Arc, blob_reader: Arc, functions: FunctionProviderHandle, ) -> Self { Self { schema: lix_file_by_version_schema(), live_state, version_ref, blob_reader, write_access: WriteAccess::read_only(), functions, version_binding: VersionBinding::explicit(), } } pub(crate) fn by_version_with_write(write_ctx: SqlWriteContext) -> Self { let functions = write_ctx.functions(); let 
live_state = Arc::new(WriteContextLiveStateReader::new(write_ctx.clone())); let version_ref = Arc::new(WriteContextVersionRefReader::new(write_ctx.clone())); let blob_reader = write_ctx.blob_reader(); Self { schema: lix_file_by_version_schema(), live_state, version_ref, blob_reader, write_access: WriteAccess::write(write_ctx), functions, version_binding: VersionBinding::explicit(), } } } #[async_trait] impl TableProvider for LixFileProvider { fn as_any(&self) -> &dyn Any { self } fn schema(&self) -> SchemaRef { Arc::clone(&self.schema) } fn table_type(&self) -> TableType { TableType::Base } fn supports_filters_pushdown( &self, filters: &[&Expr], ) -> Result> { let analyzer = LixFileIdFilterAnalyzer; Ok(filters .iter() .map(|filter| { if ExactStringColumnFilterAnalyzer::new("lixcol_version_id").supports(filter) || analyzer.supports(filter) || contains_column(filter, "path") { TableProviderFilterPushDown::Exact } else { TableProviderFilterPushDown::Unsupported } }) .collect()) } async fn scan( &self, _state: &dyn Session, projection: Option<&Vec>, filters: &[Expr], limit: Option, ) -> Result> { let projected_schema = projected_schema(&self.schema, projection)?; let scan_limit = if filters.is_empty() { limit } else { None }; let mut request = lix_file_scan_request( self.version_binding.active_version_id(), Some(projected_schema.as_ref()), scan_limit, ); if self.write_access.is_write() && matches!(self.version_binding, VersionBinding::Explicit) { request.filter.version_ids = explicit_version_ids_from_dml_filters(filters); if request.filter.version_ids.is_empty() { return Err(DataFusionError::Plan( "DELETE FROM lix_file_by_version requires an explicit lixcol_version_id predicate" .to_string(), )); } } request.filter.version_ids = resolve_provider_version_ids( self.version_ref.as_ref(), &self.version_binding, request.filter.version_ids, ) .await .map_err(lix_error_to_datafusion_error)?; let filters = canonicalize_filesystem_path_filters(filters, FilesystemPathKind::File)?; let target_file_ids = file_id_constraint_from_filters(&filters)?; let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?; validate_json_predicate_filters(self.schema.as_ref(), &filters)?; let physical_filters = filters .iter() .map(|expr| create_physical_expr(expr, &df_schema, _state.execution_props())) .collect::>>()?; Ok(Arc::new(LixFileScanExec::new( Arc::clone(&self.live_state), Arc::clone(&self.blob_reader), Arc::clone(&self.schema), projected_schema, projection.cloned(), request, target_file_ids, physical_filters, limit, ))) } async fn insert_into( &self, _state: &dyn Session, input: Arc, insert_op: InsertOp, ) -> Result> { if insert_op != InsertOp::Append { return not_impl_err!("{insert_op} not implemented for lix_file yet"); } let write_ctx = self.write_access.require_write("INSERT into lix_file")?; let insert_column_intents = InsertColumnIntents::from_input(&input); let include_data_writes = insert_column_intents.includes_column("data"); if include_data_writes { reject_non_binary_casts_for_insert_column(&input, "data", "INSERT into lix_file")?; } let sink = LixFileInsertSink::new( input.schema(), write_ctx.clone(), self.functions.clone(), self.version_binding.clone(), include_data_writes, ); Ok(Arc::new(InsertExec::new(input, Arc::new(sink)))) } async fn delete_from( &self, state: &dyn Session, filters: Vec, ) -> Result> { let write_ctx = self.write_access.require_write("DELETE FROM lix_file")?; let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?; let filters = 
canonicalize_filesystem_path_filters(&filters, FilesystemPathKind::File)?; validate_json_predicate_filters(self.schema.as_ref(), &filters)?; let physical_filters = filters .iter() .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props())) .collect::>>()?; let target_file_ids = file_id_constraint_from_filters(&filters)?; let mut request = lix_file_scan_request(self.version_binding.active_version_id(), None, None); if matches!(self.version_binding, VersionBinding::Explicit) { request.filter.version_ids = explicit_version_ids_from_dml_filters(&filters); if request.filter.version_ids.is_empty() { return Err(DataFusionError::Plan( "DELETE FROM lix_file_by_version requires an explicit lixcol_version_id predicate" .to_string(), )); } } Ok(Arc::new(LixFileDeleteExec::new( Arc::clone(&self.blob_reader), write_ctx.clone(), Arc::clone(&self.schema), self.version_binding.clone(), request, target_file_ids, physical_filters, ))) } async fn update( &self, state: &dyn Session, assignments: Vec<(String, Expr)>, filters: Vec, ) -> Result> { let write_ctx = self.write_access.require_write("UPDATE lix_file")?; validate_lix_file_update_assignments(&self.schema, &assignments)?; let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?; let physical_assignments = assignments .iter() .map(|(column_name, expr)| { Ok(( column_name.clone(), create_physical_expr(expr, &df_schema, state.execution_props())?, )) }) .collect::>>()?; let filters = canonicalize_filesystem_path_filters(&filters, FilesystemPathKind::File)?; let target_file_ids = file_id_constraint_from_filters(&filters)?; validate_json_predicate_filters(self.schema.as_ref(), &filters)?; let physical_filters = filters .iter() .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props())) .collect::>>()?; let request = lix_file_scan_request(self.version_binding.active_version_id(), None, None); Ok(Arc::new(LixFileUpdateExec::new( Arc::clone(&self.blob_reader), write_ctx.clone(), Arc::clone(&self.schema), self.version_binding.clone(), self.functions.clone(), request, target_file_ids, physical_assignments, physical_filters, ))) } } #[allow(dead_code)] struct LixFileInsertSink { write_ctx: SqlWriteContext, functions: FunctionProviderHandle, version_binding: VersionBinding, surface_name: &'static str, include_data_writes: bool, } impl std::fmt::Debug for LixFileInsertSink { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixFileInsertSink").finish() } } impl LixFileInsertSink { fn new( _schema: SchemaRef, write_ctx: SqlWriteContext, functions: FunctionProviderHandle, version_binding: VersionBinding, include_data_writes: bool, ) -> Self { let surface_name = lix_file_surface_name(&version_binding); Self { write_ctx, functions, version_binding, surface_name, include_data_writes, } } } impl DisplayAs for LixFileInsertSink { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!(f, "LixFileInsertSink") } DisplayFormatType::TreeRender => write!(f, "LixFileInsertSink"), } } } #[async_trait] impl InsertSink for LixFileInsertSink { async fn write_batches( &self, batches: Vec, _context: &Arc, ) -> Result { let mut staged = LixFileStagedBatch::default(); let mut path_resolvers = None; for batch in batches { if path_resolvers.is_none() { path_resolvers = Some( file_path_resolvers_from_live_state( Arc::new(WriteContextLiveStateReader::new(self.write_ctx.clone())), 
self.version_binding.active_version_id(), ) .await .map_err(lix_error_to_datafusion_error)?, ); } if record_batch_has_non_null_column(&batch, "path")? { staged.extend(lix_file_insert_stage_from_batch_with_path_resolvers( &batch, self.version_binding.active_version_id(), self.surface_name, path_resolvers .as_mut() .expect("path resolver should be initialized"), &mut || self.functions.call_uuid_v7(), self.include_data_writes, )?); } else { staged.extend( lix_file_insert_stage_from_batch_with_id_generator_and_path_resolvers( &batch, self.version_binding.active_version_id(), self.surface_name, path_resolvers .as_mut() .expect("path resolver should be initialized"), &mut || self.functions.call_uuid_v7(), self.include_data_writes, )?, ); } } if !staged.state_rows.is_empty() || !staged.file_data_writes.is_empty() { let intent = if staged.file_data_writes.is_empty() { TransactionWrite::Rows { mode: TransactionWriteMode::Insert, rows: staged.state_rows, } } else { TransactionWrite::RowsWithFileData { mode: TransactionWriteMode::Insert, rows: staged.state_rows, file_data: staged.file_data_writes, count: staged.count, } }; self.write_ctx .stage_write(intent) .await .map_err(lix_error_to_datafusion_error)?; } Ok(staged.count) } } fn lix_file_surface_name(version_binding: &VersionBinding) -> &'static str { match version_binding { VersionBinding::Active { .. } => "lix_file", VersionBinding::Explicit => "lix_file_by_version", } } #[allow(dead_code)] struct LixFileDeleteExec { blob_reader: Arc, write_ctx: SqlWriteContext, table_schema: SchemaRef, version_binding: VersionBinding, request: LiveStateScanRequest, target_file_ids: FileIdConstraint, filters: Vec>, result_schema: SchemaRef, properties: Arc, } impl std::fmt::Debug for LixFileDeleteExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixFileDeleteExec").finish() } } impl LixFileDeleteExec { fn new( blob_reader: Arc, write_ctx: SqlWriteContext, table_schema: SchemaRef, version_binding: VersionBinding, request: LiveStateScanRequest, target_file_ids: FileIdConstraint, filters: Vec>, ) -> Self { let result_schema = dml_count_schema(); let properties = PlanProperties::new( EquivalenceProperties::new(Arc::clone(&result_schema)), Partitioning::UnknownPartitioning(1), EmissionType::Final, Boundedness::Bounded, ); Self { blob_reader, write_ctx, table_schema, version_binding, request, target_file_ids, filters, result_schema, properties: Arc::new(properties), } } } impl DisplayAs for LixFileDeleteExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!(f, "LixFileDeleteExec(filters={})", self.filters.len()) } DisplayFormatType::TreeRender => write!(f, "LixFileDeleteExec"), } } } impl ExecutionPlan for LixFileDeleteExec { fn name(&self) -> &str { "LixFileDeleteExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "LixFileDeleteExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "LixFileDeleteExec only exposes one partition, got {partition}" ))); } let blob_reader = Arc::clone(&self.blob_reader); let write_ctx = self.write_ctx.clone(); let 
table_schema = Arc::clone(&self.table_schema); let version_binding = self.version_binding.clone(); let request = self.request.clone(); let target_file_ids = self.target_file_ids.clone(); let filters = self.filters.clone(); let result_schema = Arc::clone(&self.result_schema); let stream_schema = Arc::clone(&result_schema); let stream = stream::once(async move { let rows = scan_lix_file_live_rows( Arc::new(WriteContextLiveStateReader::new(write_ctx.clone())), &request, &target_file_ids, ) .await .map_err(lix_error_to_datafusion_error)?; let blob_ref_file_ids = blob_ref_file_ids_from_live_rows(&rows).map_err(lix_error_to_datafusion_error)?; let source_batch = lix_file_record_batch(&table_schema, &blob_reader, rows) .await .map_err(lix_error_to_datafusion_error)?; let matched_batch = filter_lix_file_batch(source_batch, &filters)?; let staged = lix_file_delete_stage_from_batch( &matched_batch, version_binding.active_version_id(), &blob_ref_file_ids, )?; let count = staged.count; if count > 0 { write_ctx .stage_write(TransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: staged.state_rows, }) .await .map_err(lix_error_to_datafusion_error)?; } Ok::<_, DataFusionError>(stream::iter(vec![Ok::( dml_count_batch(Arc::clone(&stream_schema), count)?, )])) }) .try_flatten(); Ok(Box::pin(RecordBatchStreamAdapter::new( result_schema, stream, ))) } } #[allow(dead_code)] struct LixFileUpdateExec { blob_reader: Arc, write_ctx: SqlWriteContext, table_schema: SchemaRef, version_binding: VersionBinding, functions: FunctionProviderHandle, request: LiveStateScanRequest, target_file_ids: FileIdConstraint, assignments: Vec<(String, Arc)>, filters: Vec>, result_schema: SchemaRef, properties: Arc, } impl std::fmt::Debug for LixFileUpdateExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixFileUpdateExec").finish() } } impl LixFileUpdateExec { fn new( blob_reader: Arc, write_ctx: SqlWriteContext, table_schema: SchemaRef, version_binding: VersionBinding, functions: FunctionProviderHandle, request: LiveStateScanRequest, target_file_ids: FileIdConstraint, assignments: Vec<(String, Arc)>, filters: Vec>, ) -> Self { let result_schema = dml_count_schema(); let properties = PlanProperties::new( EquivalenceProperties::new(Arc::clone(&result_schema)), Partitioning::UnknownPartitioning(1), EmissionType::Final, Boundedness::Bounded, ); Self { blob_reader, write_ctx, table_schema, version_binding, functions, request, target_file_ids, assignments, filters, result_schema, properties: Arc::new(properties), } } } impl DisplayAs for LixFileUpdateExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!( f, "LixFileUpdateExec(assignments={}, filters={})", self.assignments.len(), self.filters.len() ) } DisplayFormatType::TreeRender => write!(f, "LixFileUpdateExec"), } } } impl ExecutionPlan for LixFileUpdateExec { fn name(&self) -> &str { "LixFileUpdateExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "LixFileUpdateExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "LixFileUpdateExec only exposes one 
partition, got {partition}" ))); } let blob_reader = Arc::clone(&self.blob_reader); let write_ctx = self.write_ctx.clone(); let table_schema = Arc::clone(&self.table_schema); let version_binding = self.version_binding.clone(); let functions = self.functions.clone(); let request = self.request.clone(); let target_file_ids = self.target_file_ids.clone(); let assignments = self.assignments.clone(); let filters = self.filters.clone(); let result_schema = Arc::clone(&self.result_schema); let stream_schema = Arc::clone(&result_schema); let stream = stream::once(async move { let rows = scan_lix_file_live_rows( Arc::new(WriteContextLiveStateReader::new(write_ctx.clone())), &request, &target_file_ids, ) .await .map_err(lix_error_to_datafusion_error)?; let source_batch = lix_file_record_batch(&table_schema, &blob_reader, rows) .await .map_err(lix_error_to_datafusion_error)?; let matched_batch = filter_lix_file_batch(source_batch, &filters)?; let assignment_values = UpdateAssignmentValues::evaluate(&matched_batch, &assignments)?; let update_columns = LixFileUpdateColumns::from_assignments(&assignments); let mut path_resolvers = None; if update_columns.path || update_columns.descriptor { path_resolvers = Some( file_path_resolvers_from_live_state( Arc::new(WriteContextLiveStateReader::new(write_ctx.clone())), version_binding.active_version_id(), ) .await .map_err(lix_error_to_datafusion_error)?, ); } let staged = lix_file_update_stage_from_batch( &matched_batch, &assignment_values, version_binding.active_version_id(), update_columns, path_resolvers.as_mut(), &mut || functions.call_uuid_v7(), )?; let count = staged.count; if count > 0 { let intent = if staged.file_data_writes.is_empty() { TransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: staged.state_rows, } } else { TransactionWrite::RowsWithFileData { mode: TransactionWriteMode::Replace, rows: staged.state_rows, file_data: staged.file_data_writes, count, } }; write_ctx .stage_write(intent) .await .map_err(lix_error_to_datafusion_error)?; } Ok::<_, DataFusionError>(stream::iter(vec![Ok::( dml_count_batch(Arc::clone(&stream_schema), count)?, )])) }) .try_flatten(); Ok(Box::pin(RecordBatchStreamAdapter::new( result_schema, stream, ))) } } struct LixFileScanExec { live_state: Arc, blob_reader: Arc, batch_schema: SchemaRef, output_schema: SchemaRef, projection: Option>, request: LiveStateScanRequest, target_file_ids: FileIdConstraint, filters: Vec>, limit: Option, properties: Arc, } impl std::fmt::Debug for LixFileScanExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixFileScanExec").finish() } } impl LixFileScanExec { fn new( live_state: Arc, blob_reader: Arc, batch_schema: SchemaRef, output_schema: SchemaRef, projection: Option>, request: LiveStateScanRequest, target_file_ids: FileIdConstraint, filters: Vec>, limit: Option, ) -> Self { let properties = PlanProperties::new( EquivalenceProperties::new(output_schema.clone()), Partitioning::UnknownPartitioning(1), EmissionType::Incremental, Boundedness::Bounded, ); Self { live_state, blob_reader, batch_schema, output_schema, projection, request, target_file_ids, filters, limit, properties: Arc::new(properties), } } } impl DisplayAs for LixFileScanExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!(f, "LixFileScanExec(limit={:?})", self.limit) } DisplayFormatType::TreeRender => write!(f, "LixFileScanExec"), } } } impl 
ExecutionPlan for LixFileScanExec { fn name(&self) -> &str { "LixFileScanExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "LixFileScanExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "LixFileScanExec only supports partition 0, got {partition}" ))); } let live_state = Arc::clone(&self.live_state); let blob_reader = Arc::clone(&self.blob_reader); let request = self.request.clone(); let target_file_ids = self.target_file_ids.clone(); let filters = self.filters.clone(); let limit = self.limit; let output_schema = Arc::clone(&self.output_schema); let batch_schema = Arc::clone(&self.batch_schema); let projection = self.projection.clone(); let fut = async move { let rows = scan_lix_file_live_rows(live_state, &request, &target_file_ids) .await .map_err(|error| { DataFusionError::Execution(format!("sql2 lix_file scan failed: {error}")) })?; let batch = lix_file_record_batch(&batch_schema, &blob_reader, rows) .await .map_err(|error| { DataFusionError::Execution(format!("sql2 lix_file batch build failed: {error}")) })?; let filtered = filter_lix_file_batch(batch, &filters)?; let projected = match projection { Some(indices) => filtered.project(&indices).map_err(DataFusionError::from), None => Ok(filtered), }?; match limit { Some(limit) => Ok(projected.slice(0, limit.min(projected.num_rows()))), None => Ok(projected), } }; Ok(Box::pin(RecordBatchStreamAdapter::new( output_schema, stream::once(fut).map_ok(|batch| batch), ))) } } #[derive(Debug, Clone)] struct FileDescriptorRecord { id: String, directory_id: Option, name: String, hidden: bool, live: MaterializedLiveStateRow, } #[derive(Debug, Clone)] struct BlobRefRecord { blob_hash: String, } #[derive(Debug, Clone)] struct DirectoryDescriptorRecord { id: String, parent_id: Option, name: String, version_id: String, } #[derive(Debug, Deserialize)] struct FileDescriptorSnapshot { id: String, directory_id: Option, name: String, hidden: bool, } #[derive(Debug, Deserialize)] struct BlobRefSnapshot { id: String, blob_hash: String, } #[derive(Debug, Deserialize)] struct DirectoryDescriptorSnapshot { id: String, parent_id: Option, name: String, } #[derive(Debug, Default)] struct LixFileStagedBatch { state_rows: Vec, file_data_writes: Vec, count: u64, } impl LixFileStagedBatch { fn extend(&mut self, other: LixFileStagedBatch) { self.state_rows.extend(other.state_rows); self.file_data_writes.extend(other.file_data_writes); self.count += other.count; } fn extend_filesystem_plan(&mut self, plan: super::filesystem_planner::FilesystemWritePlan) { self.state_rows.extend(plan.rows); self.file_data_writes.extend(plan.file_data); self.count += plan.count; } fn extend_filesystem_delete_plan(&mut self, plan: FilesystemDeletePlan) { self.state_rows.extend(plan.rows); self.count += plan.count; } } #[cfg(test)] fn lix_file_write_rows_from_batch( batch: &RecordBatch, version_binding: Option<&str>, ) -> Result> { Ok(lix_file_insert_stage_from_batch(batch, version_binding)?.state_rows) } fn lix_file_delete_stage_from_batch( batch: &RecordBatch, version_binding: Option<&str>, blob_ref_file_ids: &BTreeSet, ) -> Result { let mut staged = LixFileStagedBatch::default(); for row_index in 0..batch.num_rows() { let file_id = 
required_string_value(batch, row_index, "id")?; let context = file_row_context_from_batch(batch, row_index, version_binding)?; staged.extend_filesystem_delete_plan(plan_file_delete(FileDeleteInput { file_id: file_id.clone(), has_blob_ref: blob_ref_file_ids.contains(&file_id), context, })); } Ok(staged) } fn blob_ref_file_ids_from_live_rows( rows: &[MaterializedLiveStateRow], ) -> std::result::Result, LixError> { let mut file_ids = BTreeSet::new(); for row in rows { if row.schema_key != BLOB_REF_SCHEMA_KEY { continue; } let Some(snapshot_content) = row.snapshot_content.as_deref() else { continue; }; let snapshot: BlobRefSnapshot = serde_json::from_str(snapshot_content).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("invalid lix_binary_blob_ref snapshot JSON: {error}"), ) })?; file_ids.insert(snapshot.id); } Ok(file_ids) } #[cfg(test)] fn lix_file_insert_stage_from_batch( batch: &RecordBatch, version_binding: Option<&str>, ) -> Result { lix_file_stage_from_batch_with_options(batch, version_binding, "lix_file", true, true, true) } fn lix_file_insert_stage_from_batch_with_id_generator_and_path_resolvers( batch: &RecordBatch, version_binding: Option<&str>, surface_name: &str, path_resolvers: &mut BTreeMap, generate_id: &mut dyn FnMut() -> String, include_data_writes: bool, ) -> Result { lix_file_stage_from_batch_with_options_and_path_resolvers( batch, version_binding, surface_name, true, true, include_data_writes, Some(path_resolvers), Some(generate_id), ) } fn lix_file_insert_stage_from_batch_with_path_resolvers( batch: &RecordBatch, version_binding: Option<&str>, surface_name: &str, path_resolvers: &mut BTreeMap, generate_directory_id: &mut dyn FnMut() -> String, include_data_writes: bool, ) -> Result { lix_file_stage_from_batch_with_options_and_path_resolvers( batch, version_binding, surface_name, true, true, include_data_writes, Some(path_resolvers), Some(generate_directory_id), ) } fn lix_file_existing_update_stage_from_batch( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, version_binding: Option<&str>, include_descriptor_writes: bool, include_data_writes: bool, path_resolvers: Option<&mut BTreeMap>, ) -> Result { let mut staged = LixFileStagedBatch::default(); let mut path_resolvers = path_resolvers; for row_index in 0..batch.num_rows() { let id = required_string_value(batch, row_index, "id")?; let hidden = update_optional_bool_value(batch, assignment_values, row_index, "hidden")? 
.unwrap_or(false); let context = file_row_context_from_update(batch, assignment_values, row_index, version_binding)?; if include_descriptor_writes { let directory_id = update_optional_string_value(batch, assignment_values, row_index, "directory_id")?; let name = update_required_string_value(batch, assignment_values, row_index, "name")?; if let Some(path_resolvers) = path_resolvers.as_deref_mut() { let resolver = path_resolvers .entry(file_path_resolver_key(&context)) .or_insert_with(DirectoryPathResolver::default); resolver .reserve_file(directory_id.clone(), name.clone(), id.clone()) .map_err(lix_error_to_datafusion_error)?; } staged .state_rows .push(file_descriptor_row(FileDescriptorRowInput { id: id.clone(), directory_id, name, hidden, context: context.clone(), })); } if include_data_writes { let data = update_required_binary_value(batch, assignment_values, row_index, "data")?; stage_lix_file_data_write(&mut staged, id, data, context, None)?; } staged.count = staged .count .checked_add(1) .ok_or_else(|| DataFusionError::Execution("lix_file row count overflow".into()))?; } Ok(staged) } #[derive(Debug, Clone, Copy)] struct LixFileUpdateColumns { path: bool, data: bool, descriptor: bool, } impl LixFileUpdateColumns { fn from_assignments(assignments: &[(String, Arc)]) -> Self { let path = assignments .iter() .any(|(column_name, _)| column_name == "path"); let data = assignments .iter() .any(|(column_name, _)| column_name == "data"); let descriptor = assignments .iter() .any(|(column_name, _)| column_name != "path" && column_name != "data"); Self { path, data, descriptor, } } } fn lix_file_update_stage_from_batch( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, version_binding: Option<&str>, update_columns: LixFileUpdateColumns, path_resolvers: Option<&mut BTreeMap>, generate_directory_id: &mut dyn FnMut() -> String, ) -> Result { if update_columns.path || update_columns.descriptor { let Some(path_resolvers) = path_resolvers else { return Err(DataFusionError::Execution( "UPDATE lix_file requires filesystem path resolver".to_string(), )); }; return if update_columns.path { lix_file_path_update_stage_from_batch( batch, assignment_values, version_binding, update_columns, path_resolvers, generate_directory_id, ) } else { lix_file_existing_update_stage_from_batch( batch, assignment_values, version_binding, update_columns.descriptor, update_columns.data, Some(path_resolvers), ) }; } lix_file_existing_update_stage_from_batch( batch, assignment_values, version_binding, update_columns.descriptor, update_columns.data, None, ) } fn lix_file_path_update_stage_from_batch( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, version_binding: Option<&str>, update_columns: LixFileUpdateColumns, path_resolvers: &mut BTreeMap, generate_directory_id: &mut dyn FnMut() -> String, ) -> Result { let mut staged = LixFileStagedBatch::default(); for row_index in 0..batch.num_rows() { let id = required_string_value(batch, row_index, "id")?; let path = update_required_string_value(batch, assignment_values, row_index, "path")?; let hidden = update_optional_bool_value(batch, assignment_values, row_index, "hidden")? .unwrap_or(false); let context = file_row_context_from_update(batch, assignment_values, row_index, version_binding)?; let assigned_data = if update_columns.data { Some(update_required_binary_value( batch, assignment_values, row_index, "data", )?) 
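            // The newly assigned bytes are captured per matched row here so the data
            // write can be staged together with the descriptor/path rows that
            // plan_file_path_update produces further down in this loop.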
} else { None }; let resolver = path_resolvers .entry(file_path_resolver_key(&context)) .or_insert_with(DirectoryPathResolver::default); let plan = plan_file_path_update( resolver, id.clone(), path, hidden, None, context.clone(), generate_directory_id, ) .map_err(lix_error_to_datafusion_error)?; staged.extend_filesystem_plan(plan); if let Some(data) = assigned_data { stage_lix_file_data_write(&mut staged, id, data, context, None)?; } } Ok(staged) } #[cfg(test)] fn lix_file_stage_from_batch_with_options( batch: &RecordBatch, version_binding: Option<&str>, surface_name: &str, reject_read_only_fields: bool, include_descriptor_writes: bool, include_data_writes: bool, ) -> Result { lix_file_stage_from_batch_with_options_and_path_resolvers( batch, version_binding, surface_name, reject_read_only_fields, include_descriptor_writes, include_data_writes, None, None, ) } fn lix_file_stage_from_batch_with_options_and_path_resolvers( batch: &RecordBatch, version_binding: Option<&str>, surface_name: &str, reject_read_only_fields: bool, include_descriptor_writes: bool, include_data_writes: bool, mut path_resolvers: Option<&mut BTreeMap>, mut generate_directory_id: Option<&mut dyn FnMut() -> String>, ) -> Result { let mut staged = LixFileStagedBatch::default(); for row_index in 0..batch.num_rows() { if reject_read_only_fields { reject_read_only_lix_file_insert_field(batch, row_index, "lixcol_entity_id")?; reject_read_only_lix_file_insert_field(batch, row_index, "lixcol_schema_key")?; reject_read_only_lix_file_insert_field(batch, row_index, "lixcol_change_id")?; reject_read_only_lix_file_insert_field(batch, row_index, "lixcol_created_at")?; reject_read_only_lix_file_insert_field(batch, row_index, "lixcol_updated_at")?; reject_read_only_lix_file_insert_field(batch, row_index, "lixcol_commit_id")?; } let path = optional_string_value(batch, row_index, "path")?; let id = optional_string_value(batch, row_index, "id")?; let hidden = optional_bool_value(batch, row_index, "hidden")?; let context = file_row_context_from_batch(batch, row_index, version_binding)?; let data = if include_data_writes { insert_optional_binary_value(batch, row_index, "data")? 
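        // `data` is only decoded when the INSERT statement actually listed the
        // column; omitting it stages a file descriptor without blob contents.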
} else { None }; if let Some(path) = path { reject_read_only_lix_file_insert_field(batch, row_index, "directory_id")?; reject_read_only_lix_file_insert_field(batch, row_index, "name")?; let Some(path_resolvers) = path_resolvers.as_deref_mut() else { return Err(DataFusionError::Execution( "INSERT into lix_file with path requires directory path resolver".to_string(), )); }; let resolver = path_resolvers .entry(file_path_resolver_key(&context)) .or_insert_with(DirectoryPathResolver::default); let Some(generate_directory_id) = generate_directory_id.as_deref_mut() else { return Err(DataFusionError::Execution( "INSERT into lix_file with path requires directory id generator".to_string(), )); }; let file_id = id.unwrap_or_else(|| generate_directory_id()); let mut plan = super::filesystem_planner::plan_file_path_write( resolver, FilePathWriteInput { id: Some(file_id.clone()), path, data, hidden, context, }, generate_directory_id, ) .map_err(lix_error_to_datafusion_error)?; attach_lix_file_insert_origin(&mut plan.rows, surface_name, &file_id); staged.extend_filesystem_plan(plan); continue; } let directory_id = optional_string_value(batch, row_index, "directory_id")?; let name = required_string_value(batch, row_index, "name")?; let id = if data.is_some() { match id { Some(id) => Some(id), None => { let Some(generate_id) = generate_directory_id.as_deref_mut() else { return Err(DataFusionError::Execution( "INSERT into lix_file with data requires id generator".to_string(), )); }; Some(generate_id()) } } } else { id }; if include_descriptor_writes { if let Some(path_resolvers) = path_resolvers.as_deref_mut() { if let Some(file_id) = id.as_ref() { let resolver = path_resolvers .entry(file_path_resolver_key(&context)) .or_insert_with(DirectoryPathResolver::default); resolver .reserve_file(directory_id.clone(), name.clone(), file_id.clone()) .map_err(lix_error_to_datafusion_error)?; } } let mut row = file_descriptor_write_row(FileDescriptorWriteIntent { id: id.clone(), directory_id: directory_id.clone(), name: name.clone(), hidden, context: context.clone(), }); if let Some(file_id) = id.as_ref() { row.origin = Some(lix_file_insert_origin(surface_name, file_id)); } staged.state_rows.push(row); } if let (Some(id), Some(data)) = (id, data) { let origin = Some(lix_file_insert_origin(surface_name, &id)); stage_lix_file_data_write(&mut staged, id, data, context, origin)?; } staged.count = staged .count .checked_add(1) .ok_or_else(|| DataFusionError::Execution("lix_file row count overflow".into()))?; } Ok(staged) } fn stage_lix_file_data_write( staged: &mut LixFileStagedBatch, file_id: String, data: Vec, context: FilesystemRowContext, origin: Option, ) -> Result<()> { let mut row = blob_ref_row(BlobRefRowInput { file_id: file_id.clone(), data: data.clone(), context: FilesystemRowContext { file_id: None, metadata: None, ..context.clone() }, }) .map_err(lix_error_to_datafusion_error)?; row.origin = origin; staged.state_rows.push(row); staged.file_data_writes.push(TransactionFileData { file_id, version_id: context.version_id, untracked: context.untracked, data, }); Ok(()) } fn attach_lix_file_insert_origin( rows: &mut [TransactionWriteRow], surface_name: &str, file_id: &str, ) { let origin = lix_file_insert_origin(surface_name, file_id); for row in rows { if row.schema_key == FILE_DESCRIPTOR_SCHEMA_KEY || row.schema_key == BLOB_REF_SCHEMA_KEY { row.origin = Some(origin.clone()); } } } fn lix_file_insert_origin(surface_name: &str, file_id: &str) -> TransactionWriteOrigin { TransactionWriteOrigin { surface: 
surface_name.to_string(), operation: TransactionWriteOperation::Insert, primary_key: Some(LogicalPrimaryKey { columns: vec!["id".to_string()], values: vec![file_id.to_string()], }), } } fn file_row_context_from_batch( batch: &RecordBatch, row_index: usize, version_binding: Option<&str>, ) -> Result { let explicit_version_id = optional_string_value(batch, row_index, "lixcol_version_id")?; let scope = resolve_write_version_scope( optional_bool_value(batch, row_index, "lixcol_global")?, explicit_version_id, version_binding, "INSERT into lix_file_by_version", "lix_file", )?; Ok(FilesystemRowContext { version_id: scope.version_id, global: scope.global, untracked: optional_bool_value(batch, row_index, "lixcol_untracked")?.unwrap_or(false), file_id: optional_string_value(batch, row_index, "lixcol_file_id")?, metadata: optional_metadata_value(batch, row_index, "lixcol_metadata", "lix_file")?, }) } fn file_row_context_from_update( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, row_index: usize, version_binding: Option<&str>, ) -> Result { let explicit_version_id = optional_string_value(batch, row_index, "lixcol_version_id")?; let scope = resolve_write_version_scope( optional_bool_value(batch, row_index, "lixcol_global")?, explicit_version_id, version_binding, "UPDATE into lix_file_by_version", "lix_file", )?; Ok(FilesystemRowContext { version_id: scope.version_id, global: scope.global, untracked: optional_bool_value(batch, row_index, "lixcol_untracked")?.unwrap_or(false), file_id: optional_string_value(batch, row_index, "lixcol_file_id")?, metadata: update_optional_metadata_value( batch, assignment_values, row_index, "lixcol_metadata", "lix_file", )?, }) } fn file_path_resolver_key(context: &FilesystemRowContext) -> String { filesystem_storage_scope_key( &context.version_id, context.global, context.untracked, context.file_id.as_deref(), ) } async fn file_path_resolvers_from_live_state( live_state: Arc, version_binding: Option<&str>, ) -> std::result::Result, LixError> { let rows = live_state .scan_rows(&LiveStateScanRequest { filter: LiveStateFilter { schema_keys: vec![ DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string(), FILE_DESCRIPTOR_SCHEMA_KEY.to_string(), ], version_ids: version_binding .map(|version_id| vec![version_id.to_string()]) .unwrap_or_default(), ..Default::default() }, ..Default::default() }) .await?; let mut resolvers = directory_path_resolvers_from_state_rows(rows)?; if let Some(version_id) = version_binding { let key = filesystem_storage_scope_key(version_id, false, false, None); resolvers .entry(key) .or_insert_with(DirectoryPathResolver::default); } Ok(resolvers) } async fn lix_file_record_batch( schema: &SchemaRef, blob_reader: &Arc, rows: Vec, ) -> Result { let projected_columns = schema .fields() .iter() .map(|field| field.name().as_str()) .collect::>(); let needs_data = projected_columns .iter() .any(|column_name| *column_name == "data"); let mut file_rows = BTreeMap::<(String, String), FileDescriptorRecord>::new(); let mut blob_rows = BTreeMap::<(String, String), BlobRefRecord>::new(); let mut directory_rows = Vec::::new(); for row in rows { match row.schema_key.as_str() { FILE_DESCRIPTOR_SCHEMA_KEY => { let Some(snapshot_content) = row.snapshot_content.as_deref() else { continue; }; let snapshot: FileDescriptorSnapshot = serde_json::from_str(snapshot_content) .map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("invalid lix_file_descriptor snapshot JSON: {error}"), ) })?; file_rows.insert( (row.version_id.clone(), snapshot.id.clone()), 
FileDescriptorRecord { id: snapshot.id, directory_id: snapshot.directory_id, name: snapshot.name, hidden: snapshot.hidden, live: row, }, ); } BLOB_REF_SCHEMA_KEY => { let Some(snapshot_content) = row.snapshot_content.as_deref() else { continue; }; let snapshot: BlobRefSnapshot = serde_json::from_str(snapshot_content).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("invalid lix_binary_blob_ref snapshot JSON: {error}"), ) })?; blob_rows.insert( (row.version_id.clone(), snapshot.id.clone()), BlobRefRecord { blob_hash: snapshot.blob_hash, }, ); } DIRECTORY_DESCRIPTOR_SCHEMA_KEY => { let Some(snapshot_content) = row.snapshot_content.as_deref() else { continue; }; let snapshot: DirectoryDescriptorSnapshot = serde_json::from_str(snapshot_content) .map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("invalid lix_directory_descriptor snapshot JSON: {error}"), ) })?; directory_rows.push(DirectoryDescriptorRecord { id: snapshot.id, parent_id: snapshot.parent_id, name: snapshot.name, version_id: row.version_id, }); } _ => {} } } let directory_paths = derive_directory_paths(&directory_rows)?; let mut ids = Vec::new(); let mut paths = Vec::new(); let mut directory_ids = Vec::new(); let mut names = Vec::new(); let mut hiddens = Vec::new(); let mut data_values = Vec::new(); let mut entity_ids = Vec::new(); let mut schema_keys = Vec::new(); let mut file_ids = Vec::new(); let mut globals = Vec::new(); let mut change_ids = Vec::new(); let mut created_ats = Vec::new(); let mut updated_ats = Vec::new(); let mut commit_ids = Vec::new(); let mut untracked_values = Vec::new(); let mut metadata_values = Vec::new(); let mut version_ids = Vec::new(); for ((version_id, _), file) in file_rows { let directory_path = match file.directory_id.as_ref() { Some(directory_id) => { let key = (version_id.clone(), directory_id.clone()); let Some(path) = directory_paths.get(&key).cloned() else { return Err(LixError::new( LixError::CODE_FOREIGN_KEY, format!( "lix_file_descriptor '{}' references missing directory_id '{}' in version '{}'", file.id, directory_id, version_id ), )); }; Some(path) } None => None, }; let path = match directory_path { Some(directory_path) => format!("{directory_path}{}", file.name), None => format!("/{}", file.name), }; let data = if needs_data { match blob_rows.get(&(version_id.clone(), file.id.clone())) { Some(blob_ref) => load_single_blob_bytes(blob_reader, &blob_ref.blob_hash).await?, None => None, } } else { None }; ids.push(Some(file.id)); paths.push(Some(path)); directory_ids.push(file.directory_id); names.push(Some(file.name)); hiddens.push(Some(file.hidden)); data_values.push(data); entity_ids.push(Some(file.live.entity_id.as_json_array_text()?)); schema_keys.push(Some(file.live.schema_key)); file_ids.push(file.live.file_id); globals.push(Some(file.live.global)); change_ids.push(file.live.change_id); created_ats.push(file.live.created_at); updated_ats.push(file.live.updated_at); commit_ids.push(file.live.commit_id); untracked_values.push(Some(file.live.untracked)); metadata_values.push(file.live.metadata.as_ref().map(serialize_row_metadata)); version_ids.push(Some(version_id)); } let mut columns = Vec::::with_capacity(schema.fields().len()); for field in schema.fields() { let array: ArrayRef = match field.name().as_str() { "id" => Arc::new(StringArray::from(ids.clone())), "path" => Arc::new(StringArray::from(paths.clone())), "directory_id" => Arc::new(StringArray::from(directory_ids.clone())), "name" => Arc::new(StringArray::from(names.clone())), "hidden" => 
Arc::new(BooleanArray::from(hiddens.clone())), "data" => Arc::new(BinaryArray::from( data_values .iter() .map(|value| value.as_deref()) .collect::>(), )), "lixcol_entity_id" => Arc::new(StringArray::from(entity_ids.clone())), "lixcol_schema_key" => Arc::new(StringArray::from(schema_keys.clone())), "lixcol_file_id" => Arc::new(StringArray::from(file_ids.clone())), "lixcol_global" => Arc::new(BooleanArray::from(globals.clone())), "lixcol_change_id" => Arc::new(StringArray::from(change_ids.clone())), "lixcol_created_at" => Arc::new(StringArray::from(created_ats.clone())), "lixcol_updated_at" => Arc::new(StringArray::from(updated_ats.clone())), "lixcol_commit_id" => Arc::new(StringArray::from(commit_ids.clone())), "lixcol_untracked" => Arc::new(BooleanArray::from(untracked_values.clone())), "lixcol_metadata" => Arc::new(StringArray::from(metadata_values.clone())), "lixcol_version_id" => Arc::new(StringArray::from(version_ids.clone())), other => { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("sql2 lix_file provider does not support projected column '{other}'"), )) } }; columns.push(array); } let options = RecordBatchOptions::new().with_row_count(Some(ids.len())); RecordBatch::try_new_with_options(Arc::clone(schema), columns, &options).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("sql2 failed to build lix_file record batch: {error}"), ) }) } async fn load_single_blob_bytes( blob_reader: &Arc, blob_hash: &str, ) -> Result>, LixError> { let hash = BlobHash::from_hex(blob_hash)?; Ok(blob_reader .load_bytes_many(&[hash]) .await? .into_vec() .into_iter() .next() .flatten()) } fn derive_directory_paths( rows: &[DirectoryDescriptorRecord], ) -> Result, LixError> { let mut by_version = BTreeMap::>::new(); for row in rows { by_version .entry(row.version_id.clone()) .or_default() .insert(row.id.clone(), row); } let mut paths = BTreeMap::<(String, String), String>::new(); for (version_id, records) in by_version { for directory_id in records.keys() { derive_directory_path_for( &version_id, directory_id, &records, &mut paths, &mut BTreeSet::new(), )?; } } Ok(paths) } fn derive_directory_path_for( version_id: &str, directory_id: &str, records: &BTreeMap, paths: &mut BTreeMap<(String, String), String>, visiting: &mut BTreeSet, ) -> Result, LixError> { if let Some(path) = paths.get(&(version_id.to_string(), directory_id.to_string())) { return Ok(Some(path.clone())); } if !visiting.insert(directory_id.to_string()) { return Err(directory_parent_cycle_error(version_id, directory_id)); } let Some(row) = records.get(directory_id) else { visiting.remove(directory_id); return Ok(None); }; let path = match row.parent_id.as_deref() { Some(parent_id) => { let Some(parent_path) = derive_directory_path_for(version_id, parent_id, records, paths, visiting)? 
else { visiting.remove(directory_id); return Ok(None); }; format!("{parent_path}{}/", row.name) } None => format!("/{}/", row.name), }; visiting.remove(directory_id); paths.insert( (version_id.to_string(), directory_id.to_string()), path.clone(), ); Ok(Some(path)) } fn directory_parent_cycle_error(version_id: &str, directory_id: &str) -> LixError { LixError::new( LixError::CODE_CONSTRAINT_VIOLATION, format!( "lix_directory_descriptor parent_id cycle in version '{version_id}' while resolving directory '{directory_id}'" ), ) } fn projected_schema(base_schema: &SchemaRef, projection: Option<&Vec>) -> Result { let fields = match projection { Some(indices) => indices .iter() .map(|index| base_schema.field(*index).as_ref().clone()) .collect::>(), None => base_schema .fields() .iter() .map(|field| field.as_ref().clone()) .collect::>(), }; Ok(Arc::new(Schema::new(fields))) } fn lix_file_scan_request( version_binding: Option<&str>, projected_schema: Option<&Schema>, limit: Option, ) -> LiveStateScanRequest { LiveStateScanRequest { filter: LiveStateFilter { schema_keys: vec![ FILE_DESCRIPTOR_SCHEMA_KEY.to_string(), BLOB_REF_SCHEMA_KEY.to_string(), DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string(), ], version_ids: version_binding .map(|version_id| vec![version_id.to_string()]) .unwrap_or_default(), ..LiveStateFilter::default() }, projection: lix_file_live_state_projection(projected_schema), limit, } } fn lix_file_live_state_projection(projected_schema: Option<&Schema>) -> LiveStateProjection { let Some(schema) = projected_schema else { return LiveStateProjection::default(); }; let mut columns = Vec::new(); let needs_snapshot = schema.fields().iter().any(|field| { matches!( field.name().as_str(), "path" | "directory_id" | "name" | "hidden" | "data" ) }); if needs_snapshot { columns.push("snapshot_content".to_string()); } if schema .fields() .iter() .any(|field| field.name() == "lixcol_metadata") { columns.push("metadata".to_string()); } LiveStateProjection { columns } } async fn scan_lix_file_live_rows( live_state: Arc, request: &LiveStateScanRequest, target_file_ids: &FileIdConstraint, ) -> std::result::Result, LixError> { let target_file_ids = match target_file_ids { FileIdConstraint::All => return live_state.scan_rows(request).await, FileIdConstraint::None => return Ok(Vec::new()), FileIdConstraint::Ids(target_file_ids) => target_file_ids, }; let mut file_request = request.clone(); file_request.filter.schema_keys = vec![ FILE_DESCRIPTOR_SCHEMA_KEY.to_string(), BLOB_REF_SCHEMA_KEY.to_string(), ]; file_request.filter.entity_ids = target_file_ids .iter() .map(|file_id| EntityIdentity::single(file_id.clone())) .collect(); let mut rows = live_state.scan_rows(&file_request).await?; let mut directory_request = request.clone(); directory_request.filter.schema_keys = vec![DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string()]; directory_request.filter.entity_ids.clear(); directory_request.limit = None; rows.extend(live_state.scan_rows(&directory_request).await?); Ok(rows) } #[derive(Debug, Clone, PartialEq, Eq)] enum FileIdConstraint { All, None, Ids(BTreeSet), } impl FileIdConstraint { fn from_ids(ids: Vec) -> Self { let ids = ids.into_iter().collect::>(); if ids.is_empty() { Self::None } else { Self::Ids(ids) } } fn intersect(self, other: Self) -> Self { match (self, other) { (Self::None, _) | (_, Self::None) => Self::None, (Self::All, constraint) | (constraint, Self::All) => constraint, (Self::Ids(left), Self::Ids(right)) => { let ids = left.intersection(&right).cloned().collect::>(); if ids.is_empty() { Self::None } 
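                // An empty intersection means the id predicates contradict each other,
                // so the scan can skip reading rows entirely (FileIdConstraint::None).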
                else {
                    Self::Ids(ids)
                }
            }
        }
    }

    fn union(self, other: Self) -> Self {
        match (self, other) {
            (Self::All, _) | (_, Self::All) => Self::All,
            (Self::None, constraint) | (constraint, Self::None) => constraint,
            (Self::Ids(mut left), Self::Ids(right)) => {
                left.extend(right);
                Self::Ids(left)
            }
        }
    }
}

fn file_id_constraint_from_filters(filters: &[Expr]) -> Result<FileIdConstraint> {
    let analyzer = LixFileIdFilterAnalyzer;
    let mut constraint = FileIdConstraint::All;
    for filter in filters {
        if let Some(filter_constraint) = analyzer.analyze(filter)? {
            constraint = constraint.intersect(filter_constraint);
        }
    }
    Ok(constraint)
}

struct LixFileIdFilterAnalyzer;

impl LixFileIdFilterAnalyzer {
    fn supports(&self, expr: &Expr) -> bool {
        self.analyze(expr)
            .is_ok_and(|constraint| constraint.is_some())
    }

    fn analyze(&self, expr: &Expr) -> Result<Option<FileIdConstraint>> {
        ExactStringColumnFilterAnalyzer::new("id").analyze(expr)
    }
}

struct ExactStringColumnFilterAnalyzer {
    column_name: &'static str,
}

impl ExactStringColumnFilterAnalyzer {
    fn new(column_name: &'static str) -> Self {
        Self { column_name }
    }

    fn supports(&self, expr: &Expr) -> bool {
        self.analyze(expr)
            .is_ok_and(|constraint| constraint.is_some())
    }

    fn analyze(&self, expr: &Expr) -> Result<Option<FileIdConstraint>> {
        match expr {
            Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::And => {
                let Some(left) = self.analyze(&binary_expr.left)? else {
                    return Ok(None);
                };
                let Some(right) = self.analyze(&binary_expr.right)? else {
                    return Ok(None);
                };
                Ok(Some(left.intersect(right)))
            }
            Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::Or => {
                let Some(left) = self.analyze(&binary_expr.left)? else {
                    return Ok(None);
                };
                let Some(right) = self.analyze(&binary_expr.right)? else {
                    return Ok(None);
                };
                Ok(Some(left.union(right)))
            }
            Expr::BinaryExpr(binary_expr) => Ok(self
                .value_from_binary_filter(binary_expr)
                .map(|value| FileIdConstraint::Ids(BTreeSet::from([value])))),
            Expr::InList(in_list) => Ok(self
                .values_from_in_list_filter(in_list)
                .map(FileIdConstraint::from_ids)),
            _ => Ok(None),
        }
    }

    fn value_from_binary_filter(&self, binary_expr: &BinaryExpr) -> Option<String> {
        if binary_expr.op != Operator::Eq {
            return None;
        }
        self.value_from_column_literal_filter(&binary_expr.left, &binary_expr.right)
            .or_else(|| {
                self.value_from_column_literal_filter(&binary_expr.right, &binary_expr.left)
            })
    }

    fn values_from_in_list_filter(&self, in_list: &InList) -> Option<Vec<String>> {
        if in_list.negated {
            return None;
        }
        let Expr::Column(column) = in_list.expr.as_ref() else {
            return None;
        };
        if column.name != self.column_name {
            return None;
        }
        let values = in_list
            .list
            .iter()
            .map(string_expr_literal)
            .collect::<Option<Vec<String>>>()?;
        Some(values)
    }

    fn value_from_column_literal_filter(
        &self,
        column_expr: &Expr,
        literal_expr: &Expr,
    ) -> Option<String> {
        let Expr::Column(column) = column_expr else {
            return None;
        };
        if column.name != self.column_name {
            return None;
        }
        string_expr_literal(literal_expr)
    }
}

fn string_expr_literal(expr: &Expr) -> Option<String> {
    let Expr::Literal(literal, _) = expr else {
        return None;
    };
    match literal {
        ScalarValue::Utf8(Some(value))
        | ScalarValue::Utf8View(Some(value))
        | ScalarValue::LargeUtf8(Some(value)) => Some(value.clone()),
        _ => None,
    }
}

fn contains_column(expr: &Expr, column_name: &str) -> bool {
    match expr {
        Expr::Column(column) => column.name == column_name,
        Expr::BinaryExpr(binary_expr) => {
            contains_column(&binary_expr.left, column_name)
                || contains_column(&binary_expr.right, column_name)
        }
        Expr::InList(in_list) => {
            contains_column(&in_list.expr, column_name)
                || in_list
                    .list
                    .iter()
                    .any(|expr| contains_column(expr, column_name))
        }
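        // The remaining arms walk nested expressions (BETWEEN, NOT, IS [NOT] NULL,
        // negation) so a `path` predicate buried inside them still qualifies for
        // exact filter pushdown in supports_filters_pushdown.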
Expr::Between(between) => { contains_column(&between.expr, column_name) || contains_column(&between.low, column_name) || contains_column(&between.high, column_name) } Expr::Not(expr) | Expr::IsNull(expr) | Expr::IsNotNull(expr) => { contains_column(expr, column_name) } Expr::Negative(expr) => contains_column(expr, column_name), _ => false, } } fn validate_lix_file_update_assignments( schema: &SchemaRef, assignments: &[(String, Expr)], ) -> Result<()> { for (column_name, expr) in assignments { schema.field_with_name(column_name).map_err(|_| { DataFusionError::Plan(format!( "UPDATE lix_file failed: column '{column_name}' does not exist" )) })?; if !matches!( column_name.as_str(), "path" | "directory_id" | "name" | "hidden" | "data" | "lixcol_metadata" ) { return Err(DataFusionError::Execution(format!( "UPDATE lix_file cannot stage read-only column '{column_name}'" ))); } if column_name == "data" { reject_non_binary_lix_file_data_assignment(expr)?; } } Ok(()) } fn reject_non_binary_lix_file_data_assignment(expr: &Expr) -> Result<()> { match expr { Expr::Literal(value, _) => { if !scalar_is_binary_or_null(value) { return Err(non_binary_lix_file_data_assignment_error()); } } Expr::Cast(cast) if is_binary_type(&cast.data_type) => { if !logical_expr_is_binary_or_null(&cast.expr) { return Err(non_binary_lix_file_data_assignment_error()); } } _ => {} } Ok(()) } fn non_binary_lix_file_data_assignment_error() -> DataFusionError { lix_file_data_type_error( "UPDATE lix_file", "data", "use X'...' or a binary parameter for file contents", ) } fn filter_lix_file_batch( batch: RecordBatch, filters: &[Arc], ) -> Result { let Some(mask) = evaluate_lix_file_filters(&batch, filters)? else { return Ok(batch); }; Ok(filter_record_batch(&batch, &mask)?) } fn evaluate_lix_file_filters( batch: &RecordBatch, filters: &[Arc], ) -> Result> { if filters.is_empty() { return Ok(None); } let mut combined_mask: Option = None; for filter in filters { let result = filter.evaluate(batch)?; let array = result.into_array(batch.num_rows())?; let bool_array = array .as_any() .downcast_ref::() .ok_or_else(|| { DataFusionError::Execution("lix_file filter was not boolean".to_string()) })?; let normalized = bool_array .iter() .map(|value| Some(value == Some(true))) .collect::(); combined_mask = Some(match combined_mask { Some(existing) => and(&existing, &normalized)?, None => normalized, }); } Ok(combined_mask) } fn dml_count_schema() -> SchemaRef { Arc::new(Schema::new(vec![Field::new( "count", DataType::UInt64, false, )])) } fn dml_count_batch(schema: SchemaRef, count: u64) -> Result { RecordBatch::try_new( schema, vec![Arc::new(UInt64Array::from(vec![count])) as ArrayRef], ) .map_err(DataFusionError::from) } fn record_batch_has_non_null_column(batch: &RecordBatch, column_name: &str) -> Result { for row_index in 0..batch.num_rows() { if optional_scalar_value(batch, row_index, column_name)? 
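            // A column counts as provided only when at least one row carries a
            // non-NULL value; the insert sink uses this to decide whether a batch
            // addresses files by `path` or by `directory_id`/`name`.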
.is_some_and(|value| !value.is_null()) { return Ok(true); } } Ok(false) } fn reject_read_only_lix_file_insert_field( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result<()> { if optional_scalar_value(batch, row_index, column_name)?.is_some_and(|value| !value.is_null()) { return Err(DataFusionError::Execution(format!( "INSERT into lix_file cannot stage read-only column '{column_name}'" ))); } Ok(()) } fn required_string_value( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result { optional_string_value(batch, row_index, column_name)?.ok_or_else(|| { DataFusionError::Execution(format!( "INSERT into lix_file requires non-null text column '{column_name}'" )) }) } fn update_required_string_value( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, row_index: usize, column_name: &str, ) -> Result { update_optional_string_value(batch, assignment_values, row_index, column_name)?.ok_or_else( || { DataFusionError::Execution(format!( "UPDATE lix_file requires non-null text column '{column_name}'" )) }, ) } fn update_optional_string_value( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, row_index: usize, column_name: &str, ) -> Result> { match assignment_values.assigned_or_existing_cell(batch, row_index, column_name)? { InsertCell::Omitted | InsertCell::Provided(SqlCell::Null) => Ok(None), InsertCell::Provided(SqlCell::Value( ScalarValue::Utf8(Some(value)) | ScalarValue::Utf8View(Some(value)) | ScalarValue::LargeUtf8(Some(value)), )) => Ok(Some(value)), InsertCell::Provided(SqlCell::Value(other)) => Err(DataFusionError::Execution(format!( "UPDATE lix_file expected text-compatible column '{column_name}', got {other:?}" ))), } } fn update_optional_metadata_value( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, row_index: usize, column_name: &str, context: &str, ) -> Result> { update_optional_string_value(batch, assignment_values, row_index, column_name)? .map(|value| { let metadata = parse_row_metadata_value(&value, context) .map_err(super::error::lix_error_to_datafusion_error)?; TransactionJson::from_value(metadata, &format!("{context} metadata")) .map_err(super::error::lix_error_to_datafusion_error) }) .transpose() } fn update_optional_bool_value( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, row_index: usize, column_name: &str, ) -> Result> { match assignment_values.assigned_or_existing_cell(batch, row_index, column_name)? { InsertCell::Omitted | InsertCell::Provided(SqlCell::Null) => Ok(None), InsertCell::Provided(SqlCell::Value(ScalarValue::Boolean(Some(value)))) => Ok(Some(value)), InsertCell::Provided(SqlCell::Value(other)) => Err(DataFusionError::Execution(format!( "UPDATE lix_file expected boolean column '{column_name}', got {other:?}" ))), } } fn update_required_binary_value( _batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, row_index: usize, column_name: &str, ) -> Result> { match assignment_values.assigned_cell(row_index, column_name)? 
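    // UPDATE must assign real binary bytes: an unassigned or NULL `data` cell is
    // rejected below rather than being interpreted as "clear the file contents".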
{ UpdateCell::Unassigned | UpdateCell::Assigned(SqlCell::Null) => { Err(lix_file_data_type_error( "UPDATE lix_file", column_name, "use X'' for an empty file or omit data to leave contents unchanged", )) } UpdateCell::Assigned(SqlCell::Value(ScalarValue::Binary(Some(value)))) | UpdateCell::Assigned(SqlCell::Value(ScalarValue::LargeBinary(Some(value)))) => Ok(value), UpdateCell::Assigned(SqlCell::Value(ScalarValue::FixedSizeBinary(_, Some(value)))) => { Ok(value) } UpdateCell::Assigned(SqlCell::Value(other)) => Err(lix_file_data_type_error_with_value( "UPDATE lix_file", column_name, &other, "use X'...' or a binary parameter for file contents", )), } } fn optional_string_value( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result> { match optional_scalar_value(batch, row_index, column_name)? { None | Some(ScalarValue::Null) | Some(ScalarValue::Utf8(None)) | Some(ScalarValue::Utf8View(None)) | Some(ScalarValue::LargeUtf8(None)) => Ok(None), Some(ScalarValue::Utf8(Some(value))) | Some(ScalarValue::Utf8View(Some(value))) | Some(ScalarValue::LargeUtf8(Some(value))) => Ok(Some(value)), Some(other) => Err(DataFusionError::Execution(format!( "INSERT into lix_file expected text-compatible column '{column_name}', got {other:?}" ))), } } fn optional_metadata_value( batch: &RecordBatch, row_index: usize, column_name: &str, context: &str, ) -> Result> { optional_string_value(batch, row_index, column_name)? .map(|value| { let metadata = parse_row_metadata_value(&value, context) .map_err(super::error::lix_error_to_datafusion_error)?; TransactionJson::from_value(metadata, &format!("{context} metadata")) .map_err(super::error::lix_error_to_datafusion_error) }) .transpose() } fn optional_bool_value( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result> { match optional_scalar_value(batch, row_index, column_name)? { None | Some(ScalarValue::Null) | Some(ScalarValue::Boolean(None)) => Ok(None), Some(ScalarValue::Boolean(Some(value))) => Ok(Some(value)), Some(other) => Err(DataFusionError::Execution(format!( "INSERT into lix_file expected boolean column '{column_name}', got {other:?}" ))), } } fn insert_optional_binary_value( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result>> { match optional_scalar_value(batch, row_index, column_name)? { None => Ok(None), Some(ScalarValue::Null) | Some(ScalarValue::Binary(None)) | Some(ScalarValue::LargeBinary(None)) | Some(ScalarValue::FixedSizeBinary(_, None)) => Err(lix_file_data_type_error( "INSERT into lix_file", column_name, "use X'' for an empty file or omit data to create a descriptor without contents", )), Some(ScalarValue::Binary(Some(value))) | Some(ScalarValue::LargeBinary(Some(value))) => { Ok(Some(value)) } Some(ScalarValue::FixedSizeBinary(_, Some(value))) => Ok(Some(value)), Some(other) => Err(lix_file_data_type_error_with_value( "INSERT into lix_file", column_name, &other, "use X'...' 
or a binary parameter for file contents", )), } } fn optional_scalar_value( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result> { let schema = batch.schema(); let column_index = match schema.index_of(column_name) { Ok(column_index) => column_index, Err(_) => return Ok(None), }; if row_index >= batch.num_rows() { return Err(DataFusionError::Execution(format!( "row index {row_index} out of bounds for lix_file batch with {} rows", batch.num_rows() ))); } ScalarValue::try_from_array(batch.column(column_index).as_ref(), row_index) .map(Some) .map_err(|error| { DataFusionError::Execution(format!( "failed to decode lix_file column '{column_name}' at row {row_index}: {error}" )) }) } fn lix_file_schema() -> SchemaRef { Arc::new(Schema::new(vec![ Field::new("id", DataType::Utf8, true), Field::new("path", DataType::Utf8, false), Field::new("directory_id", DataType::Utf8, true), Field::new("name", DataType::Utf8, false), Field::new("hidden", DataType::Boolean, true), Field::new("data", DataType::Binary, true), json_field("lixcol_entity_id", false), Field::new("lixcol_schema_key", DataType::Utf8, false), Field::new("lixcol_file_id", DataType::Utf8, true), Field::new("lixcol_global", DataType::Boolean, true), Field::new("lixcol_change_id", DataType::Utf8, true), Field::new("lixcol_created_at", DataType::Utf8, true), Field::new("lixcol_updated_at", DataType::Utf8, true), Field::new("lixcol_commit_id", DataType::Utf8, true), Field::new("lixcol_untracked", DataType::Boolean, true), json_field("lixcol_metadata", true), ])) } fn lix_file_by_version_schema() -> SchemaRef { let mut fields = lix_file_schema() .fields() .iter() .map(|field| field.as_ref().clone()) .collect::>(); fields.push(Field::new("lixcol_version_id", DataType::Utf8, false)); Arc::new(Schema::new(fields)) } fn datafusion_error_to_lix_error(error: DataFusionError) -> LixError { super::error::datafusion_error_to_lix_error(error) } fn lix_error_to_datafusion_error(error: LixError) -> DataFusionError { super::error::lix_error_to_datafusion_error(error) } #[cfg(test)] mod tests { use std::collections::{BTreeMap, BTreeSet}; use std::sync::Arc; use async_trait::async_trait; use datafusion::arrow::array::{ArrayRef, BinaryArray, BooleanArray, StringArray}; use datafusion::arrow::datatypes::{DataType, Field, Schema}; use datafusion::arrow::record_batch::RecordBatch; use datafusion::common::{Column, ScalarValue}; use datafusion::execution::TaskContext; use datafusion::logical_expr::expr::InList; use datafusion::logical_expr::lit; use datafusion::logical_expr::{BinaryExpr, Expr, Operator}; use serde_json::Value as JsonValue; use crate::binary_cas::BlobDataReader; use crate::functions::{ FunctionProvider, FunctionProviderHandle, SharedFunctionProvider, SystemFunctionProvider, }; use crate::live_state::MaterializedLiveStateRow; use crate::live_state::{LiveStateReader, LiveStateRowRequest, LiveStateScanRequest}; use crate::sql2::dml::InsertSink; use crate::sql2::{SqlWriteContext, SqlWriteExecutionContext}; use crate::transaction::types::{ TransactionJson, TransactionWrite, TransactionWriteMode, TransactionWriteOutcome, }; use crate::LixError; use super::{ derive_directory_path_for, lix_file_delete_stage_from_batch, lix_file_insert_stage_from_batch, lix_file_insert_stage_from_batch_with_path_resolvers, lix_file_write_rows_from_batch, DirectoryDescriptorRecord, LixFileInsertSink, VersionBinding, }; fn test_id_generator(ids: &'static [&'static str]) -> impl FnMut() -> String { let mut ids = ids.iter(); move || ids.next().expect("test id 
should exist").to_string() } fn test_functions() -> FunctionProviderHandle { SharedFunctionProvider::new( Box::new(SystemFunctionProvider) as Box ) } fn string_literal(value: &str) -> Expr { Expr::Literal(ScalarValue::Utf8(Some(value.to_string())), None) } fn column(name: &str) -> Expr { Expr::Column(Column::from_name(name)) } fn eq_filter(column_name: &str, value: &str) -> Expr { Expr::BinaryExpr(BinaryExpr::new( Box::new(column(column_name)), Operator::Eq, Box::new(string_literal(value)), )) } #[test] fn file_id_filters_support_string_id_predicates() { let analyzer = super::LixFileIdFilterAnalyzer; let constraint = analyzer .analyze(&Expr::InList(InList::new( Box::new(column("id")), vec![string_literal("file-b"), string_literal("file-a")], false, ))) .unwrap() .unwrap(); assert_eq!( constraint, super::FileIdConstraint::Ids(BTreeSet::from([ "file-a".to_string(), "file-b".to_string() ])) ); assert!(analyzer.supports(&eq_filter("id", "file-a"))); assert!(analyzer.supports(&Expr::BinaryExpr(BinaryExpr::new( Box::new(string_literal("file-a")), Operator::Eq, Box::new(column("id")), )))); } #[test] fn file_id_filters_intersect_and_union_boolean_predicates() { let analyzer = super::LixFileIdFilterAnalyzer; let left = Expr::InList(InList::new( Box::new(column("id")), vec![string_literal("file-a"), string_literal("file-b")], false, )); let right = Expr::InList(InList::new( Box::new(column("id")), vec![string_literal("file-b"), string_literal("file-c")], false, )); let and_constraint = analyzer .analyze(&Expr::BinaryExpr(BinaryExpr::new( Box::new(left.clone()), Operator::And, Box::new(right.clone()), ))) .unwrap() .unwrap(); assert_eq!( and_constraint, super::FileIdConstraint::Ids(BTreeSet::from(["file-b".to_string()])) ); let or_constraint = analyzer .analyze(&Expr::BinaryExpr(BinaryExpr::new( Box::new(left), Operator::Or, Box::new(right), ))) .unwrap() .unwrap(); assert_eq!( or_constraint, super::FileIdConstraint::Ids(BTreeSet::from([ "file-a".to_string(), "file-b".to_string(), "file-c".to_string() ])) ); } #[test] fn file_id_filters_detect_contradictions() { let filters = vec![Expr::BinaryExpr(BinaryExpr::new( Box::new(eq_filter("id", "file-a")), Operator::And, Box::new(eq_filter("id", "file-b")), ))]; assert_eq!( super::file_id_constraint_from_filters(&filters).unwrap(), super::FileIdConstraint::None ); } #[test] fn file_id_filters_ignore_non_id_and_negated_predicates() { let analyzer = super::LixFileIdFilterAnalyzer; assert!(!analyzer.supports(&eq_filter("name", "readme.md"))); assert!(!analyzer.supports(&Expr::InList(InList::new( Box::new(column("id")), vec![string_literal("file-a")], true, )))); } fn lix_file_update_stage_from_batch_for_test( batch: &RecordBatch, version_binding: Option<&str>, update_columns: super::LixFileUpdateColumns, path_resolvers: Option<&mut BTreeMap>, generate_directory_id: &mut dyn FnMut() -> String, ) -> datafusion::common::Result { let mut columns = Vec::new(); if update_columns.path { columns.extend(["path", "hidden"]); } if update_columns.data { columns.push("data"); } if update_columns.descriptor { columns.extend(["directory_id", "name", "hidden"]); } let assignment_values = super::UpdateAssignmentValues::from_batch_columns(batch, &columns); super::lix_file_update_stage_from_batch( batch, &assignment_values, version_binding, update_columns, path_resolvers, generate_directory_id, ) } #[derive(Default)] struct CapturingWriteContext { rows: Vec, writes: Vec, } #[async_trait] impl BlobDataReader for CapturingWriteContext { async fn load_bytes_many( &self, hashes: 
&[crate::binary_cas::BlobHash], ) -> Result { Ok(crate::binary_cas::BlobBytesBatch::new(vec![ None; hashes.len() ])) } } #[async_trait] impl SqlWriteExecutionContext for CapturingWriteContext { fn active_version_id(&self) -> &str { "version-b" } fn functions(&self) -> FunctionProviderHandle { test_functions() } fn list_visible_schemas(&self) -> Result, LixError> { Ok(Vec::new()) } async fn load_bytes_many( &mut self, hashes: &[crate::binary_cas::BlobHash], ) -> Result { BlobDataReader::load_bytes_many(self, hashes).await } async fn scan_live_state( &mut self, _request: &LiveStateScanRequest, ) -> Result, LixError> { Ok(self.rows.clone()) } async fn load_version_head( &mut self, version_id: &str, ) -> Result, LixError> { if version_id == "ghost-version" { return Ok(None); } Ok(Some(format!("commit-{version_id}"))) } async fn stage_write( &mut self, write: TransactionWrite, ) -> Result { self.writes.push(write); Ok(TransactionWriteOutcome { count: 0 }) } } #[derive(Default)] struct RowsLiveStateReader { rows: Vec, } #[async_trait] impl LiveStateReader for RowsLiveStateReader { async fn scan_rows( &self, _request: &LiveStateScanRequest, ) -> Result, LixError> { Ok(self.rows.clone()) } async fn load_row( &self, _request: &LiveStateRowRequest, ) -> Result, LixError> { Ok(None) } } fn live_directory_row( entity_id: &str, version_id: &str, snapshot_content: &str, ) -> MaterializedLiveStateRow { MaterializedLiveStateRow { entity_id: crate::entity_identity::EntityIdentity::single(entity_id), schema_key: super::DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string(), file_id: None, snapshot_content: Some(snapshot_content.to_string()), metadata: None, deleted: false, version_id: version_id.to_string(), change_id: Some(format!("change-{entity_id}")), commit_id: Some(format!("commit-{entity_id}")), global: false, untracked: false, created_at: "2026-04-23T00:00:00Z".to_string(), updated_at: "2026-04-23T01:00:00Z".to_string(), } } fn live_file_row( entity_id: &str, version_id: &str, snapshot_content: &str, ) -> MaterializedLiveStateRow { MaterializedLiveStateRow { entity_id: crate::entity_identity::EntityIdentity::single(entity_id), schema_key: super::FILE_DESCRIPTOR_SCHEMA_KEY.to_string(), file_id: None, snapshot_content: Some(snapshot_content.to_string()), metadata: None, deleted: false, version_id: version_id.to_string(), change_id: Some(format!("change-{entity_id}")), commit_id: Some(format!("commit-{entity_id}")), global: false, untracked: false, created_at: "2026-04-23T00:00:00Z".to_string(), updated_at: "2026-04-23T01:00:00Z".to_string(), } } fn string_column(values: Vec>) -> ArrayRef { Arc::new(StringArray::from(values)) as ArrayRef } fn file_insert_batch(include_version: bool, global: bool) -> RecordBatch { let mut fields = vec![ Field::new("id", DataType::Utf8, false), Field::new("directory_id", DataType::Utf8, true), Field::new("name", DataType::Utf8, false), Field::new("hidden", DataType::Boolean, false), Field::new("lixcol_global", DataType::Boolean, false), Field::new("lixcol_metadata", DataType::Utf8, true), ]; let mut columns = vec![ string_column(vec![Some("file-readme")]), string_column(vec![Some("dir-docs")]), string_column(vec![Some("readme.md")]), Arc::new(BooleanArray::from(vec![false])) as ArrayRef, Arc::new(BooleanArray::from(vec![global])) as ArrayRef, string_column(vec![Some("{\"source\":\"file\"}")]), ]; if include_version { fields.push(Field::new("lixcol_version_id", DataType::Utf8, false)); columns.push(string_column(vec![Some("version-b")])); } 
RecordBatch::try_new(Arc::new(Schema::new(fields)), columns).expect("file insert batch") } fn data_insert_batch() -> RecordBatch { RecordBatch::try_new( Arc::new(Schema::new(vec![ Field::new("id", DataType::Utf8, false), Field::new("directory_id", DataType::Utf8, true), Field::new("name", DataType::Utf8, false), Field::new("hidden", DataType::Boolean, false), Field::new("data", DataType::Binary, true), Field::new("lixcol_version_id", DataType::Utf8, false), ])), vec![ string_column(vec![Some("file-readme")]), string_column(vec![Some("dir-docs")]), string_column(vec![Some("readme.md")]), Arc::new(BooleanArray::from(vec![false])) as ArrayRef, Arc::new(BinaryArray::from_vec(vec![b"hello"])) as ArrayRef, string_column(vec![Some("version-b")]), ], ) .expect("file data batch") } fn path_data_insert_batch() -> RecordBatch { RecordBatch::try_new( Arc::new(Schema::new(vec![ Field::new("id", DataType::Utf8, false), Field::new("path", DataType::Utf8, false), Field::new("hidden", DataType::Boolean, false), Field::new("data", DataType::Binary, true), Field::new("lixcol_version_id", DataType::Utf8, false), ])), vec![ string_column(vec![Some("file-readme")]), string_column(vec![Some("/docs/guides/readme.md")]), Arc::new(BooleanArray::from(vec![false])) as ArrayRef, Arc::new(BinaryArray::from_vec(vec![b"hello"])) as ArrayRef, string_column(vec![Some("version-b")]), ], ) .expect("file path data batch") } fn path_update_batch() -> RecordBatch { RecordBatch::try_new( Arc::new(Schema::new(vec![ Field::new("id", DataType::Utf8, false), Field::new("path", DataType::Utf8, false), Field::new("hidden", DataType::Boolean, false), Field::new("data", DataType::Binary, true), Field::new("lixcol_version_id", DataType::Utf8, false), ])), vec![ string_column(vec![Some("file-readme")]), string_column(vec![Some("/docs/renamed.md")]), Arc::new(BooleanArray::from(vec![false])) as ArrayRef, Arc::new(BinaryArray::from_vec(vec![b"hello"])) as ArrayRef, string_column(vec![Some("version-b")]), ], ) .expect("file path update batch") } fn file_delete_batch() -> RecordBatch { RecordBatch::try_new( Arc::new(Schema::new(vec![ Field::new("id", DataType::Utf8, false), Field::new("lixcol_version_id", DataType::Utf8, false), ])), vec![ string_column(vec![Some("file-readme")]), string_column(vec![Some("version-b")]), ], ) .expect("file delete batch") } #[test] fn derives_nested_directory_paths() { let root = DirectoryDescriptorRecord { id: "dir-docs".to_string(), parent_id: None, name: "docs".to_string(), version_id: "version-a".to_string(), }; let child = DirectoryDescriptorRecord { id: "dir-guides".to_string(), parent_id: Some("dir-docs".to_string()), name: "guides".to_string(), version_id: "version-a".to_string(), }; let mut records = BTreeMap::new(); records.insert(root.id.clone(), &root); records.insert(child.id.clone(), &child); let mut paths = BTreeMap::new(); assert_eq!( derive_directory_path_for( "version-a", "dir-guides", &records, &mut paths, &mut BTreeSet::new() ) .expect("path derivation should succeed"), Some("/docs/guides/".to_string()) ); } #[tokio::test] async fn file_projection_rejects_unresolved_non_root_directory_id() { let blob_reader = Arc::new(CapturingWriteContext::default()) as Arc; let error = super::lix_file_record_batch( &super::lix_file_schema(), &blob_reader, vec![live_file_row( "file-readme", "version-b", "{\"id\":\"file-readme\",\"directory_id\":\"missing-dir\",\"name\":\"readme.md\",\"hidden\":false}", )], ) .await .expect_err("unresolved non-root directory_id should not project as root path"); 
assert_eq!(error.code, LixError::CODE_FOREIGN_KEY); assert!(error.message.contains("missing-dir")); } #[test] fn decodes_file_insert_into_lix_state_write_row() { let batch = file_insert_batch(true, false); let rows = lix_file_write_rows_from_batch(&batch, None).expect("decode file insert"); assert_eq!(rows.len(), 1); assert_eq!( rows[0].entity_id.as_ref(), Some(&crate::entity_identity::EntityIdentity::single( "file-readme" )) ); assert_eq!(rows[0].schema_key, "lix_file_descriptor"); assert_eq!(rows[0].version_id, "version-b"); assert_eq!( rows[0].metadata.as_ref(), Some(&TransactionJson::from_value_for_test( serde_json::json!({"source": "file"}) )) ); let snapshot = rows[0].snapshot.as_ref().expect("descriptor snapshot JSON"); assert_eq!(snapshot["id"], "file-readme"); assert_eq!(snapshot["directory_id"], "dir-docs"); assert_eq!(snapshot["name"], "readme.md"); assert_eq!(snapshot["hidden"], false); } #[test] fn active_file_insert_defaults_version_id() { let batch = file_insert_batch(false, false); let rows = lix_file_write_rows_from_batch(&batch, Some("version-a")).expect("decode file insert"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].version_id, "version-a"); } #[test] fn by_version_file_insert_requires_version_id_for_non_global_rows() { let batch = file_insert_batch(false, false); let error = lix_file_write_rows_from_batch(&batch, None).expect_err("version id is required"); assert!( error.to_string().contains("requires lixcol_version_id"), "unexpected error: {error}" ); } #[test] fn file_insert_rejects_global_with_non_global_version_id() { let error = lix_file_write_rows_from_batch(&file_insert_batch(true, true), None) .expect_err("global file write should reject conflicting version id"); assert!( error .to_string() .contains("cannot set lixcol_global=true with non-global lixcol_version_id"), "unexpected error: {error}" ); } #[test] fn file_update_accepts_path_assignment() { super::validate_lix_file_update_assignments( &super::lix_file_schema(), &[("path".to_string(), lit("/docs/renamed.md"))], ) .expect("path should be writable for update"); } #[test] fn file_path_update_stages_descriptor_from_new_path() { let mut resolvers = BTreeMap::new(); resolvers.insert( super::filesystem_storage_scope_key("version-b", false, false, None), super::DirectoryPathResolver::from_existing([( "/docs/".to_string(), "dir-docs".to_string(), )]) .expect("directory resolver should seed"), ); let staged = lix_file_update_stage_from_batch_for_test( &path_update_batch(), None, super::LixFileUpdateColumns { path: true, data: false, descriptor: false, }, Some(&mut resolvers), &mut test_id_generator(&["should-not-be-used"]), ) .expect("decode file path update"); assert_eq!(staged.count, 1); assert_eq!(staged.file_data_writes.len(), 0); assert_eq!(staged.state_rows.len(), 1); let descriptor = staged .state_rows .iter() .find(|row| row.schema_key == "lix_file_descriptor") .expect("file descriptor row should be staged"); let snapshot: JsonValue = descriptor.snapshot.as_ref().unwrap().value().clone(); assert_eq!(snapshot["id"], "file-readme"); assert_eq!(snapshot["directory_id"], "dir-docs"); assert_eq!(snapshot["name"], "renamed.md"); assert_eq!(snapshot["hidden"], false); } #[test] fn file_path_update_preserves_existing_data_unless_data_is_assigned() { let mut resolvers = BTreeMap::new(); resolvers.insert( super::filesystem_storage_scope_key("version-b", false, false, None), super::DirectoryPathResolver::from_existing([( "/docs/".to_string(), "dir-docs".to_string(), )]) .expect("directory resolver should seed"), 
); let staged = lix_file_update_stage_from_batch_for_test( &path_update_batch(), None, super::LixFileUpdateColumns { path: true, data: false, descriptor: false, }, Some(&mut resolvers), &mut test_id_generator(&["should-not-be-used"]), ) .expect("decode file path update"); assert!( staged.file_data_writes.is_empty(), "path-only update should not rewrite file data" ); assert!( staged .state_rows .iter() .all(|row| row.schema_key != "lix_binary_blob_ref"), "path-only update should not rewrite the blob ref" ); } #[tokio::test] async fn file_path_update_seeds_resolver_from_visible_directory_state() { let mut resolvers = super::file_path_resolvers_from_live_state( Arc::new(RowsLiveStateReader { rows: vec![live_directory_row( "dir-docs", "version-b", "{\"id\":\"dir-docs\",\"parent_id\":null,\"name\":\"docs\"}", )], }) as Arc, Some("version-b"), ) .await .expect("directory state should seed path resolver"); let staged = lix_file_update_stage_from_batch_for_test( &path_update_batch(), None, super::LixFileUpdateColumns { path: true, data: false, descriptor: false, }, Some(&mut resolvers), &mut test_id_generator(&["should-not-be-used"]), ) .expect("decode file path update"); assert_eq!(staged.count, 1); assert_eq!(staged.state_rows.len(), 1); assert!(staged .state_rows .iter() .all(|row| row.schema_key != "lix_directory_descriptor")); let snapshot: JsonValue = staged.state_rows[0] .snapshot .as_ref() .unwrap() .value() .clone(); assert_eq!(snapshot["directory_id"], "dir-docs"); assert_eq!(snapshot["name"], "renamed.md"); } #[tokio::test] async fn file_path_update_stages_only_missing_parent_directories() { let mut resolvers = super::file_path_resolvers_from_live_state( Arc::new(RowsLiveStateReader::default()) as Arc, Some("version-b"), ) .await .expect("empty directory state should seed path resolver"); let staged = lix_file_update_stage_from_batch_for_test( &path_update_batch(), None, super::LixFileUpdateColumns { path: true, data: false, descriptor: false, }, Some(&mut resolvers), &mut test_id_generator(&["dir-generated-docs"]), ) .expect("decode file path update"); assert_eq!(staged.count, 1); assert_eq!(staged.state_rows.len(), 2); assert_eq!( staged .state_rows .iter() .filter(|row| row.schema_key == "lix_directory_descriptor") .count(), 1 ); let directory = staged .state_rows .iter() .find(|row| row.schema_key == "lix_directory_descriptor") .expect("missing /docs/ directory should be staged"); assert_eq!( directory.entity_id.as_ref(), Some(&crate::entity_identity::EntityIdentity::single( "dir-generated-docs" )) ); let descriptor = staged .state_rows .iter() .find(|row| row.schema_key == "lix_file_descriptor") .expect("file descriptor should be staged"); let snapshot: JsonValue = descriptor.snapshot.as_ref().unwrap().value().clone(); assert_eq!(snapshot["directory_id"], "dir-generated-docs"); } #[test] fn file_path_update_with_data_assignment_stages_blob_ref_and_payload() { let mut resolvers = BTreeMap::new(); resolvers.insert( super::filesystem_storage_scope_key("version-b", false, false, None), super::DirectoryPathResolver::from_existing([( "/docs/".to_string(), "dir-docs".to_string(), )]) .expect("directory resolver should seed"), ); let staged = lix_file_update_stage_from_batch_for_test( &path_update_batch(), None, super::LixFileUpdateColumns { path: true, data: true, descriptor: false, }, Some(&mut resolvers), &mut test_id_generator(&["should-not-be-used"]), ) .expect("decode file path and data update"); assert_eq!(staged.count, 1); assert_eq!(staged.file_data_writes.len(), 1); 
assert_eq!(staged.file_data_writes[0].file_id, "file-readme"); assert_eq!(staged.file_data_writes[0].data, b"hello"); assert!(staged .state_rows .iter() .any(|row| row.schema_key == "lix_file_descriptor")); assert!(staged .state_rows .iter() .any(|row| row.schema_key == "lix_binary_blob_ref")); } #[test] fn file_data_update_without_path_ignores_materialized_path_column() { let staged = lix_file_update_stage_from_batch_for_test( &path_update_batch(), None, super::LixFileUpdateColumns { path: false, data: true, descriptor: false, }, None, &mut test_id_generator(&["should-not-be-used"]), ) .expect("decode file data update"); assert_eq!(staged.count, 1); assert_eq!(staged.file_data_writes.len(), 1); assert_eq!(staged.file_data_writes[0].file_id, "file-readme"); assert_eq!(staged.state_rows.len(), 1); assert_eq!(staged.state_rows[0].schema_key, "lix_binary_blob_ref"); } #[test] fn file_insert_stages_non_null_data() { let batch = data_insert_batch(); let staged = lix_file_insert_stage_from_batch(&batch, None).expect("decode file data"); assert_eq!(staged.count, 1); assert_eq!(staged.state_rows.len(), 2); assert!(staged .state_rows .iter() .any(|row| row.schema_key == "lix_file_descriptor")); let blob_ref_row = staged .state_rows .iter() .find(|row| row.schema_key == "lix_binary_blob_ref") .expect("data insert should stage blob ref row"); assert_eq!( blob_ref_row.entity_id.as_ref(), Some(&crate::entity_identity::EntityIdentity::single( "file-readme" )) ); assert_eq!(blob_ref_row.file_id.as_deref(), Some("file-readme")); assert_eq!(staged.file_data_writes.len(), 1); assert_eq!(staged.file_data_writes[0].file_id, "file-readme"); assert_eq!(staged.file_data_writes[0].version_id, "version-b"); assert_eq!(staged.file_data_writes[0].data, b"hello"); } #[test] fn file_delete_with_blob_ref_stages_descriptor_and_blob_ref_tombstones() { let batch = file_delete_batch(); let staged = lix_file_delete_stage_from_batch( &batch, None, &BTreeSet::from(["file-readme".to_string()]), ) .expect("decode file delete"); assert_eq!(staged.count, 1); assert_eq!(staged.state_rows.len(), 2); let descriptor = staged .state_rows .iter() .find(|row| row.schema_key == "lix_file_descriptor") .expect("file descriptor tombstone should be staged"); assert_eq!( descriptor.entity_id.as_ref(), Some(&crate::entity_identity::EntityIdentity::single( "file-readme" )) ); assert_eq!(descriptor.file_id, None); assert_eq!(descriptor.snapshot, None); let blob_ref = staged .state_rows .iter() .find(|row| row.schema_key == "lix_binary_blob_ref") .expect("blob ref tombstone should be staged"); assert_eq!( blob_ref.entity_id.as_ref(), Some(&crate::entity_identity::EntityIdentity::single( "file-readme" )) ); assert_eq!(blob_ref.file_id.as_deref(), Some("file-readme")); assert_eq!(blob_ref.snapshot, None); } #[test] fn file_delete_without_blob_ref_stages_only_descriptor_tombstone() { let batch = file_delete_batch(); let staged = lix_file_delete_stage_from_batch(&batch, None, &BTreeSet::new()) .expect("decode file delete"); assert_eq!(staged.count, 1); assert_eq!(staged.state_rows.len(), 1); assert_eq!(staged.state_rows[0].schema_key, "lix_file_descriptor"); assert_eq!( staged.state_rows[0].entity_id.as_ref(), Some(&crate::entity_identity::EntityIdentity::single( "file-readme" )) ); assert_eq!(staged.state_rows[0].snapshot, None); } #[test] fn file_path_insert_reuses_existing_parent_directory() { let mut resolvers = BTreeMap::new(); resolvers.insert( super::filesystem_storage_scope_key("version-b", false, false, None), 
super::DirectoryPathResolver::from_existing([ ("/docs/".to_string(), "dir-docs".to_string()), ("/docs/guides/".to_string(), "dir-guides".to_string()), ]) .expect("directory resolver should seed"), ); let staged = lix_file_insert_stage_from_batch_with_path_resolvers( &path_data_insert_batch(), None, "lix_file", &mut resolvers, &mut test_id_generator(&["should-not-be-used"]), true, ) .expect("decode file path data"); assert_eq!(staged.count, 1); assert_eq!(staged.file_data_writes.len(), 1); assert_eq!(staged.file_data_writes[0].file_id, "file-readme"); assert_eq!(staged.state_rows.len(), 2); let descriptor = staged .state_rows .iter() .find(|row| row.schema_key == "lix_file_descriptor") .expect("file descriptor row should be staged"); let snapshot: JsonValue = descriptor.snapshot.as_ref().unwrap().value().clone(); assert_eq!(snapshot["id"], "file-readme"); assert_eq!(snapshot["directory_id"], "dir-guides"); assert_eq!(snapshot["name"], "readme.md"); } #[test] fn file_path_insert_stages_missing_parent_directories_once() { let mut resolvers = BTreeMap::new(); let staged = lix_file_insert_stage_from_batch_with_path_resolvers( &path_data_insert_batch(), None, "lix_file", &mut resolvers, &mut test_id_generator(&["dir-generated-docs", "dir-generated-guides"]), true, ) .expect("decode file path data"); assert_eq!(staged.count, 1); assert_eq!(staged.state_rows.len(), 4); let directory_rows = staged .state_rows .iter() .filter(|row| row.schema_key == "lix_directory_descriptor") .collect::>(); assert_eq!(directory_rows.len(), 2); let descriptor = staged .state_rows .iter() .find(|row| row.schema_key == "lix_file_descriptor") .expect("file descriptor row should be staged"); let snapshot: JsonValue = descriptor.snapshot.as_ref().unwrap().value().clone(); assert_eq!(snapshot["directory_id"], "dir-generated-guides"); } #[tokio::test] async fn file_insert_sink_stages_decoded_lix_state_rows() { let batch = file_insert_batch(true, false); let mut write_context = CapturingWriteContext::default(); let write_ctx = SqlWriteContext::new(&mut write_context); let sink = LixFileInsertSink::new( batch.schema(), write_ctx, test_functions(), VersionBinding::explicit(), false, ); let count = sink .write_batches(vec![batch], &Arc::new(TaskContext::default())) .await .expect("file insert sink should stage"); assert_eq!(count, 1); let writes = &write_context.writes; assert_eq!(writes.len(), 1); match &writes[0] { TransactionWrite::Rows { mode, rows } => { assert_eq!(*mode, TransactionWriteMode::Insert); assert_eq!(rows.len(), 1); assert_eq!( rows[0].entity_id.as_ref(), Some(&crate::entity_identity::EntityIdentity::single( "file-readme" )) ); assert_eq!(rows[0].schema_key, "lix_file_descriptor"); } other => panic!("expected insert staged write, got {other:?}"), } } #[tokio::test] async fn file_insert_sink_stages_file_data_writes() { let batch = data_insert_batch(); let mut write_context = CapturingWriteContext::default(); let write_ctx = SqlWriteContext::new(&mut write_context); let sink = LixFileInsertSink::new( batch.schema(), write_ctx, test_functions(), VersionBinding::explicit(), true, ); let count = sink .write_batches(vec![batch], &Arc::new(TaskContext::default())) .await .expect("file insert sink should stage data"); assert_eq!(count, 1); let writes = &write_context.writes; assert_eq!(writes.len(), 1); match &writes[0] { TransactionWrite::RowsWithFileData { mode, rows, file_data, count, .. 
} => { assert_eq!(*mode, TransactionWriteMode::Insert); assert_eq!(*count, 1); assert_eq!(rows.len(), 2); assert!(rows .iter() .any(|row| row.schema_key == "lix_file_descriptor")); assert!(rows .iter() .any(|row| row.schema_key == "lix_binary_blob_ref")); assert_eq!(file_data.len(), 1); assert_eq!(file_data[0].file_id, "file-readme"); assert_eq!(file_data[0].data, b"hello"); } other => panic!("expected insert with file data staged write, got {other:?}"), } } #[tokio::test] async fn file_insert_sink_seeds_path_resolver_from_live_state() { let batch = path_data_insert_batch(); let mut write_context = CapturingWriteContext { rows: vec![ live_directory_row( "dir-docs", "version-b", "{\"id\":\"dir-docs\",\"parent_id\":null,\"name\":\"docs\"}", ), live_directory_row( "dir-guides", "version-b", "{\"id\":\"dir-guides\",\"parent_id\":\"dir-docs\",\"name\":\"guides\"}", ), ], writes: Vec::new(), }; let write_ctx = SqlWriteContext::new(&mut write_context); let sink = LixFileInsertSink::new( batch.schema(), write_ctx, test_functions(), VersionBinding::explicit(), true, ); let count = sink .write_batches(vec![batch], &Arc::new(TaskContext::default())) .await .expect("file insert sink should stage path data"); assert_eq!(count, 1); let writes = &write_context.writes; assert_eq!(writes.len(), 1); match &writes[0] { TransactionWrite::RowsWithFileData { rows, file_data, count, .. } => { assert_eq!(*count, 1); assert_eq!(file_data.len(), 1); assert_eq!(file_data[0].file_id, "file-readme"); let descriptor = rows .iter() .find(|row| row.schema_key == "lix_file_descriptor") .expect("file descriptor row should be staged"); let snapshot: JsonValue = descriptor.snapshot.as_ref().unwrap().value().clone(); assert_eq!(snapshot["directory_id"], "dir-guides"); } other => panic!("expected insert with file data staged write, got {other:?}"), } } } ================================================ FILE: packages/engine/src/sql2/filesystem_planner.rs ================================================ #![allow(dead_code)] use std::collections::{BTreeMap, BTreeSet}; use serde::Deserialize; use serde_json::{json, Map as JsonMap, Value as JsonValue}; use crate::common::{ directory_ancestor_paths, directory_name_from_path, normalize_directory_path, parent_directory_path, stable_content_fingerprint_hex, ParsedFilePath, }; use crate::entity_identity::EntityIdentity; use crate::live_state::MaterializedLiveStateRow; use crate::LixError; use super::filesystem_visibility::VisibleFilesystem; use crate::transaction::types::{TransactionFileData, TransactionJson, TransactionWriteRow}; pub(crate) const FILE_DESCRIPTOR_SCHEMA_KEY: &str = "lix_file_descriptor"; pub(crate) const DIRECTORY_DESCRIPTOR_SCHEMA_KEY: &str = "lix_directory_descriptor"; pub(crate) const BLOB_REF_SCHEMA_KEY: &str = "lix_binary_blob_ref"; /// Planned filesystem write output after SQL surface columns have been lowered /// into state rows and optional file payload writes. /// /// Providers should emit this shape; transaction/commit code should not need /// to know whether a row came from `lix_file`, `lix_directory`, or a future /// filesystem write surface. #[derive(Debug, Clone, PartialEq, Eq, Default)] pub(crate) struct FilesystemWritePlan { pub(crate) rows: Vec, pub(crate) file_data: Vec, pub(crate) count: u64, } /// Planned filesystem delete output after SQL predicates have selected rows /// and the surface delete has been lowered into tombstone state rows. 
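///
/// # Example
///
/// An illustrative sketch (mirroring the unit tests in this file, not part of the
/// original doc) of the shape `plan_file_delete` emits when the file also carries a
/// blob ref: one descriptor tombstone plus one blob-ref tombstone, counted as a
/// single surface-level delete.
///
/// ```ignore
/// let plan = plan_file_delete(FileDeleteInput {
///     file_id: "file-readme".to_string(),
///     has_blob_ref: true,
///     context: FilesystemRowContext::active_version("version-a"),
/// });
/// assert_eq!(plan.count, 1);
/// assert_eq!(plan.rows.len(), 2); // descriptor + blob-ref tombstones
/// ```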
#[derive(Debug, Clone, PartialEq, Eq, Default)]
pub(crate) struct FilesystemDeletePlan {
    pub(crate) rows: Vec<TransactionWriteRow>,
    pub(crate) count: u64,
}

/// Common state-row lane fields shared by filesystem descriptor/blob rows.
#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct FilesystemRowContext {
    pub(crate) version_id: String,
    pub(crate) global: bool,
    pub(crate) untracked: bool,
    pub(crate) file_id: Option<String>,
    pub(crate) metadata: Option<TransactionJson>,
}

impl FilesystemRowContext {
    pub(crate) fn active_version(version_id: impl Into<String>) -> Self {
        Self {
            version_id: version_id.into(),
            global: false,
            untracked: false,
            file_id: None,
            metadata: None,
        }
    }
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct DirectoryDescriptorRowInput {
    pub(crate) id: String,
    pub(crate) parent_id: Option<String>,
    pub(crate) name: String,
    pub(crate) hidden: bool,
    pub(crate) context: FilesystemRowContext,
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct FileDescriptorRowInput {
    pub(crate) id: String,
    pub(crate) directory_id: Option<String>,
    pub(crate) name: String,
    pub(crate) hidden: bool,
    pub(crate) context: FilesystemRowContext,
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct DirectoryDescriptorWriteIntent {
    pub(crate) id: Option<String>,
    pub(crate) parent_id: Option<String>,
    pub(crate) name: String,
    pub(crate) hidden: Option<bool>,
    pub(crate) context: FilesystemRowContext,
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct FileDescriptorWriteIntent {
    pub(crate) id: Option<String>,
    pub(crate) directory_id: Option<String>,
    pub(crate) name: String,
    pub(crate) hidden: Option<bool>,
    pub(crate) context: FilesystemRowContext,
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct BlobRefRowInput {
    pub(crate) file_id: String,
    pub(crate) data: Vec<u8>,
    pub(crate) context: FilesystemRowContext,
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct FilePathWriteInput {
    pub(crate) id: Option<String>,
    pub(crate) path: String,
    pub(crate) data: Option<Vec<u8>>,
    pub(crate) hidden: Option<bool>,
    pub(crate) context: FilesystemRowContext,
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct FileDeleteInput {
    pub(crate) file_id: String,
    pub(crate) has_blob_ref: bool,
    pub(crate) context: FilesystemRowContext,
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct DirectoryDeleteInput {
    pub(crate) directory_id: String,
    pub(crate) context: FilesystemRowContext,
}

#[derive(Debug, Deserialize)]
struct DirectoryDescriptorSnapshot {
    id: String,
    parent_id: Option<String>,
    name: String,
}

#[derive(Debug, Deserialize)]
struct FileDescriptorSnapshot {
    id: String,
    directory_id: Option<String>,
    name: String,
}

#[derive(Debug, Clone, PartialEq, Eq)]
enum FilesystemNamespaceEntry {
    Directory(String),
    File(String),
}

/// Resolves directory paths while planning filesystem writes.
///
/// The resolver is seeded from the transaction-visible filesystem state and is
/// then updated as the current statement stages implicit directories. That is
/// what prevents path inserts from restaging committed ancestors or duplicating
/// an ancestor created earlier in the same SQL batch.
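///
/// # Example
///
/// An illustrative sketch (mirroring the unit tests in this file; not part of the
/// original doc, and `dir-generated-nested` is a placeholder id): seed the resolver
/// from an already-visible `/docs/` directory, then plan `/docs/nested/` so only
/// the missing leaf descriptor is staged.
///
/// ```ignore
/// let mut resolver = DirectoryPathResolver::from_existing([
///     ("/docs/".to_string(), "dir-docs".to_string()),
/// ])
/// .expect("existing directories should normalize");
/// let rows = resolver
///     .ensure_directory_path(
///         "/docs/nested/",
///         FilesystemRowContext::active_version("version-a"),
///         false,
///         &mut || "dir-generated-nested".to_string(),
///     )
///     .expect("directory path should plan");
/// assert_eq!(rows.len(), 1); // only the missing "nested" descriptor is staged
/// assert_eq!(resolver.directory_id("/docs/").unwrap(), Some("dir-docs"));
/// assert_eq!(
///     resolver.directory_id("/docs/nested/").unwrap(),
///     Some("dir-generated-nested")
/// );
/// ```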
#[derive(Debug, Clone, Default)]
pub(crate) struct DirectoryPathResolver {
    directory_ids_by_path: BTreeMap<String, String>,
    entries_by_parent_and_name: BTreeMap<(Option<String>, String), FilesystemNamespaceEntry>,
}

impl DirectoryPathResolver {
    pub(crate) fn from_existing(
        existing_directories: impl IntoIterator<Item = (String, String)>,
    ) -> Result<Self, LixError> {
        Self::from_existing_filesystem(existing_directories, std::iter::empty())
    }

    pub(crate) fn from_existing_filesystem(
        existing_directories: impl IntoIterator<Item = (String, String)>,
        existing_files: impl IntoIterator<Item = (Option<String>, String, String)>,
    ) -> Result<Self, LixError> {
        let mut directory_ids_by_path = BTreeMap::new();
        for (path, id) in existing_directories {
            directory_ids_by_path.insert(normalize_directory_path(&path)?, id);
        }
        let mut resolver = Self {
            directory_ids_by_path,
            entries_by_parent_and_name: BTreeMap::new(),
        };
        let mut paths = resolver
            .directory_ids_by_path
            .iter()
            .map(|(path, id)| (path.clone(), id.clone()))
            .collect::<Vec<_>>();
        paths.sort_by_key(|(path, _)| path.len());
        for (path, id) in paths {
            let parent_id = parent_directory_path(&path)
                .and_then(|parent_path| resolver.directory_ids_by_path.get(&parent_path).cloned());
            let name = directory_name_from_path(&path).ok_or_else(|| {
                LixError::new(
                    "LIX_ERROR_UNKNOWN",
                    format!("directory path '{path}' does not contain a directory name"),
                )
            })?;
            resolver.reserve_directory(parent_id, name, id)?;
        }
        for (directory_id, entry_name, file_id) in existing_files {
            resolver.reserve_file(directory_id, entry_name, file_id)?;
        }
        Ok(resolver)
    }

    pub(crate) fn directory_id(&self, path: &str) -> Result<Option<&str>, LixError> {
        Ok(self
            .directory_ids_by_path
            .get(&normalize_directory_path(path)?)
            .map(String::as_str))
    }

    /// Stages only the missing descriptors needed for `directory_path`.
    ///
    /// Existing directories keep their original ids. Missing directories receive
    /// deterministic ids so repeated planning of the same transaction-visible
    /// path resolves to the same descriptor identity.
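    ///
    /// # Example
    ///
    /// A sketch of the stage-once behaviour (mirroring the unit tests in this
    /// file; not part of the original doc, and the `dir-generated-*` ids are
    /// placeholders): planning the same path twice stages descriptors only on
    /// the first call.
    ///
    /// ```ignore
    /// let mut resolver = DirectoryPathResolver::from_existing([])
    ///     .expect("empty resolver should build");
    /// let mut counter = 0u32;
    /// let mut next_id = move || {
    ///     counter += 1;
    ///     format!("dir-generated-{counter}")
    /// };
    /// let first = resolver
    ///     .ensure_directory_path(
    ///         "/docs/nested/",
    ///         FilesystemRowContext::active_version("version-a"),
    ///         false,
    ///         &mut next_id,
    ///     )
    ///     .expect("directory path should plan");
    /// assert_eq!(first.len(), 2); // "/docs/" and "/docs/nested/" are both missing
    /// let second = resolver
    ///     .ensure_directory_path(
    ///         "/docs/nested/",
    ///         FilesystemRowContext::active_version("version-a"),
    ///         false,
    ///         &mut next_id,
    ///     )
    ///     .expect("directory path should plan");
    /// assert!(second.is_empty()); // already resolved within this statement
    /// ```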
pub(crate) fn ensure_directory_path( &mut self, directory_path: &str, context: FilesystemRowContext, hidden: bool, generate_directory_id: &mut dyn FnMut() -> String, ) -> Result, LixError> { self.ensure_directory_path_with_leaf_id( directory_path, None, context, hidden, generate_directory_id, ) } pub(crate) fn ensure_directory_path_with_leaf_id( &mut self, directory_path: &str, leaf_id: Option, context: FilesystemRowContext, hidden: bool, generate_directory_id: &mut dyn FnMut() -> String, ) -> Result, LixError> { self.plan_directory_path( directory_path, leaf_id, context, hidden, generate_directory_id, false, ) } pub(crate) fn create_directory_path_with_leaf_id( &mut self, directory_path: &str, leaf_id: Option, context: FilesystemRowContext, hidden: bool, generate_directory_id: &mut dyn FnMut() -> String, ) -> Result, LixError> { self.plan_directory_path( directory_path, leaf_id, context, hidden, generate_directory_id, true, ) } fn plan_directory_path( &mut self, directory_path: &str, leaf_id: Option, context: FilesystemRowContext, hidden: bool, generate_directory_id: &mut dyn FnMut() -> String, reject_existing_leaf: bool, ) -> Result, LixError> { let directory_path = normalize_directory_path(directory_path)?; if directory_path == "/" { if reject_existing_leaf { return Err(duplicate_directory_path_error(&directory_path)); } return Ok(Vec::new()); } let mut paths = directory_ancestor_paths(&directory_path); paths.push(directory_path.clone()); let mut rows = Vec::new(); for path in paths { if self.directory_ids_by_path.contains_key(&path) { if reject_existing_leaf && path == directory_path { return Err(duplicate_directory_path_error(&directory_path)); } continue; } let id = if path == directory_path { leaf_id.clone().unwrap_or_else(&mut *generate_directory_id) } else { generate_directory_id() }; let parent_id = parent_directory_path(&path) .and_then(|parent_path| self.directory_ids_by_path.get(&parent_path).cloned()); let name = directory_name_from_path(&path).ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", format!("directory path '{path}' does not contain a directory name"), ) })?; self.reserve_directory(parent_id.clone(), name.clone(), id.clone())?; rows.push(directory_descriptor_row(DirectoryDescriptorRowInput { id: id.clone(), parent_id, name, hidden, context: FilesystemRowContext { // Directory descriptors are their own filesystem state row, // even when they are implicitly planned from a file insert. 
file_id: None, ..context.clone() }, })); self.directory_ids_by_path.insert(path, id); } Ok(rows) } pub(crate) fn reserve_directory( &mut self, parent_id: Option, name: String, directory_id: String, ) -> Result<(), LixError> { let key = (parent_id, name); match self.entries_by_parent_and_name.get(&key) { Some(FilesystemNamespaceEntry::Directory(existing_id)) if existing_id == &directory_id => { Ok(()) } Some(existing) => Err(filesystem_namespace_conflict_error( &key.0, &key.1, existing, )), None => { self.entries_by_parent_and_name .insert(key, FilesystemNamespaceEntry::Directory(directory_id)); Ok(()) } } } pub(crate) fn reserve_file( &mut self, directory_id: Option, entry_name: String, file_id: String, ) -> Result<(), LixError> { let key = (directory_id, entry_name); match self.entries_by_parent_and_name.get(&key) { Some(FilesystemNamespaceEntry::File(existing_id)) if existing_id == &file_id => Ok(()), Some(existing) => Err(filesystem_namespace_conflict_error( &key.0, &key.1, existing, )), None => { self.entries_by_parent_and_name .insert(key, FilesystemNamespaceEntry::File(file_id)); Ok(()) } } } } fn duplicate_directory_path_error(path: &str) -> LixError { LixError::new( LixError::CODE_UNIQUE, format!("unique constraint violation on lix_directory.path for value {path:?}"), ) } fn filesystem_namespace_conflict_error( parent_id: &Option, entry_name: &str, existing: &FilesystemNamespaceEntry, ) -> LixError { let parent = parent_id.as_deref().unwrap_or(""); let existing_kind = match existing { FilesystemNamespaceEntry::Directory(_) => "directory", FilesystemNamespaceEntry::File(_) => "file", }; LixError::new( LixError::CODE_UNIQUE, format!( "filesystem namespace conflict: parent {parent:?} already contains {existing_kind} entry {entry_name:?}" ), ) } pub(crate) fn directory_descriptor_row(input: DirectoryDescriptorRowInput) -> TransactionWriteRow { directory_descriptor_write_row(DirectoryDescriptorWriteIntent { id: Some(input.id), parent_id: input.parent_id, name: input.name, hidden: Some(input.hidden), context: input.context, }) } pub(crate) fn file_descriptor_row(input: FileDescriptorRowInput) -> TransactionWriteRow { file_descriptor_write_row(FileDescriptorWriteIntent { id: Some(input.id), directory_id: input.directory_id, name: input.name, hidden: Some(input.hidden), context: input.context, }) } pub(crate) fn directory_descriptor_write_row( input: DirectoryDescriptorWriteIntent, ) -> TransactionWriteRow { let mut snapshot = JsonMap::new(); if let Some(id) = input.id.as_ref() { snapshot.insert("id".to_string(), JsonValue::String(id.clone())); } snapshot.insert( "parent_id".to_string(), input .parent_id .clone() .map(JsonValue::String) .unwrap_or(JsonValue::Null), ); snapshot.insert("name".to_string(), JsonValue::String(input.name)); if let Some(hidden) = input.hidden { snapshot.insert("hidden".to_string(), JsonValue::Bool(hidden)); } partial_state_row( input.id, DIRECTORY_DESCRIPTOR_SCHEMA_KEY, Some(JsonValue::Object(snapshot)), input.context, ) } pub(crate) fn file_descriptor_write_row(input: FileDescriptorWriteIntent) -> TransactionWriteRow { let mut snapshot = JsonMap::new(); if let Some(id) = input.id.as_ref() { snapshot.insert("id".to_string(), JsonValue::String(id.clone())); } snapshot.insert( "directory_id".to_string(), input .directory_id .clone() .map(JsonValue::String) .unwrap_or(JsonValue::Null), ); snapshot.insert("name".to_string(), JsonValue::String(input.name)); if let Some(hidden) = input.hidden { snapshot.insert("hidden".to_string(), JsonValue::Bool(hidden)); } 
partial_state_row( input.id, FILE_DESCRIPTOR_SCHEMA_KEY, Some(JsonValue::Object(snapshot)), input.context, ) } pub(crate) fn blob_ref_row(input: BlobRefRowInput) -> Result { let size_bytes = u64::try_from(input.data.len()).map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", format!( "binary blob size exceeds supported range for file '{}' version '{}'", input.file_id, input.context.version_id ), ) })?; let snapshot = json!({ "id": input.file_id.clone(), "blob_hash": stable_content_fingerprint_hex(&input.data), "size_bytes": size_bytes, }); Ok(state_row( input.file_id.clone(), BLOB_REF_SCHEMA_KEY, Some(snapshot), FilesystemRowContext { file_id: Some(input.file_id), ..input.context }, )) } pub(crate) fn plan_file_path_write( resolver: &mut DirectoryPathResolver, input: FilePathWriteInput, generate_directory_id: &mut dyn FnMut() -> String, ) -> Result { let parsed = ParsedFilePath::try_from_path(&input.path)?; let mut rows = Vec::new(); let file_id = input.id.unwrap_or_else(&mut *generate_directory_id); let directory_id = match parsed.directory_path.as_ref() { Some(directory_path) => { rows.extend(resolver.ensure_directory_path( directory_path.as_str(), input.context.clone(), false, generate_directory_id, )?); resolver .directory_id(directory_path.as_str())? .map(ToOwned::to_owned) } None => None, }; resolver.reserve_file(directory_id.clone(), parsed.name.clone(), file_id.clone())?; rows.push(file_descriptor_row(FileDescriptorRowInput { id: file_id.clone(), directory_id, name: parsed.name.clone(), hidden: input.hidden.unwrap_or(false), context: input.context.clone(), })); let mut file_data = Vec::new(); if let Some(data) = input.data { rows.push(blob_ref_row(BlobRefRowInput { file_id: file_id.clone(), data: data.clone(), context: FilesystemRowContext { file_id: None, metadata: None, ..input.context.clone() }, })?); file_data.push(TransactionFileData { file_id, version_id: input.context.version_id, untracked: input.context.untracked, data, }); } Ok(FilesystemWritePlan { rows, file_data, count: 1, }) } pub(crate) fn plan_file_path_update( resolver: &mut DirectoryPathResolver, existing_file_id: String, new_path: String, existing_hidden: bool, _existing_data: Option>, context: FilesystemRowContext, generate_directory_id: &mut dyn FnMut() -> String, ) -> Result { let parsed = ParsedFilePath::try_from_path(&new_path)?; let mut rows = Vec::new(); let directory_id = match parsed.directory_path.as_ref() { Some(directory_path) => { rows.extend(resolver.ensure_directory_path( directory_path.as_str(), context.clone(), false, generate_directory_id, )?); resolver .directory_id(directory_path.as_str())? .map(ToOwned::to_owned) } None => None, }; resolver.reserve_file( directory_id.clone(), parsed.name.clone(), existing_file_id.clone(), )?; rows.push(file_descriptor_row(FileDescriptorRowInput { id: existing_file_id, directory_id, name: parsed.name.clone(), hidden: existing_hidden, context, })); // Data/blob-ref state is intentionally left untouched for path-only // updates. A provider should plan blob rows only when `data` is assigned. 
Ok(FilesystemWritePlan { rows, file_data: Vec::new(), count: 1, }) } pub(crate) fn plan_file_delete(input: FileDeleteInput) -> FilesystemDeletePlan { let mut rows = vec![tombstone_row( input.file_id.clone(), FILE_DESCRIPTOR_SCHEMA_KEY, FilesystemRowContext { file_id: None, ..input.context.clone() }, )]; if input.has_blob_ref { rows.push(tombstone_row( input.file_id.clone(), BLOB_REF_SCHEMA_KEY, FilesystemRowContext { file_id: Some(input.file_id), metadata: None, ..input.context }, )); } FilesystemDeletePlan { rows, count: 1 } } pub(crate) fn plan_directory_delete(input: DirectoryDeleteInput) -> FilesystemDeletePlan { FilesystemDeletePlan { rows: vec![tombstone_row( input.directory_id, DIRECTORY_DESCRIPTOR_SCHEMA_KEY, FilesystemRowContext { file_id: None, ..input.context }, )], count: 1, } } pub(crate) fn plan_recursive_directory_delete( root_directory_id: &str, visible_filesystem: &VisibleFilesystem, context: FilesystemRowContext, ) -> FilesystemDeletePlan { let mut rows = Vec::new(); let mut count = 0; collect_recursive_directory_delete( root_directory_id, visible_filesystem, &context, &mut rows, &mut count, ); FilesystemDeletePlan { rows, count } } pub(crate) fn directory_path_resolvers_from_state_rows( rows: Vec, ) -> Result, LixError> { let mut directory_rows = BTreeMap::>::new(); let mut file_rows = BTreeMap::, String, String)>>::new(); for row in rows { let Some(snapshot_content) = row.snapshot_content.as_deref() else { continue; }; let resolver_key = filesystem_storage_scope_key( &row.version_id, row.global, row.untracked, row.file_id.as_deref(), ); match row.schema_key.as_str() { DIRECTORY_DESCRIPTOR_SCHEMA_KEY => { let snapshot: DirectoryDescriptorSnapshot = serde_json::from_str(snapshot_content) .map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("invalid lix_directory_descriptor snapshot JSON: {error}"), ) })?; directory_rows.entry(resolver_key).or_default().insert( snapshot.id.clone(), DirectoryDescriptorSeed { id: snapshot.id, parent_id: snapshot.parent_id, name: snapshot.name, }, ); } FILE_DESCRIPTOR_SCHEMA_KEY => { let snapshot: FileDescriptorSnapshot = serde_json::from_str(snapshot_content) .map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("invalid lix_file_descriptor snapshot JSON: {error}"), ) })?; file_rows.entry(resolver_key).or_default().push(( snapshot.directory_id, snapshot.name, snapshot.id, )); } _ => {} } } let mut resolvers = BTreeMap::new(); for (version_id, records) in directory_rows { let mut paths = BTreeMap::::new(); for directory_id in records.keys() { resolve_directory_seed_path(directory_id, &records, &mut paths, &mut BTreeSet::new())?; } let seeds = paths .into_iter() .map(|(directory_id, path)| (path, directory_id)) .collect::>(); let files = file_rows.remove(&version_id).unwrap_or_default(); resolvers.insert( version_id, DirectoryPathResolver::from_existing_filesystem(seeds, files)?, ); } for (version_id, files) in file_rows { resolvers.insert( version_id, DirectoryPathResolver::from_existing_filesystem(std::iter::empty(), files)?, ); } Ok(resolvers) } pub(crate) fn filesystem_storage_scope_key( version_id: &str, global: bool, untracked: bool, file_id: Option<&str>, ) -> String { format!( "version={version_id}\0global={global}\0untracked={untracked}\0file_id={}", file_id.unwrap_or("") ) } #[derive(Debug, Clone)] struct DirectoryDescriptorSeed { id: String, parent_id: Option, name: String, } fn resolve_directory_seed_path( directory_id: &str, records: &BTreeMap, paths: &mut BTreeMap, visiting: &mut BTreeSet, ) -> Result, 
LixError> { if let Some(path) = paths.get(directory_id) { return Ok(Some(path.clone())); } if !visiting.insert(directory_id.to_string()) { return Err(directory_parent_cycle_error(directory_id)); } let Some(row) = records.get(directory_id) else { visiting.remove(directory_id); return Ok(None); }; let path = match row.parent_id.as_deref() { Some(parent_id) => { let Some(parent_path) = resolve_directory_seed_path(parent_id, records, paths, visiting)? else { visiting.remove(directory_id); return Ok(None); }; format!("{parent_path}{}/", row.name) } None => format!("/{}/", row.name), }; visiting.remove(directory_id); paths.insert(row.id.clone(), path.clone()); Ok(Some(path)) } fn directory_parent_cycle_error(directory_id: &str) -> LixError { LixError::new( LixError::CODE_CONSTRAINT_VIOLATION, format!( "lix_directory_descriptor parent_id cycle detected while resolving directory '{directory_id}'" ), ) } fn state_row( entity_id: String, schema_key: &str, snapshot: Option, context: FilesystemRowContext, ) -> TransactionWriteRow { partial_state_row(Some(entity_id), schema_key, snapshot, context) } fn partial_state_row( entity_id: Option, schema_key: &str, snapshot: Option, context: FilesystemRowContext, ) -> TransactionWriteRow { let snapshot = snapshot.map(TransactionJson::from_value_unchecked); TransactionWriteRow { entity_id: entity_id.map(EntityIdentity::single), schema_key: schema_key.to_string(), file_id: context.file_id, snapshot, metadata: context.metadata, origin: None, created_at: None, updated_at: None, global: context.global, change_id: None, commit_id: None, untracked: context.untracked, version_id: context.version_id, } } fn tombstone_row( entity_id: String, schema_key: &str, context: FilesystemRowContext, ) -> TransactionWriteRow { state_row(entity_id, schema_key, None, context) } fn collect_recursive_directory_delete( directory_id: &str, visible_filesystem: &VisibleFilesystem, context: &FilesystemRowContext, rows: &mut Vec, count: &mut u64, ) { if let Some(child_ids) = visible_filesystem .directory_children_by_parent_id .get(&Some(directory_id.to_string())) { for child_id in child_ids { collect_recursive_directory_delete(child_id, visible_filesystem, context, rows, count); } } if let Some(files) = visible_filesystem .files_by_directory_id .get(&Some(directory_id.to_string())) { for file_id in files.keys() { let plan = plan_file_delete(FileDeleteInput { file_id: file_id.clone(), has_blob_ref: visible_filesystem .blob_refs_by_file_id .contains_key(file_id), context: context.clone(), }); rows.extend(plan.rows); *count += plan.count; } } let plan = plan_directory_delete(DirectoryDeleteInput { directory_id: directory_id.to_string(), context: context.clone(), }); rows.extend(plan.rows); *count += plan.count; } #[cfg(test)] mod tests { use std::collections::{BTreeMap, BTreeSet}; use serde_json::Value as JsonValue; use super::{ blob_ref_row, directory_descriptor_row, file_descriptor_row, plan_file_path_update, plan_file_path_write, BlobRefRowInput, DirectoryDeleteInput, DirectoryDescriptorRowInput, DirectoryPathResolver, FileDeleteInput, FileDescriptorRowInput, FilePathWriteInput, FilesystemRowContext, }; use crate::sql2::filesystem_visibility::{ VisibleBlobRef, VisibleDirectory, VisibleFile, VisibleFilesystem, }; use crate::{entity_identity::EntityIdentity, live_state::MaterializedLiveStateRow}; fn test_id_generator(ids: &'static [&'static str]) -> impl FnMut() -> String { let mut ids = ids.iter(); move || ids.next().expect("test id should exist").to_string() } #[test] fn 
directory_descriptor_row_builds_state_row() { let row = directory_descriptor_row(DirectoryDescriptorRowInput { id: "dir-docs".to_string(), parent_id: None, name: "docs".to_string(), hidden: false, context: FilesystemRowContext::active_version("version-a"), }); assert_eq!( row.entity_id.as_ref(), Some(&crate::entity_identity::EntityIdentity::single("dir-docs")) ); assert_eq!(row.schema_key, "lix_directory_descriptor"); assert_eq!(row.version_id, "version-a"); let snapshot: JsonValue = row.snapshot.as_ref().unwrap().value().clone(); assert_eq!(snapshot["id"], "dir-docs"); assert_eq!(snapshot["parent_id"], JsonValue::Null); assert_eq!(snapshot["name"], "docs"); assert_eq!(snapshot["hidden"], false); } #[test] fn file_descriptor_row_builds_state_row() { let row = file_descriptor_row(FileDescriptorRowInput { id: "file-readme".to_string(), directory_id: Some("dir-docs".to_string()), name: "readme.md".to_string(), hidden: false, context: FilesystemRowContext::active_version("version-a"), }); assert_eq!( row.entity_id.as_ref(), Some(&crate::entity_identity::EntityIdentity::single( "file-readme" )) ); assert_eq!(row.schema_key, "lix_file_descriptor"); let snapshot: JsonValue = row.snapshot.as_ref().unwrap().value().clone(); assert_eq!(snapshot["directory_id"], "dir-docs"); assert_eq!(snapshot["name"], "readme.md"); } #[test] fn blob_ref_row_builds_state_row() { let row = blob_ref_row(BlobRefRowInput { file_id: "file-readme".to_string(), data: b"Hello".to_vec(), context: FilesystemRowContext::active_version("version-a"), }) .expect("blob ref row should build"); assert_eq!( row.entity_id.as_ref(), Some(&crate::entity_identity::EntityIdentity::single( "file-readme" )) ); assert_eq!(row.file_id.as_deref(), Some("file-readme")); assert_eq!(row.schema_key, "lix_binary_blob_ref"); let snapshot: JsonValue = row.snapshot.as_ref().unwrap().value().clone(); assert_eq!(snapshot["id"], "file-readme"); assert_eq!(snapshot["size_bytes"], 5); assert!(snapshot["blob_hash"] .as_str() .is_some_and(|hash| !hash.is_empty())); } #[test] fn directory_path_resolver_reuses_existing_ancestor() { let mut resolver = DirectoryPathResolver::from_existing([("/docs/".to_string(), "dir-docs".to_string())]) .expect("existing directories should normalize"); let rows = resolver .ensure_directory_path( "/docs/nested/", FilesystemRowContext::active_version("version-a"), false, &mut test_id_generator(&["dir-generated-nested"]), ) .expect("directory path should plan"); assert_eq!(rows.len(), 1); assert_eq!(resolver.directory_id("/docs/").unwrap(), Some("dir-docs")); assert_eq!( resolver.directory_id("/docs/nested/").unwrap(), Some("dir-generated-nested") ); let snapshot: JsonValue = rows[0].snapshot.as_ref().unwrap().value().clone(); assert_eq!(snapshot["id"], "dir-generated-nested"); assert_eq!(snapshot["parent_id"], "dir-docs"); assert_eq!(snapshot["name"], "nested"); } #[test] fn directory_path_resolver_reuses_ancestor_staged_in_same_batch() { let mut resolver = DirectoryPathResolver::from_existing([]).expect("empty resolver should build"); let docs_rows = resolver .ensure_directory_path( "/docs/", FilesystemRowContext::active_version("version-a"), false, &mut test_id_generator(&["dir-generated-docs"]), ) .expect("top-level directory should plan"); assert_eq!(docs_rows.len(), 1); let nested_rows = resolver .ensure_directory_path( "/docs/nested/", FilesystemRowContext::active_version("version-a"), false, &mut test_id_generator(&["dir-generated-nested"]), ) .expect("nested directory should plan"); assert_eq!(nested_rows.len(), 1); let 
snapshot: JsonValue = nested_rows[0].snapshot.as_ref().unwrap().value().clone(); assert_eq!(snapshot["id"], "dir-generated-nested"); assert_eq!(snapshot["parent_id"], "dir-generated-docs"); assert_eq!(snapshot["name"], "nested"); } #[test] fn directory_path_resolver_uses_explicit_leaf_id() { let mut resolver = DirectoryPathResolver::from_existing([]).expect("empty resolver should build"); let rows = resolver .ensure_directory_path_with_leaf_id( "/docs/nested/", Some("dir-nested".to_string()), FilesystemRowContext::active_version("version-a"), false, &mut test_id_generator(&["dir-generated-docs"]), ) .expect("directory path should plan"); assert_eq!(rows.len(), 2); assert_eq!( resolver.directory_id("/docs/").unwrap(), Some("dir-generated-docs") ); assert_eq!( resolver.directory_id("/docs/nested/").unwrap(), Some("dir-nested") ); let snapshot: JsonValue = rows[1].snapshot.as_ref().unwrap().value().clone(); assert_eq!(snapshot["id"], "dir-nested"); assert_eq!(snapshot["parent_id"], "dir-generated-docs"); assert_eq!(snapshot["name"], "nested"); } #[test] fn directory_path_resolver_does_not_restage_same_path() { let mut resolver = DirectoryPathResolver::from_existing([]).expect("empty resolver should build"); let rows = resolver .ensure_directory_path( "/docs/nested/", FilesystemRowContext::active_version("version-a"), false, &mut test_id_generator(&["dir-generated-docs", "dir-generated-nested"]), ) .expect("directory path should plan"); assert_eq!(rows.len(), 2); let rows = resolver .ensure_directory_path( "/docs/nested/", FilesystemRowContext::active_version("version-a"), false, &mut test_id_generator(&["should-not-be-used"]), ) .expect("directory path should plan"); assert!(rows.is_empty()); } #[test] fn file_path_write_stages_missing_directories_file_blob_and_payload() { let mut resolver = DirectoryPathResolver::from_existing([]).expect("empty resolver should build"); let plan = plan_file_path_write( &mut resolver, FilePathWriteInput { id: Some("file-readme".to_string()), path: "/docs/guides/readme.md".to_string(), data: Some(b"hello".to_vec()), hidden: Some(false), context: FilesystemRowContext::active_version("version-a"), }, &mut test_id_generator(&["dir-generated-docs", "dir-generated-guides"]), ) .expect("file path write should plan"); assert_eq!(plan.count, 1); assert_eq!(plan.file_data.len(), 1); assert_eq!(plan.file_data[0].file_id, "file-readme"); assert_eq!(plan.file_data[0].version_id, "version-a"); assert_eq!(plan.file_data[0].data, b"hello"); assert_eq!(plan.rows.len(), 4); assert_eq!( plan.rows .iter() .filter(|row| row.schema_key == "lix_directory_descriptor") .count(), 2 ); assert!(plan .rows .iter() .any(|row| row.schema_key == "lix_binary_blob_ref")); let file_row = plan .rows .iter() .find(|row| row.schema_key == "lix_file_descriptor") .expect("file descriptor row should be planned"); let snapshot: JsonValue = file_row.snapshot.as_ref().unwrap().value().clone(); assert_eq!(snapshot["id"], "file-readme"); assert_eq!(snapshot["directory_id"], "dir-generated-guides"); assert_eq!(snapshot["name"], "readme.md"); } #[test] fn file_path_write_reuses_existing_parent_directory() { let mut resolver = DirectoryPathResolver::from_existing([ ("/docs/".to_string(), "dir-docs".to_string()), ("/docs/guides/".to_string(), "dir-guides".to_string()), ]) .expect("existing directories should seed"); let plan = plan_file_path_write( &mut resolver, FilePathWriteInput { id: Some("file-readme".to_string()), path: "/docs/guides/readme.md".to_string(), data: Some(b"hello".to_vec()), hidden: 
Some(false), context: FilesystemRowContext::active_version("version-a"), }, &mut test_id_generator(&["should-not-be-used"]), ) .expect("file path write should plan"); assert_eq!(plan.rows.len(), 2); assert_eq!( plan.rows .iter() .filter(|row| row.schema_key == "lix_directory_descriptor") .count(), 0 ); let file_row = plan .rows .iter() .find(|row| row.schema_key == "lix_file_descriptor") .expect("file descriptor row should be planned"); let snapshot: JsonValue = file_row.snapshot.as_ref().unwrap().value().clone(); assert_eq!(snapshot["directory_id"], "dir-guides"); } #[test] fn file_path_update_reuses_existing_parent_and_preserves_data() { let mut resolver = DirectoryPathResolver::from_existing([("/docs/".to_string(), "dir-docs".to_string())]) .expect("existing directories should seed"); let plan = plan_file_path_update( &mut resolver, "file-readme".to_string(), "/docs/renamed.md".to_string(), false, Some(b"hello".to_vec()), FilesystemRowContext::active_version("version-a"), &mut test_id_generator(&["should-not-be-used"]), ) .expect("file path update should plan"); assert_eq!(plan.count, 1); assert!(plan.file_data.is_empty()); assert_eq!(plan.rows.len(), 1); assert!(plan .rows .iter() .all(|row| row.schema_key != "lix_binary_blob_ref")); let snapshot: JsonValue = plan.rows[0].snapshot.as_ref().unwrap().value().clone(); assert_eq!(snapshot["id"], "file-readme"); assert_eq!(snapshot["directory_id"], "dir-docs"); assert_eq!(snapshot["name"], "renamed.md"); assert_eq!(snapshot["hidden"], false); } #[test] fn file_path_update_stages_missing_parent_directories() { let mut resolver = DirectoryPathResolver::from_existing([]).expect("empty resolver should build"); let plan = plan_file_path_update( &mut resolver, "file-readme".to_string(), "/docs/guides/readme.md".to_string(), true, Some(b"hello".to_vec()), FilesystemRowContext::active_version("version-a"), &mut test_id_generator(&["dir-generated-docs", "dir-generated-guides"]), ) .expect("file path update should plan"); assert_eq!(plan.count, 1); assert!(plan.file_data.is_empty()); assert_eq!(plan.rows.len(), 3); assert_eq!( plan.rows .iter() .filter(|row| row.schema_key == "lix_directory_descriptor") .count(), 2 ); assert!(plan .rows .iter() .all(|row| row.schema_key != "lix_binary_blob_ref")); let file_row = plan .rows .iter() .find(|row| row.schema_key == "lix_file_descriptor") .expect("file descriptor row should be planned"); let snapshot: JsonValue = file_row.snapshot.as_ref().unwrap().value().clone(); assert_eq!(snapshot["directory_id"], "dir-generated-guides"); assert_eq!(snapshot["name"], "readme.md"); assert_eq!(snapshot["hidden"], true); } #[test] fn directory_path_resolvers_from_state_rows_derives_nested_paths() { let resolvers = super::directory_path_resolvers_from_state_rows(vec![ live_directory_row( "dir-docs", "version-a", "{\"id\":\"dir-docs\",\"parent_id\":null,\"name\":\"docs\"}", ), live_directory_row( "dir-guides", "version-a", "{\"id\":\"dir-guides\",\"parent_id\":\"dir-docs\",\"name\":\"guides\"}", ), ]) .expect("state rows should seed directory resolvers"); let resolver = resolvers .get(&super::filesystem_storage_scope_key( "version-a", false, false, None, )) .expect("storage-scope resolver should exist"); assert_eq!(resolver.directory_id("/docs/").unwrap(), Some("dir-docs")); assert_eq!( resolver.directory_id("/docs/guides/").unwrap(), Some("dir-guides") ); } #[test] fn file_delete_plans_descriptor_and_blob_ref_tombstones() { let plan = super::plan_file_delete(FileDeleteInput { file_id: "file-readme".to_string(), 
has_blob_ref: true, context: FilesystemRowContext::active_version("version-a"), }); assert_eq!(plan.count, 1); assert_eq!(plan.rows.len(), 2); let descriptor = plan .rows .iter() .find(|row| row.schema_key == "lix_file_descriptor") .expect("file descriptor tombstone should be planned"); assert_eq!( descriptor.entity_id.as_ref(), Some(&crate::entity_identity::EntityIdentity::single( "file-readme" )) ); assert_eq!(descriptor.file_id, None); assert_eq!(descriptor.snapshot, None); let blob_ref = plan .rows .iter() .find(|row| row.schema_key == "lix_binary_blob_ref") .expect("blob ref tombstone should be planned"); assert_eq!( blob_ref.entity_id.as_ref(), Some(&crate::entity_identity::EntityIdentity::single( "file-readme" )) ); assert_eq!(blob_ref.file_id.as_deref(), Some("file-readme")); assert_eq!(blob_ref.snapshot, None); } #[test] fn file_delete_without_blob_ref_plans_only_descriptor_tombstone() { let plan = super::plan_file_delete(FileDeleteInput { file_id: "file-readme".to_string(), has_blob_ref: false, context: FilesystemRowContext::active_version("version-a"), }); assert_eq!(plan.count, 1); assert_eq!(plan.rows.len(), 1); assert_eq!(plan.rows[0].schema_key, "lix_file_descriptor"); assert_eq!(plan.rows[0].snapshot, None); } #[test] fn directory_delete_plans_descriptor_tombstone() { let plan = super::plan_directory_delete(DirectoryDeleteInput { directory_id: "dir-docs".to_string(), context: FilesystemRowContext::active_version("version-a"), }); assert_eq!(plan.count, 1); assert_eq!(plan.rows.len(), 1); assert_eq!( plan.rows[0].entity_id.as_ref(), Some(&crate::entity_identity::EntityIdentity::single("dir-docs")) ); assert_eq!(plan.rows[0].schema_key, "lix_directory_descriptor"); assert_eq!(plan.rows[0].file_id, None); assert_eq!(plan.rows[0].snapshot, None); } #[test] fn recursive_directory_delete_plans_files_blobs_and_deepest_directories_first() { let context = FilesystemRowContext::active_version("version-a"); let mut directories_by_id = BTreeMap::new(); directories_by_id.insert( "dir-docs".to_string(), visible_directory("dir-docs", None, "docs", context.clone()), ); directories_by_id.insert( "dir-guides".to_string(), visible_directory("dir-guides", Some("dir-docs"), "guides", context.clone()), ); let mut directory_children_by_parent_id = BTreeMap::new(); directory_children_by_parent_id.insert( Some("dir-docs".to_string()), BTreeSet::from(["dir-guides".to_string()]), ); let mut files_by_directory_id = BTreeMap::new(); files_by_directory_id.insert( Some("dir-guides".to_string()), BTreeMap::from([( "file-readme".to_string(), visible_file("file-readme", Some("dir-guides"), "readme", context.clone()), )]), ); files_by_directory_id.insert( Some("dir-docs".to_string()), BTreeMap::from([( "file-index".to_string(), visible_file("file-index", Some("dir-docs"), "index", context.clone()), )]), ); let visible_filesystem = VisibleFilesystem { directories_by_id, directory_children_by_parent_id, files_by_directory_id, blob_refs_by_file_id: BTreeMap::from([( "file-readme".to_string(), visible_blob_ref("file-readme", context.clone()), )]), }; let plan = super::plan_recursive_directory_delete("dir-docs", &visible_filesystem, context); assert_eq!(plan.count, 4); assert_eq!( plan.rows .iter() .map(|row| { ( row.schema_key.as_str(), row.entity_id .as_ref() .expect("planned recursive delete row should carry entity_id") .as_single_string_owned() .expect("planned recursive delete row should project entity_id"), ) }) .collect::>(), vec![ ("lix_file_descriptor", "file-readme".to_string()), ("lix_binary_blob_ref", 
"file-readme".to_string()), ("lix_directory_descriptor", "dir-guides".to_string()), ("lix_file_descriptor", "file-index".to_string()), ("lix_directory_descriptor", "dir-docs".to_string()), ] ); assert!(plan.rows.iter().all(|row| row.snapshot.is_none())); } fn visible_directory( id: &str, parent_id: Option<&str>, name: &str, context: FilesystemRowContext, ) -> VisibleDirectory { VisibleDirectory { id: id.to_string(), parent_id: parent_id.map(ToOwned::to_owned), name: name.to_string(), hidden: false, context, } } fn visible_file( id: &str, directory_id: Option<&str>, name: &str, context: FilesystemRowContext, ) -> VisibleFile { VisibleFile { id: id.to_string(), directory_id: directory_id.map(ToOwned::to_owned), name: name.to_string(), hidden: false, context, } } fn visible_blob_ref(file_id: &str, context: FilesystemRowContext) -> VisibleBlobRef { VisibleBlobRef { file_id: file_id.to_string(), blob_hash: format!("hash-{file_id}"), size_bytes: Some(1), context, } } fn live_directory_row( entity_id: &str, version_id: &str, snapshot_content: &str, ) -> MaterializedLiveStateRow { MaterializedLiveStateRow { entity_id: EntityIdentity::single(entity_id), schema_key: "lix_directory_descriptor".to_string(), file_id: None, snapshot_content: Some(snapshot_content.to_string()), metadata: None, deleted: false, version_id: version_id.to_string(), change_id: Some(format!("change-{entity_id}")), commit_id: Some(format!("commit-{entity_id}")), global: false, untracked: false, created_at: "2026-04-23T00:00:00Z".to_string(), updated_at: "2026-04-23T01:00:00Z".to_string(), } } } ================================================ FILE: packages/engine/src/sql2/filesystem_predicates.rs ================================================ use datafusion::common::tree_node::{Transformed, TreeNode}; use datafusion::common::{DataFusionError, Result, ScalarValue}; use datafusion::logical_expr::expr::{Between, InList}; use datafusion::logical_expr::{BinaryExpr, Expr, Operator}; use crate::common::{normalize_directory_path, ParsedFilePath}; use crate::LixError; use super::error::lix_error_to_datafusion_error; #[derive(Debug, Clone, Copy)] pub(crate) enum FilesystemPathKind { File, Directory, } pub(crate) fn canonicalize_filesystem_path_filters( filters: &[Expr], kind: FilesystemPathKind, ) -> Result> { filters .iter() .cloned() .map(|filter| canonicalize_filesystem_path_filter(filter, kind)) .collect() } fn canonicalize_filesystem_path_filter(expr: Expr, kind: FilesystemPathKind) -> Result { expr.transform(|expr| canonicalize_filesystem_path_expr(expr, kind)) .map(|transformed| transformed.data) } fn canonicalize_filesystem_path_expr( expr: Expr, kind: FilesystemPathKind, ) -> Result> { match expr { Expr::BinaryExpr(binary_expr) if is_path_comparison_operator(binary_expr.op) => { canonicalize_path_binary_expr(binary_expr, kind) } Expr::InList(in_list) if is_path_column(&in_list.expr) => { canonicalize_path_in_list(in_list, kind) } Expr::Between(between) if is_path_column(&between.expr) => { canonicalize_path_between(between, kind) } _ => Ok(Transformed::no(expr)), } } fn canonicalize_path_binary_expr( binary_expr: BinaryExpr, kind: FilesystemPathKind, ) -> Result> { let BinaryExpr { left, op, right } = binary_expr; let left_is_path = is_path_column(&left); let right_is_path = is_path_column(&right); let left = if right_is_path { Box::new(canonicalize_path_literal_expr(*left, kind)?) } else { left }; let right = if left_is_path { Box::new(canonicalize_path_literal_expr(*right, kind)?) 
} else { right }; Ok(Transformed::yes(Expr::BinaryExpr(BinaryExpr::new( left, op, right, )))) } fn canonicalize_path_in_list( in_list: InList, kind: FilesystemPathKind, ) -> Result> { let list = in_list .list .into_iter() .map(|expr| canonicalize_path_literal_expr(expr, kind)) .collect::>>()?; Ok(Transformed::yes(Expr::InList(InList::new( in_list.expr, list, in_list.negated, )))) } fn canonicalize_path_between( between: Between, kind: FilesystemPathKind, ) -> Result> { Ok(Transformed::yes(Expr::Between(Between { expr: between.expr, negated: between.negated, low: Box::new(canonicalize_path_literal_expr(*between.low, kind)?), high: Box::new(canonicalize_path_literal_expr(*between.high, kind)?), }))) } fn canonicalize_path_literal_expr(expr: Expr, kind: FilesystemPathKind) -> Result { let Expr::Literal(literal, metadata) = expr else { return Err(unsupported_dynamic_path_predicate_error(expr)); }; match literal { ScalarValue::Utf8(Some(value)) | ScalarValue::Utf8View(Some(value)) | ScalarValue::LargeUtf8(Some(value)) => { let normalized = canonicalize_path_value(&value, kind)?; Ok(Expr::Literal(ScalarValue::Utf8(Some(normalized)), metadata)) } _ => Ok(Expr::Literal(literal, metadata)), } } fn canonicalize_path_value(value: &str, kind: FilesystemPathKind) -> Result { match kind { FilesystemPathKind::File => ParsedFilePath::try_from_path(value) .map(|parsed| parsed.normalized_path.to_string()) .map_err(lix_error_to_datafusion_error), FilesystemPathKind::Directory => { normalize_directory_path(value).map_err(lix_error_to_datafusion_error) } } } fn is_path_column(expr: &Expr) -> bool { matches!(expr, Expr::Column(column) if column.name == "path") } fn is_path_comparison_operator(op: Operator) -> bool { matches!( op, Operator::Eq | Operator::NotEq | Operator::Lt | Operator::LtEq | Operator::Gt | Operator::GtEq ) } fn unsupported_dynamic_path_predicate_error(expr: Expr) -> DataFusionError { lix_error_to_datafusion_error( LixError::new( LixError::CODE_UNSUPPORTED_SQL, format!( "filesystem path predicates only support literal path values; found expression {expr:?}" ), ) .with_hint( "Compare lix_file.path or lix_directory.path to a string literal or bound parameter. \ Computed path expressions are not supported until path canonicalization can run at evaluation time.", ), ) } ================================================ FILE: packages/engine/src/sql2/filesystem_visibility.rs ================================================ #![allow(dead_code)] use std::collections::{BTreeMap, BTreeSet}; use std::sync::Arc; use serde::Deserialize; use crate::live_state::MaterializedLiveStateRow; use crate::live_state::{LiveStateFilter, LiveStateReader, LiveStateScanRequest}; use crate::LixError; use super::filesystem_planner::{ FilesystemRowContext, BLOB_REF_SCHEMA_KEY, DIRECTORY_DESCRIPTOR_SCHEMA_KEY, FILE_DESCRIPTOR_SCHEMA_KEY, }; /// Execution-visible filesystem metadata decoded from live-state rows. /// /// The helper intentionally depends only on `LiveStateReader`. In engine /// write execution that context may include staged rows, so filesystem planning /// sees pending writes without reaching into write-execution internals. 
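///
/// A minimal usage sketch (illustrative only): `reader` is a hypothetical
/// `LiveStateReader` implementation assumed to already expose a
/// `dir-docs`/`dir-guides` hierarchy, mirroring the in-memory reader used by
/// the tests below.
///
/// ```ignore
/// // Load directory, file, and blob-ref rows visible to "version-a" and
/// // build the lookup indexes used by filesystem write planning.
/// let filesystem = VisibleFilesystem::load(reader, "version-a").await?;
/// let parent = filesystem
///     .directories_by_id
///     .get("dir-guides")
///     .and_then(|directory| directory.parent_id.as_deref());
/// assert_eq!(parent, Some("dir-docs"));
/// ```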
#[derive(Debug, Clone, PartialEq, Eq, Default)] pub(crate) struct VisibleFilesystem { pub(crate) directories_by_id: BTreeMap, pub(crate) directory_children_by_parent_id: BTreeMap, BTreeSet>, pub(crate) files_by_directory_id: BTreeMap, BTreeMap>, pub(crate) blob_refs_by_file_id: BTreeMap, } impl VisibleFilesystem { /// Loads filesystem rows for a single version from execution-visible live /// state and builds lookup indexes used by filesystem write planning. pub(crate) async fn load( live_state: Arc, version_id: &str, ) -> Result { let rows = live_state .scan_rows(&LiveStateScanRequest { filter: LiveStateFilter { schema_keys: vec![ DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string(), FILE_DESCRIPTOR_SCHEMA_KEY.to_string(), BLOB_REF_SCHEMA_KEY.to_string(), ], version_ids: vec![version_id.to_string()], ..LiveStateFilter::default() }, ..LiveStateScanRequest::default() }) .await?; Self::from_live_rows(rows) } /// Builds filesystem lookup indexes from rows that are already known to be /// transaction-visible. pub(crate) fn from_live_rows(rows: Vec) -> Result { let mut visible = Self::default(); for row in rows { let Some(snapshot_content) = row.snapshot_content.as_deref() else { continue; }; match row.schema_key.as_str() { DIRECTORY_DESCRIPTOR_SCHEMA_KEY => { let snapshot: DirectoryDescriptorSnapshot = serde_json::from_str(snapshot_content).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("invalid lix_directory_descriptor snapshot JSON: {error}"), ) })?; let directory = VisibleDirectory { id: snapshot.id, parent_id: snapshot.parent_id, name: snapshot.name, hidden: snapshot.hidden.unwrap_or(false), context: filesystem_row_context(&row)?, }; visible .directory_children_by_parent_id .entry(directory.parent_id.clone()) .or_default() .insert(directory.id.clone()); visible .directories_by_id .insert(directory.id.clone(), directory); } FILE_DESCRIPTOR_SCHEMA_KEY => { let snapshot: FileDescriptorSnapshot = serde_json::from_str(snapshot_content) .map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("invalid lix_file_descriptor snapshot JSON: {error}"), ) })?; let file = VisibleFile { id: snapshot.id, directory_id: snapshot.directory_id, name: snapshot.name, hidden: snapshot.hidden, context: filesystem_row_context(&row)?, }; visible .files_by_directory_id .entry(file.directory_id.clone()) .or_default() .insert(file.id.clone(), file); } BLOB_REF_SCHEMA_KEY => { let snapshot: BlobRefSnapshot = serde_json::from_str(snapshot_content) .map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("invalid lix_binary_blob_ref snapshot JSON: {error}"), ) })?; visible.blob_refs_by_file_id.insert( snapshot.id.clone(), VisibleBlobRef { file_id: snapshot.id, blob_hash: snapshot.blob_hash, size_bytes: snapshot.size_bytes, context: filesystem_row_context(&row)?, }, ); } _ => {} } } Ok(visible) } } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct VisibleDirectory { pub(crate) id: String, pub(crate) parent_id: Option, pub(crate) name: String, pub(crate) hidden: bool, pub(crate) context: FilesystemRowContext, } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct VisibleFile { pub(crate) id: String, pub(crate) directory_id: Option, pub(crate) name: String, pub(crate) hidden: bool, pub(crate) context: FilesystemRowContext, } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct VisibleBlobRef { pub(crate) file_id: String, pub(crate) blob_hash: String, pub(crate) size_bytes: Option, pub(crate) context: FilesystemRowContext, } #[derive(Debug, Deserialize)] struct DirectoryDescriptorSnapshot { 
id: String, parent_id: Option, name: String, hidden: Option, } #[derive(Debug, Deserialize)] struct FileDescriptorSnapshot { id: String, directory_id: Option, name: String, hidden: bool, } #[derive(Debug, Deserialize)] struct BlobRefSnapshot { id: String, blob_hash: String, size_bytes: Option, } fn filesystem_row_context( row: &MaterializedLiveStateRow, ) -> Result { Ok(FilesystemRowContext { version_id: row.version_id.clone(), global: row.global, untracked: row.untracked, file_id: row.file_id.clone(), metadata: row .metadata .as_deref() .map(|metadata| { crate::parse_row_metadata_value(metadata, "filesystem row metadata").and_then( |metadata| { crate::transaction::types::TransactionJson::from_value( metadata, "filesystem row metadata", ) }, ) }) .transpose()?, }) } #[cfg(test)] mod tests { use async_trait::async_trait; use crate::live_state::MaterializedLiveStateRow; use crate::live_state::{LiveStateReader, LiveStateRowRequest, LiveStateScanRequest}; use crate::LixError; use super::{ VisibleFilesystem, BLOB_REF_SCHEMA_KEY, DIRECTORY_DESCRIPTOR_SCHEMA_KEY, FILE_DESCRIPTOR_SCHEMA_KEY, }; #[tokio::test] async fn nested_directories_resolve_correctly() { let filesystem = VisibleFilesystem::load( live_state(vec![ directory_row( "dir-docs", r#"{"id":"dir-docs","parent_id":null,"name":"docs","hidden":false}"#, ), directory_row( "dir-guides", r#"{"id":"dir-guides","parent_id":"dir-docs","name":"guides","hidden":false}"#, ), ]), "version-a", ) .await .expect("visible filesystem should load"); assert_eq!( filesystem .directories_by_id .get("dir-guides") .and_then(|directory| directory.parent_id.as_deref()), Some("dir-docs") ); assert!(filesystem .directory_children_by_parent_id .get(&None) .is_some_and(|children| children.contains("dir-docs"))); assert!(filesystem .directory_children_by_parent_id .get(&Some("dir-docs".to_string())) .is_some_and(|children| children.contains("dir-guides"))); } #[tokio::test] async fn files_attach_to_directory_ids() { let filesystem = VisibleFilesystem::load( live_state(vec![file_row( "file-readme", r#"{"id":"file-readme","directory_id":"dir-guides","name":"readme.md","hidden":false}"#, )]), "version-a", ) .await .expect("visible filesystem should load"); let files = filesystem .files_by_directory_id .get(&Some("dir-guides".to_string())) .expect("directory should have attached files"); let file = files .get("file-readme") .expect("file should be indexed by id inside directory"); assert_eq!(file.name, "readme.md"); } #[tokio::test] async fn blob_refs_attach_to_file_ids() { let filesystem = VisibleFilesystem::load( live_state(vec![blob_ref_row( "file-readme", r#"{"id":"file-readme","blob_hash":"abc123","size_bytes":5}"#, )]), "version-a", ) .await .expect("visible filesystem should load"); let blob_ref = filesystem .blob_refs_by_file_id .get("file-readme") .expect("blob ref should be indexed by file id"); assert_eq!(blob_ref.blob_hash, "abc123"); assert_eq!(blob_ref.size_bytes, Some(5)); } fn live_state(rows: Vec) -> std::sync::Arc { std::sync::Arc::new(RowsLiveStateReader { rows }) } struct RowsLiveStateReader { rows: Vec, } #[async_trait] impl LiveStateReader for RowsLiveStateReader { async fn scan_rows( &self, request: &LiveStateScanRequest, ) -> Result, LixError> { Ok(self .rows .iter() .filter(|row| { (request.filter.schema_keys.is_empty() || request.filter.schema_keys.contains(&row.schema_key)) && (request.filter.version_ids.is_empty() || request.filter.version_ids.contains(&row.version_id)) }) .cloned() .collect()) } async fn load_row( &self, _request: 
&LiveStateRowRequest, ) -> Result, LixError> { Ok(None) } } fn directory_row(entity_id: &str, snapshot_content: &str) -> MaterializedLiveStateRow { live_row( entity_id, DIRECTORY_DESCRIPTOR_SCHEMA_KEY, None, snapshot_content, ) } fn file_row(entity_id: &str, snapshot_content: &str) -> MaterializedLiveStateRow { live_row( entity_id, FILE_DESCRIPTOR_SCHEMA_KEY, None, snapshot_content, ) } fn blob_ref_row(entity_id: &str, snapshot_content: &str) -> MaterializedLiveStateRow { live_row( entity_id, BLOB_REF_SCHEMA_KEY, Some(entity_id.to_string()), snapshot_content, ) } fn live_row( entity_id: &str, schema_key: &str, file_id: Option, snapshot_content: &str, ) -> MaterializedLiveStateRow { MaterializedLiveStateRow { entity_id: crate::entity_identity::EntityIdentity::single(entity_id), schema_key: schema_key.to_string(), file_id, snapshot_content: Some(snapshot_content.to_string()), metadata: None, deleted: false, version_id: "version-a".to_string(), change_id: Some(format!("change-{entity_id}")), commit_id: Some(format!("commit-{entity_id}")), global: false, untracked: false, created_at: "2026-04-23T00:00:00Z".to_string(), updated_at: "2026-04-23T01:00:00Z".to_string(), } } } ================================================ FILE: packages/engine/src/sql2/history_projection.rs ================================================ use serde_json::Value as JsonValue; use crate::entity_identity::EntityIdentity; use crate::LixError; /// Shared projection contract for typed history views. /// /// On tombstone rows (`snapshot_content IS NULL`), identity columns survive by /// projecting from canonical entity identity. Non-identity columns must remain /// NULL because there is no snapshot to project payload from. pub(crate) enum HistoryIdentityProjection<'a> { PrimaryKeyPaths(&'a [Vec]), SingleColumn { column: &'a str }, } pub(crate) fn tombstone_identity_column_value( column_name: &str, entity_id: &str, projection: HistoryIdentityProjection<'_>, ) -> Result, LixError> { match projection { HistoryIdentityProjection::SingleColumn { column } => { if column_name == column { Ok(Some(JsonValue::String(entity_id.to_string()))) } else { Ok(None) } } HistoryIdentityProjection::PrimaryKeyPaths(primary_key_paths) => { primary_key_tombstone_value(column_name, entity_id, primary_key_paths) } } } fn primary_key_tombstone_value( column_name: &str, entity_id: &str, primary_key_paths: &[Vec], ) -> Result, LixError> { let Some(part_index) = primary_key_paths .iter() .position(|path| path.as_slice() == [column_name]) else { return Ok(None); }; let identity = EntityIdentity::from_json_array_text(entity_id).map_err(|error| { LixError::unknown(format!( "failed to decode history tombstone entity identity: {error}" )) })?; Ok(identity .parts .get(part_index) .map(|part| JsonValue::String(part.clone()))) } ================================================ FILE: packages/engine/src/sql2/history_provider.rs ================================================ use std::any::Any; use std::sync::Arc; use async_trait::async_trait; use datafusion::arrow::array::{ArrayRef, Int64Array, StringArray}; use datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef}; use datafusion::arrow::record_batch::{RecordBatch, RecordBatchOptions}; use datafusion::catalog::{Session, TableProvider}; use datafusion::common::{DataFusionError, Result}; use datafusion::datasource::TableType; use datafusion::execution::TaskContext; use datafusion::logical_expr::{Expr, TableProviderFilterPushDown}; use datafusion::physical_expr::EquivalenceProperties; use 
datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties}; use datafusion::physical_plan::stream::RecordBatchStreamAdapter; use datafusion::physical_plan::{ DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream, }; use datafusion::prelude::SessionContext; use futures_util::{stream, TryStreamExt}; use tokio::sync::Mutex; use crate::commit_graph::CommitGraphReader; use crate::{serialize_row_metadata, LixError}; use super::history_route::{ load_history_entries, parse_history_filter, HistoryColumnStyle, HistoryRoute, HistoryViewDescriptor, }; use super::result_metadata::json_field; use super::SqlCommitStoreQuerySource; pub(crate) async fn register_history_providers( session: &SessionContext, commit_graph: Box, query_source: SqlCommitStoreQuerySource, ) -> Result, LixError> { let provider: Arc = Arc::new(LixStateHistoryProvider::new( Arc::new(Mutex::new(commit_graph)), query_source, )); session .register_table("lix_state_history", Arc::clone(&provider)) .map_err(datafusion_error_to_lix_error)?; Ok(provider) } pub(crate) struct LixStateHistoryProvider { schema: SchemaRef, commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, } impl std::fmt::Debug for LixStateHistoryProvider { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixStateHistoryProvider").finish() } } impl LixStateHistoryProvider { pub(crate) fn new( commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, ) -> Self { Self { schema: lix_state_history_schema(), commit_graph, query_source, } } } #[async_trait] impl TableProvider for LixStateHistoryProvider { fn as_any(&self) -> &dyn Any { self } fn schema(&self) -> SchemaRef { Arc::clone(&self.schema) } fn table_type(&self) -> TableType { TableType::View } fn supports_filters_pushdown( &self, filters: &[&Expr], ) -> Result> { Ok(filters .iter() .map(|filter| { if parse_history_filter(filter, HistoryColumnStyle::Bare).is_some() { TableProviderFilterPushDown::Exact } else { TableProviderFilterPushDown::Unsupported } }) .collect()) } async fn scan( &self, _state: &dyn Session, projection: Option<&Vec>, filters: &[Expr], limit: Option, ) -> Result> { let projected_schema = projected_schema(&self.schema, projection)?; Ok(Arc::new(LixStateHistoryScanExec::new( Arc::clone(&self.commit_graph), self.query_source.clone(), projected_schema, projection.cloned(), HistoryRoute::from_filters(filters, HistoryColumnStyle::Bare), limit, ))) } } struct LixStateHistoryScanExec { commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, schema: SchemaRef, projection: Option>, route: HistoryRoute, limit: Option, properties: Arc, } impl std::fmt::Debug for LixStateHistoryScanExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixStateHistoryScanExec") .field("limit", &self.limit) .field("route", &self.route) .finish() } } impl LixStateHistoryScanExec { fn new( commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, schema: SchemaRef, projection: Option>, route: HistoryRoute, limit: Option, ) -> Self { let properties = PlanProperties::new( EquivalenceProperties::new(Arc::clone(&schema)), Partitioning::UnknownPartitioning(1), EmissionType::Incremental, Boundedness::Bounded, ); Self { commit_graph, query_source, schema, projection, route, limit, properties: Arc::new(properties), } } } impl DisplayAs for LixStateHistoryScanExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { 
DisplayFormatType::Default | DisplayFormatType::Verbose => { write!( f, "LixStateHistoryScanExec(limit={:?}, route={:?})", self.limit, self.route ) } DisplayFormatType::TreeRender => write!(f, "LixStateHistoryScanExec"), } } } impl ExecutionPlan for LixStateHistoryScanExec { fn name(&self) -> &str { "LixStateHistoryScanExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "LixStateHistoryScanExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "LixStateHistoryScanExec only exposes one partition, got {partition}" ))); } let commit_graph = Arc::clone(&self.commit_graph); let query_source = self.query_source.clone(); let route = self.route.clone(); let schema = Arc::clone(&self.schema); let stream_schema = Arc::clone(&schema); let limit = self.limit; let zero_column_projection = self .projection .as_ref() .is_some_and(|projection| projection.is_empty()); let stream = stream::once(async move { let rows = if route.is_contradictory() { Vec::new() } else { load_state_history_rows(commit_graph, query_source, &route) .await .map_err(lix_error_to_datafusion_error)? }; let rows = if let Some(limit) = limit { rows.into_iter().take(limit).collect::>() } else { rows }; let batch = if zero_column_projection { let options = RecordBatchOptions::new().with_row_count(Some(rows.len())); RecordBatch::try_new_with_options(Arc::clone(&stream_schema), vec![], &options) .map_err(|error| { DataFusionError::Execution(format!( "failed to build zero-column lix_state_history batch: {error}" )) })? } else { state_history_record_batch(Arc::clone(&stream_schema), &rows)? 
}; Ok::<_, DataFusionError>(stream::iter(vec![Ok::( batch, )])) }) .try_flatten(); Ok(Box::pin(RecordBatchStreamAdapter::new(schema, stream))) } } fn lix_state_history_schema() -> SchemaRef { Arc::new(Schema::new(vec![ json_field("entity_id", false), Field::new("schema_key", DataType::Utf8, false), Field::new("file_id", DataType::Utf8, true), json_field("snapshot_content", true), json_field("metadata", true), Field::new("change_id", DataType::Utf8, false), Field::new("observed_commit_id", DataType::Utf8, false), Field::new("commit_created_at", DataType::Utf8, false), Field::new("start_commit_id", DataType::Utf8, false), Field::new("depth", DataType::Int64, false), ])) } fn projected_schema(base_schema: &SchemaRef, projection: Option<&Vec>) -> Result { let fields = match projection { Some(indices) => indices .iter() .map(|index| base_schema.field(*index).as_ref().clone()) .collect::>(), None => base_schema .fields() .iter() .map(|field| field.as_ref().clone()) .collect::>(), }; Ok(Arc::new(Schema::new(fields))) } #[derive(Debug, Clone)] struct StateHistorySqlRow { entity_id: String, schema_key: String, file_id: Option, snapshot_content: Option, metadata: Option, change_id: String, observed_commit_id: String, commit_created_at: String, start_commit_id: String, depth: i64, } fn state_history_record_batch( schema: SchemaRef, rows: &[StateHistorySqlRow], ) -> Result { let arrays = schema .fields() .iter() .map(|field| { Ok(match field.name().as_str() { "entity_id" => string_array(rows.iter().map(|row| Some(row.entity_id.as_str()))), "schema_key" => string_array(rows.iter().map(|row| Some(row.schema_key.as_str()))), "file_id" => string_array(rows.iter().map(|row| row.file_id.as_deref())), "snapshot_content" => { string_array(rows.iter().map(|row| row.snapshot_content.as_deref())) } "metadata" => Arc::new(StringArray::from( rows.iter() .map(|row| row.metadata.as_ref().map(serialize_row_metadata)) .collect::>(), )), "change_id" => string_array(rows.iter().map(|row| Some(row.change_id.as_str()))), "observed_commit_id" => { string_array(rows.iter().map(|row| Some(row.observed_commit_id.as_str()))) } "commit_created_at" => { string_array(rows.iter().map(|row| Some(row.commit_created_at.as_str()))) } "start_commit_id" => { string_array(rows.iter().map(|row| Some(row.start_commit_id.as_str()))) } "depth" => Arc::new(Int64Array::from( rows.iter().map(|row| row.depth).collect::>(), )) as ArrayRef, other => { return Err(DataFusionError::Execution(format!( "lix_state_history provider does not support projected column '{other}'" ))) } }) }) .collect::>>()?; RecordBatch::try_new(schema, arrays).map_err(DataFusionError::from) } fn string_array<'a>(values: impl Iterator>) -> ArrayRef { Arc::new(StringArray::from(values.collect::>())) as ArrayRef } async fn load_state_history_rows( commit_graph: Arc>>, query_source: SqlCommitStoreQuerySource, route: &HistoryRoute, ) -> Result, LixError> { let entries = load_history_entries( HistoryViewDescriptor { view_name: "lix_state_history", start_commit_column: "start_commit_id", }, commit_graph, query_source.json_reader, route, Vec::new(), ) .await?; let mut rows = entries .into_iter() .map(|entry| -> Result { Ok(StateHistorySqlRow { entity_id: entry.change.entity_id.as_json_array_text()?, schema_key: entry.change.schema_key, file_id: entry.change.file_id, snapshot_content: entry.change.snapshot_content, metadata: entry.change.metadata, change_id: entry.change.id, observed_commit_id: entry.observed_commit_id, commit_created_at: entry.commit_created_at, start_commit_id: 
entry.start_commit_id, depth: i64::from(entry.depth), }) }) .collect::, _>>()?; rows.sort_by(|left, right| { left.entity_id .cmp(&right.entity_id) .then(left.file_id.cmp(&right.file_id)) .then(left.schema_key.cmp(&right.schema_key)) .then(left.depth.cmp(&right.depth)) .then(left.change_id.cmp(&right.change_id)) }); Ok(rows) } fn datafusion_error_to_lix_error(error: DataFusionError) -> LixError { super::error::datafusion_error_to_lix_error(error) } fn lix_error_to_datafusion_error(error: LixError) -> DataFusionError { super::error::lix_error_to_datafusion_error(error) } ================================================ FILE: packages/engine/src/sql2/history_route.rs ================================================ use std::collections::BTreeMap; use std::sync::Arc; use datafusion::common::ScalarValue; use datafusion::logical_expr::expr::InList; use datafusion::logical_expr::{Expr, Operator}; use tokio::sync::Mutex; use crate::commit_graph::{CommitGraphChangeHistoryRequest, CommitGraphReader}; use crate::entity_identity::EntityIdentity; use crate::LixError; use super::SqlJsonReader; use crate::commit_store::{materialize_change, MaterializedChange}; /// Shared routing state for commit-shaped history SQL surfaces. /// /// History providers differ in how they shape rows, but they should not drift /// in how they interpret filters such as `start_commit_id IN (...)`, entity /// filters, or depth ranges. #[derive(Debug, Clone, Default, PartialEq, Eq)] pub(crate) struct HistoryRoute { pub(crate) start_commit_ids: Vec, pub(crate) entity_ids: Vec, pub(crate) schema_keys: Vec, pub(crate) file_ids: Vec, pub(crate) min_depth: Option, pub(crate) max_depth: Option, pub(crate) contradictory: bool, } impl HistoryRoute { pub(crate) fn from_filters(filters: &[Expr], column_style: HistoryColumnStyle) -> Self { let mut route = Self::default(); for filter in filters { apply_history_filter(filter, &mut route, column_style); } route } /// Returns the part of the route that is safe to apply before a shaped /// history provider has built its output rows. /// /// Surface providers such as `lix_file_history` may be caused by different /// canonical event schemas than the schema they expose. For those providers, /// identity/schema filters must be evaluated against the shaped output row, /// not against the canonical event row. pub(crate) fn traversal_only(&self) -> Self { Self { start_commit_ids: self.start_commit_ids.clone(), min_depth: self.min_depth, max_depth: self.max_depth, contradictory: self.contradictory, ..Self::default() } } /// Returns only the explicit history starts. /// /// Shaped history providers use this for context loading: path/data shaping /// often needs ancestor descriptor rows even when the event route is /// restricted to a specific depth. pub(crate) fn starts_only(&self) -> Self { Self { start_commit_ids: self.start_commit_ids.clone(), contradictory: self.contradictory, ..Self::default() } } pub(crate) fn is_contradictory(&self) -> bool { self.contradictory || self .min_depth .zip(self.max_depth) .is_some_and(|(min, max)| min > max) || self.min_depth.is_some_and(|depth| depth < 0) || self.max_depth.is_some_and(|depth| depth < 0) } /// Checks filters that refer to the row exposed by a shaped history surface. 
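///
/// A minimal sketch of the intended behaviour (not taken from the test suite):
/// a route restricted to one schema key accepts rows carrying that key and
/// rejects everything else, because all other route buckets stay empty.
///
/// ```ignore
/// let route = HistoryRoute {
///     schema_keys: vec!["lix_file_descriptor".to_string()],
///     ..HistoryRoute::default()
/// };
/// // Accepted: the schema key is routed and no other constraint applies.
/// assert!(route.matches_surface_row("lix_file_descriptor", "file-readme", None, 0));
/// // Rejected: the schema key falls outside the routed set.
/// assert!(!route.matches_surface_row("lix_directory_descriptor", "dir-docs", None, 0));
/// ```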
pub(crate) fn matches_surface_row( &self, schema_key: &str, entity_id: &str, file_id: Option<&str>, depth: u32, ) -> bool { if self.is_contradictory() { return false; } if !self.schema_keys.is_empty() && !self .schema_keys .iter() .any(|candidate| candidate == schema_key) { return false; } if !self.entity_ids.is_empty() && !self .entity_ids .iter() .any(|candidate| candidate == entity_id) { return false; } if !self.file_ids.is_empty() { let Some(file_id) = file_id else { return false; }; if !self.file_ids.iter().any(|candidate| candidate == file_id) { return false; } } if self .min_depth .is_some_and(|min_depth| i64::from(depth) < min_depth) { return false; } if self .max_depth .is_some_and(|max_depth| i64::from(depth) > max_depth) { return false; } true } } /// Commit-graph history entry enriched with commit metadata needed by SQL /// history surfaces. #[derive(Debug, Clone)] pub(crate) struct HistoryEntry { pub(crate) change: MaterializedChange, pub(crate) observed_commit_id: String, pub(crate) commit_created_at: String, pub(crate) start_commit_id: String, pub(crate) depth: u32, } pub(crate) const HISTORY_COL_ENTITY_ID: &str = "lixcol_entity_id"; pub(crate) const HISTORY_COL_SCHEMA_KEY: &str = "lixcol_schema_key"; pub(crate) const HISTORY_COL_FILE_ID: &str = "lixcol_file_id"; pub(crate) const HISTORY_COL_SNAPSHOT_CONTENT: &str = "lixcol_snapshot_content"; pub(crate) const HISTORY_COL_METADATA: &str = "lixcol_metadata"; pub(crate) const HISTORY_COL_CHANGE_ID: &str = "lixcol_change_id"; pub(crate) const HISTORY_COL_OBSERVED_COMMIT_ID: &str = "lixcol_observed_commit_id"; pub(crate) const HISTORY_COL_COMMIT_CREATED_AT: &str = "lixcol_commit_created_at"; pub(crate) const HISTORY_COL_START_COMMIT_ID: &str = "lixcol_start_commit_id"; pub(crate) const HISTORY_COL_DEPTH: &str = "lixcol_depth"; pub(crate) struct HistoryViewDescriptor<'a> { pub(crate) view_name: &'a str, pub(crate) start_commit_column: &'a str, } #[derive(Debug, Clone, Copy)] pub(crate) enum HistoryColumnStyle { Bare, Prefixed, } /// Shaped history views expose delete events as tombstone rows. /// /// If the current event is the descriptor tombstone itself, the provider must /// use that tombstone row instead of looking through to an earlier live /// descriptor. This keeps one contract across typed entity, file, directory, /// and state history: `snapshot_content IS NULL` means projected user/domain /// columns are NULL while metadata columns still identify the event. pub(crate) fn history_descriptor_event_matches( descriptor_entry: &HistoryEntry, event_depth: u32, event_change_id: &str, ) -> bool { descriptor_entry.depth == event_depth && descriptor_entry.change.id == event_change_id } pub(crate) fn parse_history_filter(expr: &Expr, column_style: HistoryColumnStyle) -> Option<()> { parse_history_filter_terms(expr, column_style).map(|_| ()) } pub(crate) fn commit_graph_history_request( route: &HistoryRoute, schema_keys: Vec, ) -> Option { let schema_keys = effective_schema_keys(route, schema_keys)?; Some(CommitGraphChangeHistoryRequest { entity_ids: route .entity_ids .iter() .filter_map(|entity_id| EntityIdentity::from_json_array_text(entity_id).ok()) .collect(), schema_keys, file_ids: route.file_ids.clone(), min_depth: route.min_depth.and_then(nonnegative_u32), max_depth: route.max_depth.and_then(nonnegative_u32), include_tombstones: true, }) } /// Loads commit-graph history once for all SQL history providers. /// /// Providers pass the schema keys they know how to shape. 
An empty list means /// "do not constrain by provider schema"; this is what `lix_state_history` uses. pub(crate) async fn load_history_entries( descriptor: HistoryViewDescriptor<'_>, commit_graph: Arc>>, mut json_reader: SqlJsonReader, route: &HistoryRoute, schema_keys: Vec, ) -> Result, LixError> { if route.is_contradictory() { return Ok(Vec::new()); } if route.start_commit_ids.is_empty() { return Err(LixError::new( LixError::CODE_HISTORY_FILTER_REQUIRED, format!( "{} requires a {} filter", descriptor.view_name, descriptor.start_commit_column ), ) .with_hint(format!( "Use WHERE {} = lix_active_version_commit_id() to inspect {} from the active version head.", descriptor.start_commit_column, descriptor.view_name ))); } let Some(request) = commit_graph_history_request(route, schema_keys) else { return Ok(Vec::new()); }; let mut rows = Vec::new(); for start_commit_id in &route.start_commit_ids { let (entries, reachable_commits) = { let mut guard = commit_graph.lock().await; let entries = guard .change_history_from_commit(start_commit_id, &request) .await?; let reachable_commits = guard.reachable_commits(start_commit_id).await?; (entries, reachable_commits) }; let commit_created_at_by_id = reachable_commits .into_iter() .map(|reachable| { ( reachable.commit.commit_id.clone(), reachable.commit.change.created_at.clone(), ) }) .collect::>(); for entry in entries { let change = materialize_change(&mut json_reader, entry.located_change).await?; rows.push(HistoryEntry { commit_created_at: commit_created_at_by_id .get(&entry.observed_commit_id) .cloned() .unwrap_or_else(|| change.created_at.clone()), change, observed_commit_id: entry.observed_commit_id, start_commit_id: entry.start_commit_id, depth: entry.depth, }); } } Ok(rows) } fn effective_schema_keys( route: &HistoryRoute, surface_schema_keys: Vec, ) -> Option> { if surface_schema_keys.is_empty() { return Some(route.schema_keys.clone()); } if route.schema_keys.is_empty() { return Some(surface_schema_keys); } let mut effective = Vec::new(); for schema_key in surface_schema_keys { if route.schema_keys.contains(&schema_key) && !effective.contains(&schema_key) { effective.push(schema_key); } } if effective.is_empty() { None } else { Some(effective) } } fn parse_history_filter_terms( expr: &Expr, column_style: HistoryColumnStyle, ) -> Option> { match expr { Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::And => { let mut terms = parse_history_filter_terms(&binary_expr.left, column_style)?; terms.extend(parse_history_filter_terms( &binary_expr.right, column_style, )?); Some(terms) } Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::Or => { parse_history_disjunction(binary_expr, column_style) } Expr::BinaryExpr(binary_expr) => { parse_history_binary_filter(binary_expr, column_style).map(|term| vec![term]) } Expr::InList(in_list) => { parse_history_in_list_filter(in_list, column_style).map(|term| vec![term]) } _ => None, } } fn collect_history_route_terms( expr: &Expr, column_style: HistoryColumnStyle, ) -> Vec { match expr { Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::And => { let mut terms = collect_history_route_terms(&binary_expr.left, column_style); terms.extend(collect_history_route_terms( &binary_expr.right, column_style, )); terms } // OR filters are only safe to route when the entire disjunction is a // supported history predicate. Partially routing one side would change // SQL semantics before DataFusion can apply the residual filter. 
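// For example (illustrative): `start_commit_id = 'a' OR start_commit_id = 'b'`
// merges into a single StartCommitIds(["a", "b"]) term, whereas
// `start_commit_id = 'a' OR path LIKE '/docs/%'` routes nothing and is left
// entirely to DataFusion as a residual filter (see the OR tests below).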
Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::Or => { parse_history_disjunction(binary_expr, column_style).unwrap_or_default() } Expr::BinaryExpr(binary_expr) => parse_history_binary_filter(binary_expr, column_style) .map(|term| vec![term]) .unwrap_or_default(), Expr::InList(in_list) => parse_history_in_list_filter(in_list, column_style) .map(|term| vec![term]) .unwrap_or_default(), _ => Vec::new(), } } fn parse_history_disjunction( binary_expr: &datafusion::logical_expr::BinaryExpr, column_style: HistoryColumnStyle, ) -> Option> { let left = parse_history_filter_terms(&binary_expr.left, column_style)?; let right = parse_history_filter_terms(&binary_expr.right, column_style)?; let [left] = left.as_slice() else { return None; }; let [right] = right.as_slice() else { return None; }; merge_history_disjunction_terms(left.clone(), right.clone()).map(|term| vec![term]) } #[derive(Debug, Clone, PartialEq, Eq)] enum HistoryFilterTerm { StartCommitIds(Vec), EntityIds(Vec), SchemaKeys(Vec), FileIds(Vec), MinDepth(i64), MaxDepth(i64), ExactDepth(i64), } fn merge_history_disjunction_terms( left: HistoryFilterTerm, right: HistoryFilterTerm, ) -> Option { match (left, right) { (HistoryFilterTerm::StartCommitIds(mut left), HistoryFilterTerm::StartCommitIds(right)) => { extend_unique(&mut left, right); Some(HistoryFilterTerm::StartCommitIds(left)) } (HistoryFilterTerm::EntityIds(mut left), HistoryFilterTerm::EntityIds(right)) => { extend_unique(&mut left, right); Some(HistoryFilterTerm::EntityIds(left)) } (HistoryFilterTerm::FileIds(mut left), HistoryFilterTerm::FileIds(right)) => { extend_unique(&mut left, right); Some(HistoryFilterTerm::FileIds(left)) } (HistoryFilterTerm::SchemaKeys(mut left), HistoryFilterTerm::SchemaKeys(right)) => { extend_unique(&mut left, right); Some(HistoryFilterTerm::SchemaKeys(left)) } _ => None, } } fn parse_history_binary_filter( binary_expr: &datafusion::logical_expr::BinaryExpr, column_style: HistoryColumnStyle, ) -> Option { let Expr::Column(column) = &*binary_expr.left else { return None; }; let column_name = canonical_history_column_name(column.name.as_str(), column_style)?; let right = &*binary_expr.right; match (column_name, &binary_expr.op, right) { ("start_commit_id", Operator::Eq, Expr::Literal(ScalarValue::Utf8(Some(value)), _)) | ("schema_key", Operator::Eq, Expr::Literal(ScalarValue::Utf8(Some(value)), _)) | ("file_id", Operator::Eq, Expr::Literal(ScalarValue::Utf8(Some(value)), _)) => { Some(match column_name { "start_commit_id" => HistoryFilterTerm::StartCommitIds(vec![value.clone()]), "schema_key" => HistoryFilterTerm::SchemaKeys(vec![value.clone()]), "file_id" => HistoryFilterTerm::FileIds(vec![value.clone()]), _ => unreachable!(), }) } ("entity_id", Operator::Eq, Expr::Literal(ScalarValue::Utf8(Some(value)), _)) => { canonical_entity_id_value(value).map(|value| HistoryFilterTerm::EntityIds(vec![value])) } ("depth", Operator::Eq, depth_expr) => { scalar_i64_literal(depth_expr).map(HistoryFilterTerm::ExactDepth) } ("depth", Operator::Gt, depth_expr) => { scalar_i64_literal(depth_expr).map(|value| HistoryFilterTerm::MinDepth(value + 1)) } ("depth", Operator::GtEq, depth_expr) => { scalar_i64_literal(depth_expr).map(HistoryFilterTerm::MinDepth) } ("depth", Operator::Lt, depth_expr) => { scalar_i64_literal(depth_expr).map(|value| HistoryFilterTerm::MaxDepth(value - 1)) } ("depth", Operator::LtEq, depth_expr) => { scalar_i64_literal(depth_expr).map(HistoryFilterTerm::MaxDepth) } _ => None, } } fn parse_history_in_list_filter( in_list: &InList, 
column_style: HistoryColumnStyle, ) -> Option { if in_list.negated { return None; } let Expr::Column(column) = in_list.expr.as_ref() else { return None; }; let column_name = canonical_history_column_name(column.name.as_str(), column_style)?; let values = in_list .list .iter() .map(string_literal) .collect::>>()?; if values.is_empty() { return None; } match column_name { "start_commit_id" => Some(HistoryFilterTerm::StartCommitIds(values)), "entity_id" => canonical_entity_id_values(values).map(HistoryFilterTerm::EntityIds), "schema_key" => Some(HistoryFilterTerm::SchemaKeys(values)), "file_id" => Some(HistoryFilterTerm::FileIds(values)), _ => None, } } fn apply_history_filter(expr: &Expr, route: &mut HistoryRoute, column_style: HistoryColumnStyle) { for term in collect_history_route_terms(expr, column_style) { match term { HistoryFilterTerm::StartCommitIds(values) => { route.contradictory |= apply_conjunctive_values_filter(&mut route.start_commit_ids, values) } HistoryFilterTerm::EntityIds(values) => { route.contradictory |= apply_conjunctive_values_filter(&mut route.entity_ids, values) } HistoryFilterTerm::SchemaKeys(values) => { route.contradictory |= apply_conjunctive_values_filter(&mut route.schema_keys, values) } HistoryFilterTerm::FileIds(values) => { route.contradictory |= apply_conjunctive_values_filter(&mut route.file_ids, values) } HistoryFilterTerm::ExactDepth(value) => { route.min_depth = Some(value); route.max_depth = Some(value); } HistoryFilterTerm::MinDepth(value) => { route.min_depth = Some(route.min_depth.map_or(value, |current| current.max(value))); } HistoryFilterTerm::MaxDepth(value) => { route.max_depth = Some(route.max_depth.map_or(value, |current| current.min(value))); } } } } fn apply_conjunctive_values_filter(bucket: &mut Vec, incoming_values: Vec) -> bool { let mut values = Vec::new(); extend_unique(&mut values, incoming_values); if values.is_empty() { return true; } if bucket.is_empty() { extend_unique(bucket, values); return false; } bucket.retain(|existing| values.contains(existing)); bucket.is_empty() } fn canonical_entity_id_values(values: Vec) -> Option> { values .into_iter() .map(|value| canonical_entity_id_value(&value)) .collect() } fn canonical_entity_id_value(value: &str) -> Option { EntityIdentity::from_json_array_text(value) .ok()? 
.as_json_array_text() .ok() } fn canonical_history_column_name(name: &str, column_style: HistoryColumnStyle) -> Option<&str> { match (column_style, name) { (HistoryColumnStyle::Bare, "start_commit_id") | (HistoryColumnStyle::Prefixed, "lixcol_start_commit_id") => Some("start_commit_id"), (HistoryColumnStyle::Bare, "entity_id") | (HistoryColumnStyle::Prefixed, "lixcol_entity_id") => Some("entity_id"), (HistoryColumnStyle::Bare, "schema_key") | (HistoryColumnStyle::Prefixed, "lixcol_schema_key") => Some("schema_key"), (HistoryColumnStyle::Bare, "file_id") | (HistoryColumnStyle::Prefixed, "lixcol_file_id") => Some("file_id"), (HistoryColumnStyle::Bare, "depth") | (HistoryColumnStyle::Prefixed, "lixcol_depth") => { Some("depth") } _ => None, } } fn nonnegative_u32(value: i64) -> Option { u32::try_from(value).ok() } fn extend_unique(bucket: &mut Vec, values: Vec) { for value in values { if !bucket.contains(&value) { bucket.push(value); } } } fn string_literal(expr: &Expr) -> Option { match expr { Expr::Literal(ScalarValue::Utf8(Some(value)), _) => Some(value.clone()), _ => None, } } fn scalar_i64_literal(expr: &Expr) -> Option { match expr { Expr::Literal(ScalarValue::Int8(Some(value)), _) => Some(i64::from(*value)), Expr::Literal(ScalarValue::Int16(Some(value)), _) => Some(i64::from(*value)), Expr::Literal(ScalarValue::Int32(Some(value)), _) => Some(i64::from(*value)), Expr::Literal(ScalarValue::Int64(Some(value)), _) => Some(*value), Expr::Literal(ScalarValue::UInt8(Some(value)), _) => Some(i64::from(*value)), Expr::Literal(ScalarValue::UInt16(Some(value)), _) => Some(i64::from(*value)), Expr::Literal(ScalarValue::UInt32(Some(value)), _) => Some(i64::from(*value)), Expr::Literal(ScalarValue::UInt64(Some(value)), _) => i64::try_from(*value).ok(), _ => None, } } #[cfg(test)] mod tests { use datafusion::common::{Column, ScalarValue}; use datafusion::logical_expr::{BinaryExpr, Expr, Like, Operator}; use super::{parse_history_filter, HistoryColumnStyle, HistoryRoute}; #[test] fn route_extraction_keeps_supported_terms_from_mixed_and_filter() { let filter = and( eq(col("start_commit_id"), str_lit("commit-1")), Expr::Like(Like::new( false, Box::new(col("path")), Box::new(str_lit("/docs/%")), None, false, )), ); assert!( parse_history_filter(&filter, HistoryColumnStyle::Bare).is_none(), "mixed filters must not be advertised as exact pushdown" ); let route = HistoryRoute::from_filters(&[filter], HistoryColumnStyle::Bare); assert_eq!(route.start_commit_ids, vec!["commit-1".to_string()]); } #[test] fn route_extraction_does_not_partially_route_mixed_or_filter() { let filter = or( eq(col("start_commit_id"), str_lit("commit-1")), Expr::Like(Like::new( false, Box::new(col("path")), Box::new(str_lit("/docs/%")), None, false, )), ); let route = HistoryRoute::from_filters(&[filter], HistoryColumnStyle::Bare); assert!( route.start_commit_ids.is_empty(), "partial OR pushdown would change SQL semantics" ); } fn and(left: Expr, right: Expr) -> Expr { binary(left, Operator::And, right) } fn or(left: Expr, right: Expr) -> Expr { binary(left, Operator::Or, right) } fn eq(left: Expr, right: Expr) -> Expr { binary(left, Operator::Eq, right) } fn binary(left: Expr, op: Operator, right: Expr) -> Expr { Expr::BinaryExpr(BinaryExpr::new(Box::new(left), op, Box::new(right))) } fn col(name: &str) -> Expr { Expr::Column(Column::from_name(name)) } fn str_lit(value: &str) -> Expr { Expr::Literal(ScalarValue::Utf8(Some(value.to_string())), None) } } ================================================ FILE: 
packages/engine/src/sql2/lix_state_provider.rs ================================================ use std::any::Any; use std::collections::BTreeSet; use std::sync::Arc; use async_trait::async_trait; use datafusion::arrow::array::{ArrayRef, BooleanArray, StringArray, UInt64Array}; use datafusion::arrow::compute::{and, filter_record_batch}; use datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef}; use datafusion::arrow::record_batch::{RecordBatch, RecordBatchOptions}; use datafusion::catalog::{Session, TableProvider}; use datafusion::common::{not_impl_err, DFSchema, DataFusionError, Result, SchemaExt}; use datafusion::datasource::TableType; use datafusion::execution::TaskContext; use datafusion::logical_expr::dml::InsertOp; use datafusion::logical_expr::expr::InList; use datafusion::logical_expr::{BinaryExpr, Expr, Operator, TableProviderFilterPushDown}; use datafusion::physical_expr::{create_physical_expr, EquivalenceProperties, PhysicalExpr}; use datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties}; use datafusion::physical_plan::stream::RecordBatchStreamAdapter; use datafusion::physical_plan::{ DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream, }; use datafusion::prelude::SessionContext; use datafusion::scalar::ScalarValue; use futures_util::{stream, TryStreamExt}; use serde_json::Value as JsonValue; use crate::entity_identity::EntityIdentity; use crate::live_state::MaterializedLiveStateRow; use crate::live_state::{ LiveStateFilter, LiveStateProjection, LiveStateReader, LiveStateScanRequest, }; use crate::sql2::dml::{InsertExec, InsertSink}; use crate::sql2::read_only::reject_read_only_stage_rows; use crate::sql2::version_scope::{resolve_provider_version_ids, VersionBinding}; use crate::sql2::write_normalization::{InsertCell, SqlCell, UpdateAssignmentValues}; use crate::transaction::types::{TransactionJson, TransactionWriteRow}; use crate::version::VersionRefReader; use crate::GLOBAL_VERSION_ID; use crate::{parse_row_metadata_value, serialize_row_metadata, LixError, NullableKeyFilter}; use crate::sql2::{ SqlWriteContext, WriteAccess, WriteContextLiveStateReader, WriteContextVersionRefReader, }; use crate::transaction::types::{TransactionWrite, TransactionWriteMode}; use super::predicate_typecheck::validate_json_predicate_filters; use super::result_metadata::json_field; pub(crate) async fn register_lix_state_providers( session: &SessionContext, active_version_id: &str, live_state: Arc, version_ref: Arc, ) -> Result<(), LixError> { session .register_table( "lix_state_by_version", Arc::new(LixStateProvider::by_version( Arc::clone(&live_state), Arc::clone(&version_ref), )), ) .map_err(datafusion_error_to_lix_error)?; session .register_table( "lix_state", Arc::new(LixStateProvider::active_version( active_version_id, live_state, version_ref, )), ) .map_err(datafusion_error_to_lix_error)?; Ok(()) } pub(crate) async fn register_lix_state_write_providers( session: &SessionContext, write_ctx: SqlWriteContext, ) -> Result<(), LixError> { session .register_table( "lix_state_by_version", Arc::new(LixStateProvider::by_version_with_write(write_ctx.clone())), ) .map_err(datafusion_error_to_lix_error)?; session .register_table( "lix_state", Arc::new(LixStateProvider::active_version_with_write(write_ctx)), ) .map_err(datafusion_error_to_lix_error)?; Ok(()) } pub(crate) struct LixStateProvider { schema: SchemaRef, live_state: Arc, version_ref: Arc, write_access: WriteAccess, version_binding: VersionBinding, } impl 
std::fmt::Debug for LixStateProvider { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixStateProvider") .field("write_access", &self.write_access.is_write()) .finish() } } impl LixStateProvider { pub(crate) fn active_version( active_version_id: impl Into, live_state: Arc, version_ref: Arc, ) -> Self { Self { schema: lix_state_schema(), live_state, version_ref, write_access: WriteAccess::read_only(), version_binding: VersionBinding::active(active_version_id), } } pub(crate) fn active_version_with_write(write_ctx: SqlWriteContext) -> Self { let active_version_id = write_ctx.active_version_id(); let live_state = Arc::new(WriteContextLiveStateReader::new(write_ctx.clone())); let version_ref = Arc::new(WriteContextVersionRefReader::new(write_ctx.clone())); Self { schema: lix_state_schema(), live_state, version_ref, write_access: WriteAccess::write(write_ctx), version_binding: VersionBinding::active(active_version_id), } } pub(crate) fn by_version( live_state: Arc, version_ref: Arc, ) -> Self { Self { schema: lix_state_by_version_schema(), live_state, version_ref, write_access: WriteAccess::read_only(), version_binding: VersionBinding::explicit(), } } pub(crate) fn by_version_with_write(write_ctx: SqlWriteContext) -> Self { let live_state = Arc::new(WriteContextLiveStateReader::new(write_ctx.clone())); let version_ref = Arc::new(WriteContextVersionRefReader::new(write_ctx.clone())); Self { schema: lix_state_by_version_schema(), live_state, version_ref, write_access: WriteAccess::write(write_ctx), version_binding: VersionBinding::explicit(), } } } #[async_trait] impl TableProvider for LixStateProvider { fn as_any(&self) -> &dyn Any { self } fn schema(&self) -> SchemaRef { Arc::clone(&self.schema) } fn table_type(&self) -> TableType { TableType::Base } fn supports_filters_pushdown( &self, filters: &[&Expr], ) -> Result> { Ok(filters .iter() .map(|filter| { if parse_lix_state_filter(filter).is_some() { TableProviderFilterPushDown::Exact } else { TableProviderFilterPushDown::Unsupported } }) .collect()) } async fn scan( &self, _state: &dyn Session, projection: Option<&Vec>, filters: &[Expr], limit: Option, ) -> Result> { let route = LixStateByVersionRoute::from_filters(filters); let projected_schema = projected_schema(&self.schema, projection)?; let mut request = lix_state_scan_request( &self.schema, self.version_binding.active_version_id(), projection, &route, limit, ); if !route.contradictory { request.filter.version_ids = resolve_provider_version_ids( self.version_ref.as_ref(), &self.version_binding, request.filter.version_ids, ) .await .map_err(lix_error_to_datafusion_error)?; } Ok(Arc::new(LixStateScanExec::new( Arc::clone(&self.live_state), projected_schema, request, ))) } async fn insert_into( &self, _state: &dyn Session, input: Arc, insert_op: InsertOp, ) -> Result> { if insert_op != InsertOp::Append { return not_impl_err!("{insert_op} not implemented for lix_state yet"); } let active_version_id = self .version_binding .require_active_version_id("INSERT") .map_err(lix_error_to_datafusion_error)?; let write_ctx = self.write_access.require_write("INSERT into lix_state")?; self.schema .logically_equivalent_names_and_types(&input.schema())?; let sink = LixStateInsertSink::new( Arc::clone(&self.schema), write_ctx.clone(), active_version_id, ); Ok(Arc::new(InsertExec::new(input, Arc::new(sink)))) } async fn delete_from( &self, state: &dyn Session, filters: Vec, ) -> Result> { let active_version_id = self .version_binding .require_active_version_id("DELETE") 
.map_err(lix_error_to_datafusion_error)?; let write_ctx = self.write_access.require_write("DELETE FROM lix_state")?; let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?; validate_json_predicate_filters(self.schema.as_ref(), &filters)?; let physical_filters = filters .iter() .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props())) .collect::>>()?; let route = LixStateByVersionRoute::from_filters(&filters); let request = lix_state_scan_request(&self.schema, Some(&active_version_id), None, &route, None); Ok(Arc::new(LixStateDeleteExec::new( write_ctx.clone(), Arc::clone(&self.schema), active_version_id, request, physical_filters, ))) } async fn update( &self, state: &dyn Session, assignments: Vec<(String, Expr)>, filters: Vec, ) -> Result> { let active_version_id = self .version_binding .require_active_version_id("UPDATE") .map_err(lix_error_to_datafusion_error)?; let write_ctx = self.write_access.require_write("UPDATE lix_state")?; validate_lix_state_update_assignments(&self.schema, &assignments)?; let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?; validate_json_predicate_filters(self.schema.as_ref(), &filters)?; let physical_assignments = assignments .iter() .map(|(column_name, expr)| { Ok(( column_name.clone(), create_physical_expr(expr, &df_schema, state.execution_props())?, )) }) .collect::>>()?; let physical_filters = filters .iter() .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props())) .collect::>>()?; let route = LixStateByVersionRoute::from_filters(&filters); let request = lix_state_scan_request(&self.schema, Some(&active_version_id), None, &route, None); Ok(Arc::new(LixStateUpdateExec::new( write_ctx.clone(), Arc::clone(&self.schema), active_version_id, request, physical_assignments, physical_filters, ))) } } struct LixStateInsertSink { write_ctx: SqlWriteContext, version_binding: String, } impl std::fmt::Debug for LixStateInsertSink { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixStateInsertSink").finish() } } impl LixStateInsertSink { fn new(_schema: SchemaRef, write_ctx: SqlWriteContext, version_binding: String) -> Self { Self { write_ctx, version_binding, } } } impl DisplayAs for LixStateInsertSink { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!(f, "LixStateInsertSink") } DisplayFormatType::TreeRender => write!(f, "LixStateInsertSink"), } } } #[async_trait] impl InsertSink for LixStateInsertSink { async fn write_batches( &self, batches: Vec, _context: &Arc, ) -> Result { let mut rows = Vec::new(); for batch in batches { rows.extend(lix_state_write_rows_from_batch( &batch, &self.version_binding, )?); } reject_read_only_stage_rows(&rows, "INSERT into lix_state")?; let count = u64::try_from(rows.len()) .map_err(|_| DataFusionError::Execution("INSERT row count overflow".into()))?; self.write_ctx .stage_write(TransactionWrite::Rows { mode: TransactionWriteMode::Insert, rows, }) .await .map_err(lix_error_to_datafusion_error)?; Ok(count) } } #[allow(dead_code)] struct LixStateDeleteExec { write_ctx: SqlWriteContext, table_schema: SchemaRef, version_binding: String, request: LiveStateScanRequest, filters: Vec>, result_schema: SchemaRef, properties: Arc, } impl std::fmt::Debug for LixStateDeleteExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixStateDeleteExec").finish() } } impl LixStateDeleteExec { fn new( 
write_ctx: SqlWriteContext, table_schema: SchemaRef, version_binding: String, request: LiveStateScanRequest, filters: Vec>, ) -> Self { let result_schema = dml_count_schema(); let properties = PlanProperties::new( EquivalenceProperties::new(Arc::clone(&result_schema)), Partitioning::UnknownPartitioning(1), EmissionType::Final, Boundedness::Bounded, ); Self { write_ctx, table_schema, version_binding, request, filters, result_schema, properties: Arc::new(properties), } } } impl DisplayAs for LixStateDeleteExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!(f, "LixStateDeleteExec(filters={})", self.filters.len()) } DisplayFormatType::TreeRender => write!(f, "LixStateDeleteExec"), } } } impl ExecutionPlan for LixStateDeleteExec { fn name(&self) -> &str { "LixStateDeleteExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "LixStateDeleteExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "LixStateDeleteExec only exposes one partition, got {partition}" ))); } let write_ctx = self.write_ctx.clone(); let table_schema = Arc::clone(&self.table_schema); let version_binding = self.version_binding.clone(); let request = self.request.clone(); let filters = self.filters.clone(); let result_schema = Arc::clone(&self.result_schema); let stream_schema = Arc::clone(&result_schema); let stream = stream::once(async move { let rows = if request.limit == Some(0) { Vec::new() } else { write_ctx .scan_live_state(&request) .await .map_err(lix_error_to_datafusion_error)? 
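// The scanned rows are filtered with the physical predicates below; only the
// surviving rows become deletable write rows and are staged, and the plan emits
// one `count` batch (UInt64) holding the number of staged deletions.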
}; let source_batch = lix_state_record_batch(Arc::clone(&table_schema), &rows) .map_err(lix_error_to_datafusion_error)?; let matched_batch = filter_lix_state_batch(source_batch, &filters)?; let write_rows = lix_state_deletable_write_rows_from_batch(&matched_batch, &version_binding)?; reject_read_only_stage_rows(&write_rows, "DELETE FROM lix_state")?; let count = u64::try_from(write_rows.len()) .map_err(|_| DataFusionError::Execution("DELETE row count overflow".to_string()))?; if count > 0 { write_ctx .stage_write(TransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: write_rows, }) .await .map_err(lix_error_to_datafusion_error)?; } Ok::<_, DataFusionError>(stream::iter(vec![Ok::( dml_count_batch(Arc::clone(&stream_schema), count)?, )])) }) .try_flatten(); Ok(Box::pin(RecordBatchStreamAdapter::new( result_schema, stream, ))) } } #[allow(dead_code)] struct LixStateUpdateExec { write_ctx: SqlWriteContext, table_schema: SchemaRef, version_binding: String, request: LiveStateScanRequest, assignments: Vec<(String, Arc)>, filters: Vec>, result_schema: SchemaRef, properties: Arc, } impl std::fmt::Debug for LixStateUpdateExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixStateUpdateExec").finish() } } impl LixStateUpdateExec { fn new( write_ctx: SqlWriteContext, table_schema: SchemaRef, version_binding: String, request: LiveStateScanRequest, assignments: Vec<(String, Arc)>, filters: Vec>, ) -> Self { let result_schema = dml_count_schema(); let properties = PlanProperties::new( EquivalenceProperties::new(Arc::clone(&result_schema)), Partitioning::UnknownPartitioning(1), EmissionType::Final, Boundedness::Bounded, ); Self { write_ctx, table_schema, version_binding, request, assignments, filters, result_schema, properties: Arc::new(properties), } } } impl DisplayAs for LixStateUpdateExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!( f, "LixStateUpdateExec(assignments={}, filters={})", self.assignments.len(), self.filters.len() ) } DisplayFormatType::TreeRender => write!(f, "LixStateUpdateExec"), } } } impl ExecutionPlan for LixStateUpdateExec { fn name(&self) -> &str { "LixStateUpdateExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "LixStateUpdateExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "LixStateUpdateExec only exposes one partition, got {partition}" ))); } let write_ctx = self.write_ctx.clone(); let table_schema = Arc::clone(&self.table_schema); let version_binding = self.version_binding.clone(); let request = self.request.clone(); let assignments = self.assignments.clone(); let filters = self.filters.clone(); let result_schema = Arc::clone(&self.result_schema); let stream_schema = Arc::clone(&result_schema); let stream = stream::once(async move { let rows = if request.limit == Some(0) { Vec::new() } else { write_ctx .scan_live_state(&request) .await .map_err(lix_error_to_datafusion_error)? 
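// UPDATE mirrors DELETE: matched rows are re-read, the evaluated assignments are
// merged into them (only snapshot_content and metadata are assignable), and the
// rewritten rows are staged as a Replace write with created_at, updated_at,
// change_id, and commit_id left unset.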
}; let source_batch = lix_state_record_batch(Arc::clone(&table_schema), &rows) .map_err(lix_error_to_datafusion_error)?; let matched_batch = filter_lix_state_batch(source_batch, &filters)?; let write_rows = lix_state_update_write_rows_from_batch( &matched_batch, &assignments, &version_binding, )?; reject_read_only_stage_rows(&write_rows, "UPDATE lix_state")?; let count = u64::try_from(write_rows.len()) .map_err(|_| DataFusionError::Execution("UPDATE row count overflow".to_string()))?; if count > 0 { write_ctx .stage_write(TransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: write_rows, }) .await .map_err(lix_error_to_datafusion_error)?; } Ok::<_, DataFusionError>(stream::iter(vec![Ok::( dml_count_batch(Arc::clone(&stream_schema), count)?, )])) }) .try_flatten(); Ok(Box::pin(RecordBatchStreamAdapter::new( result_schema, stream, ))) } } fn validate_lix_state_update_assignments( schema: &SchemaRef, assignments: &[(String, Expr)], ) -> Result<()> { for (column_name, _) in assignments { schema.field_with_name(column_name).map_err(|_| { DataFusionError::Plan(format!( "UPDATE lix_state failed: column '{column_name}' does not exist" )) })?; if !matches!(column_name.as_str(), "snapshot_content" | "metadata") { return Err(DataFusionError::Execution(format!( "UPDATE lix_state cannot stage read-only column '{column_name}'" ))); } } Ok(()) } fn filter_lix_state_batch( batch: RecordBatch, filters: &[Arc], ) -> Result { let Some(mask) = evaluate_lix_state_filters(&batch, filters)? else { return Ok(batch); }; Ok(filter_record_batch(&batch, &mask)?) } fn evaluate_lix_state_filters( batch: &RecordBatch, filters: &[Arc], ) -> Result> { if filters.is_empty() { return Ok(None); } let mut combined_mask: Option = None; for filter in filters { let result = filter.evaluate(batch)?; let array = result.into_array(batch.num_rows())?; let bool_array = array .as_any() .downcast_ref::() .ok_or_else(|| { DataFusionError::Execution("UPDATE lix_state filter was not boolean".to_string()) })?; let normalized = bool_array .iter() .map(|value| Some(value == Some(true))) .collect::(); combined_mask = Some(match combined_mask { Some(existing) => and(&existing, &normalized)?, None => normalized, }); } Ok(combined_mask) } fn lix_state_stageable_write_rows_from_batch( batch: &RecordBatch, version_binding: &str, ) -> Result> { let mut rows = lix_state_write_rows_from_batch(batch, version_binding)?; for row in &mut rows { row.created_at = None; row.updated_at = None; row.change_id = None; row.commit_id = None; } Ok(rows) } fn lix_state_update_write_rows_from_batch( batch: &RecordBatch, assignments: &[(String, Arc)], version_binding: &str, ) -> Result> { let assignment_values = UpdateAssignmentValues::evaluate(batch, assignments)?; (0..batch.num_rows()) .map(|row_index| { let global = optional_bool_value(batch, row_index, "global")?.unwrap_or(false); let version_id = optional_string_value(batch, row_index, "version_id")?.unwrap_or_else(|| { if global { GLOBAL_VERSION_ID.to_string() } else { version_binding.to_string() } }); Ok(TransactionWriteRow { entity_id: Some( EntityIdentity::from_json_array_text(&required_string_value( batch, row_index, "entity_id", )?) 
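// entity_id values travel as JSON-array text (e.g. ["entity-1"]); text that does
// not parse through EntityIdentity is rejected with an Execution error.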
.map_err(|error| { DataFusionError::Execution(format!( "lix_state UPDATE has invalid entity_id: {error}" )) })?, ), schema_key: required_string_value(batch, row_index, "schema_key")?, file_id: optional_string_value(batch, row_index, "file_id")?, snapshot: update_optional_json_value( batch, &assignment_values, row_index, "snapshot_content", )?, metadata: update_optional_metadata_value( batch, &assignment_values, row_index, "metadata", "lix_state", )?, origin: None, created_at: None, updated_at: None, global, change_id: None, commit_id: None, untracked: optional_bool_value(batch, row_index, "untracked")?.unwrap_or(false), version_id, }) }) .collect() } fn lix_state_deletable_write_rows_from_batch( batch: &RecordBatch, version_binding: &str, ) -> Result> { let mut rows = lix_state_stageable_write_rows_from_batch(batch, version_binding)?; for row in &mut rows { row.snapshot = None; } Ok(rows) } fn update_optional_string_value( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, row_index: usize, column_name: &str, ) -> Result> { match assignment_values.assigned_or_existing_cell(batch, row_index, column_name)? { InsertCell::Omitted | InsertCell::Provided(SqlCell::Null) => Ok(None), InsertCell::Provided(SqlCell::Value( ScalarValue::Utf8(Some(value)) | ScalarValue::Utf8View(Some(value)) | ScalarValue::LargeUtf8(Some(value)), )) => Ok(Some(value)), InsertCell::Provided(SqlCell::Value(other)) => Err(DataFusionError::Execution(format!( "UPDATE lix_state expected text-compatible column '{column_name}', got {other:?}" ))), } } fn update_optional_metadata_value( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, row_index: usize, column_name: &str, context: &str, ) -> Result> { update_optional_string_value(batch, assignment_values, row_index, column_name)? .map(|value| { let metadata = parse_row_metadata_value(&value, context) .map_err(super::error::lix_error_to_datafusion_error)?; TransactionJson::from_value(metadata, &format!("{context} metadata")) .map_err(super::error::lix_error_to_datafusion_error) }) .transpose() } fn update_optional_json_value( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, row_index: usize, column_name: &str, ) -> Result> { update_optional_string_value(batch, assignment_values, row_index, column_name)? .map(|value| parse_snapshot_json(&value, column_name)) .transpose() } fn dml_count_schema() -> SchemaRef { Arc::new(Schema::new(vec![Field::new( "count", DataType::UInt64, false, )])) } fn dml_count_batch(schema: SchemaRef, count: u64) -> Result { RecordBatch::try_new( schema, vec![Arc::new(UInt64Array::from(vec![count])) as ArrayRef], ) .map_err(DataFusionError::from) } fn lix_state_write_rows_from_batch( batch: &RecordBatch, version_binding: &str, ) -> Result> { (0..batch.num_rows()) .map(|row_index| { let global = optional_bool_value(batch, row_index, "global")?.unwrap_or(false); let version_id = optional_string_value(batch, row_index, "version_id")?.unwrap_or_else(|| { if global { GLOBAL_VERSION_ID.to_string() } else { version_binding.to_string() } }); Ok(TransactionWriteRow { entity_id: Some( EntityIdentity::from_json_array_text(&required_string_value( batch, row_index, "entity_id", )?) 
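// Unlike the UPDATE/DELETE staging helper (lix_state_stageable_write_rows_from_batch),
// the INSERT decoder passes caller-supplied created_at, updated_at, change_id,
// and commit_id through unchanged.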
.map_err(|error| { DataFusionError::Execution(format!( "lix_state INSERT has invalid entity_id: {error}" )) })?, ), schema_key: required_string_value(batch, row_index, "schema_key")?, file_id: optional_string_value(batch, row_index, "file_id")?, snapshot: optional_json_value(batch, row_index, "snapshot_content")?, metadata: optional_metadata_value(batch, row_index, "metadata", "lix_state")?, origin: None, created_at: optional_string_value(batch, row_index, "created_at")?, updated_at: optional_string_value(batch, row_index, "updated_at")?, global, change_id: optional_string_value(batch, row_index, "change_id")?, commit_id: optional_string_value(batch, row_index, "commit_id")?, untracked: optional_bool_value(batch, row_index, "untracked")?.unwrap_or(false), version_id, }) }) .collect() } fn required_string_value( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result { optional_string_value(batch, row_index, column_name)?.ok_or_else(|| { DataFusionError::Execution(format!( "INSERT into lix_state requires non-null text column '{column_name}'" )) }) } fn optional_string_value( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result> { match optional_scalar_value(batch, row_index, column_name)? { None | Some(ScalarValue::Null) | Some(ScalarValue::Utf8(None)) | Some(ScalarValue::Utf8View(None)) | Some(ScalarValue::LargeUtf8(None)) => Ok(None), Some(ScalarValue::Utf8(Some(value))) | Some(ScalarValue::Utf8View(Some(value))) | Some(ScalarValue::LargeUtf8(Some(value))) => Ok(Some(value)), Some(other) => Err(DataFusionError::Execution(format!( "INSERT into lix_state expected text-compatible column '{column_name}', got {other:?}" ))), } } fn optional_metadata_value( batch: &RecordBatch, row_index: usize, column_name: &str, context: &str, ) -> Result> { optional_string_value(batch, row_index, column_name)? .map(|value| { let metadata = parse_row_metadata_value(&value, context) .map_err(super::error::lix_error_to_datafusion_error)?; TransactionJson::from_value(metadata, &format!("{context} metadata")) .map_err(super::error::lix_error_to_datafusion_error) }) .transpose() } fn optional_json_value( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result> { optional_string_value(batch, row_index, column_name)? .map(|value| parse_snapshot_json(&value, column_name)) .transpose() } fn parse_snapshot_json(value: &str, column_name: &str) -> Result { let parsed = serde_json::from_str::(value).map_err(|error| { DataFusionError::Execution(format!( "lix_state expected valid JSON in column '{column_name}': {error}" )) })?; TransactionJson::from_value(parsed, &format!("lix_state {column_name}")) .map_err(super::error::lix_error_to_datafusion_error) } fn optional_bool_value( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result> { match optional_scalar_value(batch, row_index, column_name)? 
{ Some(ScalarValue::Boolean(Some(value))) => Ok(Some(value)), None | Some(ScalarValue::Null) | Some(ScalarValue::Boolean(None)) => Ok(None), Some(other) => Err(DataFusionError::Execution(format!( "INSERT into lix_state expected boolean column '{column_name}', got {other:?}" ))), } } fn optional_scalar_value( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result> { let schema = batch.schema(); let column_index = match schema.index_of(column_name) { Ok(column_index) => column_index, Err(_) => return Ok(None), }; if row_index >= batch.num_rows() { return Err(DataFusionError::Execution(format!( "row index {row_index} out of bounds for lix_state batch with {} rows", batch.num_rows() ))); } ScalarValue::try_from_array(batch.column(column_index).as_ref(), row_index) .map(Some) .map_err(|error| { DataFusionError::Execution(format!( "failed to decode lix_state column '{column_name}' at row {row_index}: {error}" )) }) } struct LixStateScanExec { live_state: Arc, schema: SchemaRef, request: LiveStateScanRequest, properties: Arc, } impl std::fmt::Debug for LixStateScanExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixStateScanExec").finish() } } impl LixStateScanExec { fn new( live_state: Arc, schema: SchemaRef, request: LiveStateScanRequest, ) -> Self { let properties = PlanProperties::new( EquivalenceProperties::new(schema.clone()), Partitioning::UnknownPartitioning(1), EmissionType::Incremental, Boundedness::Bounded, ); Self { live_state, schema, request, properties: Arc::new(properties), } } } impl DisplayAs for LixStateScanExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!(f, "LixStateScanExec(limit={:?})", self.request.limit) } DisplayFormatType::TreeRender => write!(f, "LixStateScanExec"), } } } impl ExecutionPlan for LixStateScanExec { fn name(&self) -> &str { "LixStateScanExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "LixStateScanExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "LixStateScanExec only exposes one partition, got {partition}" ))); } let live_state = Arc::clone(&self.live_state); let schema = Arc::clone(&self.schema); let request = self.request.clone(); let stream_schema = Arc::clone(&schema); let stream = stream::once(async move { let rows = if request.limit == Some(0) { Vec::new() } else { live_state .scan_rows(&request) .await .map_err(lix_error_to_datafusion_error)? 
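// Reads materialize into one RecordBatch per scan; a request.limit of Some(0),
// as produced by the contradictory-filter route, short-circuits to an empty row set.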
}; let batch = lix_state_record_batch(Arc::clone(&stream_schema), &rows) .map_err(lix_error_to_datafusion_error)?; Ok::<_, DataFusionError>(stream::iter(vec![Ok::( batch, )])) }) .try_flatten(); Ok(Box::pin(RecordBatchStreamAdapter::new(schema, stream))) } } fn lix_state_schema() -> SchemaRef { Arc::new(Schema::new(vec![ json_field("entity_id", false), Field::new("schema_key", DataType::Utf8, false), Field::new("file_id", DataType::Utf8, true), json_field("snapshot_content", true), json_field("metadata", true), Field::new("created_at", DataType::Utf8, true), Field::new("updated_at", DataType::Utf8, true), Field::new("global", DataType::Boolean, true), Field::new("change_id", DataType::Utf8, true), Field::new("commit_id", DataType::Utf8, true), Field::new("untracked", DataType::Boolean, true), ])) } fn lix_state_by_version_schema() -> SchemaRef { Arc::new(Schema::new(vec![ json_field("entity_id", false), Field::new("schema_key", DataType::Utf8, false), Field::new("file_id", DataType::Utf8, true), json_field("snapshot_content", true), json_field("metadata", true), Field::new("created_at", DataType::Utf8, true), Field::new("updated_at", DataType::Utf8, true), Field::new("global", DataType::Boolean, true), Field::new("change_id", DataType::Utf8, true), Field::new("commit_id", DataType::Utf8, true), Field::new("untracked", DataType::Boolean, true), Field::new("version_id", DataType::Utf8, false), ])) } #[derive(Debug, Clone, PartialEq, Eq, Default)] struct LixStateByVersionRoute { schema_keys: Option>, version_ids: Option>, entity_ids: Option>, file_id: Option>, contradictory: bool, } impl LixStateByVersionRoute { fn from_filters(filters: &[Expr]) -> Self { let mut route = Self::default(); for filter in filters { let Some(predicates) = parse_lix_state_filters(filter) else { continue; }; for predicate in predicates { match predicate { LixStateFilterPredicate::SchemaKeys(values) => { merge_string_route_slot( &mut route.schema_keys, values, &mut route.contradictory, ); } LixStateFilterPredicate::VersionIds(values) => { merge_string_route_slot( &mut route.version_ids, values, &mut route.contradictory, ); } LixStateFilterPredicate::EntityIds(values) => { merge_string_route_slot( &mut route.entity_ids, values, &mut route.contradictory, ); } LixStateFilterPredicate::FileId(filter) => { merge_nullable_key_route_slot( &mut route.file_id, filter, &mut route.contradictory, ); } } } } route } } #[derive(Debug, Clone, PartialEq, Eq)] enum LixStateFilterPredicate { SchemaKeys(BTreeSet), VersionIds(BTreeSet), EntityIds(BTreeSet), FileId(NullableKeyFilter), } fn lix_state_scan_request( schema: &SchemaRef, version_binding: Option<&str>, projection: Option<&Vec>, route: &LixStateByVersionRoute, limit: Option, ) -> LiveStateScanRequest { let projection = LiveStateProjection { columns: projection_column_names(schema, projection), }; let mut filter = LiveStateFilter { schema_keys: route .schema_keys .as_ref() .map(|values| values.iter().cloned().collect()) .unwrap_or_default(), entity_ids: route .entity_ids .as_ref() .map(|values| { values .iter() .filter_map(|value| EntityIdentity::from_json_array_text(value).ok()) .collect() }) .unwrap_or_default(), version_ids: version_binding .map(|value| vec![value.to_string()]) .or_else(|| { route .version_ids .as_ref() .map(|values| values.iter().cloned().collect()) }) .unwrap_or_default(), ..LiveStateFilter::default() }; if let Some(file_id) = route.file_id.clone() { filter.file_ids.push(file_id); } LiveStateScanRequest { filter, projection, limit: 
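// Contradictory pushed-down predicates (e.g. schema_key = 'a' AND schema_key = 'b')
// collapse the scan to limit 0 so no rows are fetched.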
route.contradictory.then_some(0).or(limit), } } fn projection_column_names(schema: &SchemaRef, projection: Option<&Vec>) -> Vec { projection .map(|indices| { indices .iter() .filter_map(|index| schema.fields().get(*index)) .map(|field| field.name().to_string()) .collect::>() }) .unwrap_or_default() } fn merge_string_route_slot( slot: &mut Option>, values: BTreeSet, contradictory: &mut bool, ) { if values.is_empty() { return; } match slot { Some(existing) => { existing.retain(|value| values.contains(value)); if existing.is_empty() { *contradictory = true; } } None => *slot = Some(values), } } fn merge_nullable_key_route_slot( slot: &mut Option>, value: NullableKeyFilter, contradictory: &mut bool, ) { match slot { Some(existing) if *existing != value => *contradictory = true, Some(_) => {} None => *slot = Some(value), } } fn parse_lix_state_filter(expr: &Expr) -> Option { parse_lix_state_filters(expr)?.into_iter().next() } fn parse_lix_state_filters(expr: &Expr) -> Option> { match expr { Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::And => { let mut predicates = parse_lix_state_filters(&binary_expr.left)?; predicates.extend(parse_lix_state_filters(&binary_expr.right)?); Some(predicates) } Expr::BinaryExpr(binary_expr) => { parse_lix_state_binary_filter(binary_expr).map(|predicate| vec![predicate]) } Expr::InList(in_list) => { parse_lix_state_in_list_filter(in_list).map(|predicate| vec![predicate]) } Expr::IsNull(expr) => parse_lix_state_null_filter(expr).map(|predicate| vec![predicate]), _ => None, } } fn parse_lix_state_binary_filter(binary_expr: &BinaryExpr) -> Option { if binary_expr.op != Operator::Eq { return None; } parse_lix_state_column_literal_filter(&binary_expr.left, &binary_expr.right) .or_else(|| parse_lix_state_column_literal_filter(&binary_expr.right, &binary_expr.left)) } fn parse_lix_state_in_list_filter(in_list: &InList) -> Option { if in_list.negated { return None; } let Expr::Column(column) = in_list.expr.as_ref() else { return None; }; let values = in_list .list .iter() .map(string_expr_literal) .collect::>>()?; if values.is_empty() { return None; } let values = values.into_iter().collect::>(); match column.name.as_str() { "schema_key" => Some(LixStateFilterPredicate::SchemaKeys(values)), "version_id" => Some(LixStateFilterPredicate::VersionIds(values)), "entity_id" => canonical_entity_id_values(values).map(LixStateFilterPredicate::EntityIds), _ => None, } } fn parse_lix_state_null_filter(expr: &Expr) -> Option { let Expr::Column(column) = expr else { return None; }; match column.name.as_str() { "file_id" => Some(LixStateFilterPredicate::FileId(NullableKeyFilter::Null)), _ => None, } } fn parse_lix_state_column_literal_filter( column_expr: &Expr, literal_expr: &Expr, ) -> Option { let Expr::Column(column) = column_expr else { return None; }; match column.name.as_str() { "schema_key" => string_expr_literal(literal_expr) .map(|value| LixStateFilterPredicate::SchemaKeys(BTreeSet::from([value]))), "version_id" => string_expr_literal(literal_expr) .map(|value| LixStateFilterPredicate::VersionIds(BTreeSet::from([value]))), "entity_id" => string_expr_literal(literal_expr) .and_then(|value| canonical_entity_id_value(&value)) .map(|value| LixStateFilterPredicate::EntityIds(BTreeSet::from([value]))), "file_id" => nullable_key_literal(literal_expr).map(LixStateFilterPredicate::FileId), _ => None, } } fn canonical_entity_id_values(values: BTreeSet) -> Option> { values .into_iter() .map(|value| canonical_entity_id_value(&value)) .collect() } fn 
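// entity_id literals are normalized by round-tripping through EntityIdentity,
// which should map equivalent JSON-array spellings onto one canonical form.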
canonical_entity_id_value(value: &str) -> Option { EntityIdentity::from_json_array_text(value) .ok()? .as_json_array_text() .ok() } fn nullable_key_literal(expr: &Expr) -> Option> { if is_null_literal(expr) { return Some(NullableKeyFilter::Null); } string_expr_literal(expr).map(NullableKeyFilter::Value) } fn string_expr_literal(expr: &Expr) -> Option { let Expr::Literal(literal, _) = expr else { return None; }; match literal { ScalarValue::Utf8(Some(value)) | ScalarValue::Utf8View(Some(value)) | ScalarValue::LargeUtf8(Some(value)) => Some(value.clone()), _ => None, } } fn is_null_literal(expr: &Expr) -> bool { matches!(expr, Expr::Literal(ScalarValue::Null, _)) } fn lix_state_record_batch( schema: SchemaRef, rows: &[MaterializedLiveStateRow], ) -> Result { if schema.fields().is_empty() { let options = RecordBatchOptions::new().with_row_count(Some(rows.len())); return RecordBatch::try_new_with_options(schema, vec![], &options).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("sql2 failed to build zero-column lix_state batch: {error}"), ) }); } let columns = schema .fields() .iter() .map(|field| { Ok(match field.name().as_str() { "entity_id" => Arc::new(StringArray::from( rows.iter() .map(|row| row.entity_id.as_json_array_text().map(Some)) .collect::, LixError>>()?, )) as ArrayRef, "schema_key" => string_array(rows.iter().map(|row| Some(row.schema_key.as_str()))), "file_id" => string_array(rows.iter().map(|row| row.file_id.as_deref())), "snapshot_content" => { string_array(rows.iter().map(|row| row.snapshot_content.as_deref())) } "metadata" => Arc::new(StringArray::from( rows.iter() .map(|row| row.metadata.as_ref().map(serialize_row_metadata)) .collect::>(), )), "created_at" => string_array(rows.iter().map(|row| Some(row.created_at.as_str()))), "updated_at" => string_array(rows.iter().map(|row| Some(row.updated_at.as_str()))), "global" => Arc::new(BooleanArray::from( rows.iter().map(|row| row.global).collect::>(), )) as ArrayRef, "change_id" => string_array(rows.iter().map(|row| row.change_id.as_deref())), "commit_id" => string_array(rows.iter().map(|row| row.commit_id.as_deref())), "untracked" => Arc::new(BooleanArray::from( rows.iter().map(|row| row.untracked).collect::>(), )) as ArrayRef, "version_id" => string_array(rows.iter().map(|row| Some(row.version_id.as_str()))), other => { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("sql2 does not support lix_state column '{other}'"), )) } }) }) .collect::, _>>()?; RecordBatch::try_new(schema, columns).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("sql2 failed to build lix_state_by_version batch: {error}"), ) }) } fn string_array<'a>(values: impl Iterator>) -> ArrayRef { let values = values .map(|value| value.map(ToOwned::to_owned)) .collect::>(); Arc::new(StringArray::from(values)) as ArrayRef } fn projected_schema(schema: &SchemaRef, projection: Option<&Vec>) -> Result { let Some(projection) = projection else { return Ok(Arc::clone(schema)); }; let projected = schema.project(projection).map_err(|error| { DataFusionError::Execution(format!("sql2 failed to project lix_state schema: {error}")) })?; Ok(Arc::new(projected)) } fn datafusion_error_to_lix_error(error: DataFusionError) -> LixError { super::error::datafusion_error_to_lix_error(error) } fn lix_error_to_datafusion_error(error: LixError) -> DataFusionError { super::error::lix_error_to_datafusion_error(error) } #[cfg(test)] mod tests { use super::{ lix_state_scan_request, lix_state_schema, lix_state_write_rows_from_batch, parse_lix_state_filter, 
register_lix_state_write_providers, LixStateByVersionRoute, LixStateDeleteExec, LixStateFilterPredicate, LixStateInsertSink, LixStateProvider, LixStateUpdateExec, }; use crate::binary_cas::BlobDataReader; use crate::functions::{ FunctionProvider, FunctionProviderHandle, SharedFunctionProvider, SystemFunctionProvider, }; use crate::sql2::dml::{InsertExec, InsertSink}; use crate::sql2::{SqlWriteContext, SqlWriteExecutionContext}; use crate::transaction::types::{ TransactionJson, TransactionWrite, TransactionWriteMode, TransactionWriteOutcome, TransactionWriteRow, }; use crate::version::{VersionHead, VersionRefReader}; use crate::{ entity_identity::EntityIdentity, live_state::{ LiveStateReader, LiveStateRowRequest, LiveStateScanRequest, MaterializedLiveStateRow, }, }; use crate::{LixError, NullableKeyFilter}; use async_trait::async_trait; use datafusion::arrow::array::{ArrayRef, BooleanArray, StringArray, UInt64Array}; use datafusion::arrow::datatypes::DataType; use datafusion::arrow::record_batch::RecordBatch; use datafusion::catalog::TableProvider; use datafusion::common::{Column, DataFusionError}; use datafusion::execution::TaskContext; use datafusion::logical_expr::dml::InsertOp; use datafusion::logical_expr::expr::InList; use datafusion::logical_expr::{BinaryExpr, Expr, Operator}; use datafusion::physical_expr::EquivalenceProperties; use datafusion::physical_plan::empty::EmptyExec; use datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties}; use datafusion::physical_plan::stream::RecordBatchStreamAdapter; use datafusion::physical_plan::{ DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream, }; use datafusion::prelude::SessionContext; use datafusion::scalar::ScalarValue; use futures_util::stream; use serde_json::json; use std::collections::BTreeSet; use std::sync::Arc; struct EmptyLiveStateReader; struct EmptyVersionRefReader; #[allow(dead_code)] struct RowsLiveStateReader { rows: Vec, } struct DummyBlobReader; #[derive(Default)] struct DummyWriteContext { rows: Vec, } #[derive(Default)] struct CapturingWriteContext { rows: Vec, writes: Vec, } struct SingleBatchExec { batch: RecordBatch, properties: Arc, } impl std::fmt::Debug for SingleBatchExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("SingleBatchExec").finish() } } impl SingleBatchExec { fn new(batch: RecordBatch) -> Self { let properties = PlanProperties::new( EquivalenceProperties::new(batch.schema()), Partitioning::UnknownPartitioning(1), EmissionType::Incremental, Boundedness::Bounded, ); Self { batch, properties: Arc::new(properties), } } } impl DisplayAs for SingleBatchExec { fn fmt_as( &self, _t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>, ) -> std::fmt::Result { write!(f, "SingleBatchExec") } } impl ExecutionPlan for SingleBatchExec { fn name(&self) -> &str { "SingleBatchExec" } fn as_any(&self) -> &dyn std::any::Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> datafusion::common::Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "SingleBatchExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> datafusion::common::Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "SingleBatchExec only exposes one partition, got {partition}" ))); } let batch = self.batch.clone(); let schema = 
batch.schema(); let stream = stream::iter(vec![Ok(batch)]); Ok(Box::pin(RecordBatchStreamAdapter::new(schema, stream))) } } #[async_trait] impl LiveStateReader for EmptyLiveStateReader { async fn scan_rows( &self, _request: &LiveStateScanRequest, ) -> Result, LixError> { Ok(vec![]) } async fn load_row( &self, _request: &LiveStateRowRequest, ) -> Result, LixError> { Ok(None) } } #[async_trait] impl VersionRefReader for EmptyVersionRefReader { async fn load_head(&self, _version_id: &str) -> Result, LixError> { Ok(None) } async fn scan_heads(&self) -> Result, LixError> { Ok(Vec::new()) } } fn empty_version_ref() -> Arc { Arc::new(EmptyVersionRefReader) } #[async_trait] impl LiveStateReader for RowsLiveStateReader { async fn scan_rows( &self, _request: &LiveStateScanRequest, ) -> Result, LixError> { Ok(self.rows.clone()) } async fn load_row( &self, _request: &LiveStateRowRequest, ) -> Result, LixError> { Ok(None) } } fn test_functions() -> FunctionProviderHandle { SharedFunctionProvider::new( Box::new(SystemFunctionProvider) as Box ) } #[async_trait] impl BlobDataReader for DummyBlobReader { async fn load_bytes_many( &self, hashes: &[crate::binary_cas::BlobHash], ) -> Result { Ok(crate::binary_cas::BlobBytesBatch::new(vec![ None; hashes.len() ])) } } #[async_trait] impl SqlWriteExecutionContext for DummyWriteContext { fn active_version_id(&self) -> &str { "version-a" } fn functions(&self) -> FunctionProviderHandle { test_functions() } fn list_visible_schemas(&self) -> Result, LixError> { Ok(Vec::new()) } async fn load_bytes_many( &mut self, hashes: &[crate::binary_cas::BlobHash], ) -> Result { DummyBlobReader.load_bytes_many(hashes).await } async fn scan_live_state( &mut self, _request: &LiveStateScanRequest, ) -> Result, LixError> { Ok(self.rows.clone()) } async fn load_version_head( &mut self, version_id: &str, ) -> Result, LixError> { if version_id == "ghost-version" { return Ok(None); } Ok(Some(format!("commit-{version_id}"))) } async fn stage_write( &mut self, _write: TransactionWrite, ) -> Result { Ok(TransactionWriteOutcome { count: 0 }) } } #[async_trait] impl SqlWriteExecutionContext for CapturingWriteContext { fn active_version_id(&self) -> &str { "version-a" } fn functions(&self) -> FunctionProviderHandle { test_functions() } fn list_visible_schemas(&self) -> Result, LixError> { Ok(Vec::new()) } async fn load_bytes_many( &mut self, hashes: &[crate::binary_cas::BlobHash], ) -> Result { DummyBlobReader.load_bytes_many(hashes).await } async fn scan_live_state( &mut self, _request: &LiveStateScanRequest, ) -> Result, LixError> { Ok(self.rows.clone()) } async fn load_version_head( &mut self, version_id: &str, ) -> Result, LixError> { if version_id == "ghost-version" { return Ok(None); } Ok(Some(format!("commit-{version_id}"))) } async fn stage_write( &mut self, write: TransactionWrite, ) -> Result { self.writes.push(write); Ok(TransactionWriteOutcome { count: 0 }) } } fn col(name: &str) -> Expr { Expr::Column(Column::from_name(name)) } fn str_lit(value: &str) -> Expr { Expr::Literal(ScalarValue::Utf8(Some(value.to_string())), None) } fn json_lit(value: &str) -> Expr { Expr::Literal( ScalarValue::Utf8(Some(value.to_string())), Some(datafusion::common::metadata::FieldMetadata::new( std::collections::BTreeMap::from([( crate::sql2::result_metadata::LIX_VALUE_TYPE_METADATA_KEY.to_string(), crate::sql2::result_metadata::LIX_VALUE_TYPE_JSON.to_string(), )]), )), ) } fn string_column(values: Vec>) -> ArrayRef { Arc::new(StringArray::from(values)) as ArrayRef } fn one_row_lix_state_batch(global: 
bool) -> RecordBatch { RecordBatch::try_new( lix_state_schema(), vec![ string_column(vec![Some("[\"entity-1\"]")]), string_column(vec![Some("lix_key_value")]), string_column(vec![None]), string_column(vec![Some("{\"key\":\"hello\",\"value\":\"world\"}")]), string_column(vec![Some("{\"source\":\"test\"}")]), string_column(vec![Some("2026-04-23T00:00:00Z")]), string_column(vec![Some("2026-04-23T01:00:00Z")]), Arc::new(BooleanArray::from(vec![global])) as ArrayRef, string_column(vec![Some("change-a")]), string_column(vec![None]), Arc::new(BooleanArray::from(vec![false])) as ArrayRef, ], ) .expect("valid lix_state batch") } fn one_row_stageable_lix_state_batch() -> RecordBatch { RecordBatch::try_new( lix_state_schema(), vec![ string_column(vec![Some("[\"entity-1\"]")]), string_column(vec![Some("lix_key_value")]), string_column(vec![None]), string_column(vec![Some("{\"key\":\"hello\",\"value\":\"world\"}")]), string_column(vec![None]), string_column(vec![None]), string_column(vec![None]), Arc::new(BooleanArray::from(vec![false])) as ArrayRef, string_column(vec![None]), string_column(vec![None]), Arc::new(BooleanArray::from(vec![false])) as ArrayRef, ], ) .expect("valid stageable lix_state batch") } fn live_row(entity_id: &str, metadata: Option<&str>) -> MaterializedLiveStateRow { MaterializedLiveStateRow { entity_id: EntityIdentity::single(entity_id), schema_key: "lix_key_value".to_string(), file_id: None, snapshot_content: Some("{\"key\":\"hello\",\"value\":\"world\"}".to_string()), metadata: metadata.map(str::to_string), deleted: false, version_id: "version-a".to_string(), change_id: Some(format!("change-{entity_id}")), commit_id: Some(format!("commit-{entity_id}")), global: false, untracked: false, created_at: "2026-04-23T00:00:00Z".to_string(), updated_at: "2026-04-23T01:00:00Z".to_string(), } } #[test] fn parses_eq_filter_for_schema_key() { let expr = Expr::BinaryExpr(BinaryExpr::new( Box::new(col("schema_key")), Operator::Eq, Box::new(str_lit("profile")), )); assert_eq!( parse_lix_state_filter(&expr), Some(LixStateFilterPredicate::SchemaKeys(BTreeSet::from([ "profile".to_string(), ]))) ); } #[test] fn parses_in_list_filter_for_version_id() { let expr = Expr::InList(InList::new( Box::new(col("version_id")), vec![str_lit("a"), str_lit("b")], false, )); assert_eq!( parse_lix_state_filter(&expr), Some(LixStateFilterPredicate::VersionIds(BTreeSet::from([ "a".to_string(), "b".to_string(), ]))) ); } #[test] fn builds_scan_request_from_route_and_projection() { let schema = super::lix_state_by_version_schema(); let route = LixStateByVersionRoute::from_filters(&[ Expr::BinaryExpr(BinaryExpr::new( Box::new(col("schema_key")), Operator::Eq, Box::new(str_lit("profile")), )), Expr::BinaryExpr(BinaryExpr::new( Box::new(col("version_id")), Operator::Eq, Box::new(str_lit("v1")), )), Expr::IsNull(Box::new(col("file_id"))), ]); let request = lix_state_scan_request(&schema, None, Some(&vec![0, 1, 11]), &route, Some(10)); assert_eq!(request.filter.schema_keys, vec!["profile".to_string()]); assert_eq!(request.filter.version_ids, vec!["v1".to_string()]); assert_eq!(request.filter.file_ids, vec![NullableKeyFilter::Null]); assert_eq!( request.projection.columns, vec![ "entity_id".to_string(), "schema_key".to_string(), "version_id".to_string() ] ); assert_eq!(request.limit, Some(10)); } #[test] fn builds_route_from_and_filter_tree() { let route = LixStateByVersionRoute::from_filters(&[Expr::BinaryExpr(BinaryExpr::new( Box::new(Expr::BinaryExpr(BinaryExpr::new( Box::new(col("entity_id")), Operator::Eq, 
Box::new(str_lit("[\"entity-a\"]")), ))), Operator::And, Box::new(Expr::InList(InList::new( Box::new(col("version_id")), vec![str_lit("version-a"), str_lit("global")], false, ))), ))]); assert_eq!( route.entity_ids, Some(BTreeSet::from(["[\"entity-a\"]".to_string()])) ); assert_eq!( route.version_ids, Some(BTreeSet::from([ "global".to_string(), "version-a".to_string() ])) ); } #[test] fn contradictory_filters_turn_into_zero_limit_request() { let schema = super::lix_state_by_version_schema(); let route = LixStateByVersionRoute::from_filters(&[ Expr::BinaryExpr(BinaryExpr::new( Box::new(col("schema_key")), Operator::Eq, Box::new(str_lit("a")), )), Expr::BinaryExpr(BinaryExpr::new( Box::new(col("schema_key")), Operator::Eq, Box::new(str_lit("b")), )), ]); let request = lix_state_scan_request(&schema, None, None, &route, None); assert_eq!(request.limit, Some(0)); assert!(request.filter.schema_keys.is_empty()); } #[test] fn active_version_view_pins_version_filter() { let schema = super::lix_state_schema(); let route = LixStateByVersionRoute::from_filters(&[Expr::BinaryExpr(BinaryExpr::new( Box::new(col("schema_key")), Operator::Eq, Box::new(str_lit("profile")), ))]); let request = lix_state_scan_request(&schema, Some("version-a"), None, &route, None); assert_eq!(request.filter.schema_keys, vec!["profile".to_string()]); assert_eq!(request.filter.version_ids, vec!["version-a".to_string()]); } #[tokio::test] async fn registers_active_lix_state_with_write_context_only() { let session = SessionContext::new(); let mut write_context = DummyWriteContext::default(); let write_ctx = SqlWriteContext::new(&mut write_context); register_lix_state_write_providers(&session, write_ctx) .await .expect("lix_state providers should register"); let lix_state = session .table_provider("lix_state") .await .expect("lix_state provider should exist"); let lix_state = lix_state .as_any() .downcast_ref::() .expect("lix_state should be a LixStateProvider"); assert!(lix_state.write_access.is_write()); let by_version = session .table_provider("lix_state_by_version") .await .expect("lix_state_by_version provider should exist"); let by_version = by_version .as_any() .downcast_ref::() .expect("lix_state_by_version should be a LixStateProvider"); assert!(by_version.write_access.is_write()); } #[tokio::test] async fn insert_into_requires_write_transaction() { let session = SessionContext::new(); let live_state = Arc::new(EmptyLiveStateReader) as Arc; let provider = LixStateProvider::active_version("version-a", live_state, empty_version_ref()); let input = Arc::new(EmptyExec::new(provider.schema())) as Arc; let error = provider .insert_into(&session.state(), input, InsertOp::Append) .await .expect_err("insert without a write context should fail"); assert!( error.to_string().contains("requires a write transaction"), "unexpected error: {error}" ); } #[tokio::test] async fn update_requires_write_transaction() { let session = SessionContext::new(); let live_state = Arc::new(EmptyLiveStateReader) as Arc; let provider = LixStateProvider::active_version("version-a", live_state, empty_version_ref()); let error = provider .update( &session.state(), vec![("metadata".to_string(), str_lit("{\"source\":\"update\"}"))], vec![], ) .await .expect_err("update without a write context should fail"); assert!( error.to_string().contains("requires a write transaction"), "unexpected error: {error}" ); } #[tokio::test] async fn delete_requires_write_transaction() { let session = SessionContext::new(); let live_state = Arc::new(EmptyLiveStateReader) as 
Arc; let provider = LixStateProvider::active_version("version-a", live_state, empty_version_ref()); let error = provider .delete_from(&session.state(), vec![]) .await .expect_err("delete without a write context should fail"); assert!( error.to_string().contains("requires a write transaction"), "unexpected error: {error}" ); } #[tokio::test] async fn delete_returns_lix_state_delete_exec_with_write_ctx() { let session = SessionContext::new(); let mut write_context = DummyWriteContext::default(); let write_ctx = SqlWriteContext::new(&mut write_context); let provider = LixStateProvider::active_version_with_write(write_ctx); let plan = provider .delete_from(&session.state(), vec![]) .await .expect("delete should produce a write plan"); assert!(plan.as_any().is::()); } #[tokio::test] async fn update_rejects_read_only_lix_state_columns() { let session = SessionContext::new(); let mut write_context = DummyWriteContext::default(); let write_ctx = SqlWriteContext::new(&mut write_context); let provider = LixStateProvider::active_version_with_write(write_ctx); let error = provider .update( &session.state(), vec![("entity_id".to_string(), str_lit("entity-2"))], vec![], ) .await .expect_err("updating a read-only field should fail"); assert!( error.to_string().contains("read-only column 'entity_id'"), "unexpected error: {error}" ); } #[tokio::test] async fn update_returns_lix_state_update_exec_with_write_ctx() { let session = SessionContext::new(); let mut write_context = DummyWriteContext::default(); let write_ctx = SqlWriteContext::new(&mut write_context); let provider = LixStateProvider::active_version_with_write(write_ctx); let plan = provider .update( &session.state(), vec![("metadata".to_string(), str_lit("{\"source\":\"update\"}"))], vec![], ) .await .expect("update should produce a write plan"); assert!(plan.as_any().is::()); } #[tokio::test] async fn insert_into_returns_data_sink_exec_with_write_ctx() { let session = SessionContext::new(); let mut write_context = DummyWriteContext::default(); let write_ctx = SqlWriteContext::new(&mut write_context); let provider = LixStateProvider::active_version_with_write(write_ctx); let input = Arc::new(EmptyExec::new(provider.schema())) as Arc; let plan = provider .insert_into(&session.state(), input, InsertOp::Append) .await .expect("insert should produce a write plan"); assert!(plan.as_any().is::()); } #[test] fn decodes_lix_state_batch_into_write_rows() { let rows = lix_state_write_rows_from_batch(&one_row_lix_state_batch(false), "version-a") .expect("batch should decode"); assert_eq!( rows, vec![TransactionWriteRow { entity_id: Some(crate::entity_identity::EntityIdentity::single("entity-1")), schema_key: "lix_key_value".to_string(), file_id: None, snapshot: Some(TransactionJson::from_value_for_test( json!({"key":"hello","value":"world"}) )), metadata: Some(TransactionJson::from_value_for_test( json!({"source": "test"}) )), origin: None, created_at: Some("2026-04-23T00:00:00Z".to_string()), updated_at: Some("2026-04-23T01:00:00Z".to_string()), global: false, change_id: Some("change-a".to_string()), commit_id: None, untracked: false, version_id: "version-a".to_string(), }] ); } #[test] fn decodes_global_lix_state_batch_into_global_version() { let rows = lix_state_write_rows_from_batch(&one_row_lix_state_batch(true), "version-a") .expect("batch should decode"); assert_eq!(rows[0].version_id, "global"); assert!(rows[0].global); } #[tokio::test] async fn insert_sink_stages_decoded_lix_state_rows() { let mut write_context = CapturingWriteContext::default(); 
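// Roughly the path taken by an SQL statement such as (illustrative only, not the
// exact surface syntax accepted by the engine):
//   INSERT INTO lix_state (entity_id, schema_key, snapshot_content, metadata)
//   VALUES ('["entity-1"]', 'lix_key_value', '{"key":"hello","value":"world"}', '{"source":"test"}');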
let write_ctx = SqlWriteContext::new(&mut write_context); let sink = LixStateInsertSink::new(lix_state_schema(), write_ctx, "version-a".to_string()); let batch = one_row_lix_state_batch(false); let count = sink .write_batches(vec![batch], &Arc::new(TaskContext::default())) .await .expect("sink should stage write"); assert_eq!(count, 1); assert_eq!( write_context.writes.as_slice(), &[TransactionWrite::Rows { mode: TransactionWriteMode::Insert, rows: vec![TransactionWriteRow { entity_id: Some(crate::entity_identity::EntityIdentity::single("entity-1")), schema_key: "lix_key_value".to_string(), file_id: None, snapshot: Some(TransactionJson::from_value_for_test( json!({"key":"hello","value":"world"}) )), metadata: Some(TransactionJson::from_value_for_test( json!({"source": "test"}) )), origin: None, created_at: Some("2026-04-23T00:00:00Z".to_string()), updated_at: Some("2026-04-23T01:00:00Z".to_string()), global: false, change_id: Some("change-a".to_string()), commit_id: None, untracked: false, version_id: "version-a".to_string(), }] }] ); } #[tokio::test] async fn insert_plan_returns_datafusion_count_uint64() { let session = SessionContext::new(); let mut write_context = CapturingWriteContext::default(); let write_ctx = SqlWriteContext::new(&mut write_context); let provider = LixStateProvider::active_version_with_write(write_ctx); let input = Arc::new(SingleBatchExec::new(one_row_stageable_lix_state_batch())) as Arc; let plan = provider .insert_into(&session.state(), input, InsertOp::Append) .await .expect("insert should produce a write plan"); let batches = datafusion::physical_plan::collect(plan, Arc::new(TaskContext::default())) .await .expect("insert write plan should execute"); assert_eq!(batches.len(), 1); assert_eq!(batches[0].num_rows(), 1); assert_eq!(batches[0].num_columns(), 1); assert_eq!(batches[0].schema().field(0).name(), "count"); assert_eq!(batches[0].schema().field(0).data_type(), &DataType::UInt64); assert!(!batches[0].schema().field(0).is_nullable()); let count = batches[0] .column(0) .as_any() .downcast_ref::() .expect("count should be UInt64"); assert_eq!(count.value(0), 1); assert_eq!(write_context.writes.len(), 1); } #[tokio::test] async fn update_plan_evaluates_filters_assignments_and_stages_rows() { let session = SessionContext::new(); let mut write_context = CapturingWriteContext { rows: vec![ live_row("entity-1", Some("{\"source\":\"match\"}")), live_row("entity-2", Some("{\"source\":\"skip\"}")), ], writes: Vec::new(), }; let write_ctx = SqlWriteContext::new(&mut write_context); let provider = LixStateProvider::active_version_with_write(write_ctx); let plan = provider .update( &session.state(), vec![ ( "snapshot_content".to_string(), str_lit("{\"key\":\"hello\",\"value\":\"updated\"}"), ), ( "metadata".to_string(), str_lit("{\"schema_key\":\"lix_key_value\"}"), ), ], vec![Expr::BinaryExpr(BinaryExpr::new( Box::new(col("metadata")), Operator::Eq, Box::new(json_lit("{\"source\":\"match\"}")), ))], ) .await .expect("update should produce a write plan"); let batches = datafusion::physical_plan::collect(plan, Arc::new(TaskContext::default())) .await .expect("update write plan should execute"); assert_eq!(batches.len(), 1); assert_eq!(batches[0].schema().field(0).name(), "count"); assert_eq!(batches[0].schema().field(0).data_type(), &DataType::UInt64); let count = batches[0] .column(0) .as_any() .downcast_ref::() .expect("count should be UInt64"); assert_eq!(count.value(0), 1); assert_eq!( write_context.writes.as_slice(), &[TransactionWrite::Rows { mode: 
TransactionWriteMode::Replace, rows: vec![TransactionWriteRow { entity_id: Some(crate::entity_identity::EntityIdentity::single("entity-1")), schema_key: "lix_key_value".to_string(), file_id: None, snapshot: Some(TransactionJson::from_value_for_test( json!({"key":"hello","value":"updated"}) )), metadata: Some(TransactionJson::from_value_for_test( json!({"schema_key": "lix_key_value"}) )), origin: None, created_at: None, updated_at: None, global: false, change_id: None, commit_id: None, untracked: false, version_id: "version-a".to_string(), }] }] ); } #[tokio::test] async fn delete_plan_with_empty_filters_stages_all_visible_rows() { let session = SessionContext::new(); let mut write_context = CapturingWriteContext { rows: vec![ live_row("entity-1", Some("{\"source\":\"one\"}")), live_row("entity-2", Some("{\"source\":\"two\"}")), ], writes: Vec::new(), }; let write_ctx = SqlWriteContext::new(&mut write_context); let provider = LixStateProvider::active_version_with_write(write_ctx); let plan = provider .delete_from(&session.state(), vec![]) .await .expect("delete should produce a write plan"); let batches = datafusion::physical_plan::collect(plan, Arc::new(TaskContext::default())) .await .expect("delete write plan should execute"); assert_eq!(batches.len(), 1); assert_eq!(batches[0].schema().field(0).name(), "count"); assert_eq!(batches[0].schema().field(0).data_type(), &DataType::UInt64); let count = batches[0] .column(0) .as_any() .downcast_ref::() .expect("count should be UInt64"); assert_eq!(count.value(0), 2); assert_eq!( write_context.writes.as_slice(), &[TransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![ TransactionWriteRow { entity_id: Some(crate::entity_identity::EntityIdentity::single("entity-1")), schema_key: "lix_key_value".to_string(), file_id: None, snapshot: None, metadata: Some(TransactionJson::from_value_for_test( json!({"source": "one"}) )), origin: None, created_at: None, updated_at: None, global: false, change_id: None, commit_id: None, untracked: false, version_id: "version-a".to_string(), }, TransactionWriteRow { entity_id: Some(crate::entity_identity::EntityIdentity::single("entity-2")), schema_key: "lix_key_value".to_string(), file_id: None, snapshot: None, metadata: Some(TransactionJson::from_value_for_test( json!({"source": "two"}) )), origin: None, created_at: None, updated_at: None, global: false, change_id: None, commit_id: None, untracked: false, version_id: "version-a".to_string(), }, ] }] ); } } ================================================ FILE: packages/engine/src/sql2/mod.rs ================================================ mod change_provider; mod classify; mod context; mod directory_history_provider; mod directory_provider; mod dml; mod entity_history_provider; mod entity_provider; mod error; mod execute; mod file_history_provider; mod file_provider; mod filesystem_planner; mod filesystem_predicates; mod filesystem_visibility; mod history_projection; mod history_provider; mod history_route; mod lix_state_provider; mod predicate_typecheck; mod public_bind; mod read_only; mod record_batch; mod result_metadata; mod runtime; mod session; mod udfs; mod version_provider; mod version_scope; mod write_normalization; pub(crate) use classify::{ classify_statement, datafusion_statement_dml_target_table_names, validate_supported_datafusion_statement_ast, validate_supported_statement_ast, SqlStatementKind, }; pub(crate) use context::{ CommitStoreQuerySource, SqlCommitStoreQuerySource, SqlExecutionContext, SqlJsonReader, SqlWriteContext, 
SqlWriteExecutionContext, WriteAccess, WriteContextLiveStateReader, WriteContextVersionRefReader, }; #[allow(unused_imports)] pub(crate) use execute::{ create_logical_plan, create_write_logical_plan, execute_logical_plan, execute_sql, SqlLogicalPlan, }; ================================================ FILE: packages/engine/src/sql2/predicate_typecheck.rs ================================================ use datafusion::arrow::datatypes::{Field, Schema}; use datafusion::common::{DFSchema, DataFusionError, ScalarValue}; use datafusion::logical_expr::expr::{Between, InList}; use datafusion::logical_expr::{BinaryExpr, Expr, Like, Operator}; use crate::LixError; use super::error::lix_error_to_datafusion_error; use super::result_metadata::{field_is_json, LIX_VALUE_TYPE_JSON, LIX_VALUE_TYPE_METADATA_KEY}; pub(crate) fn validate_json_predicate_filters( schema: &Schema, filters: &[Expr], ) -> Result<(), DataFusionError> { for filter in filters { validate_json_predicate_expr_with_arrow_schema(schema, filter) .map_err(lix_error_to_datafusion_error)?; } Ok(()) } pub(crate) fn validate_json_predicate_expr_with_dfschema( schema: &DFSchema, expr: &Expr, ) -> Result<(), LixError> { validate_expr(expr, &|column| { schema .field_with_name(column.relation.as_ref(), &column.name) .ok() .map(|field| field.as_ref()) }) } fn validate_json_predicate_expr_with_arrow_schema( schema: &Schema, expr: &Expr, ) -> Result<(), LixError> { validate_expr(expr, &|column| { schema .fields() .iter() .find(|field| field.name() == &column.name) .map(|field| field.as_ref()) }) } fn validate_expr<'a>( expr: &'a Expr, lookup_field: &impl Fn(&datafusion::common::Column) -> Option<&'a Field>, ) -> Result<(), LixError> { match expr { Expr::BinaryExpr(binary) => validate_binary_expr(binary, lookup_field), Expr::InList(in_list) => validate_in_list(in_list, lookup_field), Expr::Between(between) => validate_between(between, lookup_field), Expr::Like(like) | Expr::SimilarTo(like) => validate_like(like, lookup_field), Expr::Alias(alias) => validate_expr(&alias.expr, lookup_field), Expr::Not(inner) | Expr::IsNotNull(inner) | Expr::IsNull(inner) | Expr::IsTrue(inner) | Expr::IsFalse(inner) | Expr::IsUnknown(inner) | Expr::IsNotTrue(inner) | Expr::IsNotFalse(inner) | Expr::IsNotUnknown(inner) | Expr::Negative(inner) => validate_expr(inner, lookup_field), Expr::Cast(cast) => validate_expr(&cast.expr, lookup_field), Expr::TryCast(cast) => validate_expr(&cast.expr, lookup_field), Expr::ScalarFunction(function) => { for arg in &function.args { validate_expr(arg, lookup_field)?; } Ok(()) } Expr::Case(case) => { if let Some(expr) = &case.expr { validate_expr(expr, lookup_field)?; } for (when, then) in &case.when_then_expr { validate_expr(when, lookup_field)?; validate_expr(then, lookup_field)?; } if let Some(expr) = &case.else_expr { validate_expr(expr, lookup_field)?; } Ok(()) } Expr::AggregateFunction(function) => { for arg in &function.params.args { validate_expr(arg, lookup_field)?; } Ok(()) } Expr::WindowFunction(function) => { for arg in &function.params.args { validate_expr(arg, lookup_field)?; } Ok(()) } _ => Ok(()), } } fn validate_binary_expr<'a>( binary: &'a BinaryExpr, lookup_field: &impl Fn(&datafusion::common::Column) -> Option<&'a Field>, ) -> Result<(), LixError> { validate_expr(&binary.left, lookup_field)?; validate_expr(&binary.right, lookup_field)?; if !is_comparison_operator(binary.op) { return Ok(()); } validate_comparison_operands(&binary.left, &binary.right, lookup_field) } fn validate_in_list<'a>( in_list: &'a InList, 
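// IN lists follow the same typing rule as binary comparisons: once either side is
// JSON-typed, every other operand must be a JSON expression, a NULL literal, or a
// placeholder.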
lookup_field: &impl Fn(&datafusion::common::Column) -> Option<&'a Field>, ) -> Result<(), LixError> { validate_expr(&in_list.expr, lookup_field)?; for item in &in_list.list { validate_expr(item, lookup_field)?; } if is_json_expr(&in_list.expr, lookup_field) { for item in &in_list.list { require_json_comparison_operand(item, lookup_field)?; } } for item in &in_list.list { if is_json_expr(item, lookup_field) { require_json_comparison_operand(&in_list.expr, lookup_field)?; } } Ok(()) } fn validate_between<'a>( between: &'a Between, lookup_field: &impl Fn(&datafusion::common::Column) -> Option<&'a Field>, ) -> Result<(), LixError> { validate_expr(&between.expr, lookup_field)?; validate_expr(&between.low, lookup_field)?; validate_expr(&between.high, lookup_field)?; if is_json_expr(&between.expr, lookup_field) { require_json_comparison_operand(&between.low, lookup_field)?; require_json_comparison_operand(&between.high, lookup_field)?; } Ok(()) } fn validate_like<'a>( like: &'a Like, lookup_field: &impl Fn(&datafusion::common::Column) -> Option<&'a Field>, ) -> Result<(), LixError> { validate_expr(&like.expr, lookup_field)?; validate_expr(&like.pattern, lookup_field)?; if is_json_expr(&like.expr, lookup_field) { return Err(json_predicate_type_error(&like.expr)); } Ok(()) } fn validate_comparison_operands<'a>( left: &'a Expr, right: &'a Expr, lookup_field: &impl Fn(&datafusion::common::Column) -> Option<&'a Field>, ) -> Result<(), LixError> { let left_is_json = is_json_expr(left, lookup_field); let right_is_json = is_json_expr(right, lookup_field); if left_is_json { require_json_comparison_operand(right, lookup_field)?; } if right_is_json { require_json_comparison_operand(left, lookup_field)?; } Ok(()) } fn require_json_comparison_operand<'a>( expr: &'a Expr, lookup_field: &impl Fn(&datafusion::common::Column) -> Option<&'a Field>, ) -> Result<(), LixError> { if is_json_expr(expr, lookup_field) || is_null_literal(expr) || matches!(expr, Expr::Placeholder(_)) { return Ok(()); } Err(json_predicate_type_error(expr)) } fn is_json_expr<'a>( expr: &'a Expr, lookup_field: &impl Fn(&datafusion::common::Column) -> Option<&'a Field>, ) -> bool { match expr { Expr::Column(column) => lookup_field(column).is_some_and(field_is_json), Expr::Literal(_, Some(metadata)) => metadata .inner() .get(LIX_VALUE_TYPE_METADATA_KEY) .is_some_and(|value| value == LIX_VALUE_TYPE_JSON), Expr::ScalarFunction(function) => matches!(function.name(), "lix_json" | "lix_json_get"), Expr::Alias(alias) => is_json_expr(&alias.expr, lookup_field), Expr::Cast(cast) => is_json_expr(&cast.expr, lookup_field), Expr::TryCast(cast) => is_json_expr(&cast.expr, lookup_field), _ => false, } } fn is_null_literal(expr: &Expr) -> bool { matches!(expr, Expr::Literal(value, _) if matches!(value, ScalarValue::Null)) } fn is_comparison_operator(op: Operator) -> bool { matches!( op, Operator::Eq | Operator::NotEq | Operator::Lt | Operator::LtEq | Operator::Gt | Operator::GtEq | Operator::IsDistinctFrom | Operator::IsNotDistinctFrom ) } fn json_predicate_type_error(expr: &Expr) -> LixError { LixError::new( LixError::CODE_TYPE_MISMATCH, format!("JSON columns can only be compared with JSON expressions, got {expr}"), ) .with_hint("Wrap JSON text with lix_json(...), use lix_json_get(...) 
for JSON values, or use IS NULL for null checks.") } ================================================ FILE: packages/engine/src/sql2/public_bind/assignment.rs ================================================ use std::collections::BTreeSet; use crate::LixError; use super::table::{PublicSurface, PublicTableContracts}; pub(crate) fn validate_update_assignments( surface: &PublicSurface, columns: Vec, contracts: &PublicTableContracts, ) -> Result<(), LixError> { let Some(contract) = contracts.get(surface) else { return Ok(()); }; let mut seen = BTreeSet::new(); for column in columns { if !seen.insert(column.clone()) { return Err(LixError::new( LixError::CODE_INVALID_PARAM, format!( "update {} assigns column '{column}' more than once", surface.name() ), )); } let Some(column_contract) = contract.column(&column) else { return Err(LixError::new( LixError::CODE_INVALID_PARAM, format!( "update {} references unknown column '{column}'", surface.name() ), )); }; if !column_contract.writable { return Err(LixError::new( LixError::CODE_UNSUPPORTED_SQL, format!( "update {} cannot assign read-only column '{column}'", surface.name() ), )); } } Ok(()) } ================================================ FILE: packages/engine/src/sql2/public_bind/capability.rs ================================================ use crate::LixError; use super::table::{Capability, PublicSurface, PublicTableContracts}; use super::DmlOperation; pub(crate) fn validate_table_operation( surface: &PublicSurface, operation: DmlOperation, contracts: &PublicTableContracts, ) -> Result<(), LixError> { let Some(contract) = contracts.get(surface) else { return Ok(()); }; match contract.operation(operation) { Capability::Allowed => Ok(()), Capability::ReadOnly(hint) => { let message = if surface.name().ends_with("_history") { format!( "DML cannot write read-only history view '{}'", surface.name() ) } else { format!( "{} {} is not allowed because the SQL surface is read-only", operation.as_str(), surface.name() ) }; Err(LixError::new(LixError::CODE_READ_ONLY, message).with_hint(hint)) } Capability::Unsupported(hint) => Err(LixError::new( LixError::CODE_UNSUPPORTED_SQL, format!( "{} {} is not supported by Lix SQL", operation.as_str(), surface.name() ), ) .with_hint(hint)), } } ================================================ FILE: packages/engine/src/sql2/public_bind/dml.rs ================================================ use datafusion::logical_expr::{LogicalPlan, WriteOp}; use datafusion::sql::sqlparser::ast::{ Assignment, AssignmentTarget, Delete, FromTable, ObjectName, Statement, TableFactor, TableObject, TableWithJoins, Update, }; use datafusion::sql::sqlparser::dialect::GenericDialect; use datafusion::sql::sqlparser::parser::Parser; use serde_json::Value as JsonValue; use crate::LixError; use super::assignment::validate_update_assignments; use super::capability::validate_table_operation; use super::table::{PublicSurface, PublicTableContracts}; #[derive(Clone, Copy, Debug, Eq, PartialEq)] pub(crate) enum DmlOperation { Insert, Update, Delete, } impl DmlOperation { pub(crate) fn as_str(self) -> &'static str { match self { Self::Insert => "insert", Self::Update => "update", Self::Delete => "delete", } } } pub(crate) fn validate_sql(sql: &str, visible_schemas: &[JsonValue]) -> Result<(), LixError> { let statements = Parser::parse_sql(&GenericDialect {}, sql).map_err(|error| { LixError::new( LixError::CODE_PARSE_ERROR, format!("sql2 SQL parse error: {error}"), ) })?; let [statement] = statements.as_slice() else { return Ok(()); }; let contracts = 
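// Built-in contracts plus a read-only "<schema_key>_history" contract for every visible schema.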
PublicTableContracts::new(visible_schemas)?; validate_statement(statement, &contracts) } pub(crate) fn validate_plan( plan: &LogicalPlan, visible_schemas: &[JsonValue], ) -> Result<(), LixError> { let contracts = PublicTableContracts::new(visible_schemas)?; validate_plan_with_contracts(plan, &contracts) } fn validate_plan_with_contracts( plan: &LogicalPlan, contracts: &PublicTableContracts, ) -> Result<(), LixError> { if let LogicalPlan::Dml(dml) = plan { let surface = PublicSurface::named(dml.table_name.table()); validate_table_operation(&surface, operation_from_write_op(&dml.op), contracts)?; } for input in plan.inputs() { validate_plan_with_contracts(input, contracts)?; } Ok(()) } fn operation_from_write_op(op: &WriteOp) -> DmlOperation { match op { WriteOp::Insert(_) | WriteOp::Ctas => DmlOperation::Insert, WriteOp::Update => DmlOperation::Update, WriteOp::Delete | WriteOp::Truncate => DmlOperation::Delete, } } fn validate_statement( statement: &Statement, contracts: &PublicTableContracts, ) -> Result<(), LixError> { match statement { Statement::Insert(insert) => { let Some(table_name) = insert_target_name(&insert.table) else { return Ok(()); }; let surface = PublicSurface::named(table_name); validate_table_operation(&surface, DmlOperation::Insert, contracts) } Statement::Update(update) => validate_update(update, contracts), Statement::Delete(delete) => validate_delete(delete, contracts), Statement::Explain { statement, .. } => validate_statement(statement, contracts), _ => Ok(()), } } fn validate_update(update: &Update, contracts: &PublicTableContracts) -> Result<(), LixError> { let Some(table_name) = table_with_joins_target_name(&update.table) else { return Ok(()); }; let surface = PublicSurface::named(table_name); validate_table_operation(&surface, DmlOperation::Update, contracts)?; validate_update_assignments( &surface, assignment_column_names(&update.assignments)?, contracts, ) } fn validate_delete(delete: &Delete, contracts: &PublicTableContracts) -> Result<(), LixError> { for table in delete_from_tables(delete) { let Some(table_name) = table_with_joins_target_name(table) else { continue; }; let surface = PublicSurface::named(table_name); validate_table_operation(&surface, DmlOperation::Delete, contracts)?; } Ok(()) } fn delete_from_tables(delete: &Delete) -> &[TableWithJoins] { match &delete.from { FromTable::WithFromKeyword(tables) | FromTable::WithoutKeyword(tables) => tables, } } fn assignment_column_names(assignments: &[Assignment]) -> Result, LixError> { let mut columns = Vec::new(); for assignment in assignments { match &assignment.target { AssignmentTarget::ColumnName(name) => { if let Some(column) = object_name_leaf(name) { columns.push(column); } } AssignmentTarget::Tuple(names) => { for name in names { if let Some(column) = object_name_leaf(name) { columns.push(column); } } } } } Ok(columns) } fn insert_target_name(table: &TableObject) -> Option { match table { TableObject::TableName(name) => object_name_leaf(name), _ => None, } } fn table_with_joins_target_name(table: &TableWithJoins) -> Option { match &table.relation { TableFactor::Table { name, .. 
} => object_name_leaf(name), _ => None, } } fn object_name_leaf(name: &ObjectName) -> Option<String> { name.0 .last() .and_then(|part| part.as_ident()) .map(|ident| ident.value.to_ascii_lowercase()) } ================================================ FILE: packages/engine/src/sql2/public_bind/mod.rs ================================================ mod assignment; mod capability; mod dml; mod table; use datafusion::logical_expr::LogicalPlan; use serde_json::Value as JsonValue; use crate::LixError; pub(crate) use dml::DmlOperation; pub(crate) fn validate_public_dml_sql( sql: &str, visible_schemas: &[JsonValue], ) -> Result<(), LixError> { dml::validate_sql(sql, visible_schemas) } pub(crate) fn validate_public_dml_plan( plan: &LogicalPlan, visible_schemas: &[JsonValue], ) -> Result<(), LixError> { dml::validate_plan(plan, visible_schemas) } ================================================ FILE: packages/engine/src/sql2/public_bind/table.rs ================================================ use std::collections::{BTreeMap, BTreeSet}; use serde_json::Value as JsonValue; use crate::schema::schema_key_from_definition; use crate::LixError; #[derive(Clone, Copy, Debug, Eq, PartialEq)] pub(crate) enum Capability { Allowed, ReadOnly(&'static str), Unsupported(&'static str), } #[derive(Clone, Debug)] pub(crate) struct ColumnContract { pub(crate) writable: bool, } #[derive(Clone, Debug)] pub(crate) struct TableContract { pub(crate) insert: Capability, pub(crate) update: Capability, pub(crate) delete: Capability, pub(crate) columns: BTreeMap<String, ColumnContract>, } impl TableContract { pub(crate) fn operation(&self, operation: super::DmlOperation) -> Capability { match operation { super::DmlOperation::Insert => self.insert, super::DmlOperation::Update => self.update, super::DmlOperation::Delete => self.delete, } } pub(crate) fn column(&self, column: &str) -> Option<&ColumnContract> { self.columns.get(column) } } #[derive(Clone, Debug, Eq, PartialEq, Ord, PartialOrd)] pub(crate) struct PublicSurface { name: String, } impl PublicSurface { pub(crate) fn named(name: impl Into<String>) -> Self { Self { name: name.into().to_ascii_lowercase(), } } pub(crate) fn name(&self) -> &str { &self.name } } #[derive(Clone, Debug)] pub(crate) struct PublicTableContracts { contracts: BTreeMap<String, TableContract>, } impl PublicTableContracts { pub(crate) fn new(visible_schemas: &[JsonValue]) -> Result<Self, LixError> { let mut contracts = builtin_contracts(); for schema in visible_schemas { let schema_key = schema_key_from_definition(schema)?.schema_key; contracts.insert( format!("{}_history", schema_key.to_ascii_lowercase()), history_contract(), ); } Ok(Self { contracts }) } pub(crate) fn get(&self, surface: &PublicSurface) -> Option<&TableContract> { self.contracts.get(surface.name()) } } fn builtin_contracts() -> BTreeMap<String, TableContract> { let mut contracts = BTreeMap::new(); for table in [ "lix_change", "lix_commit", "lix_commit_by_version", "lix_commit_edge", "lix_commit_edge_by_version", "lix_change_set", "lix_change_set_by_version", "lix_change_set_element", "lix_change_set_element_by_version", ] { contracts.insert(table.to_string(), commit_graph_contract()); } for table in [ "lix_state_history", "lix_file_history", "lix_directory_history", ] { contracts.insert(table.to_string(), history_contract()); } contracts.insert( "lix_registered_schema".to_string(), TableContract { insert: Capability::Allowed, update: Capability::Allowed, delete: Capability::Unsupported( "lix_registered_schema deletion is not supported; register an amended schema instead", ), columns: columns(&["value", "lixcol_metadata",
"lixcol_global", "lixcol_untracked"]), }, ); contracts.insert( "lix_key_value".to_string(), TableContract { insert: Capability::Allowed, update: Capability::Allowed, delete: Capability::Allowed, columns: columns(&["key", "value", "lixcol_metadata"]), }, ); contracts } fn commit_graph_contract() -> TableContract { TableContract { insert: Capability::ReadOnly( "Commit graph and changelog surfaces are read-only; Lix creates them when transactions commit.", ), update: Capability::ReadOnly( "Commit graph and changelog surfaces are read-only; Lix creates them when transactions commit.", ), delete: Capability::ReadOnly( "Commit graph and changelog surfaces are read-only; Lix creates them when transactions commit.", ), columns: BTreeMap::new(), } } fn history_contract() -> TableContract { TableContract { insert: Capability::ReadOnly( "History views are query-only; write to the live surface such as lix_state, lix_file, lix_directory, or the typed entity table.", ), update: Capability::ReadOnly( "History views are query-only; write to the live surface such as lix_state, lix_file, lix_directory, or the typed entity table.", ), delete: Capability::ReadOnly( "History views are query-only; write to the live surface such as lix_state, lix_file, lix_directory, or the typed entity table.", ), columns: BTreeMap::new(), } } fn columns(writable: &[&str]) -> BTreeMap { let writable = writable.iter().copied().collect::>(); writable .into_iter() .map(|column| (column.to_string(), ColumnContract { writable: true })) .collect() } ================================================ FILE: packages/engine/src/sql2/read_only.rs ================================================ use datafusion::error::DataFusionError; use crate::transaction::types::TransactionWriteRow; use crate::LixError; pub(crate) fn reject_read_only_entity_surface( schema_key: &str, action: &str, ) -> Result<(), DataFusionError> { if schema_key == "lix_directory_descriptor" { return Err(read_only_error( action, schema_key, "Use the writable lix_directory surface to create, update, or delete directories.", )); } if let Some(message) = read_only_schema_message(schema_key) { return Err(read_only_error(action, schema_key, message)); } Ok(()) } pub(crate) fn reject_read_only_stage_rows( rows: &[TransactionWriteRow], action: &str, ) -> Result<(), DataFusionError> { for row in rows { if let Some(message) = read_only_schema_message(&row.schema_key) { return Err(read_only_error(action, &row.schema_key, message)); } } Ok(()) } fn read_only_error(action: &str, schema_key: &str, message: &'static str) -> DataFusionError { super::error::lix_error_to_datafusion_error( LixError::new( LixError::CODE_READ_ONLY, format!("{action} cannot write read-only surface '{schema_key}'"), ) .with_hint(message), ) } fn read_only_schema_message(schema_key: &str) -> Option<&'static str> { match schema_key { "lix_version_descriptor" | "lix_version_ref" => { Some("Use the writable lix_version surface to create, update, or delete versions.") } "lix_file_descriptor" => { Some("Use the writable lix_file surface to create, update, or delete files.") } "lix_binary_blob_ref" => { Some("Use the writable lix_file data column to create, update, or delete file contents.") } "lix_commit" | "lix_commit_edge" | "lix_change" => Some( "Commit graph and changelog surfaces are read-only; Lix creates them when transactions commit.", ), _ => None, } } ================================================ FILE: packages/engine/src/sql2/record_batch.rs ================================================ use 
datafusion::arrow::array::ArrayRef; use datafusion::arrow::datatypes::SchemaRef; use datafusion::arrow::record_batch::{RecordBatch, RecordBatchOptions}; use datafusion::common::{DataFusionError, Result}; pub(crate) fn record_batch_with_row_count( schema: SchemaRef, columns: Vec<ArrayRef>, row_count: usize, ) -> Result<RecordBatch> { if schema.fields().is_empty() { let options = RecordBatchOptions::new().with_row_count(Some(row_count)); return RecordBatch::try_new_with_options(schema, columns, &options) .map_err(DataFusionError::from); } RecordBatch::try_new(schema, columns).map_err(DataFusionError::from) } ================================================ FILE: packages/engine/src/sql2/result_metadata.rs ================================================ use std::collections::HashMap; use datafusion::arrow::datatypes::Field; pub(crate) const LIX_VALUE_TYPE_METADATA_KEY: &str = "lix.value_type"; pub(crate) const LIX_VALUE_TYPE_JSON: &str = "json"; pub(crate) fn json_field(name: impl Into<String>, nullable: bool) -> Field { Field::new(name, datafusion::arrow::datatypes::DataType::Utf8, nullable) .with_metadata(json_field_metadata_map()) } pub(crate) fn mark_json_field(field: Field) -> Field { field.with_metadata(json_field_metadata_map()) } pub(crate) fn field_is_json(field: &Field) -> bool { field .metadata() .get(LIX_VALUE_TYPE_METADATA_KEY) .is_some_and(|value| value == LIX_VALUE_TYPE_JSON) } fn json_field_metadata_map() -> HashMap<String, String> { HashMap::from([( LIX_VALUE_TYPE_METADATA_KEY.to_string(), LIX_VALUE_TYPE_JSON.to_string(), )]) } ================================================ FILE: packages/engine/src/sql2/runtime.rs ================================================ use std::sync::Arc; use datafusion::arrow::record_batch::RecordBatch; use datafusion::dataframe::DataFrame; use datafusion::error::Result; use datafusion::execution::TaskContext; use datafusion::physical_plan::{ExecutionPlan, ExecutionPlanProperties}; use futures_util::TryStreamExt; pub(crate) async fn collect_dataframe(dataframe: DataFrame) -> Result<Vec<RecordBatch>> { let task_ctx = Arc::new(dataframe.task_ctx()); let plan = dataframe.create_physical_plan().await?; collect_input_plan(plan, task_ctx).await } pub(crate) async fn collect_input_plan( plan: Arc<dyn ExecutionPlan>, task_ctx: Arc<TaskContext>, ) -> Result<Vec<RecordBatch>> { validate_physical_plan(&plan)?; let partition_count = plan.output_partitioning().partition_count(); let mut batches = Vec::new(); for partition in 0..partition_count { let partition_batches = plan .execute(partition, Arc::clone(&task_ctx))?
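// Collect every batch from this partition's stream before moving on to the next partition.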
.try_collect::>() .await?; batches.extend(partition_batches); } Ok(batches) } #[cfg(not(target_arch = "wasm32"))] fn validate_physical_plan(_plan: &Arc) -> Result<()> { Ok(()) } #[cfg(target_arch = "wasm32")] fn validate_physical_plan(plan: &Arc) -> Result<()> { let operator_name = plan.name(); if is_wasm_unsafe_operator(operator_name) { return Err(datafusion::error::DataFusionError::Plan(format!( "SQL physical operator '{operator_name}' is not supported by the WebAssembly runtime yet" ))); } for child in plan.children() { validate_physical_plan(child)?; } Ok(()) } #[cfg(target_arch = "wasm32")] fn is_wasm_unsafe_operator(operator_name: &str) -> bool { matches!( operator_name, "CoalescePartitionsExec" | "RepartitionExec" | "SortPreservingMergeExec" ) } ================================================ FILE: packages/engine/src/sql2/session.rs ================================================ use std::sync::Arc; use datafusion::prelude::{SessionConfig, SessionContext}; use crate::LixError; use super::change_provider::register_lix_change_provider; use super::directory_history_provider::register_lix_directory_history_provider; use super::directory_provider::{ register_lix_directory_providers, register_lix_directory_write_providers, }; use super::entity_provider::{register_entity_providers, register_entity_write_providers}; use super::file_history_provider::register_lix_file_history_provider; use super::file_provider::{register_lix_file_providers, register_lix_file_write_providers}; use super::history_provider::register_history_providers; use super::lix_state_provider::{register_lix_state_providers, register_lix_state_write_providers}; use super::udfs::register_sql2_functions; use super::version_provider::{register_lix_version_provider, register_lix_version_write_provider}; use super::{SqlExecutionContext, SqlWriteContext, SqlWriteExecutionContext}; pub(crate) async fn build_read_session( ctx: &dyn SqlExecutionContext, ) -> Result { let session = new_sql_session_context(); let version_ref = ctx.version_ref(); let active_version_commit_id = version_ref .load_head(ctx.active_version_id()) .await? 
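// The resolved head commit id backs the lix_active_version_commit_id() UDF registered below.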
.map(|head| head.commit_id); register_sql2_functions(&session, ctx.functions(), active_version_commit_id); register_lix_state_providers( &session, ctx.active_version_id(), ctx.live_state(), Arc::clone(&version_ref), ) .await?; register_lix_version_provider(&session, ctx.live_state(), Arc::clone(&version_ref)).await?; let commit_store_query_source = ctx.commit_store_query_source(); register_lix_change_provider(&session, commit_store_query_source.clone()).await?; let state_history_commit_graph = ctx.commit_graph(); register_history_providers( &session, state_history_commit_graph, commit_store_query_source.clone(), ) .await?; let file_history_commit_graph = ctx.commit_graph(); register_lix_file_history_provider( &session, file_history_commit_graph, commit_store_query_source.clone(), ctx.blob_reader(), ) .await?; let directory_history_commit_graph = ctx.commit_graph(); register_lix_directory_history_provider( &session, directory_history_commit_graph, commit_store_query_source.clone(), ) .await?; let entity_commit_graph = Arc::new(tokio::sync::Mutex::new(ctx.commit_graph())); register_lix_directory_providers( &session, ctx.active_version_id(), ctx.live_state(), Arc::clone(&version_ref), ctx.functions(), ) .await?; register_lix_file_providers( &session, ctx.active_version_id(), ctx.live_state(), Arc::clone(&version_ref), ctx.blob_reader(), ctx.functions(), ) .await?; register_entity_providers( &session, ctx.active_version_id(), ctx.live_state(), Arc::clone(&version_ref), entity_commit_graph, commit_store_query_source, &ctx.list_visible_schemas()?, ) .await?; Ok(session) } pub(crate) async fn build_write_session( ctx: &mut dyn SqlWriteExecutionContext, ) -> Result { let session = new_sql_session_context(); let write_ctx = SqlWriteContext::new(ctx); let active_version_commit_id = write_ctx .load_version_head(&write_ctx.active_version_id()) .await?; register_sql2_functions(&session, write_ctx.functions(), active_version_commit_id); register_lix_state_write_providers(&session, write_ctx.clone()).await?; register_lix_version_write_provider(&session, write_ctx.clone()).await?; register_lix_directory_write_providers(&session, write_ctx.clone()).await?; register_lix_file_write_providers(&session, write_ctx.clone()).await?; register_entity_write_providers( &session, write_ctx.clone(), &write_ctx.list_visible_schemas()?, ) .await?; Ok(session) } pub(crate) fn new_sql_session_context() -> SessionContext { SessionContext::new_with_config( SessionConfig::new() .with_information_schema(true) .with_target_partitions(1) .set_bool("datafusion.optimizer.repartition_aggregations", false) .set_bool("datafusion.optimizer.repartition_joins", false) .set_bool("datafusion.optimizer.repartition_sorts", false) .set_bool("datafusion.optimizer.repartition_windows", false) .set_bool("datafusion.optimizer.repartition_file_scans", false) .set_bool("datafusion.optimizer.enable_round_robin_repartition", false), ) } ================================================ FILE: packages/engine/src/sql2/udfs/common.rs ================================================ use std::sync::Arc; use datafusion::arrow::array::{ Array, ArrayRef, BinaryArray, BooleanArray, Float32Array, Float64Array, Int16Array, Int32Array, Int64Array, Int8Array, LargeBinaryArray, LargeStringArray, StringArray, UInt16Array, UInt32Array, UInt64Array, UInt8Array, }; use datafusion::common::{plan_err, DataFusionError, Result}; use datafusion::logical_expr::ColumnarValue; use serde_json::Value as JsonValue; pub(super) fn scalar_inputs(args: &[ColumnarValue]) -> bool { 
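// True when every argument is a scalar, letting callers return a single scalar instead of an array.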
args.iter() .all(|value| matches!(value, ColumnarValue::Scalar(_))) } pub(super) fn json_value_to_serde(array: &dyn Array, row: usize) -> Result<Option<JsonValue>> { let Some(raw) = text_like_value(array, row)? else { return Ok(None); }; serde_json::from_str::<JsonValue>(&raw) .map(Some) .map_err(|error| { DataFusionError::Execution(format!( "JSON function expected valid JSON text in its first argument, got error: {error}" )) }) } pub(super) fn text_like_value(array: &dyn Array, row: usize) -> Result<Option<String>> { if let Some(array) = array.as_any().downcast_ref::<StringArray>() { return Ok((!array.is_null(row)).then(|| array.value(row).to_string())); } if let Some(array) = array.as_any().downcast_ref::<LargeStringArray>() { return Ok((!array.is_null(row)).then(|| array.value(row).to_string())); } if let Some(value) = numeric_value(array, row)? { return Ok(Some(value)); } if let Some(array) = array.as_any().downcast_ref::<BooleanArray>() { return Ok((!array.is_null(row)).then(|| { if array.value(row) { "true".to_string() } else { "false".to_string() } })); } if let Some(array) = array.as_any().downcast_ref::<BinaryArray>() { return Ok( (!array.is_null(row)).then(|| String::from_utf8_lossy(array.value(row)).to_string()) ); } if let Some(array) = array.as_any().downcast_ref::<LargeBinaryArray>() { return Ok( (!array.is_null(row)).then(|| String::from_utf8_lossy(array.value(row)).to_string()) ); } Err(DataFusionError::Execution(format!( "unsupported argument type for JSON/text function: {:?}", array.data_type() ))) } pub(super) fn numeric_value(array: &dyn Array, row: usize) -> Result<Option<String>> { macro_rules! numeric_array { ($ty:ty) => { if let Some(array) = array.as_any().downcast_ref::<$ty>() { return Ok((!array.is_null(row)).then(|| array.value(row).to_string())); } }; } numeric_array!(Int8Array); numeric_array!(Int16Array); numeric_array!(Int32Array); numeric_array!(Int64Array); numeric_array!(UInt8Array); numeric_array!(UInt16Array); numeric_array!(UInt32Array); numeric_array!(UInt64Array); numeric_array!(Float32Array); numeric_array!(Float64Array); Ok(None) } pub(super) fn decode_utf8_value(array: &dyn Array, row: usize) -> Result<Option<String>> { if let Some(array) = array.as_any().downcast_ref::<BinaryArray>() { return (!array.is_null(row)) .then(|| String::from_utf8(array.value(row).to_vec())) .transpose() .map_err(|error| { DataFusionError::Execution(format!( "lix_text_decode() expected valid UTF8 bytes: {error}" )) }); } if let Some(array) = array.as_any().downcast_ref::<LargeBinaryArray>() { return (!array.is_null(row)) .then(|| String::from_utf8(array.value(row).to_vec())) .transpose() .map_err(|error| { DataFusionError::Execution(format!( "lix_text_decode() expected valid UTF8 bytes: {error}" )) }); } if let Some(array) = array.as_any().downcast_ref::<StringArray>() { return Ok((!array.is_null(row)).then(|| array.value(row).to_string())); } if let Some(array) = array.as_any().downcast_ref::<LargeStringArray>() { return Ok((!array.is_null(row)).then(|| array.value(row).to_string())); } Err(DataFusionError::Execution(format!( "lix_text_decode() expected Binary or Utf8, got {:?}", array.data_type() ))) } pub(super) fn encode_utf8_value(array: &dyn Array, row: usize) -> Result<Option<Vec<u8>>> { if let Some(array) = array.as_any().downcast_ref::<StringArray>() { return Ok((!array.is_null(row)).then(|| array.value(row).as_bytes().to_vec())); } if let Some(array) = array.as_any().downcast_ref::<LargeStringArray>() { return Ok((!array.is_null(row)).then(|| array.value(row).as_bytes().to_vec())); } if let Some(array) = array.as_any().downcast_ref::<BinaryArray>() { return Ok((!array.is_null(row)).then(|| array.value(row).to_vec())); } if let Some(array) = array.as_any().downcast_ref::<LargeBinaryArray>() { return Ok((!array.is_null(row)).then(||
array.value(row).to_vec())); } Err(DataFusionError::Execution(format!( "lix_text_encode() expected Utf8 or Binary, got {:?}", array.data_type() ))) } pub(super) fn validate_utf8_encoding_arg( fn_name: &str, encoding: Option<&ColumnarValue>, ) -> Result<()> { let Some(encoding) = encoding else { return Ok(()); }; let arrays = ColumnarValue::values_to_arrays(std::slice::from_ref(encoding))?; let array = &arrays[0]; if array.len() == 0 { return Ok(()); } let Some(value) = text_like_value(array.as_ref(), 0)? else { return Ok(()); }; let normalized = value.trim().to_ascii_uppercase().replace('-', ""); if normalized == "UTF8" { Ok(()) } else { plan_err!("{fn_name}() only supports UTF8 encoding, got '{value}'") } } pub(super) fn extract_json_path( fn_name: &str, arrays: &[ArrayRef], row: usize, ) -> Result> { let Some(mut current) = json_value_to_serde(arrays[0].as_ref(), row)? else { return Ok(None); }; for path in &arrays[1..] { let Some(segment) = json_path_segment(fn_name, path.as_ref(), row)? else { return Ok(None); }; let next = match segment { JsonPathSegment::Key(key) => current.get(&key).cloned(), JsonPathSegment::Index(index) => current .as_array() .and_then(|values| values.get(index)) .cloned(), }; let Some(value) = next else { return Ok(None); }; current = value; } Ok(Some(current)) } pub(super) fn json_text_value(value: &JsonValue) -> Result { match value { JsonValue::String(text) => Ok(text.clone()), JsonValue::Number(number) => Ok(number.to_string()), JsonValue::Bool(boolean) => Ok(if *boolean { "true".to_string() } else { "false".to_string() }), JsonValue::Array(_) | JsonValue::Object(_) => { serde_json::to_string(value).map_err(|error| { DataFusionError::Execution(format!( "lix_json_get_text() could not render JSON value: {error}" )) }) } JsonValue::Null => Ok("null".to_string()), } } pub(super) fn json_json_value(value: &JsonValue) -> Result { serde_json::to_string(value).map_err(|error| { DataFusionError::Execution(format!( "lix_json_get() could not render JSON value: {error}" )) }) } enum JsonPathSegment { Key(String), Index(usize), } fn json_path_segment( fn_name: &str, array: &dyn Array, row: usize, ) -> Result> { if let Some(array) = array.as_any().downcast_ref::() { if array.is_null(row) { return Ok(None); } let value = array.value(row).to_string(); validate_json_path_key_segment(fn_name, &value)?; return Ok(Some(JsonPathSegment::Key(value))); } if let Some(array) = array.as_any().downcast_ref::() { if array.is_null(row) { return Ok(None); } let value = array.value(row).to_string(); validate_json_path_key_segment(fn_name, &value)?; return Ok(Some(JsonPathSegment::Key(value))); } macro_rules! 
index_array { ($ty:ty) => { if let Some(array) = array.as_any().downcast_ref::<$ty>() { if array.is_null(row) { return Ok(None); } let value = array.value(row); let index = usize::try_from(value).map_err(|_| { DataFusionError::Execution(format!( "{fn_name}() path indexes must be non-negative integers" )) })?; return Ok(Some(JsonPathSegment::Index(index))); } }; } index_array!(UInt8Array); index_array!(UInt16Array); index_array!(UInt32Array); index_array!(UInt64Array); index_array!(Int8Array); index_array!(Int16Array); index_array!(Int32Array); index_array!(Int64Array); Err(DataFusionError::Execution(format!( "{fn_name}() path arguments must be strings or non-negative integers, got {:?}", array.data_type() ))) } fn validate_json_path_key_segment(fn_name: &str, value: &str) -> Result<()> { if value == "$" || value.starts_with("$.") || value.starts_with("$[") || value.starts_with('/') { return Err(DataFusionError::Execution(format!( "{fn_name}() uses variadic path segments, not JSONPath or JSON Pointer; got '{value}'" ))); } Ok(()) } pub(super) fn binary_array_from_owned(values: &[Option<Vec<u8>>]) -> BinaryArray { let refs = values .iter() .map(|value| value.as_deref()) .collect::<Vec<_>>(); BinaryArray::from(refs) } pub(super) fn array_ref<T: Array + 'static>(array: T) -> ArrayRef { Arc::new(array) } ================================================ FILE: packages/engine/src/sql2/udfs/lix_active_version_commit_id.rs ================================================ use std::any::Any; use datafusion::arrow::datatypes::DataType; use datafusion::common::{plan_err, Result, ScalarValue}; use datafusion::logical_expr::{ ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility, }; #[derive(Clone, PartialEq, Eq, Hash)] pub(super) struct LixActiveVersionCommitId { commit_id: Option<String>, } impl LixActiveVersionCommitId { pub(super) fn new(commit_id: Option<String>) -> Self { Self { commit_id } } } impl std::fmt::Debug for LixActiveVersionCommitId { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixActiveVersionCommitId").finish() } } impl ScalarUDFImpl for LixActiveVersionCommitId { fn as_any(&self) -> &dyn Any { self } fn name(&self) -> &str { "lix_active_version_commit_id" } fn signature(&self) -> &Signature { static SIGNATURE: std::sync::LazyLock<Signature> = std::sync::LazyLock::new(|| Signature::nullary(Volatility::Stable)); &SIGNATURE } fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> { Ok(DataType::Utf8) } fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> { if !args.args.is_empty() { return plan_err!("lix_active_version_commit_id requires no arguments"); } Ok(ColumnarValue::Scalar(ScalarValue::Utf8( self.commit_id.clone(), ))) } } ================================================ FILE: packages/engine/src/sql2/udfs/lix_empty_blob.rs ================================================ use std::any::Any; use datafusion::arrow::datatypes::DataType; use datafusion::common::{Result, ScalarValue}; use datafusion::logical_expr::{ ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility, }; #[derive(Debug, Clone, PartialEq, Eq, Hash)] pub(super) struct LixEmptyBlob; impl ScalarUDFImpl for LixEmptyBlob { fn as_any(&self) -> &dyn Any { self } fn name(&self) -> &str { "lix_empty_blob" } fn signature(&self) -> &Signature { static SIGNATURE: std::sync::LazyLock<Signature> = std::sync::LazyLock::new(|| Signature::nullary(Volatility::Immutable)); &SIGNATURE } fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> { Ok(DataType::Binary) } fn invoke_with_args(&self, _args: ScalarFunctionArgs)
-> Result { Ok(ColumnarValue::Scalar(ScalarValue::Binary(Some(Vec::new())))) } } #[cfg(test)] mod tests { use super::super::test_support::single_binary; #[tokio::test] async fn returns_empty_binary_value() { assert_eq!( single_binary("SELECT lix_empty_blob()").await, Some(Vec::new()) ); } } ================================================ FILE: packages/engine/src/sql2/udfs/lix_json.rs ================================================ use std::any::Any; use std::sync::Arc; use datafusion::arrow::array::{Array, StringArray}; use datafusion::arrow::datatypes::{DataType, FieldRef}; use datafusion::common::{plan_err, DataFusionError, Result, ScalarValue}; use datafusion::logical_expr::{ ColumnarValue, ReturnFieldArgs, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility, }; use serde_json::Value as JsonValue; use crate::sql2::result_metadata::json_field; use super::common::{scalar_inputs, text_like_value}; #[derive(Debug, Clone, PartialEq, Eq, Hash)] pub(super) struct LixJson; impl ScalarUDFImpl for LixJson { fn as_any(&self) -> &dyn Any { self } fn name(&self) -> &str { "lix_json" } fn signature(&self) -> &Signature { static SIGNATURE: std::sync::LazyLock = std::sync::LazyLock::new(|| Signature::any(1, Volatility::Immutable)); &SIGNATURE } fn return_type(&self, _arg_types: &[DataType]) -> Result { Ok(DataType::Utf8) } fn return_field_from_args(&self, _args: ReturnFieldArgs) -> Result { Ok(Arc::new(json_field(self.name(), true))) } fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result { if args.args.len() != 1 { return plan_err!("lix_json requires exactly 1 argument"); } let scalar_inputs = scalar_inputs(&args.args); let arrays = ColumnarValue::values_to_arrays(&args.args)?; let input = &arrays[0]; let len = input.len(); let mut values = Vec::with_capacity(len); for row in 0..len { values.push(json_value(input.as_ref(), row)?); } if scalar_inputs { Ok(ColumnarValue::Scalar(ScalarValue::Utf8( values.into_iter().next().flatten(), ))) } else { Ok(ColumnarValue::Array(Arc::new(StringArray::from(values)))) } } } fn json_value(array: &dyn Array, row: usize) -> Result> { if matches!(array.data_type(), DataType::Null) { return Ok(Some("null".to_string())); } let Some(raw) = text_like_value(array, row)? 
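// A SQL NULL input is treated as the JSON value null rather than an error.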
else { return Ok(Some("null".to_string())); }; let parsed = serde_json::from_str::(&raw).map_err(|error| { DataFusionError::Execution(format!( "lix_json() expected valid JSON text, got error: {error}" )) })?; Ok(Some(serde_json::to_string(&parsed).map_err(|error| { DataFusionError::Execution(format!("lix_json() could not render JSON: {error}")) })?)) } #[cfg(test)] mod tests { use super::super::test_support::single_text; #[tokio::test] async fn canonicalizes_json_text() { assert_eq!( single_text("SELECT lix_json('{ \"name\" : \"Ada\" }')").await, Some("{\"name\":\"Ada\"}".to_string()) ); } #[tokio::test] async fn null_input_returns_json_null() { assert_eq!( single_text("SELECT lix_json(NULL)").await, Some("null".to_string()) ); } } ================================================ FILE: packages/engine/src/sql2/udfs/lix_json_get.rs ================================================ use std::any::Any; use std::sync::Arc; use datafusion::arrow::array::StringArray; use datafusion::arrow::datatypes::{DataType, FieldRef}; use datafusion::common::{plan_err, Result, ScalarValue}; use datafusion::logical_expr::{ ColumnarValue, ReturnFieldArgs, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility, }; use serde_json::Value as JsonValue; use crate::sql2::result_metadata::json_field; use super::common::{extract_json_path, json_json_value, scalar_inputs}; #[derive(Debug, Clone, PartialEq, Eq, Hash)] pub(super) struct LixJsonGet { signature: Signature, } impl LixJsonGet { pub(super) fn new() -> Self { Self { signature: Signature::variadic_any(Volatility::Immutable), } } } impl ScalarUDFImpl for LixJsonGet { fn as_any(&self) -> &dyn Any { self } fn name(&self) -> &str { "lix_json_get" } fn signature(&self) -> &Signature { &self.signature } fn return_type(&self, _arg_types: &[DataType]) -> Result { Ok(DataType::Utf8) } fn return_field_from_args(&self, _args: ReturnFieldArgs) -> Result { Ok(Arc::new(json_field(self.name(), true))) } fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result { if args.args.len() < 2 { return plan_err!("lix_json_get requires at least 2 arguments"); } let scalar_inputs = scalar_inputs(&args.args); let arrays = ColumnarValue::values_to_arrays(&args.args)?; let len = arrays.first().map(|array| array.len()).unwrap_or(1); let mut values = Vec::with_capacity(len); for row in 0..len { values.push(match extract_json_path(self.name(), &arrays, row)? 
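// Missing paths and JSON null map to SQL NULL; other values are re-serialized as JSON text.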
{ None | Some(JsonValue::Null) => None, Some(other) => Some(json_json_value(&other)?), }); } if scalar_inputs { Ok(ColumnarValue::Scalar(ScalarValue::Utf8( values.into_iter().next().flatten(), ))) } else { Ok(ColumnarValue::Array(Arc::new(StringArray::from(values)))) } } } #[cfg(test)] mod tests { use super::super::test_support::single_text; #[tokio::test] async fn returns_json_representation() { assert_eq!( single_text("SELECT lix_json_get('{\"name\":\"Ada\"}', 'name')").await, Some("\"Ada\"".to_string()) ); assert_eq!( single_text("SELECT lix_json_get('{\"tags\":[\"db\"]}', 'tags')").await, Some("[\"db\"]".to_string()) ); } #[tokio::test] async fn missing_path_returns_null() { assert_eq!( single_text("SELECT lix_json_get('{\"name\":\"Ada\"}', 'missing')").await, None ); } } ================================================ FILE: packages/engine/src/sql2/udfs/lix_json_get_text.rs ================================================ use std::any::Any; use std::sync::Arc; use datafusion::arrow::array::StringArray; use datafusion::arrow::datatypes::DataType; use datafusion::common::{plan_err, Result, ScalarValue}; use datafusion::logical_expr::{ ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility, }; use serde_json::Value as JsonValue; use super::common::{extract_json_path, json_text_value, scalar_inputs}; #[derive(Debug, Clone, PartialEq, Eq, Hash)] pub(super) struct LixJsonGetText { signature: Signature, } impl LixJsonGetText { pub(super) fn new() -> Self { Self { signature: Signature::variadic_any(Volatility::Immutable), } } } impl ScalarUDFImpl for LixJsonGetText { fn as_any(&self) -> &dyn Any { self } fn name(&self) -> &str { "lix_json_get_text" } fn signature(&self) -> &Signature { &self.signature } fn return_type(&self, _arg_types: &[DataType]) -> Result { Ok(DataType::Utf8) } fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result { if args.args.len() < 2 { return plan_err!("lix_json_get_text requires at least 2 arguments"); } let scalar_inputs = scalar_inputs(&args.args); let arrays = ColumnarValue::values_to_arrays(&args.args)?; let len = arrays.first().map(|array| array.len()).unwrap_or(1); let mut values = Vec::with_capacity(len); for row in 0..len { values.push(match extract_json_path(self.name(), &arrays, row)? 
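// Unlike lix_json_get, strings and booleans are unwrapped to plain text instead of JSON-encoded.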
{ None | Some(JsonValue::Null) => None, Some(JsonValue::Bool(value)) => Some(if value { "true".to_string() } else { "false".to_string() }), Some(JsonValue::String(value)) => Some(value), Some(other) => Some(json_text_value(&other)?), }); } if scalar_inputs { Ok(ColumnarValue::Scalar(ScalarValue::Utf8( values.into_iter().next().flatten(), ))) } else { Ok(ColumnarValue::Array(Arc::new(StringArray::from(values)))) } } } #[cfg(test)] mod tests { use super::super::test_support::single_text; #[tokio::test] async fn returns_unwrapped_text() { assert_eq!( single_text("SELECT lix_json_get_text('{\"name\":\"Ada\"}', 'name')").await, Some("Ada".to_string()) ); assert_eq!( single_text("SELECT lix_json_get_text('{\"active\":true}', 'active')").await, Some("true".to_string()) ); } #[tokio::test] async fn missing_path_returns_null() { assert_eq!( single_text("SELECT lix_json_get_text('{\"name\":\"Ada\"}', 'missing')").await, None ); } } ================================================ FILE: packages/engine/src/sql2/udfs/lix_text_decode.rs ================================================ use std::any::Any; use std::sync::Arc; use datafusion::arrow::array::StringArray; use datafusion::arrow::datatypes::DataType; use datafusion::common::{plan_err, Result, ScalarValue}; use datafusion::logical_expr::{ ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility, }; use super::common::{decode_utf8_value, scalar_inputs, validate_utf8_encoding_arg}; #[derive(Debug, Clone, PartialEq, Eq, Hash)] pub(super) struct LixTextDecode { signature: Signature, } impl LixTextDecode { pub(super) fn new() -> Self { Self { signature: Signature::one_of( vec![Signature::any(1, Volatility::Immutable).type_signature], Volatility::Immutable, ), } } } impl ScalarUDFImpl for LixTextDecode { fn as_any(&self) -> &dyn Any { self } fn name(&self) -> &str { "lix_text_decode" } fn signature(&self) -> &Signature { &self.signature } fn return_type(&self, _arg_types: &[DataType]) -> Result { Ok(DataType::Utf8) } fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result { if !(1..=2).contains(&args.args.len()) { return plan_err!("lix_text_decode requires 1 or 2 arguments"); } validate_utf8_encoding_arg(self.name(), args.args.get(1))?; let scalar_inputs = scalar_inputs(&args.args); let arrays = ColumnarValue::values_to_arrays(&args.args)?; let input = &arrays[0]; let len = input.len(); let mut values = Vec::with_capacity(len); for row in 0..len { values.push(decode_utf8_value(input.as_ref(), row)?); } if scalar_inputs { Ok(ColumnarValue::Scalar(ScalarValue::Utf8( values.into_iter().next().flatten(), ))) } else { Ok(ColumnarValue::Array(Arc::new(StringArray::from(values)))) } } } #[cfg(test)] mod tests { use super::super::test_support::single_text; #[tokio::test] async fn decodes_utf8_binary_to_text() { assert_eq!( single_text("SELECT lix_text_decode(X'416461')").await, Some("Ada".to_string()) ); } } ================================================ FILE: packages/engine/src/sql2/udfs/lix_text_encode.rs ================================================ use std::any::Any; use datafusion::arrow::datatypes::DataType; use datafusion::common::{plan_err, Result, ScalarValue}; use datafusion::logical_expr::{ ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility, }; use super::common::{ array_ref, binary_array_from_owned, encode_utf8_value, scalar_inputs, validate_utf8_encoding_arg, }; #[derive(Debug, Clone, PartialEq, Eq, Hash)] pub(super) struct LixTextEncode { signature: Signature, } impl LixTextEncode { pub(super) fn 
new() -> Self { Self { signature: Signature::one_of( vec![Signature::any(1, Volatility::Immutable).type_signature], Volatility::Immutable, ), } } } impl ScalarUDFImpl for LixTextEncode { fn as_any(&self) -> &dyn Any { self } fn name(&self) -> &str { "lix_text_encode" } fn signature(&self) -> &Signature { &self.signature } fn return_type(&self, _arg_types: &[DataType]) -> Result { Ok(DataType::Binary) } fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result { if !(1..=2).contains(&args.args.len()) { return plan_err!("lix_text_encode requires 1 or 2 arguments"); } validate_utf8_encoding_arg(self.name(), args.args.get(1))?; let scalar_inputs = scalar_inputs(&args.args); let arrays = ColumnarValue::values_to_arrays(&args.args)?; let input = &arrays[0]; let len = input.len(); let mut values = Vec::with_capacity(len); for row in 0..len { values.push(encode_utf8_value(input.as_ref(), row)?); } if scalar_inputs { Ok(ColumnarValue::Scalar(ScalarValue::Binary( values.into_iter().next().flatten(), ))) } else { Ok(ColumnarValue::Array(array_ref(binary_array_from_owned( &values, )))) } } } #[cfg(test)] mod tests { use super::super::test_support::single_binary; #[tokio::test] async fn encodes_utf8_text_to_binary() { assert_eq!( single_binary("SELECT lix_text_encode('Ada')").await, Some(b"Ada".to_vec()) ); } } ================================================ FILE: packages/engine/src/sql2/udfs/lix_timestamp.rs ================================================ use std::any::Any; use datafusion::arrow::datatypes::DataType; use datafusion::common::{plan_err, Result, ScalarValue}; use datafusion::logical_expr::{ ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility, }; use crate::functions::FunctionProviderHandle; #[derive(Clone)] pub(super) struct LixTimestamp { pub(super) functions: FunctionProviderHandle, } impl PartialEq for LixTimestamp { fn eq(&self, _other: &Self) -> bool { true } } impl Eq for LixTimestamp {} impl std::hash::Hash for LixTimestamp { fn hash(&self, state: &mut H) { self.name().hash(state); } } impl std::fmt::Debug for LixTimestamp { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixTimestamp").finish() } } impl ScalarUDFImpl for LixTimestamp { fn as_any(&self) -> &dyn Any { self } fn name(&self) -> &str { "lix_timestamp" } fn signature(&self) -> &Signature { static SIGNATURE: std::sync::LazyLock = std::sync::LazyLock::new(|| Signature::nullary(Volatility::Volatile)); &SIGNATURE } fn return_type(&self, _arg_types: &[DataType]) -> Result { Ok(DataType::Utf8) } fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result { if !args.args.is_empty() { return plan_err!("lix_timestamp requires no arguments"); } Ok(ColumnarValue::Scalar(ScalarValue::Utf8(Some( self.functions.call_timestamp(), )))) } } #[cfg(test)] mod tests { use super::super::test_support::single_text; #[tokio::test] async fn returns_timestamp_text() { let value = single_text("SELECT lix_timestamp()") .await .expect("timestamp should not be null"); assert!(!value.is_empty()); } } ================================================ FILE: packages/engine/src/sql2/udfs/lix_uuid_v7.rs ================================================ use std::any::Any; use datafusion::arrow::datatypes::DataType; use datafusion::common::{plan_err, Result, ScalarValue}; use datafusion::logical_expr::{ ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility, }; use crate::functions::FunctionProviderHandle; #[derive(Clone)] pub(super) struct LixUuidV7 { pub(super) functions: 
FunctionProviderHandle, } impl PartialEq for LixUuidV7 { fn eq(&self, _other: &Self) -> bool { true } } impl Eq for LixUuidV7 {} impl std::hash::Hash for LixUuidV7 { fn hash(&self, state: &mut H) { self.name().hash(state); } } impl std::fmt::Debug for LixUuidV7 { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixUuidV7").finish() } } impl ScalarUDFImpl for LixUuidV7 { fn as_any(&self) -> &dyn Any { self } fn name(&self) -> &str { "lix_uuid_v7" } fn signature(&self) -> &Signature { static SIGNATURE: std::sync::LazyLock = std::sync::LazyLock::new(|| Signature::nullary(Volatility::Volatile)); &SIGNATURE } fn return_type(&self, _arg_types: &[DataType]) -> Result { Ok(DataType::Utf8) } fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result { if !args.args.is_empty() { return plan_err!("lix_uuid_v7 requires no arguments"); } Ok(ColumnarValue::Scalar(ScalarValue::Utf8(Some( self.functions.call_uuid_v7(), )))) } } #[cfg(test)] mod tests { use super::super::test_support::single_text; #[tokio::test] async fn returns_uuid_text() { let value = single_text("SELECT lix_uuid_v7()") .await .expect("uuid should not be null"); assert!(!value.is_empty()); } } ================================================ FILE: packages/engine/src/sql2/udfs/mod.rs ================================================ mod common; mod lix_active_version_commit_id; mod lix_empty_blob; mod lix_json; mod lix_json_get; mod lix_json_get_text; mod lix_text_decode; mod lix_text_encode; mod lix_timestamp; mod lix_uuid_v7; mod public_call; use datafusion::execution::context::SessionContext; use datafusion::logical_expr::ScalarUDF; use crate::functions::FunctionProviderHandle; pub(crate) use public_call::validate_public_udf_calls; #[cfg(test)] pub(crate) fn system_sql2_function_provider() -> FunctionProviderHandle { use crate::functions::{FunctionProvider, SharedFunctionProvider, SystemFunctionProvider}; SharedFunctionProvider::new(Box::new(SystemFunctionProvider) as Box) } pub(crate) fn register_sql2_functions( ctx: &SessionContext, functions: FunctionProviderHandle, active_version_commit_id: Option, ) { ctx.register_udf(ScalarUDF::from( lix_active_version_commit_id::LixActiveVersionCommitId::new(active_version_commit_id), )); ctx.register_udf(ScalarUDF::from(lix_json_get::LixJsonGet::new())); ctx.register_udf(ScalarUDF::from(lix_json_get_text::LixJsonGetText::new())); ctx.register_udf(ScalarUDF::from(lix_text_decode::LixTextDecode::new())); ctx.register_udf(ScalarUDF::from(lix_text_encode::LixTextEncode::new())); ctx.register_udf(ScalarUDF::from(lix_json::LixJson)); ctx.register_udf(ScalarUDF::from(lix_empty_blob::LixEmptyBlob)); ctx.register_udf(ScalarUDF::from(lix_uuid_v7::LixUuidV7 { functions: functions.clone(), })); ctx.register_udf(ScalarUDF::from(lix_timestamp::LixTimestamp { functions })); } #[cfg(test)] pub(super) mod test_support { use datafusion::arrow::array::{Array, BinaryArray, StringArray}; use datafusion::prelude::SessionContext; use super::{register_sql2_functions, system_sql2_function_provider}; pub(super) async fn single_text(sql: &str) -> Option { let ctx = SessionContext::new(); register_sql2_functions(&ctx, system_sql2_function_provider(), None); let batches = ctx .sql(sql) .await .expect("query should plan") .collect() .await .expect("query should execute"); let array = batches[0] .column(0) .as_any() .downcast_ref::() .expect("first column should be utf8"); (!array.is_null(0)).then(|| array.value(0).to_string()) } pub(super) async fn single_binary(sql: &str) -> Option> 
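// Test helper: evaluates a single-row query against a bare SessionContext with only the lix UDFs registered.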
{ let ctx = SessionContext::new(); register_sql2_functions(&ctx, system_sql2_function_provider(), None); let batches = ctx .sql(sql) .await .expect("query should plan") .collect() .await .expect("query should execute"); let array = batches[0] .column(0) .as_any() .downcast_ref::() .expect("first column should be binary"); (!array.is_null(0)).then(|| array.value(0).to_vec()) } } ================================================ FILE: packages/engine/src/sql2/udfs/public_call.rs ================================================ use std::ops::ControlFlow; use datafusion::sql::sqlparser::ast::{ Expr, Function, FunctionArg, FunctionArgExpr, FunctionArguments, ObjectNamePart, Statement, Value, Visit, Visitor, }; use datafusion::sql::sqlparser::dialect::GenericDialect; use datafusion::sql::sqlparser::parser::Parser; use crate::LixError; pub(crate) fn validate_public_udf_calls(sql: &str) -> Result<(), LixError> { let statements = Parser::parse_sql(&GenericDialect {}, sql).map_err(|error| { LixError::new( LixError::CODE_PARSE_ERROR, format!("sql2 SQL parse error: {error}"), ) })?; let mut visitor = PublicUdfCallVisitor; match statements.visit(&mut visitor) { ControlFlow::Continue(()) => Ok(()), ControlFlow::Break(error) => Err(*error), } } struct PublicUdfCallVisitor; impl Visitor for PublicUdfCallVisitor { type Break = Box; fn pre_visit_expr(&mut self, expr: &Expr) -> ControlFlow { let Expr::Function(function) = expr else { return ControlFlow::Continue(()); }; match validate_public_function_call(function) { Ok(()) => ControlFlow::Continue(()), Err(error) => ControlFlow::Break(Box::new(error)), } } fn pre_visit_statement(&mut self, statement: &Statement) -> ControlFlow { match statement { Statement::CreateFunction(_) | Statement::DropFunction(_) => ControlFlow::Continue(()), _ => ControlFlow::Continue(()), } } } fn validate_public_function_call(function: &Function) -> Result<(), LixError> { let Some(name) = public_lix_function_name(function) else { return Ok(()); }; let arity = function_arity(&function.args); match name { "lix_json" => expect_exact_arity(name, arity, 1), "lix_empty_blob" => expect_exact_arity(name, arity, 0), "lix_timestamp" => expect_exact_arity(name, arity, 0), "lix_uuid_v7" => expect_exact_arity(name, arity, 0), "lix_active_version_commit_id" => expect_exact_arity(name, arity, 0), "lix_text_encode" | "lix_text_decode" => { expect_arity_range(name, arity, 1, 2)?; validate_literal_utf8_encoding(name, &function.args) } _ => Ok(()), } } fn public_lix_function_name(function: &Function) -> Option<&'static str> { let part = function.name.0.last()?; let ident = match part { ObjectNamePart::Identifier(ident) => ident.value.as_str(), ObjectNamePart::Function(_) => return None, }; match ident.to_ascii_lowercase().as_str() { "lix_json" => Some("lix_json"), "lix_empty_blob" => Some("lix_empty_blob"), "lix_timestamp" => Some("lix_timestamp"), "lix_uuid_v7" => Some("lix_uuid_v7"), "lix_active_version_commit_id" => Some("lix_active_version_commit_id"), "lix_text_encode" => Some("lix_text_encode"), "lix_text_decode" => Some("lix_text_decode"), _ => None, } } fn function_arity(args: &FunctionArguments) -> usize { match args { FunctionArguments::None => 0, FunctionArguments::Subquery(_) => 1, FunctionArguments::List(list) => list.args.len(), } } fn expect_exact_arity(name: &str, actual: usize, expected: usize) -> Result<(), LixError> { if actual == expected { return Ok(()); } let expectation = if expected == 0 { "no arguments".to_string() } else if expected == 1 { "exactly 1 argument".to_string() } 
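// The expectation string feeds the LIX_INVALID_PARAM error returned below.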
else { format!("exactly {expected} arguments") }; Err(invalid_param(format!("{name} requires {expectation}"))) } fn expect_arity_range(name: &str, actual: usize, min: usize, max: usize) -> Result<(), LixError> { if (min..=max).contains(&actual) { return Ok(()); } Err(invalid_param(format!( "{name} requires {min} or {max} arguments" ))) } fn validate_literal_utf8_encoding(name: &str, args: &FunctionArguments) -> Result<(), LixError> { let Some(encoding) = function_arg(args, 1) else { return Ok(()); }; let Some(value) = string_literal_arg(encoding) else { return Ok(()); }; let normalized = value.trim().to_ascii_uppercase().replace('-', ""); if normalized == "UTF8" { Ok(()) } else { Err(invalid_param(format!( "{name}() only supports UTF8 encoding, got '{value}'" ))) } } fn function_arg(args: &FunctionArguments, index: usize) -> Option<&FunctionArg> { match args { FunctionArguments::List(list) => list.args.get(index), _ => None, } } fn string_literal_arg(arg: &FunctionArg) -> Option<&str> { let expr = match arg { FunctionArg::Unnamed(FunctionArgExpr::Expr(expr)) | FunctionArg::Named { arg: FunctionArgExpr::Expr(expr), .. } | FunctionArg::ExprNamed { arg: FunctionArgExpr::Expr(expr), .. } => expr, _ => return None, }; let Expr::Value(value) = expr else { return None; }; match &value.value { Value::SingleQuotedString(value) | Value::DoubleQuotedString(value) | Value::TripleSingleQuotedString(value) | Value::TripleDoubleQuotedString(value) | Value::EscapedStringLiteral(value) | Value::UnicodeStringLiteral(value) | Value::NationalStringLiteral(value) | Value::SingleQuotedRawStringLiteral(value) | Value::DoubleQuotedRawStringLiteral(value) | Value::TripleSingleQuotedRawStringLiteral(value) | Value::TripleDoubleQuotedRawStringLiteral(value) => Some(value.as_str()), Value::DollarQuotedString(value) => Some(value.value.as_str()), _ => None, } } fn invalid_param(message: impl Into) -> LixError { LixError::new(LixError::CODE_INVALID_PARAM, message) } #[cfg(test)] mod tests { use super::validate_public_udf_calls; #[test] fn rejects_lix_udf_wrong_arity_as_public_invalid_param() { let error = validate_public_udf_calls("SELECT lix_uuid_v7('extra')") .expect_err("wrong arity should be rejected"); assert_eq!(error.code, "LIX_INVALID_PARAM"); assert!(error.message.contains("lix_uuid_v7 requires no arguments")); } #[test] fn rejects_unsupported_literal_encoding_as_public_invalid_param() { let error = validate_public_udf_calls("SELECT lix_text_encode('Ada', 'base64')") .expect_err("unsupported encoding should be rejected"); assert_eq!(error.code, "LIX_INVALID_PARAM"); assert!(error .message .contains("lix_text_encode() only supports UTF8 encoding")); } #[test] fn accepts_valid_public_lix_udf_calls() { validate_public_udf_calls( "SELECT lix_json('{\"x\":1}'), lix_text_decode(X'416461', 'utf-8')", ) .expect("valid calls should pass public validation"); } } ================================================ FILE: packages/engine/src/sql2/version_provider.rs ================================================ use std::any::Any; use std::sync::Arc; use async_trait::async_trait; use datafusion::arrow::array::{ArrayRef, BooleanArray, StringArray, UInt64Array}; use datafusion::arrow::compute::{and, filter_record_batch}; use datafusion::arrow::datatypes::{DataType, Field, Schema, SchemaRef}; use datafusion::arrow::record_batch::RecordBatch; use datafusion::catalog::{Session, TableProvider}; use datafusion::common::{not_impl_err, DFSchema, DataFusionError, Result, ScalarValue}; use datafusion::datasource::TableType; use 
datafusion::execution::TaskContext; use datafusion::logical_expr::dml::InsertOp; use datafusion::logical_expr::{Expr, TableProviderFilterPushDown}; use datafusion::physical_expr::{create_physical_expr, EquivalenceProperties, PhysicalExpr}; use datafusion::physical_plan::execution_plan::{Boundedness, EmissionType, PlanProperties}; use datafusion::physical_plan::stream::RecordBatchStreamAdapter; use datafusion::physical_plan::{ DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, SendableRecordBatchStream, }; use futures_util::{stream, TryStreamExt}; use serde_json::Value as JsonValue; use crate::live_state::{ LiveStateFilter, LiveStateReader, LiveStateScanRequest, MaterializedLiveStateRow, }; use crate::sql2::dml::{InsertExec, InsertSink}; use crate::sql2::record_batch::record_batch_with_row_count; use crate::sql2::write_normalization::{InsertCell, SqlCell, UpdateAssignmentValues}; use crate::sql2::{ SqlWriteContext, WriteAccess, WriteContextLiveStateReader, WriteContextVersionRefReader, }; use crate::transaction::types::{ LogicalPrimaryKey, TransactionWrite, TransactionWriteMode, TransactionWriteOperation, TransactionWriteOrigin, TransactionWriteRow, }; use crate::version::{ version_descriptor_stage_row, version_descriptor_tombstone_row, version_ref_stage_row, version_ref_tombstone_row, VersionRefReader, }; use crate::LixError; use crate::GLOBAL_VERSION_ID; pub(crate) async fn register_lix_version_provider( session: &datafusion::prelude::SessionContext, live_state: Arc<dyn LiveStateReader>, version_ref: Arc<dyn VersionRefReader>, ) -> Result<(), LixError> { session .register_table( "lix_version", Arc::new(LixVersionProvider::new(live_state, version_ref)), ) .map_err(datafusion_error_to_lix_error)?; Ok(()) } pub(crate) async fn register_lix_version_write_provider( session: &datafusion::prelude::SessionContext, write_ctx: SqlWriteContext, ) -> Result<(), LixError> { session .register_table( "lix_version", Arc::new(LixVersionProvider::with_write(write_ctx)), ) .map_err(datafusion_error_to_lix_error)?; Ok(()) } struct LixVersionProvider { schema: SchemaRef, live_state: Arc<dyn LiveStateReader>, version_ref: Arc<dyn VersionRefReader>, write_access: WriteAccess, } impl std::fmt::Debug for LixVersionProvider { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixVersionProvider").finish() } } impl LixVersionProvider { fn new(live_state: Arc<dyn LiveStateReader>, version_ref: Arc<dyn VersionRefReader>) -> Self { Self { schema: lix_version_schema(), live_state, version_ref, write_access: WriteAccess::read_only(), } } fn with_write(write_ctx: SqlWriteContext) -> Self { let live_state = Arc::new(WriteContextLiveStateReader::new(write_ctx.clone())); let version_ref = Arc::new(WriteContextVersionRefReader::new(write_ctx.clone())); Self { schema: lix_version_schema(), live_state, version_ref, write_access: WriteAccess::write(write_ctx), } } } #[async_trait] impl TableProvider for LixVersionProvider { fn as_any(&self) -> &dyn Any { self } fn schema(&self) -> SchemaRef { Arc::clone(&self.schema) } fn table_type(&self) -> TableType { TableType::Base } fn supports_filters_pushdown( &self, filters: &[&Expr], ) -> Result<Vec<TableProviderFilterPushDown>> { Ok(filters .iter() .map(|_| TableProviderFilterPushDown::Unsupported) .collect()) } async fn scan( &self, _state: &dyn Session, projection: Option<&Vec<usize>>, _filters: &[Expr], _limit: Option<usize>, ) -> Result<Arc<dyn ExecutionPlan>> { Ok(Arc::new(LixVersionScanExec::new( Arc::clone(&self.live_state), Arc::clone(&self.version_ref), projected_schema(&self.schema, projection), projection.cloned(), ))) } async fn insert_into( &self, _state: &dyn Session, input: Arc<dyn ExecutionPlan>, insert_op: InsertOp, ) -> Result<Arc<dyn ExecutionPlan>> { if
insert_op != InsertOp::Append { return not_impl_err!("{insert_op} not implemented for lix_version yet"); } let write_ctx = self.write_access.require_write("INSERT into lix_version")?; let sink = LixVersionInsertSink::new(input.schema(), write_ctx); Ok(Arc::new(InsertExec::new(input, Arc::new(sink)))) } async fn delete_from( &self, state: &dyn Session, filters: Vec, ) -> Result> { let write_ctx = self.write_access.require_write("DELETE FROM lix_version")?; let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?; let physical_filters = filters .iter() .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props())) .collect::>>()?; Ok(Arc::new(LixVersionDeleteExec::new( write_ctx, Arc::clone(&self.live_state), Arc::clone(&self.version_ref), Arc::clone(&self.schema), physical_filters, ))) } async fn update( &self, state: &dyn Session, assignments: Vec<(String, Expr)>, filters: Vec, ) -> Result> { let write_ctx = self.write_access.require_write("UPDATE lix_version")?; validate_lix_version_update_assignments(&assignments)?; let df_schema = DFSchema::try_from(Arc::clone(&self.schema))?; let physical_assignments = assignments .iter() .map(|(column_name, expr)| { Ok(( column_name.clone(), create_physical_expr(expr, &df_schema, state.execution_props())?, )) }) .collect::>>()?; let physical_filters = filters .iter() .map(|expr| create_physical_expr(expr, &df_schema, state.execution_props())) .collect::>>()?; Ok(Arc::new(LixVersionUpdateExec::new( write_ctx, Arc::clone(&self.live_state), Arc::clone(&self.version_ref), Arc::clone(&self.schema), physical_assignments, physical_filters, ))) } } struct LixVersionInsertSink { write_ctx: SqlWriteContext, } impl std::fmt::Debug for LixVersionInsertSink { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixVersionInsertSink").finish() } } impl LixVersionInsertSink { fn new(_schema: SchemaRef, write_ctx: SqlWriteContext) -> Self { Self { write_ctx } } } impl DisplayAs for LixVersionInsertSink { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!(f, "LixVersionInsertSink") } DisplayFormatType::TreeRender => write!(f, "LixVersionInsertSink"), } } } #[async_trait] impl InsertSink for LixVersionInsertSink { async fn write_batches( &self, batches: Vec, _context: &Arc, ) -> Result { let default_commit_id = self .write_ctx .load_version_head(&self.write_ctx.active_version_id()) .await .map_err(lix_error_to_datafusion_error)? .ok_or_else(|| { DataFusionError::Execution( "INSERT into lix_version could not resolve active version head".to_string(), ) })?; let mut rows = Vec::new(); let mut count = 0u64; for batch in batches { let version_rows = version_insert_rows_from_batch(&batch, &default_commit_id)?; count = count .checked_add(u64::try_from(version_rows.len()).map_err(|_| { DataFusionError::Execution("INSERT row count overflow".to_string()) })?) 
.ok_or_else(|| DataFusionError::Execution("INSERT row count overflow".into()))?; rows.extend(version_rows.into_iter().flat_map(version_insert_stage_rows)); } if !rows.is_empty() { self.write_ctx .stage_write(TransactionWrite::Rows { mode: TransactionWriteMode::Insert, rows, }) .await .map_err(lix_error_to_datafusion_error)?; } Ok(count) } } struct LixVersionDeleteExec { write_ctx: SqlWriteContext, active_version_id: String, live_state: Arc, version_ref: Arc, table_schema: SchemaRef, filters: Vec>, result_schema: SchemaRef, properties: Arc, } impl std::fmt::Debug for LixVersionDeleteExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixVersionDeleteExec").finish() } } impl LixVersionDeleteExec { fn new( write_ctx: SqlWriteContext, live_state: Arc, version_ref: Arc, table_schema: SchemaRef, filters: Vec>, ) -> Self { let result_schema = dml_count_schema(); let properties = dml_plan_properties(Arc::clone(&result_schema)); let active_version_id = write_ctx.active_version_id(); Self { write_ctx, active_version_id, live_state, version_ref, table_schema, filters, result_schema, properties: Arc::new(properties), } } } impl DisplayAs for LixVersionDeleteExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!(f, "LixVersionDeleteExec(filters={})", self.filters.len()) } DisplayFormatType::TreeRender => write!(f, "LixVersionDeleteExec"), } } } impl ExecutionPlan for LixVersionDeleteExec { fn name(&self) -> &str { "LixVersionDeleteExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "LixVersionDeleteExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "LixVersionDeleteExec only exposes one partition, got {partition}" ))); } let write_ctx = self.write_ctx.clone(); let active_version_id = self.active_version_id.clone(); let live_state = Arc::clone(&self.live_state); let version_ref = Arc::clone(&self.version_ref); let filters = self.filters.clone(); let table_schema = Arc::clone(&self.table_schema); let result_schema = Arc::clone(&self.result_schema); let stream_schema = Arc::clone(&result_schema); let stream = stream::once(async move { let rows = load_version_rows(live_state, version_ref) .await .map_err(lix_error_to_datafusion_error)?; let source_batch = version_record_batch(&version_projection_for_scan(None), &rows)?; let matched_batch = filter_version_batch(source_batch, &filters)?; let version_rows = version_rows_from_batch(&matched_batch)?; reject_protected_version_deletes(&version_rows, &active_version_id)?; let count = u64::try_from(version_rows.len()) .map_err(|_| DataFusionError::Execution("DELETE row count overflow".to_string()))?; let rows = version_rows .into_iter() .flat_map(version_tombstone_rows) .collect::>(); if !rows.is_empty() { write_ctx .stage_write(TransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows, }) .await .map_err(lix_error_to_datafusion_error)?; } let _ = table_schema; Ok::<_, DataFusionError>(stream::iter(vec![Ok::( dml_count_batch(Arc::clone(&stream_schema), count)?, )])) }) .try_flatten(); Ok(Box::pin(RecordBatchStreamAdapter::new( 
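// How a delete flows through this stream, for orientation (the SQL line is
// illustrative; the protection rules and staged rows come from the code above):
//
//     DELETE FROM lix_version WHERE id = 'v-feature'
//
// loads every version row, evaluates the WHERE predicate over them, refuses to
// touch the global version or the currently active one, and then stages two
// replace-mode rows per match (a descriptor tombstone and a version-ref
// tombstone) before reporting the affected-row count.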
result_schema, stream, ))) } } struct LixVersionUpdateExec { write_ctx: SqlWriteContext, live_state: Arc, version_ref: Arc, table_schema: SchemaRef, assignments: Vec<(String, Arc)>, filters: Vec>, result_schema: SchemaRef, properties: Arc, } impl std::fmt::Debug for LixVersionUpdateExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixVersionUpdateExec").finish() } } impl LixVersionUpdateExec { fn new( write_ctx: SqlWriteContext, live_state: Arc, version_ref: Arc, table_schema: SchemaRef, assignments: Vec<(String, Arc)>, filters: Vec>, ) -> Self { let result_schema = dml_count_schema(); let properties = dml_plan_properties(Arc::clone(&result_schema)); Self { write_ctx, live_state, version_ref, table_schema, assignments, filters, result_schema, properties: Arc::new(properties), } } } impl DisplayAs for LixVersionUpdateExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!( f, "LixVersionUpdateExec(assignments={}, filters={})", self.assignments.len(), self.filters.len() ) } DisplayFormatType::TreeRender => write!(f, "LixVersionUpdateExec"), } } } impl ExecutionPlan for LixVersionUpdateExec { fn name(&self) -> &str { "LixVersionUpdateExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "LixVersionUpdateExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "LixVersionUpdateExec only exposes one partition, got {partition}" ))); } let write_ctx = self.write_ctx.clone(); let live_state = Arc::clone(&self.live_state); let version_ref = Arc::clone(&self.version_ref); let table_schema = Arc::clone(&self.table_schema); let assignments = self.assignments.clone(); let filters = self.filters.clone(); let result_schema = Arc::clone(&self.result_schema); let stream_schema = Arc::clone(&result_schema); let stream = stream::once(async move { let rows = load_version_rows(live_state, version_ref) .await .map_err(lix_error_to_datafusion_error)?; let source_batch = version_record_batch(&version_projection_for_scan(None), &rows)?; let matched_batch = filter_version_batch(source_batch, &filters)?; let version_rows = version_update_rows_from_batch(&matched_batch, &assignments, &table_schema)?; let count = u64::try_from(version_rows.len()) .map_err(|_| DataFusionError::Execution("UPDATE row count overflow".to_string()))?; let rows = version_rows .into_iter() .flat_map(version_update_stage_rows) .collect::>(); if !rows.is_empty() { write_ctx .stage_write(TransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows, }) .await .map_err(lix_error_to_datafusion_error)?; } Ok::<_, DataFusionError>(stream::iter(vec![Ok::( dml_count_batch(Arc::clone(&stream_schema), count)?, )])) }) .try_flatten(); Ok(Box::pin(RecordBatchStreamAdapter::new( result_schema, stream, ))) } } struct LixVersionScanExec { live_state: Arc, version_ref: Arc, schema: SchemaRef, projection: Option>, properties: Arc, } impl std::fmt::Debug for LixVersionScanExec { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("LixVersionScanExec").finish() } } impl LixVersionScanExec { fn new( live_state: Arc, 
version_ref: Arc, schema: SchemaRef, projection: Option>, ) -> Self { let properties = PlanProperties::new( EquivalenceProperties::new(schema.clone()), Partitioning::UnknownPartitioning(1), EmissionType::Incremental, Boundedness::Bounded, ); Self { live_state, version_ref, schema, projection, properties: Arc::new(properties), } } } impl DisplayAs for LixVersionScanExec { fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match t { DisplayFormatType::Default | DisplayFormatType::Verbose => { write!(f, "LixVersionScanExec") } DisplayFormatType::TreeRender => write!(f, "LixVersionScanExec"), } } } impl ExecutionPlan for LixVersionScanExec { fn name(&self) -> &str { "LixVersionScanExec" } fn as_any(&self) -> &dyn Any { self } fn properties(&self) -> &Arc { &self.properties } fn children(&self) -> Vec<&Arc> { Vec::new() } fn with_new_children( self: Arc, children: Vec>, ) -> Result> { if !children.is_empty() { return Err(DataFusionError::Execution( "LixVersionScanExec does not accept children".to_string(), )); } Ok(self) } fn execute( &self, partition: usize, _context: Arc, ) -> Result { if partition != 0 { return Err(DataFusionError::Execution(format!( "LixVersionScanExec only exposes one partition, got {partition}" ))); } let live_state = Arc::clone(&self.live_state); let version_ref = Arc::clone(&self.version_ref); let projection = version_projection_for_scan(self.projection.as_ref()); let schema = Arc::clone(&self.schema); let stream = stream::once(async move { let rows = load_version_rows(live_state, version_ref) .await .map_err(lix_error_to_datafusion_error)?; version_record_batch(&projection, &rows) }); Ok(Box::pin(RecordBatchStreamAdapter::new(schema, stream))) } } #[derive(Debug, Clone, PartialEq, Eq)] struct VersionRow { id: String, name: String, hidden: bool, commit_id: String, } #[derive(Debug, Clone, Copy)] enum VersionColumn { Id, Name, Hidden, CommitId, } async fn load_version_rows( live_state: Arc, version_ref: Arc, ) -> Result, LixError> { let descriptor_rows = live_state .scan_rows(&LiveStateScanRequest { filter: LiveStateFilter { schema_keys: vec!["lix_version_descriptor".to_string()], version_ids: vec![GLOBAL_VERSION_ID.to_string()], ..LiveStateFilter::default() }, projection: Default::default(), limit: None, }) .await?; let mut out = Vec::new(); for descriptor_row in descriptor_rows { let descriptor = parse_descriptor(&descriptor_row)?; let Some(commit_id) = version_ref.load_head_commit_id(&descriptor.id).await? else { continue; }; out.push(VersionRow { commit_id, id: descriptor.id, name: descriptor.name, hidden: descriptor.hidden, }); } Ok(out) } #[derive(Debug, Clone, PartialEq, Eq)] struct VersionDescriptor { id: String, name: String, hidden: bool, } fn parse_descriptor(row: &MaterializedLiveStateRow) -> Result { let snapshot = parse_snapshot(row, "lix_version_descriptor")?; let id = snapshot .get("id") .and_then(JsonValue::as_str) .ok_or_else(|| LixError::new("LIX_ERROR_UNKNOWN", "lix_version_descriptor is missing id"))? .to_string(); let name = snapshot .get("name") .and_then(JsonValue::as_str) .ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "lix_version_descriptor is missing name", ) })? 
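// For orientation: load_version_rows (above) builds each lix_version row by
// scanning lix_version_descriptor entities in the global version and pairing
// every descriptor with its head commit from the version-ref reader; a
// descriptor without a head commit is silently skipped rather than surfaced
// as an error.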
.to_string(); let hidden = snapshot .get("hidden") .and_then(JsonValue::as_bool) .unwrap_or(false); Ok(VersionDescriptor { id, name, hidden }) } fn parse_snapshot(row: &MaterializedLiveStateRow, schema_key: &str) -> Result { let snapshot_content = row.snapshot_content.as_deref().ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", format!("{schema_key} row is missing snapshot_content"), ) })?; serde_json::from_str(snapshot_content).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("{schema_key} snapshot_content is invalid JSON: {error}"), ) }) } fn validate_lix_version_update_assignments(assignments: &[(String, Expr)]) -> Result<()> { for (column_name, _) in assignments { match column_name.as_str() { "name" | "hidden" | "commit_id" => {} "id" => { return Err(DataFusionError::Execution( "UPDATE lix_version cannot change immutable column 'id'".to_string(), )); } other => { return Err(DataFusionError::Plan(format!( "UPDATE lix_version failed: column '{other}' does not exist" ))); } } } Ok(()) } fn filter_version_batch( batch: RecordBatch, filters: &[Arc], ) -> Result { let Some(mask) = evaluate_version_filters(&batch, filters)? else { return Ok(batch); }; Ok(filter_record_batch(&batch, &mask)?) } fn evaluate_version_filters( batch: &RecordBatch, filters: &[Arc], ) -> Result> { if filters.is_empty() { return Ok(None); } let mut combined_mask: Option = None; for filter in filters { let result = filter.evaluate(batch)?; let array = result.into_array(batch.num_rows())?; let bool_array = array .as_any() .downcast_ref::() .ok_or_else(|| { DataFusionError::Execution("lix_version filter was not boolean".to_string()) })?; let normalized = bool_array .iter() .map(|value| Some(value == Some(true))) .collect::(); combined_mask = Some(match combined_mask { Some(existing) => and(&existing, &normalized)?, None => normalized, }); } Ok(combined_mask) } fn version_insert_rows_from_batch( batch: &RecordBatch, default_commit_id: &str, ) -> Result> { (0..batch.num_rows()) .map(|row_index| { let id = required_string_value(batch, row_index, "id", "INSERT")?; let name = required_string_value(batch, row_index, "name", "INSERT")?; let hidden = optional_bool_value(batch, row_index, "hidden", "INSERT")?.unwrap_or(false); let commit_id = optional_string_value(batch, row_index, "commit_id", "INSERT")? 
.unwrap_or_else(|| default_commit_id.to_string()); Ok(VersionRow { id, name, hidden, commit_id, }) }) .collect() } fn version_rows_from_batch(batch: &RecordBatch) -> Result> { (0..batch.num_rows()) .map(|row_index| { Ok(VersionRow { id: required_string_value(batch, row_index, "id", "DELETE")?, name: required_string_value(batch, row_index, "name", "DELETE")?, hidden: required_bool_value(batch, row_index, "hidden", "DELETE")?, commit_id: required_string_value(batch, row_index, "commit_id", "DELETE")?, }) }) .collect() } fn reject_protected_version_deletes(rows: &[VersionRow], active_version_id: &str) -> Result<()> { for row in rows { if row.id == GLOBAL_VERSION_ID { return Err(DataFusionError::Execution( "DELETE FROM lix_version cannot delete the global version".to_string(), )); } if row.id == active_version_id { return Err(DataFusionError::Execution(format!( "DELETE FROM lix_version cannot delete active version '{}'", row.id ))); } } Ok(()) } fn version_update_rows_from_batch( batch: &RecordBatch, assignments: &[(String, Arc)], table_schema: &SchemaRef, ) -> Result> { let assignment_values = UpdateAssignmentValues::evaluate(batch, assignments)?; (0..batch.num_rows()) .map(|row_index| { Ok(VersionRow { id: required_string_value(batch, row_index, "id", "UPDATE")?, name: update_string_value( batch, &assignment_values, table_schema, row_index, "name", )?, hidden: update_bool_value( batch, &assignment_values, table_schema, row_index, "hidden", )?, commit_id: update_string_value( batch, &assignment_values, table_schema, row_index, "commit_id", )?, }) }) .collect() } fn version_stage_rows( row: VersionRow, origin: Option, ) -> Vec { vec![ with_origin( version_descriptor_stage_row(&row.id, &row.name, row.hidden), origin.clone(), ), with_origin(version_ref_stage_row(&row.id, &row.commit_id), origin), ] } fn version_tombstone_rows(row: VersionRow) -> Vec { let origin = Some(lix_version_origin( TransactionWriteOperation::Delete, &row.id, )); vec![ with_origin(version_descriptor_tombstone_row(&row.id), origin.clone()), with_origin(version_ref_tombstone_row(&row.id), origin), ] } fn version_insert_stage_rows(row: VersionRow) -> Vec { let origin = lix_version_origin(TransactionWriteOperation::Insert, &row.id); version_stage_rows(row, Some(origin)) } fn version_update_stage_rows(row: VersionRow) -> Vec { let origin = lix_version_origin(TransactionWriteOperation::Update, &row.id); version_stage_rows(row, Some(origin)) } fn with_origin( mut row: TransactionWriteRow, origin: Option, ) -> TransactionWriteRow { row.origin = origin; row } fn lix_version_origin( action: TransactionWriteOperation, version_id: &str, ) -> TransactionWriteOrigin { TransactionWriteOrigin { surface: "lix_version".to_string(), operation: action, primary_key: Some(LogicalPrimaryKey { columns: vec!["id".to_string()], values: vec![version_id.to_string()], }), } } fn update_string_value( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, table_schema: &SchemaRef, row_index: usize, column_name: &str, ) -> Result { let column_index = table_schema.index_of(column_name)?; match assignment_values.assigned_or_existing_cell(batch, row_index, column_name)? 
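// Assigned-or-existing resolution, illustrated (the SQL is shown for context
// only):
//
//     UPDATE lix_version SET name = 'renamed' WHERE id = 'v-feature'
//
// replaces `name` from the assignment while `hidden` and `commit_id` are read
// back from the matched row; `id` itself is rejected earlier by
// validate_lix_version_update_assignments as an immutable column.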
{ InsertCell::Omitted => required_string_value(batch, row_index, column_name, "UPDATE"), InsertCell::Provided(SqlCell::Value( ScalarValue::Utf8(Some(value)) | ScalarValue::Utf8View(Some(value)) | ScalarValue::LargeUtf8(Some(value)), )) => Ok(value), InsertCell::Provided(SqlCell::Null) => Err(DataFusionError::Execution(format!( "UPDATE lix_version requires non-null text column '{column_name}'" ))), InsertCell::Provided(SqlCell::Value(other)) => Err(DataFusionError::Execution(format!( "UPDATE lix_version expected text-compatible column '{column_name}', got {other:?}" ))), } .or_else(|error| { if batch.column(column_index).is_null(row_index) { Err(DataFusionError::Execution(format!( "UPDATE lix_version requires non-null text column '{column_name}'" ))) } else { Err(error) } }) } fn update_bool_value( batch: &RecordBatch, assignment_values: &UpdateAssignmentValues, table_schema: &SchemaRef, row_index: usize, column_name: &str, ) -> Result { let column_index = table_schema.index_of(column_name)?; match assignment_values.assigned_or_existing_cell(batch, row_index, column_name)? { InsertCell::Omitted => required_bool_value(batch, row_index, column_name, "UPDATE"), InsertCell::Provided(SqlCell::Value(ScalarValue::Boolean(Some(value)))) => Ok(value), InsertCell::Provided(SqlCell::Null) => Err(DataFusionError::Execution(format!( "UPDATE lix_version requires non-null boolean column '{column_name}'" ))), InsertCell::Provided(SqlCell::Value(other)) => Err(DataFusionError::Execution(format!( "UPDATE lix_version expected boolean column '{column_name}', got {other:?}" ))), } .or_else(|error| { if batch.column(column_index).is_null(row_index) { Err(DataFusionError::Execution(format!( "UPDATE lix_version requires non-null boolean column '{column_name}'" ))) } else { Err(error) } }) } fn required_string_value( batch: &RecordBatch, row_index: usize, column_name: &str, action: &str, ) -> Result { optional_string_value(batch, row_index, column_name, action)?.ok_or_else(|| { DataFusionError::Execution(format!( "{action} lix_version requires non-null text column '{column_name}'" )) }) } fn optional_string_value( batch: &RecordBatch, row_index: usize, column_name: &str, action: &str, ) -> Result> { match optional_scalar_value(batch, row_index, column_name)? { None | Some(ScalarValue::Null) | Some(ScalarValue::Utf8(None)) | Some(ScalarValue::Utf8View(None)) | Some(ScalarValue::LargeUtf8(None)) => Ok(None), Some(ScalarValue::Utf8(Some(value))) | Some(ScalarValue::Utf8View(Some(value))) | Some(ScalarValue::LargeUtf8(Some(value))) => Ok(Some(value)), Some(other) => Err(DataFusionError::Execution(format!( "{action} lix_version expected text-compatible column '{column_name}', got {other:?}" ))), } } fn required_bool_value( batch: &RecordBatch, row_index: usize, column_name: &str, action: &str, ) -> Result { optional_bool_value(batch, row_index, column_name, action)?.ok_or_else(|| { DataFusionError::Execution(format!( "{action} lix_version requires non-null boolean column '{column_name}'" )) }) } fn optional_bool_value( batch: &RecordBatch, row_index: usize, column_name: &str, action: &str, ) -> Result> { match optional_scalar_value(batch, row_index, column_name)? 
{ None | Some(ScalarValue::Null) | Some(ScalarValue::Boolean(None)) => Ok(None), Some(ScalarValue::Boolean(Some(value))) => Ok(Some(value)), Some(other) => Err(DataFusionError::Execution(format!( "{action} lix_version expected boolean column '{column_name}', got {other:?}" ))), } } fn optional_scalar_value( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result> { let Ok(column_index) = batch.schema().index_of(column_name) else { return Ok(None); }; Ok(Some(ScalarValue::try_from_array( batch.column(column_index).as_ref(), row_index, )?)) } fn dml_count_schema() -> SchemaRef { Arc::new(Schema::new(vec![Field::new( "count", DataType::UInt64, false, )])) } fn dml_plan_properties(schema: SchemaRef) -> PlanProperties { PlanProperties::new( EquivalenceProperties::new(schema), Partitioning::UnknownPartitioning(1), EmissionType::Final, Boundedness::Bounded, ) } fn dml_count_batch(schema: SchemaRef, count: u64) -> Result { RecordBatch::try_new( schema, vec![Arc::new(UInt64Array::from(vec![count])) as ArrayRef], ) .map_err(DataFusionError::from) } fn lix_version_schema() -> SchemaRef { Arc::new(Schema::new(vec![ Field::new("id", DataType::Utf8, false), Field::new("name", DataType::Utf8, false), Field::new("hidden", DataType::Boolean, false), Field::new("commit_id", DataType::Utf8, false), ])) } fn version_projection_for_scan(projection: Option<&Vec>) -> Vec { let all_columns = vec![ VersionColumn::Id, VersionColumn::Name, VersionColumn::Hidden, VersionColumn::CommitId, ]; projection.map_or(all_columns.clone(), |indices| { indices .iter() .filter_map(|index| all_columns.get(*index).copied()) .collect() }) } fn projected_schema(schema: &SchemaRef, projection: Option<&Vec>) -> SchemaRef { match projection { Some(projection) => Arc::new(schema.project(projection).expect("projection is valid")), None => Arc::clone(schema), } } fn version_record_batch(projection: &[VersionColumn], rows: &[VersionRow]) -> Result { let arrays = projection .iter() .map(|column| match column { VersionColumn::Id => string_array(rows.iter().map(|row| Some(row.id.as_str()))), VersionColumn::Name => string_array(rows.iter().map(|row| Some(row.name.as_str()))), VersionColumn::Hidden => Arc::new(BooleanArray::from( rows.iter().map(|row| row.hidden).collect::>(), )) as ArrayRef, VersionColumn::CommitId => { string_array(rows.iter().map(|row| Some(row.commit_id.as_str()))) } }) .collect::>(); record_batch_with_row_count(version_schema(projection), arrays, rows.len()).map_err(|error| { DataFusionError::Execution(format!("failed to build lix_version batch: {error}")) }) } fn version_schema(projection: &[VersionColumn]) -> SchemaRef { Arc::new(Schema::new( projection .iter() .map(|column| match column { VersionColumn::Id => Field::new("id", DataType::Utf8, false), VersionColumn::Name => Field::new("name", DataType::Utf8, false), VersionColumn::Hidden => Field::new("hidden", DataType::Boolean, false), VersionColumn::CommitId => Field::new("commit_id", DataType::Utf8, false), }) .collect::>(), )) } fn string_array<'a>(values: impl Iterator>) -> ArrayRef { Arc::new(StringArray::from(values.collect::>())) as ArrayRef } fn datafusion_error_to_lix_error(error: DataFusionError) -> LixError { super::error::datafusion_error_to_lix_error(error) } fn lix_error_to_datafusion_error(error: LixError) -> DataFusionError { super::error::lix_error_to_datafusion_error(error) } ================================================ FILE: packages/engine/src/sql2/version_scope.rs ================================================ use 
std::collections::BTreeSet; use datafusion::error::DataFusionError; use datafusion::logical_expr::expr::InList; use datafusion::logical_expr::{BinaryExpr, Expr, Operator}; use datafusion::scalar::ScalarValue; use crate::version::VersionRefReader; use crate::LixError; use crate::GLOBAL_VERSION_ID; /// Version scope requested by a SQL surface. /// /// Active surfaces read through one session version. By-version surfaces either /// read explicitly filtered versions or, without a version predicate, enumerate /// every visible version scope before handing the request to live_state. pub(crate) enum SqlVersionScope { Active(String), Explicit(Vec), AllVisible, } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) enum VersionBinding { Active { version_id: String }, Explicit, } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct WriteVersionScope { pub(crate) version_id: String, pub(crate) global: bool, } impl VersionBinding { pub(crate) fn active(version_id: impl Into) -> Self { Self::Active { version_id: version_id.into(), } } pub(crate) fn explicit() -> Self { Self::Explicit } pub(crate) fn active_version_id(&self) -> Option<&str> { match self { Self::Active { version_id } => Some(version_id), Self::Explicit => None, } } pub(crate) fn require_active_version_id(&self, action: &str) -> Result { match self { Self::Active { version_id } => Ok(version_id.clone()), Self::Explicit => Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("{action} is only supported for active-version SQL surfaces"), )), } } } pub(crate) fn resolve_write_version_scope( explicit_global: Option, explicit_version_id: Option, fallback_version_id: Option<&str>, action: &str, surface: &str, ) -> Result { if explicit_global == Some(true) { if explicit_version_id .as_deref() .is_some_and(|version_id| version_id != GLOBAL_VERSION_ID) { return Err(DataFusionError::Execution(format!( "{surface} cannot set lixcol_global=true with non-global lixcol_version_id" ))); } return Ok(WriteVersionScope { version_id: GLOBAL_VERSION_ID.to_string(), global: true, }); } let version_id = explicit_version_id .or_else(|| fallback_version_id.map(ToOwned::to_owned)) .ok_or_else(|| { DataFusionError::Execution(format!("{action} requires lixcol_version_id")) })?; if explicit_global == Some(false) && version_id == GLOBAL_VERSION_ID { return Err(DataFusionError::Execution(format!( "{surface} cannot set lixcol_global=false with global lixcol_version_id" ))); } Ok(WriteVersionScope { global: explicit_global.unwrap_or(version_id == GLOBAL_VERSION_ID), version_id, }) } impl SqlVersionScope { pub(crate) fn from_provider( binding: &VersionBinding, requested_version_ids: Vec, ) -> Self { match binding { VersionBinding::Active { version_id } => Self::Active(version_id.clone()), VersionBinding::Explicit if requested_version_ids.is_empty() => Self::AllVisible, VersionBinding::Explicit => Self::Explicit(requested_version_ids), } } } pub(crate) async fn resolve_sql_version_scope( version_ref: &dyn VersionRefReader, scope: SqlVersionScope, ) -> Result, LixError> { match scope { SqlVersionScope::Active(version_id) => Ok(vec![version_id]), SqlVersionScope::Explicit(version_ids) => Ok(version_ids), SqlVersionScope::AllVisible => visible_version_ids(version_ref).await, } } pub(crate) async fn resolve_provider_version_ids( version_ref: &dyn VersionRefReader, binding: &VersionBinding, requested_version_ids: Vec, ) -> Result, LixError> { resolve_sql_version_scope( version_ref, SqlVersionScope::from_provider(binding, requested_version_ids), ) .await } pub(crate) fn 
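// Predicate shapes this extraction recognises, sketched against the column
// name used below (results are deduplicated; anything else yields no explicit
// scope):
//
//     WHERE lixcol_version_id = 'version-a'            -> ["version-a"]
//     WHERE lixcol_version_id IN ('a', 'b') AND x = 1  -> ["a", "b"]
//     WHERE lixcol_version_id != 'version-a'           -> []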
explicit_version_ids_from_dml_filters(filters: &[Expr]) -> Vec { filters .iter() .flat_map(version_ids_from_filter) .collect::>() .into_iter() .collect() } fn version_ids_from_filter(expr: &Expr) -> Vec { match expr { Expr::BinaryExpr(binary_expr) if binary_expr.op == Operator::And => { let mut values = version_ids_from_filter(&binary_expr.left); values.extend(version_ids_from_filter(&binary_expr.right)); values } Expr::BinaryExpr(binary_expr) => version_id_from_binary_filter(binary_expr) .map(|value| vec![value]) .unwrap_or_default(), Expr::InList(in_list) => version_ids_from_in_list_filter(in_list).unwrap_or_default(), _ => Vec::new(), } } fn version_id_from_binary_filter(binary_expr: &BinaryExpr) -> Option { if binary_expr.op != Operator::Eq { return None; } version_id_from_column_literal_filter(&binary_expr.left, &binary_expr.right) .or_else(|| version_id_from_column_literal_filter(&binary_expr.right, &binary_expr.left)) } fn version_ids_from_in_list_filter(in_list: &InList) -> Option> { if in_list.negated { return None; } let Expr::Column(column) = in_list.expr.as_ref() else { return None; }; if column.name != "lixcol_version_id" { return None; } let values = in_list .list .iter() .map(string_expr_literal) .collect::>>()?; if values.is_empty() { return None; } Some(values) } fn version_id_from_column_literal_filter( column_expr: &Expr, literal_expr: &Expr, ) -> Option { let Expr::Column(column) = column_expr else { return None; }; if column.name != "lixcol_version_id" { return None; } string_expr_literal(literal_expr) } fn string_expr_literal(expr: &Expr) -> Option { let Expr::Literal(literal, _) = expr else { return None; }; match literal { ScalarValue::Utf8(Some(value)) | ScalarValue::Utf8View(Some(value)) | ScalarValue::LargeUtf8(Some(value)) => Some(value.clone()), _ => None, } } async fn visible_version_ids(version_ref: &dyn VersionRefReader) -> Result, LixError> { let mut version_ids = version_ref .scan_heads() .await? 
.into_iter() .map(|head| head.version_id) .collect::>(); version_ids.insert(GLOBAL_VERSION_ID.to_string()); Ok(version_ids.into_iter().collect()) } #[cfg(test)] mod tests { use async_trait::async_trait; use super::*; use crate::version::VersionHead; #[tokio::test] async fn active_scope_uses_session_version() { let version_ref = RowsVersionRefReader::new(Vec::new()); let ids = resolve_provider_version_ids(&version_ref, &VersionBinding::active("main"), Vec::new()) .await .expect("scope should resolve"); assert_eq!(ids, vec!["main".to_string()]); } #[tokio::test] async fn explicit_scope_keeps_requested_versions() { let version_ref = RowsVersionRefReader::new(Vec::new()); let ids = resolve_provider_version_ids( &version_ref, &VersionBinding::explicit(), vec!["version-a".to_string(), "global".to_string()], ) .await .expect("scope should resolve"); assert_eq!(ids, vec!["version-a".to_string(), "global".to_string()]); } #[tokio::test] async fn all_visible_scope_loads_version_refs_and_global() { let version_ref = RowsVersionRefReader::new(vec![ VersionHead { version_id: "version-b".to_string(), commit_id: "commit-version-b".to_string(), }, VersionHead { version_id: "version-a".to_string(), commit_id: "commit-version-a".to_string(), }, ]); let ids = resolve_provider_version_ids(&version_ref, &VersionBinding::explicit(), Vec::new()) .await .expect("scope should resolve"); assert_eq!( ids, vec![ "global".to_string(), "version-a".to_string(), "version-b".to_string(), ] ); } #[test] fn write_scope_uses_fallback_version_when_version_is_implicit() { let scope = resolve_write_version_scope( None, None, Some("active-version"), "INSERT into surface", "surface", ) .expect("scope should resolve"); assert_eq!( scope, WriteVersionScope { version_id: "active-version".to_string(), global: false, } ); } #[test] fn write_scope_requires_version_without_fallback() { let error = resolve_write_version_scope(None, None, None, "INSERT into surface", "surface") .expect_err("missing version should be rejected"); assert!(error .to_string() .contains("INSERT into surface requires lixcol_version_id")); } #[test] fn write_scope_derives_global_from_global_version_id() { let scope = resolve_write_version_scope( None, Some(GLOBAL_VERSION_ID.to_string()), None, "INSERT into surface", "surface", ) .expect("scope should resolve"); assert_eq!( scope, WriteVersionScope { version_id: GLOBAL_VERSION_ID.to_string(), global: true, } ); } #[test] fn write_scope_rejects_non_global_with_global_version_id() { let error = resolve_write_version_scope( Some(false), Some(GLOBAL_VERSION_ID.to_string()), None, "INSERT into surface", "surface", ) .expect_err("conflicting global/version scope should be rejected"); assert!(error .to_string() .contains("surface cannot set lixcol_global=false with global lixcol_version_id")); } #[test] fn write_scope_rejects_global_with_non_global_version_id() { let error = resolve_write_version_scope( Some(true), Some("version-a".to_string()), None, "INSERT into surface", "surface", ) .expect_err("conflicting global/version scope should be rejected"); assert!(error .to_string() .contains("surface cannot set lixcol_global=true with non-global lixcol_version_id")); } struct RowsVersionRefReader { heads: Vec, } impl RowsVersionRefReader { fn new(heads: Vec) -> Self { Self { heads } } } #[async_trait] impl VersionRefReader for RowsVersionRefReader { async fn load_head(&self, version_id: &str) -> Result, LixError> { Ok(self .heads .iter() .find(|head| head.version_id == version_id) .cloned()) } async fn scan_heads(&self) -> 
Result, LixError> { Ok(self.heads.clone()) } } } ================================================ FILE: packages/engine/src/sql2/write_normalization.rs ================================================ use std::collections::{BTreeMap, BTreeSet}; use std::sync::Arc; use datafusion::arrow::array::ArrayRef; use datafusion::arrow::datatypes::DataType; use datafusion::arrow::record_batch::RecordBatch; use datafusion::common::{DataFusionError, Result, ScalarValue}; use datafusion::logical_expr::Expr; use datafusion::physical_expr::expressions::{CastExpr, Literal}; use datafusion::physical_expr::PhysicalExpr; use datafusion::physical_plan::projection::ProjectionExec; use datafusion::physical_plan::ExecutionPlan; use crate::LixError; #[derive(Debug, Clone)] pub(crate) enum SqlCell { Null, Value(ScalarValue), } impl SqlCell { pub(crate) fn from_scalar(value: ScalarValue) -> Self { if value.is_null() { Self::Null } else { Self::Value(value) } } } #[derive(Debug, Clone)] pub(crate) enum InsertCell { Omitted, Provided(SqlCell), } #[derive(Debug, Clone)] pub(crate) enum UpdateCell { Unassigned, Assigned(SqlCell), } #[derive(Debug, Clone)] pub(crate) struct InsertColumnIntents { explicit_columns: Option>, } impl InsertColumnIntents { pub(crate) fn all_explicit() -> Self { Self { explicit_columns: None, } } pub(crate) fn from_input(input: &Arc) -> Self { let Some(projection) = input.as_any().downcast_ref::() else { return Self { explicit_columns: None, }; }; let explicit_columns = projection .expr() .iter() .filter(|expr| !is_generated_null_default(expr.expr.as_ref())) .map(|expr| expr.alias.clone()) .collect(); Self { explicit_columns: Some(explicit_columns), } } pub(crate) fn includes_column(&self, column_name: &str) -> bool { self.explicit_columns .as_ref() .is_none_or(|columns| columns.contains(column_name)) } pub(crate) fn cell( &self, batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result { if !self.includes_column(column_name) { return Ok(InsertCell::Omitted); } optional_scalar_value(batch, row_index, column_name).map(|value| match value { None => InsertCell::Omitted, Some(value) => InsertCell::Provided(SqlCell::from_scalar(value)), }) } } pub(crate) fn reject_non_binary_casts_for_insert_column( input: &Arc, column_name: &str, context: &str, ) -> Result<()> { reject_non_binary_casts_for_insert_column_in_plan(input.as_ref(), column_name, context) } fn reject_non_binary_casts_for_insert_column_in_plan( input: &dyn ExecutionPlan, column_name: &str, context: &str, ) -> Result<()> { let Some(projection) = input.as_any().downcast_ref::() else { for child in input.children() { reject_non_binary_casts_for_insert_column_in_plan( child.as_ref(), column_name, context, )?; } return Ok(()); }; let Some(expr) = projection .expr() .iter() .find(|expr| expr.alias == column_name) else { return Ok(()); }; if contains_non_binary_cast_to_binary(expr.expr.as_ref()) { return Err(super::error::lix_error_to_datafusion_error( LixError::new( LixError::CODE_TYPE_MISMATCH, format!("{context} expected binary column '{column_name}'"), ) .with_hint("Use X'...' 
or a binary parameter for file contents."), )); } Ok(()) } fn contains_non_binary_cast_to_binary(expr: &dyn PhysicalExpr) -> bool { let Some(cast) = expr.as_any().downcast_ref::() else { return false; }; if is_binary_type(cast.cast_type()) && !physical_expr_is_binary_or_null(cast.expr().as_ref()) { return true; } contains_non_binary_cast_to_binary(cast.expr().as_ref()) } fn physical_expr_is_binary_or_null(expr: &dyn PhysicalExpr) -> bool { if let Some(literal) = expr.as_any().downcast_ref::() { return scalar_is_binary_or_null(literal.value()); } if let Some(cast) = expr.as_any().downcast_ref::() { return is_binary_type(cast.cast_type()) && physical_expr_is_binary_or_null(cast.expr().as_ref()); } false } pub(crate) fn scalar_is_binary_or_null(value: &ScalarValue) -> bool { value.is_null() || matches!( value, ScalarValue::Binary(_) | ScalarValue::LargeBinary(_) | ScalarValue::FixedSizeBinary(_, _) ) } pub(crate) fn logical_expr_is_binary_or_null(expr: &Expr) -> bool { match expr { Expr::Literal(value, _) => scalar_is_binary_or_null(value), Expr::Cast(cast) => { is_binary_type(&cast.data_type) && logical_expr_is_binary_or_null(&cast.expr) } Expr::Alias(alias) => logical_expr_is_binary_or_null(&alias.expr), _ => false, } } pub(crate) fn is_binary_type(data_type: &DataType) -> bool { matches!( data_type, DataType::Binary | DataType::LargeBinary | DataType::FixedSizeBinary(_) ) } pub(crate) fn lix_file_data_type_lix_error() -> LixError { LixError::new( LixError::CODE_TYPE_MISMATCH, "lix_file.data expects binary data", ) .with_hint("Use X'...' or a binary parameter for file contents.") } pub(crate) fn lix_file_data_type_error( context: &str, column_name: &str, instruction: &str, ) -> DataFusionError { super::error::lix_error_to_datafusion_error( LixError::new( LixError::CODE_TYPE_MISMATCH, format!("{context} expected binary column '{column_name}'"), ) .with_hint(instruction), ) } pub(crate) fn lix_file_data_type_error_with_value( context: &str, column_name: &str, value: &ScalarValue, instruction: &str, ) -> DataFusionError { super::error::lix_error_to_datafusion_error( LixError::new( LixError::CODE_TYPE_MISMATCH, format!("{context} expected binary column '{column_name}', got {value:?}"), ) .with_hint(instruction), ) } pub(crate) struct UpdateAssignmentValues { values: BTreeMap, } impl UpdateAssignmentValues { pub(crate) fn evaluate( batch: &RecordBatch, assignments: &[(String, Arc)], ) -> Result { let mut values = BTreeMap::new(); for (column_name, assignment) in assignments { values.insert( column_name.clone(), assignment.evaluate(batch)?.into_array(batch.num_rows())?, ); } Ok(Self { values }) } #[cfg(test)] pub(crate) fn from_batch_columns(batch: &RecordBatch, columns: &[&str]) -> Self { let values = columns .iter() .filter_map(|column_name| { let column_index = batch.schema().index_of(column_name).ok()?; Some(( (*column_name).to_string(), Arc::clone(batch.column(column_index)), )) }) .collect(); Self { values } } /// Returns only the value explicitly assigned by SQL UPDATE. /// /// Use this for document-patch semantics where `Unassigned` must remain /// distinct from `Assigned(NULL)`. 
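// Worked contrast between the two accessors (values are hypothetical and only
// for illustration): given `UPDATE t SET a = NULL` over a row where b = 'kept',
//
//     assigned_cell(row, "a")                      -> Assigned(Null)
//     assigned_cell(row, "b")                      -> Unassigned
//     assigned_or_existing_cell(batch, row, "b")   -> Provided(Value('kept'))
//
// so document-patch callers can distinguish a column the UPDATE never
// mentioned from one explicitly set to NULL, while scalar-column callers fall
// back to the existing row value.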
pub(crate) fn assigned_cell(&self, row_index: usize, column_name: &str) -> Result { let Some(array) = self.values.get(column_name) else { return Ok(UpdateCell::Unassigned); }; ScalarValue::try_from_array(array.as_ref(), row_index) .map(SqlCell::from_scalar) .map(UpdateCell::Assigned) .map_err(|error| { DataFusionError::Execution(format!( "failed to decode SQL UPDATE assignment for column '{column_name}' at row {row_index}: {error}" )) }) } /// Returns the assigned SQL UPDATE value, or falls back to the existing row /// column value when the column was not assigned. /// /// Use this for scalar row-column semantics. Do not use it to reconstruct /// JSON documents from projected property columns, because projection can /// erase the difference between an absent property and an explicit null. pub(crate) fn assigned_or_existing_cell( &self, batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result { match self.assigned_cell(row_index, column_name)? { UpdateCell::Assigned(value) => Ok(InsertCell::Provided(value)), UpdateCell::Unassigned => { optional_scalar_value(batch, row_index, column_name).map(|value| match value { None => InsertCell::Omitted, Some(value) => InsertCell::Provided(SqlCell::from_scalar(value)), }) } } } } pub(crate) fn optional_scalar_value( batch: &RecordBatch, row_index: usize, column_name: &str, ) -> Result> { let schema = batch.schema(); let column_index = match schema.index_of(column_name) { Ok(column_index) => column_index, Err(_) => return Ok(None), }; if row_index >= batch.num_rows() { return Err(DataFusionError::Execution(format!( "row index {row_index} out of bounds for SQL write batch with {} rows", batch.num_rows() ))); } ScalarValue::try_from_array(batch.column(column_index).as_ref(), row_index) .map(Some) .map_err(|error| { DataFusionError::Execution(format!( "failed to decode SQL write column '{column_name}' at row {row_index}: {error}" )) }) } fn is_generated_null_default(expr: &dyn PhysicalExpr) -> bool { if let Some(literal) = expr.as_any().downcast_ref::() { return literal.value().is_null(); } if let Some(cast) = expr.as_any().downcast_ref::() { return is_generated_null_default(cast.expr().as_ref()); } false } ================================================ FILE: packages/engine/src/storage/context.rs ================================================ use std::sync::Arc; use async_trait::async_trait; use crate::backend::{Backend, BackendReadTransaction, BackendWriteTransaction}; use crate::storage::types::{KvWriteBatch, StorageWriter}; use crate::storage::{ KvEntryPage, KvExistsBatch, KvGetRequest, KvKeyPage, KvScanRequest, KvValueBatch, KvValuePage, KvWriteStats, StorageReadTransaction, StorageReader, StorageWriteTransaction, }; use crate::LixError; #[derive(Clone)] pub(crate) struct StorageContext { backend: Arc, } impl StorageContext { pub(crate) fn new(backend: Arc) -> Self { Self { backend } } pub(crate) async fn begin_read_transaction( &self, ) -> Result, LixError> { let transaction = self.backend.begin_read_transaction().await?; Ok(Box::new(StorageContextReadTransaction { transaction })) } pub(crate) async fn begin_write_transaction( &self, ) -> Result, LixError> { let transaction = self.backend.begin_write_transaction().await?; Ok(Box::new(StorageContextWriteTransaction { transaction })) } pub(crate) async fn close(&self) -> Result<(), LixError> { self.backend.close().await } pub(crate) async fn destroy(&self) -> Result<(), LixError> { self.backend.destroy().await } } #[cfg(any(test, feature = "storage-benches"))] #[async_trait] impl 
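// The cfg-gated impl below gives tests and benches one-shot reads: each call
// opens a throwaway read transaction and rolls it back afterwards. Regular
// callers drive transactions explicitly, as in this sketch (it mirrors the
// unit test further down in this file):
//
//     let mut tx = storage.begin_write_transaction().await?;
//     let mut batch = KvWriteBatch::new();
//     batch.put("ns", b"a".to_vec(), b"1".to_vec());
//     tx.write_kv_batch(batch).await?;
//     tx.commit().await?;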
StorageReader for StorageContext { async fn get_values(&mut self, request: KvGetRequest) -> Result { let mut transaction = self.begin_read_transaction().await?; let result = transaction.get_values(request).await; match result { Ok(result) => { transaction.rollback().await?; Ok(result) } Err(error) => { let _ = transaction.rollback().await; Err(error) } } } async fn exists_many(&mut self, request: KvGetRequest) -> Result { let mut transaction = self.begin_read_transaction().await?; let result = transaction.exists_many(request).await; match result { Ok(result) => { transaction.rollback().await?; Ok(result) } Err(error) => { let _ = transaction.rollback().await; Err(error) } } } async fn scan_keys(&mut self, request: KvScanRequest) -> Result { let mut transaction = self.begin_read_transaction().await?; let result = transaction.scan_keys(request).await; match result { Ok(result) => { transaction.rollback().await?; Ok(result) } Err(error) => { let _ = transaction.rollback().await; Err(error) } } } async fn scan_values(&mut self, request: KvScanRequest) -> Result { let mut transaction = self.begin_read_transaction().await?; let result = transaction.scan_values(request).await; match result { Ok(result) => { transaction.rollback().await?; Ok(result) } Err(error) => { let _ = transaction.rollback().await; Err(error) } } } async fn scan_entries(&mut self, request: KvScanRequest) -> Result { let mut transaction = self.begin_read_transaction().await?; let result = transaction.scan_entries(request).await; match result { Ok(result) => { transaction.rollback().await?; Ok(result) } Err(error) => { let _ = transaction.rollback().await; Err(error) } } } } struct StorageContextReadTransaction { transaction: Box, } struct StorageContextWriteTransaction { transaction: Box, } #[async_trait] impl StorageReader for StorageContextReadTransaction { async fn get_values(&mut self, request: KvGetRequest) -> Result { self.transaction .get_values(request.into()) .await .map(Into::into) } async fn exists_many(&mut self, request: KvGetRequest) -> Result { self.transaction .exists_many(request.into()) .await .map(Into::into) } async fn scan_keys(&mut self, request: KvScanRequest) -> Result { self.transaction .scan_keys(request.into()) .await .map(Into::into) } async fn scan_values(&mut self, request: KvScanRequest) -> Result { self.transaction .scan_values(request.into()) .await .map(Into::into) } async fn scan_entries(&mut self, request: KvScanRequest) -> Result { self.transaction .scan_entries(request.into()) .await .map(Into::into) } } #[async_trait] impl StorageReadTransaction for StorageContextReadTransaction { async fn rollback(self: Box) -> Result<(), LixError> { self.transaction.rollback().await } } #[async_trait] impl StorageReader for StorageContextWriteTransaction { async fn get_values(&mut self, request: KvGetRequest) -> Result { self.transaction .get_values(request.into()) .await .map(Into::into) } async fn exists_many(&mut self, request: KvGetRequest) -> Result { self.transaction .exists_many(request.into()) .await .map(Into::into) } async fn scan_keys(&mut self, request: KvScanRequest) -> Result { self.transaction .scan_keys(request.into()) .await .map(Into::into) } async fn scan_values(&mut self, request: KvScanRequest) -> Result { self.transaction .scan_values(request.into()) .await .map(Into::into) } async fn scan_entries(&mut self, request: KvScanRequest) -> Result { self.transaction .scan_entries(request.into()) .await .map(Into::into) } } #[async_trait] impl StorageWriter for 
StorageContextWriteTransaction { async fn write_kv_batch(&mut self, batch: KvWriteBatch) -> Result { self.transaction .write_kv_batch(batch.into()) .await .map(Into::into) } } #[async_trait] impl StorageReadTransaction for StorageContextWriteTransaction { async fn rollback(self: Box) -> Result<(), LixError> { self.transaction.rollback().await } } #[async_trait] impl StorageWriteTransaction for StorageContextWriteTransaction { async fn commit(self: Box) -> Result<(), LixError> { self.transaction.commit().await } } #[cfg(test)] mod tests { use std::sync::Arc; use crate::backend::testing::UnitTestBackend; use crate::storage::types::KvWriteBatch; use crate::storage::{KvGetGroup, KvScanRange, StorageWriteSet}; use super::*; #[tokio::test] async fn storage_context_roundtrips_batched_writes_and_reads() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend); let mut tx = storage .begin_write_transaction() .await .expect("transaction opens"); let mut batch = KvWriteBatch::new(); batch.put("ns", b"a".to_vec(), b"1".to_vec()); batch.put("ns", b"b".to_vec(), b"2".to_vec()); let stats = tx.write_kv_batch(batch).await.expect("batch writes"); assert_eq!(stats.puts, 2); tx.commit().await.expect("commit succeeds"); let mut tx = storage .begin_read_transaction() .await .expect("read transaction opens"); let result = tx .get_values(KvGetRequest { groups: vec![KvGetGroup { namespace: "ns".to_string(), keys: vec![b"a".to_vec(), b"b".to_vec()], }], }) .await .expect("batch reads"); assert_eq!(result.groups[0].value(0), Some(Some(b"1".as_slice()))); assert_eq!(result.groups[0].value(1), Some(Some(b"2".as_slice()))); let exists = tx .exists_many(KvGetRequest { groups: vec![KvGetGroup { namespace: "ns".to_string(), keys: vec![b"a".to_vec(), b"missing".to_vec()], }], }) .await .expect("existence reads"); assert_eq!(exists.groups[0].exists, vec![true, false]); let result = tx .scan_entries(KvScanRequest { namespace: "ns".to_string(), range: KvScanRange::prefix(Vec::new()), after: Some(b"a".to_vec()), limit: 1, }) .await .expect("scan reads"); assert_eq!(result.key(0).expect("key exists"), b"b"); assert_eq!(result.value(0).expect("value exists"), b"2"); let key_only = tx .scan_keys(KvScanRequest { namespace: "ns".to_string(), range: KvScanRange::prefix(Vec::new()), after: None, limit: 2, }) .await .expect("key-only scan reads"); assert_eq!(key_only.keys.iter().collect::>(), vec![b"a", b"b"]); tx.rollback().await.expect("rollback succeeds"); } #[tokio::test] async fn storage_write_set_applies_as_one_batch() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend); let mut tx = storage .begin_write_transaction() .await .expect("transaction opens"); let mut writes = StorageWriteSet::new(); assert!(writes.is_empty()); writes.put("ns", b"a".to_vec(), b"1".to_vec()); writes.put("ns", b"b".to_vec(), b"2".to_vec()); writes.delete("ns", b"missing".to_vec()); assert!(!writes.is_empty()); let stats = writes.apply(tx.as_mut()).await.expect("write set applies"); assert_eq!(stats.puts, 2); assert_eq!(stats.deletes, 1); tx.commit().await.expect("commit succeeds"); let mut tx = storage .begin_read_transaction() .await .expect("read transaction opens"); let result = tx .get_values(KvGetRequest { groups: vec![KvGetGroup { namespace: "ns".to_string(), keys: vec![b"a".to_vec(), b"b".to_vec()], }], }) .await .expect("batch reads"); assert_eq!(result.groups[0].value(0), Some(Some(&b"1"[..]))); assert_eq!(result.groups[0].value(1), Some(Some(&b"2"[..]))); 
tx.rollback().await.expect("rollback succeeds"); } } ================================================ FILE: packages/engine/src/storage/mod.rs ================================================ mod context; mod read_scope; mod types; pub(crate) use context::StorageContext; pub(crate) use read_scope::{ScopedStorageReader, StorageReadScope}; pub(crate) use types::{ KvEntryPage, KvExistsBatch, KvExistsGroup, KvGetGroup, KvGetRequest, KvKeyPage, KvScanRange, KvScanRequest, KvValueBatch, KvValueGroup, KvValuePage, KvWriteStats, StorageReadTransaction, StorageReader, StorageWriteSet, StorageWriteTransaction, }; #[cfg(feature = "storage-benches")] pub(crate) use types::{KvWriteBatch, KvWriteGroup}; ================================================ FILE: packages/engine/src/storage/read_scope.rs ================================================ use std::sync::Arc; use crate::storage::{ KvEntryPage, KvExistsBatch, KvGetRequest, KvKeyPage, KvScanRequest, KvValueBatch, KvValuePage, StorageReadTransaction, StorageReader, }; use crate::LixError; use tokio::sync::Mutex; /// Shared read visibility over one KV store handle. /// /// This lets multiple subsystem readers share the same transaction/backend view /// even when the underlying handle itself is not cloneable. pub(crate) struct StorageReadScope { store: Arc>, } impl StorageReadScope where S: StorageReader, { pub(crate) fn new(store: S) -> Self { Self { store: Arc::new(Mutex::new(store)), } } pub(crate) fn store(&self) -> ScopedStorageReader { ScopedStorageReader { store: Arc::clone(&self.store), } } } impl StorageReadScope> { pub(crate) async fn rollback(self) -> Result<(), LixError> { let store = Arc::try_unwrap(self.store).map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "cannot close storage read scope while scoped readers are still alive", ) })?; store.into_inner().rollback().await } } pub(crate) struct ScopedStorageReader { store: Arc>, } impl Clone for ScopedStorageReader { fn clone(&self) -> Self { Self { store: Arc::clone(&self.store), } } } #[async_trait::async_trait] impl StorageReader for ScopedStorageReader where S: StorageReader, { async fn get_values(&mut self, request: KvGetRequest) -> Result { let mut store = self.store.lock().await; store.get_values(request).await } async fn exists_many(&mut self, request: KvGetRequest) -> Result { let mut store = self.store.lock().await; store.exists_many(request).await } async fn scan_keys(&mut self, request: KvScanRequest) -> Result { let mut store = self.store.lock().await; store.scan_keys(request).await } async fn scan_values(&mut self, request: KvScanRequest) -> Result { let mut store = self.store.lock().await; store.scan_values(request).await } async fn scan_entries(&mut self, request: KvScanRequest) -> Result { let mut store = self.store.lock().await; store.scan_entries(request).await } } ================================================ FILE: packages/engine/src/storage/types.rs ================================================ use async_trait::async_trait; use crate::backend; use crate::backend::BytePage; use crate::LixError; #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) enum KvScanRange { Prefix(Vec), Range { start: Vec, end: Vec }, } impl KvScanRange { pub(crate) fn prefix(prefix: impl Into>) -> Self { Self::Prefix(prefix.into()) } pub(crate) fn range(start: impl Into>, end: impl Into>) -> Self { Self::Range { start: start.into(), end: end.into(), } } } impl From for backend::BackendKvScanRange { fn from(range: KvScanRange) -> Self { match range { KvScanRange::Prefix(prefix) => 
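// Constructor shapes, for orientation (the helpers are defined just above;
// whether Range treats `end` as exclusive is left to the backend and is not
// specified here):
//
//     KvScanRange::prefix(b"chunk/".to_vec())           // every key under a prefix
//     KvScanRange::range(b"a".to_vec(), b"m".to_vec())  // an explicit start..end window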
Self::Prefix(prefix), KvScanRange::Range { start, end } => Self::Range { start, end }, } } } #[async_trait] pub(crate) trait StorageReader: Send { async fn get_values(&mut self, request: KvGetRequest) -> Result; async fn exists_many(&mut self, request: KvGetRequest) -> Result; async fn scan_keys(&mut self, request: KvScanRequest) -> Result; async fn scan_values(&mut self, request: KvScanRequest) -> Result; async fn scan_entries(&mut self, request: KvScanRequest) -> Result; } #[async_trait] pub(crate) trait StorageWriter: StorageReader { async fn write_kv_batch(&mut self, batch: KvWriteBatch) -> Result; } #[async_trait] pub(crate) trait StorageReadTransaction: StorageReader + Send + Sync { async fn rollback(self: Box) -> Result<(), LixError>; } #[async_trait] pub(crate) trait StorageWriteTransaction: StorageReadTransaction + StorageWriter + Send + Sync { async fn commit(self: Box) -> Result<(), LixError>; } #[async_trait] impl StorageReader for &mut T where T: StorageReader + ?Sized, { async fn get_values(&mut self, request: KvGetRequest) -> Result { (**self).get_values(request).await } async fn exists_many(&mut self, request: KvGetRequest) -> Result { (**self).exists_many(request).await } async fn scan_keys(&mut self, request: KvScanRequest) -> Result { (**self).scan_keys(request).await } async fn scan_values(&mut self, request: KvScanRequest) -> Result { (**self).scan_values(request).await } async fn scan_entries(&mut self, request: KvScanRequest) -> Result { (**self).scan_entries(request).await } } #[async_trait] impl StorageReader for Box where T: StorageReader + ?Sized, { async fn get_values(&mut self, request: KvGetRequest) -> Result { (**self).get_values(request).await } async fn exists_many(&mut self, request: KvGetRequest) -> Result { (**self).exists_many(request).await } async fn scan_keys(&mut self, request: KvScanRequest) -> Result { (**self).scan_keys(request).await } async fn scan_values(&mut self, request: KvScanRequest) -> Result { (**self).scan_values(request).await } async fn scan_entries(&mut self, request: KvScanRequest) -> Result { (**self).scan_entries(request).await } } #[async_trait] impl StorageWriter for &mut T where T: StorageWriter + ?Sized, { async fn write_kv_batch(&mut self, batch: KvWriteBatch) -> Result { (**self).write_kv_batch(batch).await } } #[async_trait] impl StorageWriter for Box where T: StorageWriter + ?Sized, { async fn write_kv_batch(&mut self, batch: KvWriteBatch) -> Result { (**self).write_kv_batch(batch).await } } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct KvGetRequest { pub(crate) groups: Vec, } impl From for backend::BackendKvGetRequest { fn from(request: KvGetRequest) -> Self { Self { groups: request.groups.into_iter().map(Into::into).collect(), } } } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct KvGetGroup { pub(crate) namespace: String, pub(crate) keys: Vec>, } impl From for backend::BackendKvGetGroup { fn from(group: KvGetGroup) -> Self { Self { namespace: group.namespace, keys: group.keys, } } } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct KvValueBatch { pub(crate) groups: Vec, } impl From for KvValueBatch { fn from(result: backend::BackendKvValueBatch) -> Self { Self { groups: result.groups.into_iter().map(Into::into).collect(), } } } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct KvValueGroup { namespace: String, values: BytePage, present: Vec, } impl From for KvValueGroup { fn from(group: backend::BackendKvValueGroup) -> Self { let (namespace, values, present) = group.into_parts(); Self { 
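// Reading results back (sketch): `group.value(i)` yields None when `i` is out
// of range, Some(None) when the key was requested but had no stored value, and
// Some(Some(bytes)) when it did, which is the shape the StorageContext tests
// assert with `Some(Some(b"1".as_slice()))`.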
            namespace,
            values,
            present,
        }
    }
}

impl KvValueGroup {
    pub(crate) fn len(&self) -> usize {
        self.present.len()
    }

    pub(crate) fn value(&self, index: usize) -> Option<Option<&[u8]>> {
        let present = *self.present.get(index)?;
        if present {
            Some(Some(
                self.values
                    .get(index)
                    .expect("storage value batch invariant violated"),
            ))
        } else {
            Some(None)
        }
    }

    pub(crate) fn values_iter(&self) -> impl Iterator<Item = Option<&[u8]>> + '_ {
        (0..self.len()).filter_map(|index| self.value(index))
    }

    pub(crate) fn single_value_owned(&self) -> Option<Vec<u8>> {
        if self.len() != 1 {
            return None;
        }
        self.value(0).flatten().map(<[u8]>::to_vec)
    }
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct KvExistsBatch {
    pub(crate) groups: Vec<KvExistsGroup>,
}

impl From<backend::BackendKvExistsBatch> for KvExistsBatch {
    fn from(result: backend::BackendKvExistsBatch) -> Self {
        Self {
            groups: result.groups.into_iter().map(Into::into).collect(),
        }
    }
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct KvExistsGroup {
    pub(crate) namespace: String,
    pub(crate) exists: Vec<bool>,
}

impl From<backend::BackendKvExistsGroup> for KvExistsGroup {
    fn from(group: backend::BackendKvExistsGroup) -> Self {
        Self {
            namespace: group.namespace,
            exists: group.exists,
        }
    }
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct KvScanRequest {
    pub(crate) namespace: String,
    pub(crate) range: KvScanRange,
    pub(crate) after: Option<Vec<u8>>,
    pub(crate) limit: usize,
}

impl From<KvScanRequest> for backend::BackendKvScanRequest {
    fn from(request: KvScanRequest) -> Self {
        Self {
            namespace: request.namespace,
            range: request.range.into(),
            after: request.after,
            limit: request.limit,
        }
    }
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct KvKeyPage {
    pub(crate) keys: BytePage,
    pub(crate) resume_after: Option<Vec<u8>>,
}

impl From<backend::BackendKvKeyPage> for KvKeyPage {
    fn from(result: backend::BackendKvKeyPage) -> Self {
        Self {
            keys: result.keys,
            resume_after: result.resume_after,
        }
    }
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct KvValuePage {
    pub(crate) values: BytePage,
    pub(crate) resume_after: Option<Vec<u8>>,
}

impl From<backend::BackendKvValuePage> for KvValuePage {
    fn from(result: backend::BackendKvValuePage) -> Self {
        Self {
            values: result.values,
            resume_after: result.resume_after,
        }
    }
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct KvEntryPage {
    pub(crate) keys: BytePage,
    pub(crate) values: BytePage,
    pub(crate) resume_after: Option<Vec<u8>>,
}

impl From<backend::BackendKvEntryPage> for KvEntryPage {
    fn from(result: backend::BackendKvEntryPage) -> Self {
        Self {
            keys: result.keys,
            values: result.values,
            resume_after: result.resume_after,
        }
    }
}

impl KvEntryPage {
    pub(crate) fn len(&self) -> usize {
        self.keys.len()
    }

    pub(crate) fn is_empty(&self) -> bool {
        self.keys.is_empty()
    }

    pub(crate) fn key(&self, index: usize) -> Option<&[u8]> {
        self.keys.get(index)
    }

    pub(crate) fn value(&self, index: usize) -> Option<&[u8]> {
        self.values.get(index)
    }
}

#[derive(Debug, Default)]
pub(crate) struct StorageWriteSet {
    batch: KvWriteBatch,
}

impl StorageWriteSet {
    pub(crate) fn new() -> Self {
        Self::default()
    }

    pub(crate) fn put(&mut self, namespace: &'static str, key: Vec<u8>, value: Vec<u8>) {
        self.batch.put(namespace, key, value);
    }

    pub(crate) fn delete(&mut self, namespace: &'static str, key: Vec<u8>) {
        self.batch.delete(namespace, key);
    }

    pub(crate) fn is_empty(&self) -> bool {
        self.batch.is_empty()
    }

    pub(crate) async fn apply(
        self,
        writer: &mut (impl StorageWriter + ?Sized),
    ) -> Result<KvWriteStats, LixError> {
        writer.write_kv_batch(self.batch).await
    }
}

#[derive(Debug, Clone, PartialEq, Eq, Default)]
pub(crate) struct KvWriteBatch {
    pub(crate) groups: Vec<KvWriteGroup>,
}

impl KvWriteBatch {
    pub(crate) fn new() -> Self {
        Self::default()
    }

    pub(crate) fn put(
        &mut self,
        namespace: impl Into<String>,
        key: impl Into<Vec<u8>>,
        value: impl Into<Vec<u8>>,
    ) {
        let namespace = namespace.into();
        let group = self.group_mut(namespace);
        group.put(key.into(), value.into());
    }

    pub(crate) fn delete(&mut self, namespace: impl Into<String>, key: impl Into<Vec<u8>>) {
        let namespace = namespace.into();
        let group = self.group_mut(namespace);
        group.delete(key.into());
    }

    pub(crate) fn is_empty(&self) -> bool {
        self.groups
            .iter()
            .all(|group| group.put_count() == 0 && group.delete_count() == 0)
    }

    fn group_mut(&mut self, namespace: String) -> &mut KvWriteGroup {
        if let Some(index) = self
            .groups
            .iter()
            .position(|group| group.namespace == namespace)
        {
            return &mut self.groups[index];
        }
        self.groups.push(KvWriteGroup {
            namespace,
            put_keys: backend::BytePageBuilder::new(),
            put_values: backend::BytePageBuilder::new(),
            deletes: backend::BytePageBuilder::new(),
        });
        self.groups.last_mut().expect("group just pushed")
    }
}

impl From<KvWriteBatch> for backend::BackendKvWriteBatch {
    fn from(batch: KvWriteBatch) -> Self {
        Self {
            groups: batch.groups.into_iter().map(Into::into).collect(),
        }
    }
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct KvWriteGroup {
    namespace: String,
    put_keys: backend::BytePageBuilder,
    put_values: backend::BytePageBuilder,
    deletes: backend::BytePageBuilder,
}

impl From<KvWriteGroup> for backend::BackendKvWriteGroup {
    fn from(group: KvWriteGroup) -> Self {
        Self::from_pages(
            group.namespace,
            group.put_keys.finish(),
            group.put_values.finish(),
            group.deletes.finish(),
        )
    }
}

impl KvWriteGroup {
    pub(crate) fn new(namespace: impl Into<String>) -> Self {
        Self {
            namespace: namespace.into(),
            put_keys: backend::BytePageBuilder::new(),
            put_values: backend::BytePageBuilder::new(),
            deletes: backend::BytePageBuilder::new(),
        }
    }

    pub(crate) fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) {
        self.put_keys.push(key);
        self.put_values.push(value);
    }

    pub(crate) fn delete(&mut self, key: impl AsRef<[u8]>) {
        self.deletes.push(key);
    }

    pub(crate) fn put_count(&self) -> usize {
        self.put_keys.len()
    }

    pub(crate) fn delete_count(&self) -> usize {
        self.deletes.len()
    }

    pub(crate) fn put_key(&self, index: usize) -> Option<&[u8]> {
        self.put_keys.get(index)
    }

    pub(crate) fn put_value(&self, index: usize) -> Option<&[u8]> {
        self.put_values.get(index)
    }

    pub(crate) fn delete_key(&self, index: usize) -> Option<&[u8]> {
        self.deletes.get(index)
    }
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
pub(crate) struct KvWriteStats {
    pub(crate) puts: usize,
    pub(crate) deletes: usize,
    pub(crate) bytes_written: usize,
}

impl From<backend::BackendKvWriteStats> for KvWriteStats {
    fn from(stats: backend::BackendKvWriteStats) -> Self {
        Self {
            puts: stats.puts,
            deletes: stats.deletes,
            bytes_written: stats.bytes_written,
        }
    }
}


================================================
FILE: packages/engine/src/storage_bench.rs
================================================
use crate::binary_cas::{BinaryCasContext, BlobHash, BlobWrite};
use crate::catalog::CatalogContext;
use crate::commit_graph::CommitGraphChangeHistoryRequest;
use crate::commit_store::{
    Change, ChangeScanRequest, CommitDraftRef, CommitStoreContext, MaterializedChange,
};
use crate::entity_identity::EntityIdentity;
use crate::json_store::context::JsonStoreContext;
use crate::json_store::types::{
    JsonLoadRequestRef, JsonProjectionLoadRequestRef, JsonProjectionPath, JsonReadScopeRef,
    JsonRef, JsonWritePlacementRef, NormalizedJsonRef,
};
use crate::live_state::LiveStateContext;
use crate::session::SessionMode;
use crate::storage::{
    KvGetGroup, KvGetRequest, KvScanRange, KvScanRequest, KvWriteBatch, StorageContext,
    StorageWriteSet,
};
use crate::tracked_state::{
MaterializedTrackedStateRow, TrackedStateContext, TrackedStateDeltaRef, TrackedStateDiffRequest, TrackedStateFilter, TrackedStateProjection, TrackedStateRowRequest, TrackedStateScanRequest, }; use crate::transaction::open_transaction; use crate::transaction::types::{ TransactionJson, TransactionWrite, TransactionWriteMode, TransactionWriteRow, }; use crate::untracked_state::{ MaterializedUntrackedStateRow, UntrackedStateContext, UntrackedStateFilter, UntrackedStateProjection, UntrackedStateRowRequest, UntrackedStateScanRequest, }; use crate::version::VersionContext; use crate::{Backend, LixError, NullableKeyFilter}; use std::collections::{BTreeMap, HashSet}; use std::sync::atomic::{AtomicUsize, Ordering}; use std::sync::Arc; use std::sync::Mutex; use std::sync::OnceLock; use std::time::{Duration, Instant}; fn prepare_json_ref(document: &[u8]) -> Result { let text = std::str::from_utf8(document).map_err(|error| { LixError::new( LixError::CODE_UNKNOWN, format!("benchmark JSON document is invalid UTF-8: {error}"), ) })?; Ok(JsonRef::for_content(text.as_bytes())) } #[derive(Debug, Clone, Copy)] pub struct StorageBenchConfig { pub rows: usize, pub blob_bytes: usize, pub state_payload_bytes: usize, pub key_pattern: StorageBenchKeyPattern, pub selectivity: StorageBenchSelectivity, pub update_fraction: StorageBenchUpdateFraction, } impl StorageBenchConfig { pub fn with_rows(mut self, rows: usize) -> Self { self.rows = rows; self } pub fn with_blob_bytes(mut self, blob_bytes: usize) -> Self { self.blob_bytes = blob_bytes; self } pub fn with_state_payload_bytes(mut self, state_payload_bytes: usize) -> Self { self.state_payload_bytes = state_payload_bytes; self } pub fn with_key_pattern(mut self, key_pattern: StorageBenchKeyPattern) -> Self { self.key_pattern = key_pattern; self } pub fn with_selectivity(mut self, selectivity: StorageBenchSelectivity) -> Self { self.selectivity = selectivity; self } pub fn with_update_fraction(mut self, update_fraction: StorageBenchUpdateFraction) -> Self { self.update_fraction = update_fraction; self } } #[derive(Debug, Clone, Copy)] pub enum StorageBenchKeyPattern { Sequential, Random, } #[derive(Debug, Clone, Copy)] pub enum StorageBenchSelectivity { Percent1, Percent10, Percent100, } impl StorageBenchSelectivity { fn matches(self, index: usize) -> bool { match self { Self::Percent1 => index % 100 == 0, Self::Percent10 => index % 10 == 0, Self::Percent100 => true, } } fn expected_rows(self, rows: usize) -> usize { (0..rows).filter(|index| self.matches(*index)).count() } } #[derive(Debug, Clone, Copy)] pub enum StorageBenchUpdateFraction { Percent10, Percent100, } impl StorageBenchUpdateFraction { fn rows(self, total_rows: usize) -> usize { match self { Self::Percent10 => total_rows.div_ceil(10), Self::Percent100 => total_rows, } } } #[derive(Debug, Clone, Copy)] pub struct StorageBenchReport { pub measured_rows: usize, pub verified_rows: usize, pub elapsed: Duration, } #[derive(Debug, Clone, Default, PartialEq, Eq)] pub struct TransactionBenchCounters { pub rows_staged: usize, pub untracked_rows: usize, pub validation_version_count: usize, pub schema_catalog_loads: usize, pub json_store_stage_bytes_calls: usize, pub unique_json_refs: usize, } #[derive(Debug, Clone, Default, PartialEq, Eq)] pub struct TransactionAccountingReport { pub counters: TransactionBenchCounters, pub storage_write_batches: usize, pub kv_puts_by_namespace: BTreeMap, pub bytes_by_namespace: BTreeMap, } pub struct StorageApiFixture { storage: StorageContext, rows: usize, } pub struct 
TransactionBenchFixture { storage: StorageContext, live_state: Arc, tracked_state: Arc, binary_cas: Arc, commit_store: Arc, version_ctx: Arc, catalog_context: Arc, rows: Vec, } pub struct TransactionCommitOnlyFixture { runtime_functions: crate::functions::FunctionContext, transaction: crate::transaction::Transaction, rows: usize, } static TRANSACTION_ROWS_STAGED: AtomicUsize = AtomicUsize::new(0); static TRANSACTION_UNTRACKED_ROWS: AtomicUsize = AtomicUsize::new(0); static TRANSACTION_VALIDATION_VERSION_COUNT: AtomicUsize = AtomicUsize::new(0); static TRANSACTION_SCHEMA_CATALOG_LOADS: AtomicUsize = AtomicUsize::new(0); static JSON_STORE_STAGE_BYTES_CALLS: AtomicUsize = AtomicUsize::new(0); static JSON_STORE_UNIQUE_REFS: OnceLock>> = OnceLock::new(); const STORAGE_API_NAMESPACE: &str = "bench.storage_api"; const STORAGE_API_ALT_NAMESPACE: &str = "bench.storage_api.alt"; const TRANSACTION_BENCH_SCHEMA_KEY: &str = "bench_transaction_entity"; pub fn reset_transaction_bench_counters() { TRANSACTION_ROWS_STAGED.store(0, Ordering::Relaxed); TRANSACTION_UNTRACKED_ROWS.store(0, Ordering::Relaxed); TRANSACTION_VALIDATION_VERSION_COUNT.store(0, Ordering::Relaxed); TRANSACTION_SCHEMA_CATALOG_LOADS.store(0, Ordering::Relaxed); JSON_STORE_STAGE_BYTES_CALLS.store(0, Ordering::Relaxed); json_store_unique_refs() .lock() .expect("json store unique ref counter mutex should lock") .clear(); } pub fn transaction_bench_counters() -> TransactionBenchCounters { TransactionBenchCounters { rows_staged: TRANSACTION_ROWS_STAGED.load(Ordering::Relaxed), untracked_rows: TRANSACTION_UNTRACKED_ROWS.load(Ordering::Relaxed), validation_version_count: TRANSACTION_VALIDATION_VERSION_COUNT.load(Ordering::Relaxed), schema_catalog_loads: TRANSACTION_SCHEMA_CATALOG_LOADS.load(Ordering::Relaxed), json_store_stage_bytes_calls: JSON_STORE_STAGE_BYTES_CALLS.load(Ordering::Relaxed), unique_json_refs: json_store_unique_refs() .lock() .expect("json store unique ref counter mutex should lock") .len(), } } pub(crate) fn record_transaction_rows_staged(rows: usize) { TRANSACTION_ROWS_STAGED.fetch_add(rows, Ordering::Relaxed); } pub(crate) fn record_transaction_untracked_rows(rows: usize) { TRANSACTION_UNTRACKED_ROWS.fetch_add(rows, Ordering::Relaxed); } pub(crate) fn record_transaction_validation_version() { TRANSACTION_VALIDATION_VERSION_COUNT.fetch_add(1, Ordering::Relaxed); } pub(crate) fn record_transaction_schema_catalog_load() { TRANSACTION_SCHEMA_CATALOG_LOADS.fetch_add(1, Ordering::Relaxed); } pub(crate) fn record_json_store_stage_bytes(hash: [u8; 32]) { JSON_STORE_STAGE_BYTES_CALLS.fetch_add(1, Ordering::Relaxed); json_store_unique_refs() .lock() .expect("json store unique ref counter mutex should lock") .insert(hash); } fn json_store_unique_refs() -> &'static Mutex> { JSON_STORE_UNIQUE_REFS.get_or_init(|| Mutex::new(HashSet::new())) } pub async fn prepare_transaction_commit_empty( backend: Arc, ) -> Result { prepare_transaction_fixture(backend, Vec::new()).await } pub async fn prepare_transaction_commit_schema_only( backend: Arc, ) -> Result { prepare_transaction_fixture(backend, vec![transaction_registered_schema_row()]).await } pub async fn prepare_transaction_commit_entities_no_payload( backend: Arc, rows: usize, ) -> Result { prepare_transaction_fixture( backend, transaction_entity_rows(TransactionEntityRows { rows, payload_bytes: 0, payload_pattern: TransactionPayloadPattern::Unique, metadata_pattern: TransactionPayloadPattern::None, untracked: false, key_prefix: "entity-no-payload", }), ) .await } pub async fn 
prepare_transaction_commit_entities_payload_1k_unique( backend: Arc, rows: usize, ) -> Result { prepare_transaction_payload_fixture( backend, rows, 1024, TransactionPayloadPattern::Unique, false, "entity-payload-1k-unique", ) .await } pub async fn prepare_transaction_commit_entities_payload_1k_same( backend: Arc, rows: usize, ) -> Result { prepare_transaction_payload_fixture( backend, rows, 1024, TransactionPayloadPattern::Same, false, "entity-payload-1k-same", ) .await } pub async fn prepare_transaction_commit_entities_payload_1k_half_duplicate( backend: Arc, rows: usize, ) -> Result { prepare_transaction_payload_fixture( backend, rows, 1024, TransactionPayloadPattern::HalfDuplicate, false, "entity-payload-1k-half-duplicate", ) .await } pub async fn prepare_transaction_commit_entities_metadata_1k_same( backend: Arc, rows: usize, ) -> Result { prepare_transaction_fixture( backend, transaction_entity_rows(TransactionEntityRows { rows, payload_bytes: 0, payload_pattern: TransactionPayloadPattern::Unique, metadata_pattern: TransactionPayloadPattern::Same, untracked: false, key_prefix: "entity-metadata-1k-same", }), ) .await } pub async fn prepare_transaction_commit_entities_payload_16k_unique( backend: Arc, rows: usize, ) -> Result { prepare_transaction_payload_fixture( backend, rows, 16 * 1024, TransactionPayloadPattern::Unique, false, "entity-payload-16k-unique", ) .await } pub async fn prepare_transaction_commit_untracked_payload_1k_same( backend: Arc, rows: usize, ) -> Result { prepare_transaction_payload_fixture( backend, rows, 1024, TransactionPayloadPattern::Same, true, "untracked-payload-1k-same", ) .await } pub async fn prepare_transaction_update_existing_payload_1k( backend: Arc, root_rows: usize, update_rows: usize, ) -> Result { let fixture = prepare_transaction_payload_fixture( backend, root_rows, 1024, TransactionPayloadPattern::Unique, false, "update-existing-root", ) .await?; transaction_commit_prepared(&fixture).await?; let rows = transaction_entity_rows(TransactionEntityRows { rows: update_rows, payload_bytes: 1024, payload_pattern: TransactionPayloadPattern::Unique, metadata_pattern: TransactionPayloadPattern::None, untracked: false, key_prefix: "update-existing-root", }); Ok(TransactionBenchFixture { rows, ..fixture }) } pub async fn transaction_commit_prepared( fixture: &TransactionBenchFixture, ) -> Result { let opened = open_transaction( &SessionMode::Pinned { version_id: crate::GLOBAL_VERSION_ID.to_string(), }, fixture.storage.clone(), Arc::clone(&fixture.live_state), Arc::clone(&fixture.tracked_state), Arc::clone(&fixture.binary_cas), Arc::clone(&fixture.commit_store), Arc::clone(&fixture.version_ctx), Arc::clone(&fixture.catalog_context), ) .await?; let mut transaction = opened.transaction; let runtime_functions = opened.runtime_functions; let started_at = Instant::now(); if !fixture.rows.is_empty() { transaction .stage_write(TransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: fixture.rows.clone(), }) .await?; } transaction.commit(&runtime_functions).await?; Ok(StorageBenchReport { measured_rows: fixture.rows.len(), verified_rows: fixture.rows.len(), elapsed: started_at.elapsed(), }) } pub async fn transaction_open_empty_prepared( fixture: &TransactionBenchFixture, ) -> Result { let started_at = Instant::now(); let opened = open_transaction( &SessionMode::Pinned { version_id: crate::GLOBAL_VERSION_ID.to_string(), }, fixture.storage.clone(), Arc::clone(&fixture.live_state), Arc::clone(&fixture.tracked_state), Arc::clone(&fixture.binary_cas), 
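// remaining shared handles passed to open_transaction: commit store, version context, schema catalog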
Arc::clone(&fixture.commit_store), Arc::clone(&fixture.version_ctx), Arc::clone(&fixture.catalog_context), ) .await?; let elapsed = started_at.elapsed(); opened.transaction.rollback().await?; Ok(StorageBenchReport { measured_rows: 0, verified_rows: 0, elapsed, }) } pub async fn transaction_stage_only_prepared( fixture: &TransactionBenchFixture, ) -> Result { let opened = open_transaction( &SessionMode::Pinned { version_id: crate::GLOBAL_VERSION_ID.to_string(), }, fixture.storage.clone(), Arc::clone(&fixture.live_state), Arc::clone(&fixture.tracked_state), Arc::clone(&fixture.binary_cas), Arc::clone(&fixture.commit_store), Arc::clone(&fixture.version_ctx), Arc::clone(&fixture.catalog_context), ) .await?; let mut transaction = opened.transaction; let started_at = Instant::now(); if !fixture.rows.is_empty() { transaction .stage_write(TransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: fixture.rows.clone(), }) .await?; } let elapsed = started_at.elapsed(); transaction.rollback().await?; Ok(StorageBenchReport { measured_rows: fixture.rows.len(), verified_rows: fixture.rows.len(), elapsed, }) } pub async fn prepare_transaction_commit_only( fixture: TransactionBenchFixture, ) -> Result { let opened = open_transaction( &SessionMode::Pinned { version_id: crate::GLOBAL_VERSION_ID.to_string(), }, fixture.storage.clone(), Arc::clone(&fixture.live_state), Arc::clone(&fixture.tracked_state), Arc::clone(&fixture.binary_cas), Arc::clone(&fixture.commit_store), Arc::clone(&fixture.version_ctx), Arc::clone(&fixture.catalog_context), ) .await?; let mut transaction = opened.transaction; let runtime_functions = opened.runtime_functions; let rows = fixture.rows.len(); if !fixture.rows.is_empty() { transaction .stage_write(TransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: fixture.rows, }) .await?; } Ok(TransactionCommitOnlyFixture { runtime_functions, transaction, rows, }) } pub async fn transaction_commit_only_prepared( fixture: TransactionCommitOnlyFixture, ) -> Result { let rows = fixture.rows; let started_at = Instant::now(); fixture .transaction .commit(&fixture.runtime_functions) .await?; Ok(StorageBenchReport { measured_rows: rows, verified_rows: rows, elapsed: started_at.elapsed(), }) } async fn prepare_transaction_payload_fixture( backend: Arc, rows: usize, payload_bytes: usize, payload_pattern: TransactionPayloadPattern, untracked: bool, key_prefix: &'static str, ) -> Result { prepare_transaction_fixture( backend, transaction_entity_rows(TransactionEntityRows { rows, payload_bytes, payload_pattern, metadata_pattern: TransactionPayloadPattern::None, untracked, key_prefix, }), ) .await } async fn prepare_transaction_fixture( backend: Arc, rows: Vec, ) -> Result { let storage = StorageContext::new(backend); let tracked_state = Arc::new(TrackedStateContext::new()); let untracked_state = Arc::new(UntrackedStateContext::new()); let commit_store = Arc::new(CommitStoreContext::new()); let live_state = Arc::new(LiveStateContext::new( tracked_state.as_ref().clone(), untracked_state.as_ref().clone(), crate::commit_graph::CommitGraphContext::new(), )); let binary_cas = Arc::new(BinaryCasContext::new()); let version_ctx = Arc::new(VersionContext::new(untracked_state)); let catalog_context = Arc::new(CatalogContext::new()); seed_transaction_visible_schema_rows(storage.clone()).await?; Ok(TransactionBenchFixture { storage, live_state, tracked_state, binary_cas, commit_store, version_ctx, catalog_context, rows, }) } async fn seed_transaction_visible_schema_rows(storage: 
StorageContext) -> Result<(), LixError> { let mut writes = StorageWriteSet::new(); let rows = crate::schema::seed_schema_definitions() .into_iter() .cloned() .chain(std::iter::once(transaction_entity_schema_definition())) .map(|schema| { let key = crate::schema::schema_key_from_definition(&schema) .expect("seed schema key should derive"); let snapshot_content = serde_json::json!({ "value": schema }).to_string(); Ok(crate::untracked_state::UntrackedStateRow { entity_id: crate::schema::registered_schema_entity_id(&key.schema_key) .expect("registered schema identity should derive"), schema_key: "lix_registered_schema".to_string(), file_id: None, version_id: crate::GLOBAL_VERSION_ID.to_string(), snapshot_content: Some(snapshot_content), metadata: None, created_at: "1970-01-01T00:00:00.000Z".to_string(), updated_at: "1970-01-01T00:00:00.000Z".to_string(), global: true, }) }) .collect::, LixError>>()?; let mut transaction = storage.begin_write_transaction().await?; UntrackedStateContext::new() .writer(&mut writes) .stage_rows(rows.iter().map(|row| row.as_ref()))?; writes.apply(&mut transaction.as_mut()).await?; transaction.commit().await } fn transaction_entity_schema_definition() -> serde_json::Value { serde_json::json!({ "x-lix-key": TRANSACTION_BENCH_SCHEMA_KEY, "type": "object", "properties": { "value": { "anyOf": [ { "type": "string" }, { "type": "object" }, { "type": "array" }, { "type": "number" }, { "type": "boolean" }, { "type": "null" } ] } }, "required": ["value"], "additionalProperties": false }) } #[derive(Debug, Clone, Copy)] enum TransactionPayloadPattern { None, Unique, Same, HalfDuplicate, } struct TransactionEntityRows { rows: usize, payload_bytes: usize, payload_pattern: TransactionPayloadPattern, metadata_pattern: TransactionPayloadPattern, untracked: bool, key_prefix: &'static str, } fn transaction_entity_rows(config: TransactionEntityRows) -> Vec { (0..config.rows) .map(|index| { let key = format!("{}-{index:06}", config.key_prefix); let value_index = payload_pattern_index(config.payload_pattern, index); let metadata_index = payload_pattern_index(config.metadata_pattern, index); TransactionWriteRow { entity_id: Some(EntityIdentity::single(key.clone())), schema_key: TRANSACTION_BENCH_SCHEMA_KEY.to_string(), file_id: None, snapshot: Some(transaction_snapshot_json( &key, value_index, config.payload_bytes, )), metadata: transaction_metadata(config.metadata_pattern, metadata_index), origin: None, created_at: None, updated_at: None, global: true, change_id: None, commit_id: None, untracked: config.untracked, version_id: crate::GLOBAL_VERSION_ID.to_string(), } }) .collect() } fn transaction_registered_schema_row() -> TransactionWriteRow { let schema = serde_json::json!({ "x-lix-key": "bench_transaction_schema", "x-lix-primary-key": ["/id"], "type": "object", "properties": { "id": { "type": "string" }, "value": { "type": "string" } }, "required": ["id", "value"], "additionalProperties": false }); let key = crate::schema::schema_key_from_definition(&schema).expect("seed schema key should derive"); TransactionWriteRow { entity_id: Some( crate::schema::registered_schema_entity_id(&key.schema_key) .expect("registered schema identity should derive"), ), schema_key: "lix_registered_schema".to_string(), file_id: None, snapshot: Some(TransactionJson::from_value_unchecked( serde_json::json!({ "value": schema }), )), metadata: None, origin: None, created_at: None, updated_at: None, global: true, change_id: None, commit_id: None, untracked: false, version_id: 
crate::GLOBAL_VERSION_ID.to_string(), } } fn transaction_snapshot_json( _key: &str, payload_index: usize, target_bytes: usize, ) -> TransactionJson { let base_value = format!("/entities/{payload_index}/value"); let value = if target_bytes == 0 { base_value } else { let current = serde_json::json!({ "value": base_value, }) .to_string() .len(); let padding = target_bytes.saturating_sub(current); format!("{base_value}:{}", "x".repeat(padding)) }; let mut object = serde_json::Map::new(); object.insert("value".to_string(), serde_json::Value::String(value)); TransactionJson::from_value_unchecked(serde_json::Value::Object(object)) } fn transaction_metadata( pattern: TransactionPayloadPattern, metadata_index: usize, ) -> Option { match pattern { TransactionPayloadPattern::None => None, TransactionPayloadPattern::Unique | TransactionPayloadPattern::Same | TransactionPayloadPattern::HalfDuplicate => { let mut object = serde_json::Map::new(); object.insert( "source".to_string(), serde_json::Value::String("transaction-bench".to_string()), ); object.insert( "metadata_index".to_string(), serde_json::Value::String(metadata_index.to_string()), ); pad_json_object(&mut object, 1024); Some(TransactionJson::from_value_unchecked( serde_json::Value::Object(object), )) } } } fn payload_pattern_index(pattern: TransactionPayloadPattern, index: usize) -> usize { match pattern { TransactionPayloadPattern::None | TransactionPayloadPattern::Unique => index, TransactionPayloadPattern::Same => 0, TransactionPayloadPattern::HalfDuplicate => index % 2, } } pub async fn storage_api_write_kv_batch_puts( backend: Arc, rows: usize, ) -> Result { let storage = StorageContext::new(backend); let mut transaction = storage.begin_write_transaction().await?; let mut batch = KvWriteBatch::new(); for index in 0..rows { batch.put( STORAGE_API_NAMESPACE, storage_api_key(index), storage_api_value(index), ); } let started_at = Instant::now(); let stats = transaction.write_kv_batch(batch).await?; transaction.commit().await?; Ok(StorageBenchReport { measured_rows: stats.puts, verified_rows: rows, elapsed: started_at.elapsed(), }) } pub async fn storage_api_write_kv_batch_mixed_put_delete( backend: Arc, rows: usize, ) -> Result { let fixture = prepare_storage_api_read(backend, rows).await?; let mut transaction = fixture.storage.begin_write_transaction().await?; let mut batch = KvWriteBatch::new(); for index in 0..rows { if index % 2 == 0 { batch.put( STORAGE_API_NAMESPACE, storage_api_key(index), storage_api_updated_value(index), ); } else { batch.delete(STORAGE_API_NAMESPACE, storage_api_key(index)); } } let started_at = Instant::now(); let stats = transaction.write_kv_batch(batch).await?; transaction.commit().await?; Ok(StorageBenchReport { measured_rows: stats.puts + stats.deletes, verified_rows: rows, elapsed: started_at.elapsed(), }) } pub async fn storage_api_write_kv_batch_multi_namespace( backend: Arc, rows: usize, ) -> Result { let storage = StorageContext::new(backend); let mut transaction = storage.begin_write_transaction().await?; let mut batch = KvWriteBatch::new(); for index in 0..rows { let namespace = if index % 2 == 0 { STORAGE_API_NAMESPACE } else { STORAGE_API_ALT_NAMESPACE }; batch.put(namespace, storage_api_key(index), storage_api_value(index)); } let started_at = Instant::now(); let stats = transaction.write_kv_batch(batch).await?; transaction.commit().await?; Ok(StorageBenchReport { measured_rows: stats.puts, verified_rows: rows, elapsed: started_at.elapsed(), }) } pub async fn 
storage_api_write_kv_batch_duplicate_keys( backend: Arc, rows: usize, ) -> Result { let storage = StorageContext::new(backend); let mut transaction = storage.begin_write_transaction().await?; let mut batch = KvWriteBatch::new(); for index in 0..rows { batch.put( STORAGE_API_NAMESPACE, storage_api_key(index % 100), storage_api_value(index), ); } let started_at = Instant::now(); let stats = transaction.write_kv_batch(batch).await?; transaction.commit().await?; Ok(StorageBenchReport { measured_rows: stats.puts, verified_rows: rows, elapsed: started_at.elapsed(), }) } pub async fn storage_api_write_kv_batch_value_size( backend: Arc, rows: usize, value_bytes: usize, ) -> Result { let storage = StorageContext::new(backend); let mut transaction = storage.begin_write_transaction().await?; let mut batch = KvWriteBatch::new(); for index in 0..rows { batch.put( STORAGE_API_NAMESPACE, storage_api_key(index), storage_api_value_with_bytes(index, value_bytes), ); } let started_at = Instant::now(); let stats = transaction.write_kv_batch(batch).await?; transaction.commit().await?; Ok(StorageBenchReport { measured_rows: stats.puts, verified_rows: rows, elapsed: started_at.elapsed(), }) } pub async fn storage_api_write_and_commit( backend: Arc, rows: usize, ) -> Result { let storage = StorageContext::new(backend); let started_at = Instant::now(); let mut transaction = storage.begin_write_transaction().await?; let mut batch = KvWriteBatch::new(); for index in 0..rows { batch.put( STORAGE_API_NAMESPACE, storage_api_key(index), storage_api_value(index), ); } let stats = transaction.write_kv_batch(batch).await?; transaction.commit().await?; Ok(StorageBenchReport { measured_rows: stats.puts, verified_rows: rows, elapsed: started_at.elapsed(), }) } pub async fn storage_api_rollback_after_write( backend: Arc, rows: usize, ) -> Result { let storage = StorageContext::new(backend); let started_at = Instant::now(); let mut transaction = storage.begin_write_transaction().await?; let mut batch = KvWriteBatch::new(); for index in 0..rows { batch.put( STORAGE_API_NAMESPACE, storage_api_key(index), storage_api_value(index), ); } let stats = transaction.write_kv_batch(batch).await?; transaction.rollback().await?; Ok(StorageBenchReport { measured_rows: stats.puts, verified_rows: rows, elapsed: started_at.elapsed(), }) } pub async fn prepare_storage_api_read( backend: Arc, rows: usize, ) -> Result { let storage = StorageContext::new(backend); let mut transaction = storage.begin_write_transaction().await?; let mut batch = KvWriteBatch::new(); for index in 0..rows { batch.put( STORAGE_API_NAMESPACE, storage_api_key(index), storage_api_value(index), ); } transaction.write_kv_batch(batch).await?; transaction.commit().await?; Ok(StorageApiFixture { storage, rows }) } pub async fn storage_api_get_values_hits_prepared( fixture: &StorageApiFixture, reads: usize, ) -> Result { let mut transaction = fixture.storage.begin_read_transaction().await?; let keys = (0..reads) .map(|index| storage_api_key(index % fixture.rows)) .collect::>(); let started_at = Instant::now(); let result = transaction .get_values(KvGetRequest { groups: vec![KvGetGroup { namespace: STORAGE_API_NAMESPACE.to_string(), keys, }], }) .await?; transaction.rollback().await?; let verified_rows = result.groups[0] .values_iter() .filter(|value| value.is_some()) .count(); Ok(StorageBenchReport { measured_rows: reads, verified_rows, elapsed: started_at.elapsed(), }) } pub async fn storage_api_exists_many_prepared( fixture: &StorageApiFixture, reads: usize, ) -> Result { let 
mut transaction = fixture.storage.begin_read_transaction().await?; let keys = (0..reads) .map(|index| storage_api_key(index % fixture.rows)) .collect::>(); let started_at = Instant::now(); let result = transaction .exists_many(KvGetRequest { groups: vec![KvGetGroup { namespace: STORAGE_API_NAMESPACE.to_string(), keys, }], }) .await?; transaction.rollback().await?; let verified_rows = result.groups[0] .exists .iter() .filter(|exists| **exists) .count(); Ok(StorageBenchReport { measured_rows: reads, verified_rows, elapsed: started_at.elapsed(), }) } pub async fn storage_api_get_values_misses_prepared( fixture: &StorageApiFixture, reads: usize, ) -> Result { let mut transaction = fixture.storage.begin_read_transaction().await?; let keys = (0..reads) .map(|index| storage_api_missing_key(index)) .collect::>(); let started_at = Instant::now(); let result = transaction .get_values(KvGetRequest { groups: vec![KvGetGroup { namespace: STORAGE_API_NAMESPACE.to_string(), keys, }], }) .await?; transaction.rollback().await?; let verified_rows = result.groups[0] .values_iter() .filter(|value| value.is_none()) .count(); Ok(StorageBenchReport { measured_rows: reads, verified_rows, elapsed: started_at.elapsed(), }) } pub async fn storage_api_get_values_mixed_hit_miss_prepared( fixture: &StorageApiFixture, reads: usize, ) -> Result { let mut transaction = fixture.storage.begin_read_transaction().await?; let keys = (0..reads) .map(|index| { if index % 2 == 0 { storage_api_key(index % fixture.rows) } else { storage_api_missing_key(index) } }) .collect::>(); let started_at = Instant::now(); let result = transaction .get_values(KvGetRequest { groups: vec![KvGetGroup { namespace: STORAGE_API_NAMESPACE.to_string(), keys, }], }) .await?; transaction.rollback().await?; let verified_rows = result.groups[0] .values_iter() .filter(|value| value.is_some()) .count(); Ok(StorageBenchReport { measured_rows: reads, verified_rows, elapsed: started_at.elapsed(), }) } pub async fn storage_api_get_values_multi_namespace( backend: Arc, reads: usize, ) -> Result { let storage = StorageContext::new(backend); let mut transaction = storage.begin_write_transaction().await?; let mut batch = KvWriteBatch::new(); for index in 0..reads { let namespace = if index % 2 == 0 { STORAGE_API_NAMESPACE } else { STORAGE_API_ALT_NAMESPACE }; batch.put(namespace, storage_api_key(index), storage_api_value(index)); } transaction.write_kv_batch(batch).await?; transaction.commit().await?; let mut transaction = storage.begin_read_transaction().await?; let even_keys = (0..reads) .step_by(2) .map(storage_api_key) .collect::>(); let odd_keys = (1..reads) .step_by(2) .map(storage_api_key) .collect::>(); let started_at = Instant::now(); let result = transaction .get_values(KvGetRequest { groups: vec![ KvGetGroup { namespace: STORAGE_API_NAMESPACE.to_string(), keys: even_keys, }, KvGetGroup { namespace: STORAGE_API_ALT_NAMESPACE.to_string(), keys: odd_keys, }, ], }) .await?; transaction.rollback().await?; let verified_rows = result .groups .iter() .map(|group| group.values_iter().filter(|value| value.is_some()).count()) .sum(); Ok(StorageBenchReport { measured_rows: reads, verified_rows, elapsed: started_at.elapsed(), }) } pub async fn storage_api_get_values_duplicate_keys_prepared( fixture: &StorageApiFixture, reads: usize, ) -> Result { let mut transaction = fixture.storage.begin_read_transaction().await?; let keys = (0..reads) .map(|index| storage_api_key(index % 100)) .collect::>(); let started_at = Instant::now(); let result = transaction 
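// the key list cycles over only 100 distinct keys, so this measures repeated hits
// on a small hot set rather than distinct-key lookups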
.get_values(KvGetRequest { groups: vec![KvGetGroup { namespace: STORAGE_API_NAMESPACE.to_string(), keys, }], }) .await?; transaction.rollback().await?; let verified_rows = result.groups[0] .values_iter() .filter(|value| value.is_some()) .count(); Ok(StorageBenchReport { measured_rows: reads, verified_rows, elapsed: started_at.elapsed(), }) } pub async fn storage_api_scan_keys_prefix_prepared( fixture: &StorageApiFixture, limit: usize, ) -> Result { let mut transaction = fixture.storage.begin_read_transaction().await?; let started_at = Instant::now(); let result = transaction .scan_keys(KvScanRequest { namespace: STORAGE_API_NAMESPACE.to_string(), range: KvScanRange::prefix(b"key/".to_vec()), after: None, limit, }) .await?; transaction.rollback().await?; Ok(StorageBenchReport { measured_rows: result.keys.len(), verified_rows: limit.min(fixture.rows), elapsed: started_at.elapsed(), }) } pub async fn storage_api_scan_keys_after_pages_prepared( fixture: &StorageApiFixture, page_size: usize, ) -> Result { let mut transaction = fixture.storage.begin_read_transaction().await?; let started_at = Instant::now(); let mut after = None; let mut measured_rows = 0usize; loop { let result = transaction .scan_keys(KvScanRequest { namespace: STORAGE_API_NAMESPACE.to_string(), range: KvScanRange::prefix(b"key/".to_vec()), after, limit: page_size, }) .await?; if result.keys.is_empty() { break; } measured_rows += result.keys.len(); let Some(resume_after) = result.resume_after else { break; }; after = Some(resume_after); } transaction.rollback().await?; Ok(StorageBenchReport { measured_rows, verified_rows: fixture.rows, elapsed: started_at.elapsed(), }) } pub async fn storage_api_scan_keys_empty_range_prepared( fixture: &StorageApiFixture, ) -> Result { let mut transaction = fixture.storage.begin_read_transaction().await?; let started_at = Instant::now(); let result = transaction .scan_keys(KvScanRequest { namespace: STORAGE_API_NAMESPACE.to_string(), range: KvScanRange::prefix(b"absent/".to_vec()), after: None, limit: fixture.rows, }) .await?; transaction.rollback().await?; Ok(StorageBenchReport { measured_rows: result.keys.len(), verified_rows: 0, elapsed: started_at.elapsed(), }) } pub async fn prepare_storage_api_selective_scan( backend: Arc, rows: usize, selectivity: StorageBenchSelectivity, ) -> Result { let storage = StorageContext::new(backend); let mut transaction = storage.begin_write_transaction().await?; let mut batch = KvWriteBatch::new(); for index in 0..rows { let key = if selectivity.matches(index) { storage_api_selective_key(index) } else { storage_api_key(index) }; batch.put(STORAGE_API_NAMESPACE, key, storage_api_value(index)); } transaction.write_kv_batch(batch).await?; transaction.commit().await?; Ok(StorageApiFixture { storage, rows }) } pub async fn storage_api_scan_keys_selective_prefix_prepared( fixture: &StorageApiFixture, selectivity: StorageBenchSelectivity, ) -> Result { let mut transaction = fixture.storage.begin_read_transaction().await?; let started_at = Instant::now(); let result = transaction .scan_keys(KvScanRequest { namespace: STORAGE_API_NAMESPACE.to_string(), range: KvScanRange::prefix(b"selective/".to_vec()), after: None, limit: fixture.rows, }) .await?; transaction.rollback().await?; Ok(StorageBenchReport { measured_rows: result.keys.len(), verified_rows: selectivity.expected_rows(fixture.rows), elapsed: started_at.elapsed(), }) } pub async fn storage_api_transaction_commit_empty( backend: Arc, ) -> Result { let storage = StorageContext::new(backend); let started_at = 
Instant::now(); let transaction = storage.begin_write_transaction().await?; transaction.commit().await?; Ok(StorageBenchReport { measured_rows: 0, verified_rows: 0, elapsed: started_at.elapsed(), }) } fn storage_api_key(index: usize) -> Vec { format!("key/{index:08}").into_bytes() } fn storage_api_selective_key(index: usize) -> Vec { format!("selective/{index:08}").into_bytes() } fn storage_api_missing_key(index: usize) -> Vec { format!("missing/{index:08}").into_bytes() } fn storage_api_value(index: usize) -> Vec { format!("value/{index:08}/{}", "x".repeat(64)).into_bytes() } fn storage_api_value_with_bytes(index: usize, value_bytes: usize) -> Vec { let prefix = format!("value/{index:08}/"); if value_bytes <= prefix.len() { return prefix.into_bytes(); } let mut value = prefix.into_bytes(); value.extend(std::iter::repeat_n(b'x', value_bytes - value.len())); value } fn storage_api_updated_value(index: usize) -> Vec { format!("updated/{index:08}/{}", "y".repeat(64)).into_bytes() } pub struct TrackedStateWriteRootFixture { context: TrackedStateContext, rows: Vec, } pub struct TrackedStateReadFixture { context: TrackedStateContext, rows: usize, commit_id: String, key_pattern: StorageBenchKeyPattern, selectivity: StorageBenchSelectivity, } pub struct TrackedStateUpdateFixture { context: TrackedStateContext, rows: Vec, } pub struct TrackedStateDiffFixture { context: TrackedStateContext, left_commit_id: String, right_commit_id: String, expected_entries: usize, } pub struct TrackedStateMaterializeFixture { context: TrackedStateContext, commit_id: String, expected_rows: usize, } #[derive(Clone)] pub struct JsonPointerStorageRow { pub path: String, pub value_json: String, pub updated_value_json: String, } pub struct JsonPointerTrackedStateReadFixture { context: TrackedStateContext, rows: Vec, commit_id: String, } pub struct JsonPointerTrackedStateDiffFixture { context: TrackedStateContext, left_commit_id: String, right_commit_id: String, expected_entries: usize, } pub struct UntrackedStateWriteFixture { context: UntrackedStateContext, rows: Vec, } pub struct UntrackedStateReadFixture { context: UntrackedStateContext, rows: usize, key_pattern: StorageBenchKeyPattern, selectivity: StorageBenchSelectivity, } pub struct ChangelogAppendFixture { context: CommitStoreContext, changes: Vec, } pub struct ChangelogReadFixture { context: CommitStoreContext, rows: usize, } pub struct ChangelogCodecFixture { changes: Vec, encoded_changes: Vec>, } pub struct CommitGraphReadFixture { head_commit_id: String, rows: usize, } pub struct BinaryCasWriteFixture { context: BinaryCasContext, file_ids: Vec, payloads: Vec>, } pub struct BinaryCasReadFixture { context: BinaryCasContext, rows: usize, hashes: Vec, } #[derive(Debug, Clone, Copy)] pub enum JsonStorePayloadShape { SmallRaw1k, MediumStructured16k, LargeStructured128k, LargeArray128k, } #[derive(Debug, Clone, Copy)] pub enum JsonStoreProjectionShape { TopLevelTarget, TopLevelTenProps, NestedTarget, ArrayItem999, Status, } pub struct JsonStoreWriteFixture { context: JsonStoreContext, documents: Vec>, } pub struct JsonStoreReadFixture { context: JsonStoreContext, refs: Vec, paths: Vec, } pub async fn prepare_tracked_state_write_root( config: StorageBenchConfig, ) -> Result { Ok(TrackedStateWriteRootFixture { context: TrackedStateContext::new(), rows: tracked_rows(config, "bench-tracked-commit"), }) } pub async fn tracked_state_write_root_prepared( backend: &Arc, fixture: &TrackedStateWriteRootFixture, ) -> Result { write_tracked_root( backend, &fixture.context, 
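// arguments that follow: the root commit id, no parent commit, then the pre-built benchmark rows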
"bench-tracked-commit", None, &fixture.rows, ) .await?; Ok(report( fixture.rows.len(), fixture.rows.len(), Duration::ZERO, )) } pub async fn prepare_tracked_state_read( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = TrackedStateContext::new(); let rows = tracked_rows(config, "bench-tracked-commit"); write_tracked_root(backend, &context, "bench-tracked-commit", None, &rows).await?; Ok(TrackedStateReadFixture { context, rows: config.rows, commit_id: "bench-tracked-commit".to_string(), key_pattern: config.key_pattern, selectivity: config.selectivity, }) } pub async fn prepare_tracked_state_read_file_selective( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = TrackedStateContext::new(); let rows = tracked_rows_file_selective(config, "bench-tracked-commit"); write_tracked_root(backend, &context, "bench-tracked-commit", None, &rows).await?; Ok(TrackedStateReadFixture { context, rows: config.rows, commit_id: "bench-tracked-commit".to_string(), key_pattern: config.key_pattern, selectivity: config.selectivity, }) } pub async fn prepare_tracked_state_read_after_update_rows( backend: &Arc, config: StorageBenchConfig, updated_rows: usize, ) -> Result { let fixture = prepare_tracked_state_update_rows(backend, config, updated_rows).await?; tracked_state_update_existing_prepared(backend, &fixture).await?; Ok(TrackedStateReadFixture { context: fixture.context, rows: config.rows, commit_id: "bench-tracked-child".to_string(), key_pattern: config.key_pattern, selectivity: config.selectivity, }) } pub async fn prepare_tracked_state_read_delta_chain( backend: &Arc, config: StorageBenchConfig, delta_commits: usize, updated_rows_per_commit: usize, ) -> Result { let (context, final_commit_id) = write_tracked_delta_chain(backend, config, delta_commits, updated_rows_per_commit).await?; Ok(TrackedStateReadFixture { context, rows: config.rows, commit_id: final_commit_id, key_pattern: config.key_pattern, selectivity: config.selectivity, }) } pub async fn prepare_tracked_state_read_materialized_delta_chain( backend: &Arc, config: StorageBenchConfig, delta_commits: usize, updated_rows_per_commit: usize, ) -> Result { let (context, final_commit_id) = write_tracked_delta_chain(backend, config, delta_commits, updated_rows_per_commit).await?; materialize_tracked_root(backend, &context, &final_commit_id).await?; Ok(TrackedStateReadFixture { context, rows: config.rows, commit_id: final_commit_id, key_pattern: config.key_pattern, selectivity: config.selectivity, }) } pub async fn prepare_tracked_state_materialize_delta_chain( backend: &Arc, config: StorageBenchConfig, delta_commits: usize, updated_rows_per_commit: usize, ) -> Result { let (context, final_commit_id) = write_tracked_delta_chain(backend, config, delta_commits, updated_rows_per_commit).await?; Ok(TrackedStateMaterializeFixture { context, commit_id: final_commit_id, expected_rows: config.rows, }) } fn tracked_point_hit_requests( rows: usize, key_pattern: StorageBenchKeyPattern, ) -> Vec { (0..rows) .map(|index| TrackedStateRowRequest { schema_key: tracked_schema_key(index, StorageBenchSelectivity::Percent100), entity_id: EntityIdentity::single(entity_id("tracked", index, key_pattern)), file_id: NullableKeyFilter::Value("bench.json".to_string()), }) .collect() } fn tracked_point_miss_requests( rows: usize, selectivity: StorageBenchSelectivity, ) -> Vec { (0..rows) .map(|index| TrackedStateRowRequest { schema_key: tracked_schema_key(index, selectivity), entity_id: EntityIdentity::single(format!("missing-{index}")), 
file_id: NullableKeyFilter::Value("bench.json".to_string()), }) .collect() } fn tracked_point_miss_requests_for_schema( rows: usize, schema_key: &str, ) -> Vec { (0..rows) .map(|index| TrackedStateRowRequest { schema_key: schema_key.to_string(), entity_id: EntityIdentity::single(format!("missing-{index}")), file_id: NullableKeyFilter::Value("bench.json".to_string()), }) .collect() } pub async fn tracked_state_read_point_hit_prepared( backend: &Arc, fixture: &TrackedStateReadFixture, ) -> Result { let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let requests = tracked_point_hit_requests(fixture.rows, fixture.key_pattern); let verified_rows = reader .load_rows_at_commit(&fixture.commit_id, &requests) .await? .into_iter() .filter(Option::is_some) .count(); Ok(report(fixture.rows, verified_rows, Duration::ZERO)) } pub async fn tracked_state_read_point_hit_constant_prepared( backend: &Arc, fixture: &TrackedStateReadFixture, measured_reads: usize, ) -> Result { let measured_rows = measured_reads.min(fixture.rows); let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let requests = tracked_point_hit_requests(measured_rows, fixture.key_pattern); let verified_rows = reader .load_rows_at_commit(&fixture.commit_id, &requests) .await? .into_iter() .filter(Option::is_some) .count(); Ok(report(measured_rows, verified_rows, Duration::ZERO)) } pub async fn tracked_state_read_point_miss_prepared( backend: &Arc, fixture: &TrackedStateReadFixture, ) -> Result { let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let requests = tracked_point_miss_requests_for_schema(fixture.rows, TRACKED_MATCH_SCHEMA_KEY); let misses = reader .load_rows_at_commit(&fixture.commit_id, &requests) .await? .into_iter() .filter(Option::is_none) .count(); Ok(report(fixture.rows, misses, Duration::ZERO)) } pub async fn tracked_state_scan_all_prepared( backend: &Arc, fixture: &TrackedStateReadFixture, ) -> Result { let verified_rows = scan_tracked(backend, &fixture.context, &fixture.commit_id) .await? .len(); Ok(report(fixture.rows, verified_rows, Duration::ZERO)) } pub async fn tracked_state_scan_keys_only_prepared( backend: &Arc, fixture: &TrackedStateReadFixture, ) -> Result { let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let verified_rows = reader .scan_rows_at_commit( &fixture.commit_id, &TrackedStateScanRequest { projection: TrackedStateProjection { columns: vec!["entity_id".to_string()], }, ..Default::default() }, ) .await? .len(); Ok(report(fixture.rows, verified_rows, Duration::ZERO)) } pub async fn tracked_state_scan_headers_only_prepared( backend: &Arc, fixture: &TrackedStateReadFixture, ) -> Result { let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let verified_rows = reader .scan_rows_at_commit( &fixture.commit_id, &TrackedStateScanRequest { projection: TrackedStateProjection { columns: tracked_state_header_columns(), }, ..Default::default() }, ) .await? 
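// only the projected row count feeds the report; the header-only rows themselves are discarded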
.len(); Ok(report(fixture.rows, verified_rows, Duration::ZERO)) } pub async fn tracked_state_scan_full_rows_prepared( backend: &Arc, fixture: &TrackedStateReadFixture, ) -> Result { tracked_state_scan_all_prepared(backend, fixture).await } pub async fn tracked_state_scan_schema_prepared( backend: &Arc, fixture: &TrackedStateReadFixture, ) -> Result { let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let verified_rows = reader .scan_rows_at_commit( &fixture.commit_id, &TrackedStateScanRequest { filter: TrackedStateFilter { schema_keys: vec![tracked_schema_key(0, StorageBenchSelectivity::Percent100)], ..Default::default() }, ..Default::default() }, ) .await? .len(); Ok(report(fixture.rows, verified_rows, Duration::ZERO)) } pub async fn tracked_state_scan_schema_selective_prepared( backend: &Arc, fixture: &TrackedStateReadFixture, ) -> Result { let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let verified_rows = reader .scan_rows_at_commit( &fixture.commit_id, &TrackedStateScanRequest { filter: TrackedStateFilter { schema_keys: vec![TRACKED_MATCH_SCHEMA_KEY.to_string()], ..Default::default() }, ..Default::default() }, ) .await? .len(); Ok(report( fixture.selectivity.expected_rows(fixture.rows), verified_rows, Duration::ZERO, )) } pub async fn tracked_state_scan_file_prepared( backend: &Arc, fixture: &TrackedStateReadFixture, ) -> Result { let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let verified_rows = reader .scan_rows_at_commit( &fixture.commit_id, &TrackedStateScanRequest { filter: TrackedStateFilter { file_ids: vec![NullableKeyFilter::Value("bench.json".to_string())], ..Default::default() }, ..Default::default() }, ) .await? .len(); Ok(report(fixture.rows, verified_rows, Duration::ZERO)) } pub async fn tracked_state_scan_file_selective_prepared( backend: &Arc, fixture: &TrackedStateReadFixture, ) -> Result { let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let verified_rows = reader .scan_rows_at_commit( &fixture.commit_id, &TrackedStateScanRequest { filter: TrackedStateFilter { file_ids: vec![NullableKeyFilter::Value("bench-match.json".to_string())], ..Default::default() }, ..Default::default() }, ) .await? .len(); Ok(report( fixture.selectivity.expected_rows(fixture.rows), verified_rows, Duration::ZERO, )) } pub async fn tracked_state_scan_file_header_selective_prepared( backend: &Arc, fixture: &TrackedStateReadFixture, ) -> Result { let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let verified_rows = reader .scan_rows_at_commit( &fixture.commit_id, &TrackedStateScanRequest { filter: TrackedStateFilter { file_ids: vec![NullableKeyFilter::Value("bench-match.json".to_string())], ..Default::default() }, projection: TrackedStateProjection { columns: tracked_state_header_columns(), }, ..Default::default() }, ) .await? 
.len(); Ok(report( fixture.selectivity.expected_rows(fixture.rows), verified_rows, Duration::ZERO, )) } pub async fn prepare_tracked_state_update( backend: &Arc, config: StorageBenchConfig, ) -> Result { prepare_tracked_state_update_rows(backend, config, config.update_fraction.rows(config.rows)) .await } pub async fn prepare_tracked_state_update_rows( backend: &Arc, config: StorageBenchConfig, updated_rows: usize, ) -> Result { let context = TrackedStateContext::new(); let rows = tracked_rows(config, "bench-tracked-parent"); write_tracked_root(backend, &context, "bench-tracked-parent", None, &rows).await?; let mut updated_rows = tracked_rows( config.with_rows(updated_rows.min(config.rows)), "bench-tracked-child", ); for (index, row) in updated_rows.iter_mut().enumerate() { row.snapshot_content = Some(updated_snapshot_content(index, config.state_payload_bytes)); } Ok(TrackedStateUpdateFixture { context, rows: updated_rows, }) } pub async fn prepare_tracked_state_partial_snapshot_update_rows( backend: &Arc, config: StorageBenchConfig, updated_rows: usize, ) -> Result { let context = TrackedStateContext::new(); let rows = tracked_rows(config, "bench-tracked-parent"); write_tracked_root(backend, &context, "bench-tracked-parent", None, &rows).await?; let mut updated_rows = tracked_rows( config.with_rows(updated_rows.min(config.rows)), "bench-tracked-child", ); for (index, row) in updated_rows.iter_mut().enumerate() { row.snapshot_content = Some(partial_updated_snapshot_content( index, config.state_payload_bytes, )); } Ok(TrackedStateUpdateFixture { context, rows: updated_rows, }) } pub async fn prepare_tracked_state_append_child( backend: &Arc, config: StorageBenchConfig, ) -> Result { prepare_tracked_state_append_child_rows(backend, config, config.rows).await } pub async fn prepare_tracked_state_append_child_rows( backend: &Arc, config: StorageBenchConfig, appended_rows: usize, ) -> Result { let context = TrackedStateContext::new(); let rows = tracked_rows(config, "bench-tracked-parent"); write_tracked_root(backend, &context, "bench-tracked-parent", None, &rows).await?; let mut appended_rows = tracked_rows( config.with_rows(appended_rows.min(config.rows)), "bench-tracked-child", ); for (index, row) in appended_rows.iter_mut().enumerate() { row.entity_id = EntityIdentity::single(entity_id("tracked-new", index, config.key_pattern)); row.change_id = format!("tracked-new-change-{index}"); } Ok(TrackedStateUpdateFixture { context, rows: appended_rows, }) } pub async fn prepare_tracked_state_tombstone_rows( backend: &Arc, config: StorageBenchConfig, tombstone_rows: usize, ) -> Result { let context = TrackedStateContext::new(); let rows = tracked_rows(config, "bench-tracked-parent"); write_tracked_root(backend, &context, "bench-tracked-parent", None, &rows).await?; let mut tombstones = tracked_rows( config.with_rows(tombstone_rows.min(config.rows)), "bench-tracked-child", ); for row in &mut tombstones { row.snapshot_content = None; } Ok(TrackedStateUpdateFixture { context, rows: tombstones, }) } pub async fn tracked_state_update_existing_prepared( backend: &Arc, fixture: &TrackedStateUpdateFixture, ) -> Result { write_tracked_root( backend, &fixture.context, "bench-tracked-child", Some("bench-tracked-parent"), &fixture.rows, ) .await?; Ok(report( fixture.rows.len(), fixture.rows.len(), Duration::ZERO, )) } pub async fn prepare_tracked_state_diff_update_rows( backend: &Arc, config: StorageBenchConfig, updated_rows: usize, ) -> Result { let fixture = prepare_tracked_state_update_rows(backend, config, 
updated_rows).await?; tracked_state_update_existing_prepared(backend, &fixture).await?; Ok(TrackedStateDiffFixture { context: fixture.context, left_commit_id: "bench-tracked-parent".to_string(), right_commit_id: "bench-tracked-child".to_string(), expected_entries: fixture.rows.len(), }) } pub async fn prepare_tracked_state_diff_delta_chain( backend: &Arc, config: StorageBenchConfig, delta_commits: usize, updated_rows_per_commit: usize, ) -> Result { let (context, final_commit_id) = write_tracked_delta_chain(backend, config, delta_commits, updated_rows_per_commit).await?; Ok(TrackedStateDiffFixture { context, left_commit_id: "bench-tracked-base".to_string(), right_commit_id: final_commit_id, expected_entries: updated_rows_per_commit.min(config.rows), }) } pub async fn prepare_tracked_state_diff_tombstone_rows( backend: &Arc, config: StorageBenchConfig, tombstone_rows: usize, ) -> Result { let fixture = prepare_tracked_state_tombstone_rows(backend, config, tombstone_rows).await?; tracked_state_update_existing_prepared(backend, &fixture).await?; Ok(TrackedStateDiffFixture { context: fixture.context, left_commit_id: "bench-tracked-parent".to_string(), right_commit_id: "bench-tracked-child".to_string(), expected_entries: fixture.rows.len(), }) } pub async fn prepare_tracked_state_diff_equal( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = TrackedStateContext::new(); let rows = tracked_rows(config, "bench-tracked-parent"); write_tracked_root(backend, &context, "bench-tracked-parent", None, &rows).await?; Ok(TrackedStateDiffFixture { context, left_commit_id: "bench-tracked-parent".to_string(), right_commit_id: "bench-tracked-parent".to_string(), expected_entries: 0, }) } pub async fn tracked_state_diff_commits_prepared( backend: &Arc, fixture: &TrackedStateDiffFixture, ) -> Result { let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let diff = reader .diff_commits( &fixture.left_commit_id, &fixture.right_commit_id, &TrackedStateDiffRequest::default(), ) .await?; Ok(report( fixture.expected_entries, diff.entries.len(), Duration::ZERO, )) } pub async fn tracked_state_materialize_root_prepared( backend: &Arc, fixture: &TrackedStateMaterializeFixture, ) -> Result { materialize_tracked_root(backend, &fixture.context, &fixture.commit_id).await?; Ok(report( fixture.expected_rows, fixture.expected_rows, Duration::ZERO, )) } pub async fn prepare_json_pointer_tracked_state_write_root( rows: &[JsonPointerStorageRow], ) -> Result { Ok(TrackedStateWriteRootFixture { context: TrackedStateContext::new(), rows: json_pointer_tracked_rows(rows, "json-pointer-base", false), }) } pub async fn prepare_json_pointer_tracked_state_read( backend: &Arc, rows: &[JsonPointerStorageRow], ) -> Result { let context = TrackedStateContext::new(); let materialized_rows = json_pointer_tracked_rows(rows, "json-pointer-base", false); write_tracked_root( backend, &context, "json-pointer-base", None, &materialized_rows, ) .await?; Ok(JsonPointerTrackedStateReadFixture { context, rows: rows.to_vec(), commit_id: "json-pointer-base".to_string(), }) } pub async fn prepare_json_pointer_tracked_state_diff_update_rows( backend: &Arc, rows: &[JsonPointerStorageRow], updated_rows: usize, ) -> Result { let context = TrackedStateContext::new(); let base_rows = json_pointer_tracked_rows(rows, "json-pointer-base", false); write_tracked_root(backend, &context, "json-pointer-base", None, &base_rows).await?; let child_rows = json_pointer_tracked_rows( &rows[..updated_rows.min(rows.len())], 
"json-pointer-child", true, ); write_tracked_root( backend, &context, "json-pointer-child", Some("json-pointer-base"), &child_rows, ) .await?; Ok(JsonPointerTrackedStateDiffFixture { context, left_commit_id: "json-pointer-base".to_string(), right_commit_id: "json-pointer-child".to_string(), expected_entries: child_rows.len(), }) } pub async fn json_pointer_tracked_state_get_many_prepared( backend: &Arc, fixture: &JsonPointerTrackedStateReadFixture, ) -> Result { let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let requests = fixture .rows .iter() .map(|row| TrackedStateRowRequest { schema_key: "json_pointer".to_string(), entity_id: EntityIdentity::single(row.path.as_str()), file_id: NullableKeyFilter::Null, }) .collect::>(); let verified_rows = reader .load_rows_at_commit(&fixture.commit_id, &requests) .await? .into_iter() .filter(Option::is_some) .count(); Ok(report(fixture.rows.len(), verified_rows, Duration::ZERO)) } pub async fn json_pointer_tracked_state_get_many_missing_prepared( backend: &Arc, fixture: &JsonPointerTrackedStateReadFixture, ) -> Result { let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let requests = fixture .rows .iter() .map(|row| TrackedStateRowRequest { schema_key: "json_pointer".to_string(), entity_id: EntityIdentity::single(format!("missing{}", row.path)), file_id: NullableKeyFilter::Null, }) .collect::>(); let verified_rows = reader .load_rows_at_commit(&fixture.commit_id, &requests) .await? .into_iter() .filter(Option::is_none) .count(); Ok(report(fixture.rows.len(), verified_rows, Duration::ZERO)) } pub async fn json_pointer_tracked_state_scan_keys_only_prepared( backend: &Arc, fixture: &JsonPointerTrackedStateReadFixture, ) -> Result { json_pointer_scan_with_projection( backend, fixture, TrackedStateProjection { columns: vec!["entity_id".to_string()], }, ) .await } pub async fn json_pointer_tracked_state_scan_headers_only_prepared( backend: &Arc, fixture: &JsonPointerTrackedStateReadFixture, ) -> Result { json_pointer_scan_with_projection( backend, fixture, TrackedStateProjection { columns: tracked_state_header_columns(), }, ) .await } pub async fn json_pointer_tracked_state_scan_full_rows_prepared( backend: &Arc, fixture: &JsonPointerTrackedStateReadFixture, ) -> Result { json_pointer_scan_with_projection(backend, fixture, TrackedStateProjection::default()).await } pub async fn json_pointer_tracked_state_prefix_scan_schema_prepared( backend: &Arc, fixture: &JsonPointerTrackedStateReadFixture, ) -> Result { json_pointer_scan_with_projection(backend, fixture, TrackedStateProjection::default()).await } pub async fn json_pointer_tracked_state_prefix_scan_schema_file_null_prepared( backend: &Arc, fixture: &JsonPointerTrackedStateReadFixture, ) -> Result { json_pointer_scan_with_projection(backend, fixture, TrackedStateProjection::default()).await } async fn json_pointer_scan_with_projection( backend: &Arc, fixture: &JsonPointerTrackedStateReadFixture, projection: TrackedStateProjection, ) -> Result { let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let verified_rows = reader .scan_rows_at_commit( &fixture.commit_id, &TrackedStateScanRequest { filter: TrackedStateFilter { schema_keys: vec!["json_pointer".to_string()], file_ids: vec![NullableKeyFilter::Null], ..Default::default() }, projection, ..Default::default() }, ) .await? 
.len(); Ok(report(fixture.rows.len(), verified_rows, Duration::ZERO)) } pub async fn prepare_json_pointer_tracked_state_update_rows( backend: &Arc, rows: &[JsonPointerStorageRow], updated_rows: usize, ) -> Result { let context = TrackedStateContext::new(); let base_rows = json_pointer_tracked_rows(rows, "json-pointer-base", false); write_tracked_root(backend, &context, "json-pointer-base", None, &base_rows).await?; let child_rows = json_pointer_tracked_rows( &rows[..updated_rows.min(rows.len())], "json-pointer-child", true, ); Ok(TrackedStateUpdateFixture { context, rows: child_rows, }) } pub async fn prepare_json_pointer_tracked_state_tombstone_rows( backend: &Arc, rows: &[JsonPointerStorageRow], tombstone_rows: usize, ) -> Result { let context = TrackedStateContext::new(); let base_rows = json_pointer_tracked_rows(rows, "json-pointer-base", false); write_tracked_root(backend, &context, "json-pointer-base", None, &base_rows).await?; let mut child_rows = json_pointer_tracked_rows( &rows[..tombstone_rows.min(rows.len())], "json-pointer-child", true, ); for row in &mut child_rows { row.snapshot_content = None; } Ok(TrackedStateUpdateFixture { context, rows: child_rows, }) } pub async fn prepare_json_pointer_tracked_state_diff_delta_chain( backend: &Arc, rows: &[JsonPointerStorageRow], delta_commits: usize, updated_rows_per_commit: usize, ) -> Result { let (context, final_commit_id) = write_json_pointer_delta_chain(backend, rows, delta_commits, updated_rows_per_commit) .await?; Ok(JsonPointerTrackedStateDiffFixture { context, left_commit_id: "json-pointer-base".to_string(), right_commit_id: final_commit_id, expected_entries: updated_rows_per_commit.min(rows.len()), }) } pub async fn prepare_json_pointer_tracked_state_materialize_delta_chain( backend: &Arc, rows: &[JsonPointerStorageRow], delta_commits: usize, updated_rows_per_commit: usize, ) -> Result { let (context, final_commit_id) = write_json_pointer_delta_chain(backend, rows, delta_commits, updated_rows_per_commit) .await?; Ok(TrackedStateMaterializeFixture { context, commit_id: final_commit_id, expected_rows: rows.len(), }) } pub async fn json_pointer_tracked_state_changed_keys_prepared( backend: &Arc, fixture: &JsonPointerTrackedStateDiffFixture, ) -> Result { let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let diff = reader .diff_commits( &fixture.left_commit_id, &fixture.right_commit_id, &TrackedStateDiffRequest::default(), ) .await?; Ok(report( fixture.expected_entries, diff.entries.len(), Duration::ZERO, )) } pub async fn prepare_untracked_state_write_rows( config: StorageBenchConfig, ) -> Result { Ok(UntrackedStateWriteFixture { context: UntrackedStateContext::new(), rows: untracked_rows(config), }) } pub async fn untracked_state_write_rows_prepared( backend: &Arc, fixture: &UntrackedStateWriteFixture, ) -> Result { write_untracked_rows(backend, &fixture.context, &fixture.rows).await?; let verified_rows = scan_untracked( backend, &fixture.context, UntrackedStateScanRequest::default(), ) .await? 
.len(); Ok(report(fixture.rows.len(), verified_rows, Duration::ZERO)) } pub async fn prepare_untracked_state_read( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = UntrackedStateContext::new(); let rows = untracked_rows(config); write_untracked_rows(backend, &context, &rows).await?; Ok(UntrackedStateReadFixture { context, rows: config.rows, key_pattern: config.key_pattern, selectivity: config.selectivity, }) } pub async fn untracked_state_read_point_hit_prepared( backend: &Arc, fixture: &UntrackedStateReadFixture, ) -> Result { let mut verified_rows = 0; let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); for index in 0..fixture.rows { if reader .load_row(&UntrackedStateRowRequest { schema_key: untracked_schema_key(index, StorageBenchSelectivity::Percent100), version_id: "bench-version".to_string(), entity_id: EntityIdentity::single(entity_id( "untracked", index, fixture.key_pattern, )), file_id: NullableKeyFilter::Value("bench.json".to_string()), }) .await? .is_some() { verified_rows += 1; } } Ok(report(fixture.rows, verified_rows, Duration::ZERO)) } pub async fn untracked_state_read_point_hit_constant_prepared( backend: &Arc, fixture: &UntrackedStateReadFixture, measured_reads: usize, ) -> Result { let mut verified_rows = 0; let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); for index in 0..measured_reads.min(fixture.rows) { if reader .load_row(&UntrackedStateRowRequest { schema_key: untracked_schema_key(index, StorageBenchSelectivity::Percent100), version_id: "bench-version".to_string(), entity_id: EntityIdentity::single(entity_id( "untracked", index, fixture.key_pattern, )), file_id: NullableKeyFilter::Value("bench.json".to_string()), }) .await? .is_some() { verified_rows += 1; } } Ok(report( measured_reads.min(fixture.rows), verified_rows, Duration::ZERO, )) } pub async fn untracked_state_read_point_miss_prepared( backend: &Arc, fixture: &UntrackedStateReadFixture, ) -> Result { let mut misses = 0; let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); for index in 0..fixture.rows { if reader .load_row(&UntrackedStateRowRequest { schema_key: "bench_untracked_entity".to_string(), version_id: "bench-version".to_string(), entity_id: EntityIdentity::single(format!("missing-{index}")), file_id: NullableKeyFilter::Value("bench.json".to_string()), }) .await? .is_none() { misses += 1; } } Ok(report(fixture.rows, misses, Duration::ZERO)) } pub async fn untracked_state_scan_all_prepared( backend: &Arc, fixture: &UntrackedStateReadFixture, ) -> Result { let verified_rows = scan_untracked( backend, &fixture.context, UntrackedStateScanRequest::default(), ) .await? .len(); Ok(report(fixture.rows, verified_rows, Duration::ZERO)) } pub async fn untracked_state_scan_keys_only_prepared( backend: &Arc, fixture: &UntrackedStateReadFixture, ) -> Result { let verified_rows = scan_untracked( backend, &fixture.context, UntrackedStateScanRequest { projection: UntrackedStateProjection { columns: vec!["entity_id".to_string()], }, ..Default::default() }, ) .await? .len(); Ok(report(fixture.rows, verified_rows, Duration::ZERO)) } pub async fn untracked_state_scan_headers_only_prepared( backend: &Arc, fixture: &UntrackedStateReadFixture, ) -> Result { let verified_rows = scan_untracked( backend, &fixture.context, UntrackedStateScanRequest { projection: UntrackedStateProjection { columns: untracked_state_header_columns(), }, ..Default::default() }, ) .await? 
.len(); Ok(report(fixture.rows, verified_rows, Duration::ZERO)) } pub async fn untracked_state_scan_full_rows_prepared( backend: &Arc, fixture: &UntrackedStateReadFixture, ) -> Result { untracked_state_scan_all_prepared(backend, fixture).await } pub async fn untracked_state_scan_version_prepared( backend: &Arc, fixture: &UntrackedStateReadFixture, ) -> Result { let verified_rows = scan_untracked( backend, &fixture.context, UntrackedStateScanRequest { filter: UntrackedStateFilter { version_ids: vec!["bench-version".to_string()], ..Default::default() }, ..Default::default() }, ) .await? .len(); Ok(report(fixture.rows, verified_rows, Duration::ZERO)) } pub async fn untracked_state_scan_schema_prepared( backend: &Arc, fixture: &UntrackedStateReadFixture, ) -> Result { let verified_rows = scan_untracked( backend, &fixture.context, UntrackedStateScanRequest { filter: UntrackedStateFilter { schema_keys: vec![untracked_schema_key(0, StorageBenchSelectivity::Percent100)], ..Default::default() }, ..Default::default() }, ) .await? .len(); Ok(report(fixture.rows, verified_rows, Duration::ZERO)) } pub async fn untracked_state_scan_schema_selective_prepared( backend: &Arc, fixture: &UntrackedStateReadFixture, ) -> Result { let verified_rows = scan_untracked( backend, &fixture.context, UntrackedStateScanRequest { filter: UntrackedStateFilter { schema_keys: vec![UNTRACKED_MATCH_SCHEMA_KEY.to_string()], ..Default::default() }, ..Default::default() }, ) .await? .len(); Ok(report( fixture.selectivity.expected_rows(fixture.rows), verified_rows, Duration::ZERO, )) } pub async fn prepare_untracked_state_overwrite( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = UntrackedStateContext::new(); let rows = untracked_rows(config); write_untracked_rows(backend, &context, &rows).await?; let mut updated_rows = untracked_rows(config.with_rows(config.update_fraction.rows(config.rows))); for (index, row) in updated_rows.iter_mut().enumerate() { row.snapshot_content = Some(updated_snapshot_content(index, config.state_payload_bytes)); } Ok(UntrackedStateWriteFixture { context, rows: updated_rows, }) } pub async fn prepare_untracked_state_insert_new_keys( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = UntrackedStateContext::new(); let rows = untracked_rows(config); write_untracked_rows(backend, &context, &rows).await?; let mut new_rows = untracked_rows(config); for (index, row) in new_rows.iter_mut().enumerate() { row.entity_id = EntityIdentity::single(entity_id("untracked-new", index, config.key_pattern)); } Ok(UntrackedStateWriteFixture { context, rows: new_rows, }) } pub async fn untracked_state_overwrite_existing_prepared( backend: &Arc, fixture: &UntrackedStateWriteFixture, ) -> Result { write_untracked_rows(backend, &fixture.context, &fixture.rows).await?; let verified_rows = scan_untracked( backend, &fixture.context, UntrackedStateScanRequest::default(), ) .await? 
.len(); Ok(report(fixture.rows.len(), verified_rows, Duration::ZERO)) } pub async fn prepare_changelog_append_changes( config: StorageBenchConfig, ) -> Result { Ok(ChangelogAppendFixture { context: CommitStoreContext::new(), changes: changelog_materialized_changes(config), }) } pub async fn prepare_changelog_append_tombstones( config: StorageBenchConfig, ) -> Result { Ok(ChangelogAppendFixture { context: CommitStoreContext::new(), changes: changelog_tombstone_changes(config), }) } pub async fn prepare_changelog_append_metadata( config: StorageBenchConfig, ) -> Result { Ok(ChangelogAppendFixture { context: CommitStoreContext::new(), changes: changelog_metadata_changes(config), }) } pub async fn prepare_changelog_append_shared_payload( config: StorageBenchConfig, ) -> Result { Ok(ChangelogAppendFixture { context: CommitStoreContext::new(), changes: changelog_shared_payload_changes(config), }) } pub async fn prepare_changelog_append_shared_metadata( config: StorageBenchConfig, ) -> Result { Ok(ChangelogAppendFixture { context: CommitStoreContext::new(), changes: changelog_shared_metadata_changes(config), }) } pub async fn prepare_changelog_append_shared_payload_and_metadata( config: StorageBenchConfig, ) -> Result { Ok(ChangelogAppendFixture { context: CommitStoreContext::new(), changes: changelog_shared_payload_and_metadata_changes(config), }) } pub async fn prepare_changelog_append_composite_entity_ids( config: StorageBenchConfig, ) -> Result { Ok(ChangelogAppendFixture { context: CommitStoreContext::new(), changes: changelog_composite_entity_id_changes(config), }) } pub async fn prepare_changelog_codec( config: StorageBenchConfig, ) -> Result { let changes = changelog_changes(config); let encoded_changes = changes .iter() .map(|change| crate::commit_store::codec::encode_change_ref(change.as_ref())) .collect::, _>>()?; Ok(ChangelogCodecFixture { changes, encoded_changes, }) } pub async fn changelog_append_changes_prepared( backend: &Arc, fixture: &ChangelogAppendFixture, ) -> Result { append_changelog_changes(backend, &fixture.context, &fixture.changes).await?; let reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let verified_rows = reader .scan_changes(&ChangeScanRequest::default()) .await? 
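// scan the changelog back so the verified count reflects what was actually persisted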
.len(); Ok(report(fixture.changes.len(), verified_rows, Duration::ZERO)) } pub async fn prepare_changelog_read( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = CommitStoreContext::new(); let changes = changelog_materialized_changes(config); append_changelog_changes(backend, &context, &changes).await?; Ok(ChangelogReadFixture { context, rows: config.rows, }) } pub async fn prepare_changelog_read_with_selectivity( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = CommitStoreContext::new(); let changes = changelog_selective_changes(config); append_changelog_changes(backend, &context, &changes).await?; Ok(ChangelogReadFixture { context, rows: config.rows, }) } pub async fn prepare_changelog_read_entity_history( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = CommitStoreContext::new(); let changes = changelog_entity_history_changes(config); append_changelog_changes(backend, &context, &changes).await?; Ok(ChangelogReadFixture { context, rows: config.rows, }) } pub async fn changelog_encode_only_prepared( fixture: &ChangelogCodecFixture, ) -> Result { let mut verified_rows = 0; let mut encoded_bytes = 0; for change in &fixture.changes { encoded_bytes += crate::commit_store::codec::encode_change_ref(change.as_ref())?.len(); verified_rows += 1; } Ok(report( fixture.changes.len(), verified_rows + usize::from(encoded_bytes == 0), Duration::ZERO, )) } pub async fn changelog_decode_only_prepared( fixture: &ChangelogCodecFixture, ) -> Result { let mut verified_rows = 0; let mut decoded_bytes = 0; for bytes in &fixture.encoded_changes { let change = crate::commit_store::codec::decode_change(bytes)?; decoded_bytes += change.schema_key.len(); verified_rows += 1; } Ok(report( fixture.encoded_changes.len(), verified_rows + usize::from(decoded_bytes == 0), Duration::ZERO, )) } pub async fn changelog_load_changes_hit_prepared( backend: &Arc, fixture: &ChangelogReadFixture, ) -> Result { let reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let change_ids = (0..fixture.rows) .map(|index| format!("bench-change-{index}")) .collect::>(); let verified_rows = reader .load_changes(&change_ids) .await? .into_iter() .filter(Option::is_some) .count(); Ok(report(fixture.rows, verified_rows, Duration::ZERO)) } pub async fn changelog_load_changes_miss_prepared( backend: &Arc, fixture: &ChangelogReadFixture, ) -> Result { let reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let change_ids = (0..fixture.rows) .map(|index| format!("missing-change-{index}")) .collect::>(); let misses = reader .load_changes(&change_ids) .await? .into_iter() .filter(Option::is_none) .count(); Ok(report(fixture.rows, misses, Duration::ZERO)) } pub async fn changelog_scan_all_prepared( backend: &Arc, fixture: &ChangelogReadFixture, ) -> Result { let reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let verified_rows = reader .scan_changes(&ChangeScanRequest::default()) .await? 
.len(); Ok(report(fixture.rows, verified_rows, Duration::ZERO)) } pub async fn changelog_scan_full_changes_prepared( backend: &Arc, fixture: &ChangelogReadFixture, ) -> Result { changelog_scan_all_prepared(backend, fixture).await } pub async fn changelog_scan_limit_100_prepared( backend: &Arc, fixture: &ChangelogReadFixture, ) -> Result { let reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let expected = fixture.rows.min(100); let verified_rows = reader .scan_changes(&ChangeScanRequest { limit: Some(expected), }) .await? .len(); Ok(report(expected, verified_rows, Duration::ZERO)) } pub async fn changelog_scan_change_set_prepared( backend: &Arc, fixture: &ChangelogReadFixture, ) -> Result { let reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let change_ids = (0..fixture.rows) .map(|index| format!("bench-change-{index}")) .collect::>(); let verified_rows = reader .load_changes(&change_ids) .await? .into_iter() .filter(Option::is_some) .count(); Ok(report(fixture.rows, verified_rows, Duration::ZERO)) } pub async fn changelog_scan_schema_prepared( backend: &Arc, fixture: &ChangelogReadFixture, selectivity: StorageBenchSelectivity, ) -> Result { let reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let changes = reader.scan_changes(&ChangeScanRequest::default()).await?; let verified_rows = changes .iter() .filter(|change| change.record.schema_key == CHANGELOG_MATCH_SCHEMA_KEY) .count(); Ok(report( selectivity.expected_rows(fixture.rows), verified_rows, Duration::ZERO, )) } pub async fn changelog_scan_entity_history_prepared( backend: &Arc, fixture: &ChangelogReadFixture, ) -> Result { let reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let changes = reader.scan_changes(&ChangeScanRequest::default()).await?; let target = EntityIdentity::single(CHANGELOG_HISTORY_ENTITY_ID); let verified_rows = changes .iter() .filter(|change| change.record.entity_id == target) .count(); Ok(report( fixture.rows.div_ceil(10), verified_rows, Duration::ZERO, )) } pub async fn prepare_commit_graph_read( backend: &Arc, config: StorageBenchConfig, ) -> Result { let changelog = CommitStoreContext::new(); let mut changes = changelog_materialized_changes(config); let head_commit_id = "bench-commit-head".to_string(); changes.push(commit_graph_materialized_commit_change( &head_commit_id, config.rows, )); append_changelog_changes(backend, &changelog, &changes).await?; Ok(CommitGraphReadFixture { head_commit_id, rows: config.rows, }) } pub async fn commit_graph_change_history_from_commit_prepared( backend: &Arc, fixture: &CommitGraphReadFixture, ) -> Result { let graph = crate::commit_graph::CommitGraphContext::new(); let mut reader = graph.reader(StorageContext::new(Arc::clone(backend))); let verified_rows = reader .change_history_from_commit( &fixture.head_commit_id, &CommitGraphChangeHistoryRequest::default(), ) .await? 
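// walk change history from the head commit; the entry count is compared against the expected row count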
.len(); Ok(report(fixture.rows, verified_rows, Duration::ZERO)) } pub async fn prepare_binary_cas_write_blobs( config: StorageBenchConfig, ) -> Result { Ok(BinaryCasWriteFixture { context: BinaryCasContext::new(), file_ids: binary_file_ids(config.rows), payloads: binary_payloads(config.rows, config.blob_bytes), }) } pub async fn prepare_binary_cas_write_duplicate_payload( config: StorageBenchConfig, ) -> Result { let payload = binary_payload(0, config.blob_bytes); Ok(BinaryCasWriteFixture { context: BinaryCasContext::new(), file_ids: binary_file_ids(config.rows), payloads: (0..config.rows).map(|_| payload.clone()).collect(), }) } pub async fn prepare_binary_cas_write_half_duplicate_payload( config: StorageBenchConfig, ) -> Result { Ok(BinaryCasWriteFixture { context: BinaryCasContext::new(), file_ids: binary_file_ids(config.rows), payloads: binary_half_duplicate_payloads(config.rows, config.blob_bytes), }) } pub async fn binary_cas_write_blobs_prepared( backend: &Arc, fixture: &BinaryCasWriteFixture, ) -> Result { let writes = binary_blob_writes(&fixture.file_ids, &fixture.payloads); write_binary_blob_writes(backend, &fixture.context, &writes).await?; let verified_rows = count_binary_cas_manifests(backend).await?; Ok(report(writes.len(), verified_rows, Duration::ZERO)) } pub async fn prepare_binary_cas_read( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = BinaryCasContext::new(); let payloads = binary_payloads(config.rows, config.blob_bytes); let file_ids = binary_file_ids(config.rows); let writes = binary_blob_writes(&file_ids, &payloads); write_binary_blob_writes(backend, &context, &writes).await?; let hashes = payloads .iter() .map(|payload| BlobHash::from_content(payload)) .collect::>(); Ok(BinaryCasReadFixture { context, rows: config.rows, hashes, }) } pub async fn binary_cas_read_blob_hit_prepared( backend: &Arc, fixture: &BinaryCasReadFixture, ) -> Result { let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let verified_rows = reader .load_bytes_many(&fixture.hashes) .await? .into_vec() .into_iter() .filter(|row| row.is_some()) .count(); Ok(report(fixture.hashes.len(), verified_rows, Duration::ZERO)) } pub async fn binary_cas_read_blob_miss_prepared( backend: &Arc, fixture: &BinaryCasReadFixture, ) -> Result { let mut misses = 0; let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); for index in 0..fixture.rows { let missing_hash = BlobHash::from_hex(&format!("{index:064x}"))?; if reader .load_bytes_many(&[missing_hash]) .await? 
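// the fabricated hash was never written, so the lookup should come back empty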
.get(0) .is_none() { misses += 1; } } Ok(report(fixture.rows, misses, Duration::ZERO)) } pub async fn prepare_json_store_write( shape: JsonStorePayloadShape, rows: usize, ) -> Result { Ok(JsonStoreWriteFixture { context: JsonStoreContext::new(), documents: json_documents(shape, rows), }) } pub async fn prepare_json_store_write_dedupe( shape: JsonStorePayloadShape, rows: usize, ) -> Result { let document = json_document(shape, 0); Ok(JsonStoreWriteFixture { context: JsonStoreContext::new(), documents: (0..rows).map(|_| document.clone()).collect(), }) } pub async fn json_store_write_prepared( backend: &Arc, fixture: &JsonStoreWriteFixture, ) -> Result { let storage = StorageContext::new(Arc::clone(backend)); let mut transaction = storage.begin_write_transaction().await?; { let mut writes = StorageWriteSet::new(); let mut writer = fixture.context.writer(); writer.stage_batch( &mut writes, JsonWritePlacementRef::OutOfBand, fixture .documents .iter() .map(|document| { std::str::from_utf8(document) .map(NormalizedJsonRef::new) .map_err(|error| { LixError::new( LixError::CODE_UNKNOWN, format!("benchmark JSON document is invalid UTF-8: {error}"), ) }) }) .collect::, _>>()?, )?; writes.apply(&mut transaction.as_mut()).await?; } transaction.commit().await?; Ok(report( fixture.documents.len(), fixture.documents.len(), Duration::ZERO, )) } pub async fn prepare_json_store_read( backend: &Arc, shape: JsonStorePayloadShape, rows: usize, ) -> Result { prepare_json_store_projection_read( backend, shape, rows, JsonStoreProjectionShape::TopLevelTarget, ) .await } pub async fn prepare_json_store_projection_read( backend: &Arc, shape: JsonStorePayloadShape, rows: usize, projection: JsonStoreProjectionShape, ) -> Result { let context = JsonStoreContext::new(); let documents = json_documents(shape, rows); let mut refs = Vec::with_capacity(documents.len()); let storage = StorageContext::new(Arc::clone(backend)); let mut transaction = storage.begin_write_transaction().await?; { let mut writes = StorageWriteSet::new(); let mut writer = context.writer(); for document in &documents { refs.push(prepare_json_ref(document)?); } writer.stage_batch( &mut writes, JsonWritePlacementRef::OutOfBand, documents .iter() .map(|document| { std::str::from_utf8(document) .map(NormalizedJsonRef::new) .map_err(|error| { LixError::new( LixError::CODE_UNKNOWN, format!("benchmark JSON document is invalid UTF-8: {error}"), ) }) }) .collect::, _>>()?, )?; writes.apply(&mut transaction.as_mut()).await?; } transaction.commit().await?; Ok(JsonStoreReadFixture { context, refs, paths: json_projection_paths(projection), }) } pub async fn json_store_read_bytes_prepared( backend: &Arc, fixture: &JsonStoreReadFixture, ) -> Result { let mut verified_rows = 0; let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let batch = reader .load_bytes_many(JsonLoadRequestRef { refs: &fixture.refs, scope: JsonReadScopeRef::OutOfBand, }) .await?; for value in batch.values() { if value.is_some() { verified_rows += 1; } } Ok(report(fixture.refs.len(), verified_rows, Duration::ZERO)) } pub async fn json_store_read_value_prepared( backend: &Arc, fixture: &JsonStoreReadFixture, ) -> Result { let mut verified_rows = 0; let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let batch = reader .load_values_many(JsonLoadRequestRef { refs: &fixture.refs, scope: JsonReadScopeRef::OutOfBand, }) .await?; for value in batch.values() { if value.is_some() { verified_rows += 1; } } Ok(report(fixture.refs.len(), 
verified_rows, Duration::ZERO)) } pub async fn json_store_read_projection_prepared( backend: &Arc, fixture: &JsonStoreReadFixture, ) -> Result { let mut verified_rows = 0; let mut reader = fixture .context .reader(StorageContext::new(Arc::clone(backend))); let batch = reader .load_projections_many(JsonProjectionLoadRequestRef { refs: &fixture.refs, scope: JsonReadScopeRef::OutOfBand, paths: &fixture.paths, }) .await?; for value in batch.values() { if value.is_some() { verified_rows += 1; } } Ok(report(fixture.refs.len(), verified_rows, Duration::ZERO)) } pub async fn prepare_json_store_base_update_object( backend: &Arc, rows: usize, ) -> Result { prepare_json_store_base_update(backend, JsonStorePayloadShape::LargeStructured128k, rows).await } pub async fn prepare_json_store_base_update_array( backend: &Arc, rows: usize, ) -> Result { prepare_json_store_base_update(backend, JsonStorePayloadShape::LargeArray128k, rows).await } async fn prepare_json_store_base_update( backend: &Arc, shape: JsonStorePayloadShape, rows: usize, ) -> Result { let context = JsonStoreContext::new(); let documents = json_documents(shape, rows); let mut refs = Vec::with_capacity(documents.len()); let storage = StorageContext::new(Arc::clone(backend)); let mut transaction = storage.begin_write_transaction().await?; { let mut writes = StorageWriteSet::new(); let mut writer = context.writer(); for document in &documents { refs.push(prepare_json_ref(document)?); } writer.stage_batch( &mut writes, JsonWritePlacementRef::OutOfBand, documents .iter() .map(|document| { std::str::from_utf8(document) .map(NormalizedJsonRef::new) .map_err(|error| { LixError::new( LixError::CODE_UNKNOWN, format!("benchmark JSON document is invalid UTF-8: {error}"), ) }) }) .collect::, _>>()?, )?; writes.apply(&mut transaction.as_mut()).await?; } transaction.commit().await?; Ok(JsonStoreReadFixture { context, refs, paths: json_projection_paths(JsonStoreProjectionShape::TopLevelTarget), }) } pub async fn json_store_write_against_base_object_prepared( backend: &Arc, fixture: &JsonStoreReadFixture, ) -> Result { json_store_write_against_base_prepared( backend, fixture, JsonStorePayloadShape::LargeStructured128k, ) .await } pub async fn json_store_write_against_base_array_prepared( backend: &Arc, fixture: &JsonStoreReadFixture, ) -> Result { json_store_write_against_base_prepared(backend, fixture, JsonStorePayloadShape::LargeArray128k) .await } async fn json_store_write_against_base_prepared( backend: &Arc, fixture: &JsonStoreReadFixture, shape: JsonStorePayloadShape, ) -> Result { let storage = StorageContext::new(Arc::clone(backend)); let mut transaction = storage.begin_write_transaction().await?; { let mut writes = StorageWriteSet::new(); let mut writer = fixture.context.writer(); let mut updated_documents = Vec::with_capacity(fixture.refs.len()); for (index, _json_ref) in fixture.refs.iter().enumerate() { let updated = updated_json_document(shape, index); prepare_json_ref(&updated)?; updated_documents.push(updated); } writer.stage_batch( &mut writes, JsonWritePlacementRef::OutOfBand, updated_documents .iter() .map(|document| { std::str::from_utf8(document) .map(NormalizedJsonRef::new) .map_err(|error| { LixError::new( LixError::CODE_UNKNOWN, format!("benchmark JSON document is invalid UTF-8: {error}"), ) }) }) .collect::, _>>()?, )?; writes.apply(&mut transaction.as_mut()).await?; } transaction.commit().await?; Ok(report( fixture.refs.len(), fixture.refs.len(), Duration::ZERO, )) } pub async fn tracked_state_write_root( backend: &Arc, config: 
StorageBenchConfig, ) -> Result { let rows = tracked_rows(config, "bench-tracked-commit"); let context = TrackedStateContext::new(); let started = Instant::now(); write_tracked_root(backend, &context, "bench-tracked-commit", None, &rows).await?; let elapsed = started.elapsed(); let verified_rows = scan_tracked(backend, &context, "bench-tracked-commit") .await? .len(); Ok(report(rows.len(), verified_rows, elapsed)) } pub async fn tracked_state_read_point_hit( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = TrackedStateContext::new(); let rows = tracked_rows(config, "bench-tracked-commit"); write_tracked_root(backend, &context, "bench-tracked-commit", None, &rows).await?; let started = Instant::now(); let mut reader = context.reader(StorageContext::new(Arc::clone(backend))); let requests = tracked_point_hit_requests(config.rows, config.key_pattern); let verified_rows = reader .load_rows_at_commit("bench-tracked-commit", &requests) .await? .into_iter() .filter(Option::is_some) .count(); Ok(report(config.rows, verified_rows, started.elapsed())) } pub async fn tracked_state_read_point_miss( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = TrackedStateContext::new(); let rows = tracked_rows(config, "bench-tracked-commit"); write_tracked_root(backend, &context, "bench-tracked-commit", None, &rows).await?; let started = Instant::now(); let mut reader = context.reader(StorageContext::new(Arc::clone(backend))); let requests = tracked_point_miss_requests(config.rows, StorageBenchSelectivity::Percent100); let misses = reader .load_rows_at_commit("bench-tracked-commit", &requests) .await? .into_iter() .filter(Option::is_none) .count(); Ok(report(config.rows, misses, started.elapsed())) } pub async fn tracked_state_scan_all( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = TrackedStateContext::new(); let rows = tracked_rows(config, "bench-tracked-commit"); write_tracked_root(backend, &context, "bench-tracked-commit", None, &rows).await?; let started = Instant::now(); let verified_rows = scan_tracked(backend, &context, "bench-tracked-commit") .await? .len(); Ok(report(config.rows, verified_rows, started.elapsed())) } pub async fn tracked_state_scan_schema( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = TrackedStateContext::new(); let rows = tracked_rows(config, "bench-tracked-commit"); write_tracked_root(backend, &context, "bench-tracked-commit", None, &rows).await?; let started = Instant::now(); let mut reader = context.reader(StorageContext::new(Arc::clone(backend))); let verified_rows = reader .scan_rows_at_commit( "bench-tracked-commit", &TrackedStateScanRequest { filter: TrackedStateFilter { schema_keys: vec![tracked_schema_key(0, StorageBenchSelectivity::Percent100)], ..Default::default() }, ..Default::default() }, ) .await? 
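// the scan filter keeps only rows with the primary tracked schema key; its length is the verified count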
.len(); Ok(report(config.rows, verified_rows, started.elapsed())) } pub async fn tracked_state_scan_file( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = TrackedStateContext::new(); let rows = tracked_rows(config, "bench-tracked-commit"); write_tracked_root(backend, &context, "bench-tracked-commit", None, &rows).await?; let started = Instant::now(); let mut reader = context.reader(StorageContext::new(Arc::clone(backend))); let verified_rows = reader .scan_rows_at_commit( "bench-tracked-commit", &TrackedStateScanRequest { filter: TrackedStateFilter { file_ids: vec![NullableKeyFilter::Value("bench.json".to_string())], ..Default::default() }, ..Default::default() }, ) .await? .len(); Ok(report(config.rows, verified_rows, started.elapsed())) } pub async fn tracked_state_update_existing( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = TrackedStateContext::new(); let rows = tracked_rows(config, "bench-tracked-parent"); write_tracked_root(backend, &context, "bench-tracked-parent", None, &rows).await?; let mut updated_rows = tracked_rows(config, "bench-tracked-child"); for (index, row) in updated_rows.iter_mut().enumerate() { row.snapshot_content = Some(updated_snapshot_content(index, config.state_payload_bytes)); } let started = Instant::now(); write_tracked_root( backend, &context, "bench-tracked-child", Some("bench-tracked-parent"), &updated_rows, ) .await?; let elapsed = started.elapsed(); let verified_rows = scan_tracked(backend, &context, "bench-tracked-child") .await? .len(); Ok(report(updated_rows.len(), verified_rows, elapsed)) } pub async fn untracked_state_write_rows( backend: &Arc, config: StorageBenchConfig, ) -> Result { let rows = untracked_rows(config); let context = UntrackedStateContext::new(); let started = Instant::now(); write_untracked_rows(backend, &context, &rows).await?; let elapsed = started.elapsed(); let verified_rows = scan_untracked(backend, &context, UntrackedStateScanRequest::default()) .await? .len(); Ok(report(rows.len(), verified_rows, elapsed)) } pub async fn untracked_state_read_point_hit( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = UntrackedStateContext::new(); let rows = untracked_rows(config); write_untracked_rows(backend, &context, &rows).await?; let started = Instant::now(); let mut verified_rows = 0; let mut reader = context.reader(StorageContext::new(Arc::clone(backend))); for index in 0..config.rows { if reader .load_row(&UntrackedStateRowRequest { schema_key: untracked_schema_key(index, StorageBenchSelectivity::Percent100), version_id: "bench-version".to_string(), entity_id: EntityIdentity::single(entity_id( "untracked", index, config.key_pattern, )), file_id: NullableKeyFilter::Value("bench.json".to_string()), }) .await? .is_some() { verified_rows += 1; } } Ok(report(config.rows, verified_rows, started.elapsed())) } pub async fn untracked_state_read_point_miss( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = UntrackedStateContext::new(); let rows = untracked_rows(config); write_untracked_rows(backend, &context, &rows).await?; let started = Instant::now(); let mut misses = 0; let mut reader = context.reader(StorageContext::new(Arc::clone(backend))); for index in 0..config.rows { if reader .load_row(&UntrackedStateRowRequest { schema_key: "bench_untracked_entity".to_string(), version_id: "bench-version".to_string(), entity_id: EntityIdentity::single(format!("missing-{index}")), file_id: NullableKeyFilter::Value("bench.json".to_string()), }) .await? 
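// the entity id was never written, so the point lookup should miss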
.is_none() { misses += 1; } } Ok(report(config.rows, misses, started.elapsed())) } pub async fn untracked_state_scan_all( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = UntrackedStateContext::new(); let rows = untracked_rows(config); write_untracked_rows(backend, &context, &rows).await?; let started = Instant::now(); let verified_rows = scan_untracked(backend, &context, UntrackedStateScanRequest::default()) .await? .len(); Ok(report(config.rows, verified_rows, started.elapsed())) } pub async fn untracked_state_scan_version( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = UntrackedStateContext::new(); let rows = untracked_rows(config); write_untracked_rows(backend, &context, &rows).await?; let started = Instant::now(); let verified_rows = scan_untracked( backend, &context, UntrackedStateScanRequest { filter: UntrackedStateFilter { version_ids: vec!["bench-version".to_string()], ..Default::default() }, ..Default::default() }, ) .await? .len(); Ok(report(config.rows, verified_rows, started.elapsed())) } pub async fn untracked_state_scan_schema( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = UntrackedStateContext::new(); let rows = untracked_rows(config); write_untracked_rows(backend, &context, &rows).await?; let started = Instant::now(); let verified_rows = scan_untracked( backend, &context, UntrackedStateScanRequest { filter: UntrackedStateFilter { schema_keys: vec![untracked_schema_key(0, StorageBenchSelectivity::Percent100)], ..Default::default() }, ..Default::default() }, ) .await? .len(); Ok(report(config.rows, verified_rows, started.elapsed())) } pub async fn untracked_state_overwrite_existing( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = UntrackedStateContext::new(); let rows = untracked_rows(config); write_untracked_rows(backend, &context, &rows).await?; let mut updated_rows = untracked_rows(config); for (index, row) in updated_rows.iter_mut().enumerate() { row.snapshot_content = Some(updated_snapshot_content(index, config.state_payload_bytes)); } let started = Instant::now(); write_untracked_rows(backend, &context, &updated_rows).await?; let elapsed = started.elapsed(); let verified_rows = scan_untracked(backend, &context, UntrackedStateScanRequest::default()) .await? .len(); Ok(report(updated_rows.len(), verified_rows, elapsed)) } pub async fn changelog_append_changes( backend: &Arc, config: StorageBenchConfig, ) -> Result { let changes = changelog_materialized_changes(config); let context = CommitStoreContext::new(); let started = Instant::now(); append_changelog_changes(backend, &context, &changes).await?; let elapsed = started.elapsed(); let reader = context.reader(StorageContext::new(Arc::clone(backend))); let verified_rows = reader .scan_changes(&ChangeScanRequest::default()) .await? .len(); Ok(report(changes.len(), verified_rows, elapsed)) } pub async fn changelog_load_changes_hit( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = CommitStoreContext::new(); let changes = changelog_materialized_changes(config); append_changelog_changes(backend, &context, &changes).await?; let reader = context.reader(StorageContext::new(Arc::clone(backend))); let started = Instant::now(); let change_ids = (0..config.rows) .map(|index| format!("bench-change-{index}")) .collect::>(); let verified_rows = reader .load_changes(&change_ids) .await? 
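// every requested change id was appended above, so each lookup should resolve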
.into_iter() .filter(Option::is_some) .count(); Ok(report(config.rows, verified_rows, started.elapsed())) } pub async fn changelog_load_changes_miss( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = CommitStoreContext::new(); let changes = changelog_materialized_changes(config); append_changelog_changes(backend, &context, &changes).await?; let reader = context.reader(StorageContext::new(Arc::clone(backend))); let started = Instant::now(); let change_ids = (0..config.rows) .map(|index| format!("missing-change-{index}")) .collect::>(); let misses = reader .load_changes(&change_ids) .await? .into_iter() .filter(Option::is_none) .count(); Ok(report(config.rows, misses, started.elapsed())) } pub async fn changelog_scan_all( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = CommitStoreContext::new(); let changes = changelog_materialized_changes(config); append_changelog_changes(backend, &context, &changes).await?; let reader = context.reader(StorageContext::new(Arc::clone(backend))); let started = Instant::now(); let verified_rows = reader .scan_changes(&ChangeScanRequest::default()) .await? .len(); Ok(report(config.rows, verified_rows, started.elapsed())) } pub async fn changelog_scan_limit_100( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = CommitStoreContext::new(); let changes = changelog_materialized_changes(config); append_changelog_changes(backend, &context, &changes).await?; let reader = context.reader(StorageContext::new(Arc::clone(backend))); let expected = config.rows.min(100); let started = Instant::now(); let verified_rows = reader .scan_changes(&ChangeScanRequest { limit: Some(expected), }) .await? .len(); Ok(report(expected, verified_rows, started.elapsed())) } pub async fn binary_cas_write_blobs( backend: &Arc, config: StorageBenchConfig, ) -> Result { let payloads = binary_payloads(config.rows, config.blob_bytes); let file_ids = binary_file_ids(config.rows); let writes = binary_blob_writes(&file_ids, &payloads); let context = BinaryCasContext::new(); let started = Instant::now(); write_binary_blob_writes(backend, &context, &writes).await?; let elapsed = started.elapsed(); let verified_rows = count_binary_cas_manifests(backend).await?; Ok(report(writes.len(), verified_rows, elapsed)) } pub async fn binary_cas_read_blob_hit( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = BinaryCasContext::new(); let payloads = binary_payloads(config.rows, config.blob_bytes); let file_ids = binary_file_ids(config.rows); let writes = binary_blob_writes(&file_ids, &payloads); write_binary_blob_writes(backend, &context, &writes).await?; let hashes = payloads .iter() .map(|payload| BlobHash::from_content(payload)) .collect::>(); let started = Instant::now(); let mut reader = context.reader(StorageContext::new(Arc::clone(backend))); let verified_rows = reader .load_bytes_many(&hashes) .await? 
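// all hashes were derived from payloads written above, so every blob lookup should return bytes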
.into_vec() .into_iter() .filter(|row| row.is_some()) .count(); Ok(report(hashes.len(), verified_rows, started.elapsed())) } pub async fn binary_cas_read_blob_miss( backend: &Arc, config: StorageBenchConfig, ) -> Result { let context = BinaryCasContext::new(); let payloads = binary_payloads(config.rows, config.blob_bytes); let file_ids = binary_file_ids(config.rows); let writes = binary_blob_writes(&file_ids, &payloads); write_binary_blob_writes(backend, &context, &writes).await?; let started = Instant::now(); let mut misses = 0; let mut reader = context.reader(StorageContext::new(Arc::clone(backend))); for index in 0..config.rows { let missing_hash = BlobHash::from_hex(&format!("{index:064x}"))?; if reader .load_bytes_many(&[missing_hash]) .await? .get(0) .is_none() { misses += 1; } } Ok(report(config.rows, misses, started.elapsed())) } pub async fn binary_cas_write_duplicate_payload( backend: &Arc, config: StorageBenchConfig, ) -> Result { let payload = binary_payload(0, config.blob_bytes); let payloads = (0..config.rows) .map(|_| payload.clone()) .collect::>(); let file_ids = binary_file_ids(config.rows); let writes = binary_blob_writes(&file_ids, &payloads); let context = BinaryCasContext::new(); let started = Instant::now(); write_binary_blob_writes(backend, &context, &writes).await?; let elapsed = started.elapsed(); let verified_rows = count_binary_cas_manifests(backend).await?; Ok(report(writes.len(), verified_rows, elapsed)) } async fn write_tracked_root( backend: &Arc, context: &TrackedStateContext, commit_id: &str, parent_commit_id: Option<&str>, rows: &[MaterializedTrackedStateRow], ) -> Result<(), LixError> { let storage = StorageContext::new(Arc::clone(backend)); let mut transaction = storage.begin_write_transaction().await?; let mut writes = StorageWriteSet::new(); let changes = rows .iter() .map(tracked_bench_change_from_materialized) .collect::, _>>()?; let payloads = tracked_bench_json_payloads(rows, &changes); let json_report = JsonStoreContext::new().writer().stage_batch_report( &mut writes, JsonWritePlacementRef::CommitPack { commit_id, pack_id: 0, }, payloads.iter().map(|(payload, json_ref)| match json_ref { Some(json_ref) => NormalizedJsonRef::trusted_prehashed(payload.as_str(), *json_ref), None => NormalizedJsonRef::new(payload.as_str()), }), )?; let parent_ids = parent_commit_id .map(|parent| vec![parent.to_string()]) .unwrap_or_default(); let commit_change_id = format!("{commit_id}:commit"); let commit = CommitDraftRef { id: commit_id, change_id: &commit_change_id, parent_ids: &parent_ids, author_account_ids: &[], created_at: rows .first() .map(|row| row.updated_at.as_str()) .unwrap_or("1970-01-01T00:00:00.000Z"), }; let commit_store = CommitStoreContext::new(); let authored_changes = changes.iter().map(Change::as_ref).collect::>(); let staged = commit_store .writer(&mut transaction.as_mut(), &mut writes) .stage_tracked_commit_draft(commit, authored_changes.clone(), Vec::new()) .await?; let mut deltas = Vec::with_capacity(changes.len()); deltas.extend( authored_changes .iter() .zip(&staged.authored_locators) .zip(rows) .map(|((change, locator), row)| TrackedStateDeltaRef { change: *change, locator: locator.as_ref(), created_at: row.created_at.as_str(), updated_at: row.updated_at.as_str(), }), ); context .writer(&mut transaction.as_mut(), &mut writes) .stage_delta_with_json_pack_indexes( commit_id, parent_commit_id, &deltas, crate::tracked_state::DeltaJsonPackIndexesRef { commit_id, pack_id: 0, indexes: &json_report.pack_indexes, }, ) .await?; writes.apply(&mut 
transaction.as_mut()).await?; transaction.commit().await } async fn materialize_tracked_root( backend: &Arc, context: &TrackedStateContext, commit_id: &str, ) -> Result<(), LixError> { let storage = StorageContext::new(Arc::clone(backend)); let mut transaction = storage.begin_write_transaction().await?; let mut writes = StorageWriteSet::new(); let commit_store = CommitStoreContext::new(); context .materializer(&mut transaction.as_mut(), &mut writes, &commit_store) .materialize_root_at(commit_id) .await?; writes.apply(&mut transaction.as_mut()).await?; transaction.commit().await } async fn write_tracked_delta_chain( backend: &Arc, config: StorageBenchConfig, delta_commits: usize, updated_rows_per_commit: usize, ) -> Result<(TrackedStateContext, String), LixError> { let context = TrackedStateContext::new(); let base_commit_id = "bench-tracked-base"; let rows = tracked_rows(config, base_commit_id); write_tracked_root(backend, &context, base_commit_id, None, &rows).await?; let mut parent_commit_id = base_commit_id.to_string(); for delta_index in 0..delta_commits { let commit_id = format!("bench-tracked-delta-{delta_index}"); let mut updated_rows = tracked_rows( config.with_rows(updated_rows_per_commit.min(config.rows)), &commit_id, ); for (row_index, row) in updated_rows.iter_mut().enumerate() { row.snapshot_content = Some(delta_chain_snapshot_content( delta_index, row_index, config.state_payload_bytes, )); row.updated_at = timestamp(config.rows + delta_index * config.rows + row_index); } write_tracked_root( backend, &context, &commit_id, Some(parent_commit_id.as_str()), &updated_rows, ) .await?; parent_commit_id = commit_id; } Ok((context, parent_commit_id)) } fn tracked_bench_change_from_materialized( row: &MaterializedTrackedStateRow, ) -> Result { Ok(Change { id: row.change_id.clone(), entity_id: row.entity_id.clone(), schema_key: row.schema_key.clone(), file_id: row.file_id.clone(), snapshot_ref: row .snapshot_content .as_deref() .map(|value| prepare_json_ref(value.as_bytes())) .transpose()?, metadata_ref: row .metadata .as_ref() .map(|value| { let serialized = crate::serialize_row_metadata(value); prepare_json_ref(serialized.as_bytes()) }) .transpose()?, created_at: row.created_at.clone(), }) } fn tracked_bench_json_payloads( rows: &[MaterializedTrackedStateRow], changes: &[Change], ) -> Vec<(String, Option)> { let mut payloads = Vec::new(); for (row, change) in rows.iter().zip(changes) { if let Some(snapshot) = row.snapshot_content.as_deref() { payloads.push((snapshot.to_string(), change.snapshot_ref)); } if let Some(metadata) = row.metadata.as_ref() { payloads.push((crate::serialize_row_metadata(metadata), change.metadata_ref)); } } payloads } async fn scan_tracked( backend: &Arc, context: &TrackedStateContext, commit_id: &str, ) -> Result, LixError> { let mut reader = context.reader(StorageContext::new(Arc::clone(backend))); reader .scan_rows_at_commit(commit_id, &TrackedStateScanRequest::default()) .await } async fn write_untracked_rows( backend: &Arc, context: &UntrackedStateContext, rows: &[MaterializedUntrackedStateRow], ) -> Result<(), LixError> { let storage = StorageContext::new(Arc::clone(backend)); let mut transaction = storage.begin_write_transaction().await?; { let mut writes = StorageWriteSet::new(); let canonical_rows = rows .iter() .map(|row| crate::test_support::untracked_state_row_from_materialized(&mut writes, row)) .collect::, _>>()?; let mut writer = context.writer(&mut writes); writer.stage_rows(canonical_rows.iter().map(|row| row.as_ref()))?; writes.apply(&mut 
transaction.as_mut()).await?; } transaction.commit().await } async fn scan_untracked( backend: &Arc, context: &UntrackedStateContext, request: UntrackedStateScanRequest, ) -> Result, LixError> { let mut reader = context.reader(StorageContext::new(Arc::clone(backend))); reader.scan_rows(&request).await } async fn append_changelog_changes( backend: &Arc, context: &CommitStoreContext, changes: &[MaterializedChange], ) -> Result<(), LixError> { let storage = StorageContext::new(Arc::clone(backend)); let mut transaction = storage.begin_write_transaction().await?; { let mut writes = StorageWriteSet::new(); let canonical_changes = changes .iter() .map(canonical_changelog_bench_change) .collect::, _>>()?; let payloads = changelog_bench_json_payloads(changes); JsonStoreContext::new().writer().stage_batch( &mut writes, JsonWritePlacementRef::OutOfBand, payloads .iter() .map(|payload| NormalizedJsonRef::new(payload.as_str())), )?; let parent_ids = Vec::new(); let author_account_ids = vec!["bench-author".to_string()]; { let mut transaction_ref = transaction.as_mut(); let mut writer = context.writer(&mut transaction_ref, &mut writes); writer .stage_commit_draft( CommitDraftRef { id: "bench-changelog-commit-0", change_id: "bench-changelog-header-change-0", parent_ids: &parent_ids, author_account_ids: &author_account_ids, created_at: "2024-01-01T00:00:00.000Z", }, canonical_changes .iter() .map(|change| change.as_ref()) .collect(), Vec::new(), ) .await?; } writes.apply(&mut transaction.as_mut()).await?; } transaction.commit().await } async fn write_binary_blob_writes( backend: &Arc, context: &BinaryCasContext, writes: &[BlobWrite<'_>], ) -> Result<(), LixError> { let storage = StorageContext::new(Arc::clone(backend)); let mut transaction = storage.begin_write_transaction().await?; { let mut writeset = StorageWriteSet::new(); let mut writer = context.writer(&mut writeset); writer.stage_many(writes)?; writeset.apply(&mut transaction.as_mut()).await?; } transaction.commit().await } async fn count_binary_cas_manifests( backend: &Arc, ) -> Result { let context = BinaryCasContext::new(); let mut reader = context.reader(StorageContext::new(Arc::clone(backend))); reader.count_blob_manifests().await } fn report(measured_rows: usize, verified_rows: usize, elapsed: Duration) -> StorageBenchReport { StorageBenchReport { measured_rows, verified_rows, elapsed, } } const TRACKED_MATCH_SCHEMA_KEY: &str = "bench_tracked_entity"; const TRACKED_OTHER_SCHEMA_KEY: &str = "bench_tracked_other_entity"; const UNTRACKED_MATCH_SCHEMA_KEY: &str = "bench_untracked_entity"; const UNTRACKED_OTHER_SCHEMA_KEY: &str = "bench_untracked_other_entity"; const CHANGELOG_MATCH_SCHEMA_KEY: &str = "bench_changelog_entity"; const CHANGELOG_OTHER_SCHEMA_KEY: &str = "bench_changelog_other_entity"; const CHANGELOG_HISTORY_ENTITY_ID: &str = "change-entity-history-target"; fn tracked_rows(config: StorageBenchConfig, commit_id: &str) -> Vec { (0..config.rows) .map(|index| MaterializedTrackedStateRow { entity_id: EntityIdentity::single(entity_id("tracked", index, config.key_pattern)), schema_key: tracked_schema_key(index, config.selectivity), file_id: Some("bench.json".to_string()), snapshot_content: Some(snapshot_content(index, config.state_payload_bytes)), metadata: None, deleted: false, created_at: timestamp(index), updated_at: timestamp(index), change_id: tracked_change_id(commit_id, index), commit_id: commit_id.to_string(), }) .collect() } fn json_pointer_tracked_rows( rows: &[JsonPointerStorageRow], commit_id: &str, updated: bool, ) -> Vec { 
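// Convert JSON pointer fixture rows into materialized tracked-state rows for the given commit,
// switching to each row's updated payload when `updated` is set.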
rows.iter() .enumerate() .map(|(index, row)| { let value_json = if updated { row.updated_value_json.as_str() } else { row.value_json.as_str() }; let value = serde_json::from_str::(value_json) .unwrap_or_else(|_| serde_json::Value::String(value_json.to_string())); let snapshot = serde_json::json!({ "path": row.path, "value": value, }) .to_string(); MaterializedTrackedStateRow { entity_id: EntityIdentity::single(row.path.as_str()), schema_key: "json_pointer".to_string(), file_id: None, snapshot_content: Some(snapshot), metadata: None, deleted: false, created_at: timestamp(index), updated_at: timestamp(index), change_id: tracked_change_id(commit_id, index), commit_id: commit_id.to_string(), } }) .collect() } async fn write_json_pointer_delta_chain( backend: &Arc, rows: &[JsonPointerStorageRow], delta_commits: usize, updated_rows_per_commit: usize, ) -> Result<(TrackedStateContext, String), LixError> { let context = TrackedStateContext::new(); let base_commit_id = "json-pointer-base"; let base_rows = json_pointer_tracked_rows(rows, base_commit_id, false); write_tracked_root(backend, &context, base_commit_id, None, &base_rows).await?; let mut parent_commit_id = base_commit_id.to_string(); for delta_index in 0..delta_commits { let commit_id = format!("json-pointer-delta-{delta_index}"); let mut child_rows = json_pointer_tracked_rows( &rows[..updated_rows_per_commit.min(rows.len())], &commit_id, true, ); for row in &mut child_rows { row.updated_at = timestamp(rows.len() + delta_index); } write_tracked_root( backend, &context, &commit_id, Some(parent_commit_id.as_str()), &child_rows, ) .await?; parent_commit_id = commit_id; } Ok((context, parent_commit_id)) } fn tracked_rows_file_selective( config: StorageBenchConfig, commit_id: &str, ) -> Vec { (0..config.rows) .map(|index| MaterializedTrackedStateRow { entity_id: EntityIdentity::single(entity_id("tracked", index, config.key_pattern)), schema_key: TRACKED_MATCH_SCHEMA_KEY.to_string(), file_id: Some( if config.selectivity.matches(index) { "bench-match.json" } else { "bench-other.json" } .to_string(), ), snapshot_content: Some(snapshot_content(index, config.state_payload_bytes)), metadata: None, deleted: false, created_at: timestamp(index), updated_at: timestamp(index), change_id: tracked_change_id(commit_id, index), commit_id: commit_id.to_string(), }) .collect() } fn tracked_change_id(commit_id: &str, index: usize) -> String { format!("{commit_id}:tracked-change-{index}") } fn untracked_rows(config: StorageBenchConfig) -> Vec { (0..config.rows) .map(|index| MaterializedUntrackedStateRow { entity_id: EntityIdentity::single(entity_id("untracked", index, config.key_pattern)), schema_key: untracked_schema_key(index, config.selectivity), file_id: Some("bench.json".to_string()), snapshot_content: Some(snapshot_content(index, config.state_payload_bytes)), metadata: None, deleted: false, created_at: timestamp(index), updated_at: timestamp(index), global: false, version_id: "bench-version".to_string(), }) .collect() } fn changelog_changes(config: StorageBenchConfig) -> Vec { changelog_materialized_changes(config) .into_iter() .map(changelog_bench_change_ref_only) .collect() } fn changelog_materialized_changes(config: StorageBenchConfig) -> Vec { (0..config.rows) .map(|index| MaterializedChange { id: format!("bench-change-{index}"), entity_id: EntityIdentity::single(entity_id( "change-entity", index, config.key_pattern, )), schema_key: "bench_changelog_entity".to_string(), file_id: Some("bench.json".to_string()), snapshot_content: 
Some(snapshot_content(index, config.state_payload_bytes)), metadata: None, created_at: timestamp(index), }) .collect() } fn commit_graph_materialized_commit_change(commit_id: &str, rows: usize) -> MaterializedChange { let snapshot_content = serde_json::json!({ "id": commit_id, }) .to_string(); MaterializedChange { id: format!("bench-commit-change-{commit_id}"), entity_id: EntityIdentity::single(commit_id.to_string()), schema_key: "lix_commit".to_string(), file_id: None, snapshot_content: Some(snapshot_content), metadata: None, created_at: timestamp(rows), } } fn canonical_changelog_bench_change(change: &MaterializedChange) -> Result { let snapshot_ref = change .snapshot_content .as_ref() .map(|value| prepare_json_ref(value.as_bytes())) .transpose()?; let metadata_ref = change .metadata .as_ref() .map(|value| prepare_json_ref(value.as_bytes())) .transpose()?; Ok(Change { id: change.id.clone(), entity_id: change.entity_id.clone(), schema_key: change.schema_key.clone(), file_id: change.file_id.clone(), snapshot_ref, metadata_ref, created_at: change.created_at.clone(), }) } fn changelog_bench_json_payloads(changes: &[MaterializedChange]) -> Vec { changes .iter() .flat_map(|change| { change .snapshot_content .iter() .chain(change.metadata.iter()) .cloned() .collect::>() }) .collect() } fn changelog_bench_change_ref_only(change: MaterializedChange) -> Change { let snapshot_ref = change .snapshot_content .as_ref() .map(|value| JsonRef::from_hash(blake3::hash(value.as_bytes()))); let metadata_ref = change .metadata .as_ref() .map(|value| JsonRef::from_hash(blake3::hash(value.as_bytes()))); Change { id: change.id, entity_id: change.entity_id, schema_key: change.schema_key, file_id: change.file_id, snapshot_ref, metadata_ref, created_at: change.created_at, } } fn changelog_tombstone_changes(config: StorageBenchConfig) -> Vec { changelog_materialized_changes(config) .into_iter() .map(|mut change| { change.snapshot_content = None; change.metadata = None; change }) .collect() } fn changelog_metadata_changes(config: StorageBenchConfig) -> Vec { changelog_materialized_changes(config) .into_iter() .enumerate() .map(|(index, mut change)| { change.metadata = Some(snapshot_metadata(index, config.state_payload_bytes)); change }) .collect() } fn changelog_shared_payload_changes(config: StorageBenchConfig) -> Vec { let shared_snapshot_content = snapshot_content(0, config.state_payload_bytes); changelog_materialized_changes(config) .into_iter() .map(|mut change| { change.snapshot_content = Some(shared_snapshot_content.clone()); change }) .collect() } fn changelog_shared_metadata_changes(config: StorageBenchConfig) -> Vec { let shared_metadata = snapshot_metadata(0, config.state_payload_bytes); changelog_materialized_changes(config) .into_iter() .map(|mut change| { change.snapshot_content = None; change.metadata = Some(shared_metadata.clone()); change }) .collect() } fn changelog_shared_payload_and_metadata_changes( config: StorageBenchConfig, ) -> Vec { let shared_snapshot_content = snapshot_content(0, config.state_payload_bytes); let shared_metadata = snapshot_metadata(1, config.state_payload_bytes); changelog_materialized_changes(config) .into_iter() .map(|mut change| { change.snapshot_content = Some(shared_snapshot_content.clone()); change.metadata = Some(shared_metadata.clone()); change }) .collect() } fn changelog_composite_entity_id_changes(config: StorageBenchConfig) -> Vec { changelog_materialized_changes(config) .into_iter() .enumerate() .map(|(index, mut change)| { change.entity_id = EntityIdentity { 
parts: vec![ entity_id("change-composite", index, config.key_pattern), index.to_string(), (index % 2 == 0).to_string(), ], }; change }) .collect() } fn changelog_selective_changes(config: StorageBenchConfig) -> Vec { changelog_materialized_changes(config) .into_iter() .enumerate() .map(|(index, mut change)| { change.schema_key = changelog_schema_key(index, config.selectivity); change }) .collect() } fn changelog_entity_history_changes(config: StorageBenchConfig) -> Vec { changelog_materialized_changes(config) .into_iter() .enumerate() .map(|(index, mut change)| { if index % 10 == 0 { change.entity_id = EntityIdentity::single(CHANGELOG_HISTORY_ENTITY_ID); } change }) .collect() } fn tracked_schema_key(index: usize, selectivity: StorageBenchSelectivity) -> String { if selectivity.matches(index) { TRACKED_MATCH_SCHEMA_KEY } else { TRACKED_OTHER_SCHEMA_KEY } .to_string() } fn untracked_schema_key(index: usize, selectivity: StorageBenchSelectivity) -> String { if selectivity.matches(index) { UNTRACKED_MATCH_SCHEMA_KEY } else { UNTRACKED_OTHER_SCHEMA_KEY } .to_string() } fn changelog_schema_key(index: usize, selectivity: StorageBenchSelectivity) -> String { if selectivity.matches(index) { CHANGELOG_MATCH_SCHEMA_KEY } else { CHANGELOG_OTHER_SCHEMA_KEY } .to_string() } fn entity_id(prefix: &str, index: usize, key_pattern: StorageBenchKeyPattern) -> String { match key_pattern { StorageBenchKeyPattern::Sequential => format!("{prefix}-{index}"), StorageBenchKeyPattern::Random => format!("{prefix}-{:016x}", randomish_index(index)), } } fn randomish_index(index: usize) -> u64 { let mut value = index as u64; value ^= value >> 30; value = value.wrapping_mul(0xbf58_476d_1ce4_e5b9); value ^= value >> 27; value = value.wrapping_mul(0x94d0_49bb_1331_11eb); value ^ (value >> 31) } fn binary_file_ids(rows: usize) -> Vec { (0..rows) .map(|index| format!("bench-file-{index}")) .collect() } fn binary_payloads(rows: usize, blob_bytes: usize) -> Vec> { (0..rows) .map(|index| binary_payload(index, blob_bytes)) .collect() } fn binary_half_duplicate_payloads(rows: usize, blob_bytes: usize) -> Vec> { (0..rows) .map(|index| { if index % 2 == 0 { binary_payload(0, blob_bytes) } else { binary_payload(index, blob_bytes) } }) .collect() } fn binary_blob_writes<'a>(_file_ids: &'a [String], payloads: &'a [Vec]) -> Vec> { payloads .iter() .map(|payload| BlobWrite { bytes: payload.as_slice(), }) .collect() } fn snapshot_content(index: usize, target_bytes: usize) -> String { let mut value = serde_json::json!({ "id": format!("entity-{index}"), "value": format!("value-{index}"), "index": index }); pad_snapshot_content(&mut value, target_bytes); value.to_string() } fn snapshot_metadata(index: usize, target_bytes: usize) -> String { snapshot_content(index, target_bytes) } fn tracked_state_header_columns() -> Vec { [ "entity_id", "schema_key", "file_id", "metadata", "created_at", "updated_at", "change_id", "commit_id", ] .into_iter() .map(str::to_string) .collect() } fn untracked_state_header_columns() -> Vec { [ "entity_id", "schema_key", "file_id", "metadata", "created_at", "updated_at", "global", "version_id", ] .into_iter() .map(str::to_string) .collect() } fn updated_snapshot_content(index: usize, target_bytes: usize) -> String { let mut value = serde_json::json!({ "id": format!("entity-{index}"), "value": format!("updated-{index}"), "index": index }); pad_snapshot_content(&mut value, target_bytes); value.to_string() } fn partial_updated_snapshot_content(index: usize, target_bytes: usize) -> String { let mut value = 
serde_json::json!({ "id": format!("entity-{index}"), "value": format!("value-{index}"), "index": index, "done": true }); pad_snapshot_content(&mut value, target_bytes); value.to_string() } fn delta_chain_snapshot_content( delta_index: usize, row_index: usize, target_bytes: usize, ) -> String { let mut value = serde_json::json!({ "id": format!("entity-{row_index}"), "value": format!("delta-{delta_index}-{row_index}"), "index": row_index, "delta": delta_index }); pad_snapshot_content(&mut value, target_bytes); value.to_string() } fn pad_snapshot_content(value: &mut serde_json::Value, target_bytes: usize) { let current = value.to_string().len(); if target_bytes <= current { return; } value["padding"] = serde_json::Value::String("x".repeat(target_bytes - current)); } fn timestamp(index: usize) -> String { format!( "2026-05-01T00:{:02}:{:02}.000Z", (index / 60) % 60, index % 60 ) } fn binary_payload(index: usize, len: usize) -> Vec { let mut payload = (0..len) .map(|offset| { ((index as u64) .wrapping_mul(31) .wrapping_add((offset as u64).wrapping_mul(17)) & 0xff) as u8 }) .collect::>(); for (offset, byte) in (index as u64).to_le_bytes().into_iter().enumerate() { if offset < payload.len() { payload[offset] = byte; } } payload } fn json_documents(shape: JsonStorePayloadShape, rows: usize) -> Vec> { (0..rows).map(|index| json_document(shape, index)).collect() } fn json_document(shape: JsonStorePayloadShape, index: usize) -> Vec { match shape { JsonStorePayloadShape::SmallRaw1k => json_object_document(index, 1_024, 8), JsonStorePayloadShape::MediumStructured16k => json_object_document(index, 16 * 1024, 128), JsonStorePayloadShape::LargeStructured128k => { json_object_document(index, 128 * 1024, 1_000) } JsonStorePayloadShape::LargeArray128k => json_array_document(index, 128 * 1024, 1_000), } } fn updated_json_document(shape: JsonStorePayloadShape, index: usize) -> Vec { let bytes = json_document(shape, index); let mut value: serde_json::Value = serde_json::from_slice(&bytes).expect("storage bench JSON document should parse"); match shape { JsonStorePayloadShape::LargeArray128k => { value["items"][999]["value"] = serde_json::Value::String(format!("updated-array-value-{index}")); } JsonStorePayloadShape::SmallRaw1k | JsonStorePayloadShape::MediumStructured16k | JsonStorePayloadShape::LargeStructured128k => { value["field_999"] = serde_json::Value::String(format!("updated-object-value-{index}")); } } serde_json::to_vec(&value).expect("storage bench updated JSON should serialize") } fn json_object_document(index: usize, target_bytes: usize, fields: usize) -> Vec { let mut object = serde_json::Map::new(); object.insert( "id".to_string(), serde_json::Value::String(format!("json-{index}")), ); object.insert( "target".to_string(), serde_json::Value::String(format!("target-{index}")), ); object.insert( "status".to_string(), serde_json::Value::String(if index % 2 == 0 { "open" } else { "closed" }.to_string()), ); object.insert( "nested".to_string(), serde_json::json!({ "target": format!("nested-target-{index}"), "revision": index, }), ); for field_index in 0..fields { object.insert( format!("field_{field_index}"), serde_json::Value::String(format!("value-{index}-{field_index}")), ); } pad_json_object(&mut object, target_bytes); serde_json::to_vec(&serde_json::Value::Object(object)) .expect("storage bench object JSON should serialize") } fn json_array_document(index: usize, target_bytes: usize, items: usize) -> Vec { let mut object = serde_json::Map::new(); object.insert( "id".to_string(), 
serde_json::Value::String(format!("json-array-{index}")), ); object.insert( "target".to_string(), serde_json::Value::String(format!("target-{index}")), ); object.insert( "status".to_string(), serde_json::Value::String(if index % 2 == 0 { "open" } else { "closed" }.to_string()), ); object.insert( "items".to_string(), serde_json::Value::Array( (0..items) .map(|item_index| { serde_json::json!({ "index": item_index, "status": if item_index % 2 == 0 { "ready" } else { "blocked" }, "value": format!("item-{index}-{item_index}"), }) }) .collect(), ), ); pad_json_object(&mut object, target_bytes); serde_json::to_vec(&serde_json::Value::Object(object)) .expect("storage bench array JSON should serialize") } fn pad_json_object(object: &mut serde_json::Map, target_bytes: usize) { let current = serde_json::to_vec(&serde_json::Value::Object(object.clone())) .expect("storage bench JSON should serialize") .len(); if target_bytes <= current { return; } object.insert( "padding".to_string(), serde_json::Value::String("x".repeat(target_bytes - current)), ); } fn json_projection_paths(projection: JsonStoreProjectionShape) -> Vec { match projection { JsonStoreProjectionShape::TopLevelTarget => vec![JsonProjectionPath::new("/target")], JsonStoreProjectionShape::TopLevelTenProps => (0..10) .map(|index| JsonProjectionPath::new(format!("/field_{index}"))) .collect(), JsonStoreProjectionShape::NestedTarget => vec![JsonProjectionPath::new("/nested/target")], JsonStoreProjectionShape::ArrayItem999 => { vec![JsonProjectionPath::new("/items/999/value")] } JsonStoreProjectionShape::Status => vec![JsonProjectionPath::new("/status")], } } ================================================ FILE: packages/engine/src/test_support.rs ================================================ use std::sync::Arc; use crate::commit_store::{Change, CommitDraftRef, CommitStoreContext}; use crate::json_store::{ JsonStoreContext, JsonWritePlacementRef, NormalizedJson, NormalizedJsonRef, }; use crate::storage::StorageContext; use crate::storage::StorageWriteSet; use crate::storage::StorageWriteTransaction; use crate::tracked_state::{ MaterializedTrackedStateRow, TrackedStateContext, TrackedStateDeltaRef, }; use crate::transaction::prepare_version_ref_row; use crate::untracked_state::{ MaterializedUntrackedStateRow, UntrackedStateContext, UntrackedStateRow, }; use crate::version::VersionContext; fn prepare_json_ref(value: &str) -> crate::json_store::JsonRef { crate::json_store::JsonRef::for_content(value.as_bytes()) } use crate::GLOBAL_VERSION_ID; pub(crate) const TEST_EMPTY_ROOT_COMMIT_ID: &str = "test-empty-root"; const TEST_TIMESTAMP: &str = "1970-01-01T00:00:00.000Z"; /// Seeds a version head and matching tracked root for unit tests. /// /// A version ref that points at a commit without a tracked root is invalid for /// the serving projection. This helper keeps that invariant in one place while /// still letting low-level tests use synthetic commit ids. pub(crate) async fn seed_version_head(storage: StorageContext, version_id: &str, commit_id: &str) { seed_version_head_with_rows(storage, version_id, commit_id, &[]).await; } /// Seeds the global version head to an empty tracked root for unit tests. pub(crate) async fn seed_global_version_head(storage: StorageContext) { seed_version_head(storage, GLOBAL_VERSION_ID, TEST_EMPTY_ROOT_COMMIT_ID).await; } /// Seeds a version head and writes the tracked root contents for its commit. 
pub(crate) async fn seed_version_head_with_rows( storage: StorageContext, version_id: &str, commit_id: &str, rows: &[MaterializedTrackedStateRow], ) { let mut transaction = storage .begin_write_transaction() .await .expect("seed transaction should open"); let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new())); let mut writes = StorageWriteSet::new(); let canonical_row = prepare_version_ref_row(version_id, commit_id, TEST_TIMESTAMP) .expect("version ref should canonicalize"); version_ctx .stage_canonical_ref_rows(&mut writes, &[canonical_row.row]) .expect("version ref should stage"); writes .apply(&mut transaction.as_mut()) .await .expect("version ref should write"); stage_tracked_root_from_materialized( transaction.as_mut(), &TrackedStateContext::new(), commit_id, None, rows, ) .await .expect("tracked root should write"); transaction.commit().await.expect("seed should commit"); } pub(crate) async fn stage_tracked_root_from_materialized( transaction: &mut dyn StorageWriteTransaction, tracked_state: &TrackedStateContext, commit_id: &str, parent_commit_id: Option<&str>, rows: &[MaterializedTrackedStateRow], ) -> Result<(), crate::LixError> { let mut writes = StorageWriteSet::new(); let changes = rows .iter() .map(tracked_change_from_materialized) .collect::, _>>()?; let json_payloads = materialized_tracked_json_payloads(rows); JsonStoreContext::new().writer().stage_batch( &mut writes, JsonWritePlacementRef::CommitPack { commit_id, pack_id: 0, }, json_payloads .iter() .map(|json| NormalizedJsonRef::from(json)), )?; let parent_ids = parent_commit_id .map(|parent| vec![parent.to_string()]) .unwrap_or_default(); let commit_change_id = format!("{commit_id}:commit"); let commit = CommitDraftRef { id: commit_id, change_id: &commit_change_id, parent_ids: &parent_ids, author_account_ids: &[], created_at: rows .first() .map(|row| row.updated_at.as_str()) .unwrap_or(TEST_TIMESTAMP), }; let commit_store = CommitStoreContext::new(); let change_ids = changes .iter() .map(|change| change.id.clone()) .collect::>(); let existing_changes = commit_store .reader(&mut *transaction) .load_change_index_entries(&change_ids) .await?; let mut authored_changes = Vec::new(); let mut authored_created_at = Vec::new(); let mut authored_updated_at = Vec::new(); let mut adopted_changes = Vec::new(); let mut adopted_created_at = Vec::new(); let mut adopted_updated_at = Vec::new(); for ((change, row), existing) in changes.iter().zip(rows).zip(existing_changes) { if existing.is_some() { adopted_changes.push(change.as_ref()); adopted_created_at.push(row.created_at.as_str()); adopted_updated_at.push(row.updated_at.as_str()); } else { authored_changes.push(change.as_ref()); authored_created_at.push(row.created_at.as_str()); authored_updated_at.push(row.updated_at.as_str()); } } let staged = commit_store .writer(&mut *transaction, &mut writes) .stage_tracked_commit_draft(commit, authored_changes.clone(), adopted_changes.clone()) .await?; let mut deltas = Vec::with_capacity(changes.len()); deltas.extend( authored_changes .iter() .zip(&staged.authored_locators) .zip(authored_created_at) .zip(authored_updated_at) .map( |(((change, locator), created_at), updated_at)| TrackedStateDeltaRef { change: *change, locator: locator.as_ref(), created_at, updated_at, }, ), ); deltas.extend( adopted_changes .iter() .zip(&staged.adopted_locators) .zip(adopted_created_at) .zip(adopted_updated_at) .map( |(((change, locator), created_at), updated_at)| TrackedStateDeltaRef { change: *change, locator: locator.as_ref(), created_at, 
updated_at, }, ), ); tracked_state .writer(&mut *transaction, &mut writes) .stage_delta(commit_id, parent_commit_id, &deltas) .await?; writes.apply(&mut *transaction).await.map(|_| ()) } pub(crate) fn tracked_change_from_materialized( row: &MaterializedTrackedStateRow, ) -> Result { Ok(Change { id: row.change_id.clone(), entity_id: row.entity_id.clone(), schema_key: row.schema_key.clone(), file_id: row.file_id.clone(), snapshot_ref: row.snapshot_content.as_deref().map(prepare_json_ref), metadata_ref: row.metadata.as_ref().map(|value| { let serialized = crate::serialize_row_metadata(value); prepare_json_ref(&serialized) }), created_at: row.created_at.clone(), }) } fn materialized_tracked_json_payloads(rows: &[MaterializedTrackedStateRow]) -> Vec { let mut payloads = Vec::new(); for row in rows { if let Some(snapshot) = row.snapshot_content.as_deref() { payloads.push(NormalizedJson::from_arc_unchecked(Arc::from(snapshot))); } if let Some(metadata) = row.metadata.as_ref() { payloads.push(NormalizedJson::from_arc_unchecked(Arc::from( crate::serialize_row_metadata(metadata), ))); } } payloads } pub(crate) fn untracked_state_row_from_materialized( _writes: &mut StorageWriteSet, row: &MaterializedUntrackedStateRow, ) -> Result { Ok(UntrackedStateRow { entity_id: row.entity_id.clone(), schema_key: row.schema_key.clone(), file_id: row.file_id.clone(), snapshot_content: row.snapshot_content.clone(), metadata: row.metadata.as_ref().map(crate::serialize_row_metadata), created_at: row.created_at.clone(), updated_at: row.updated_at.clone(), global: row.global, version_id: row.version_id.clone(), }) } ================================================ FILE: packages/engine/src/tracked_state/by_file_index.rs ================================================ use crate::tracked_state::codec::{ encode_key_ref as encode_tracked_key_ref, encode_value_ref as encode_tracked_value_ref, }; use crate::tracked_state::types::{ TrackedStateIndexValueRef, TrackedStateKey, TrackedStateKeyRef, TrackedStateTreeScanRequest, }; use crate::tracked_state::TrackedStateScanRequest; use crate::NullableKeyFilter; const NULL_COMPONENT: &str = "\0"; const VALUE_PREFIX: &str = "\u{1}"; pub(crate) struct ByFileIndex; impl ByFileIndex { pub(crate) fn should_use(request: &TrackedStateScanRequest) -> bool { !request.filter.file_ids.is_empty() && request .filter .file_ids .iter() .all(|filter| matches!(filter, NullableKeyFilter::Value(_))) } pub(crate) fn scan_request_from_tracked( request: &TrackedStateScanRequest, ) -> TrackedStateTreeScanRequest { debug_assert!(Self::should_use(request)); let schema_keys = request .filter .file_ids .iter() .filter_map(|filter| match filter { NullableKeyFilter::Any | NullableKeyFilter::Null => None, NullableKeyFilter::Value(file_id) => Some(value_component(file_id)), }) .collect(); let file_ids = request .filter .schema_keys .iter() .cloned() .map(NullableKeyFilter::Value) .collect(); TrackedStateTreeScanRequest { schema_keys, entity_ids: request.filter.entity_ids.clone(), file_ids, include_tombstones: request.filter.include_tombstones, limit: None, } } pub(crate) fn encode_key_ref(row: TrackedStateKeyRef<'_>) -> Vec { debug_assert!(row.file_id.is_some()); let schema_key = component(row.file_id); encode_tracked_key_ref(TrackedStateKeyRef { schema_key: &schema_key, file_id: Some(row.schema_key), entity_id: row.entity_id, }) } pub(crate) fn primary_key_from_index_key( index_key: TrackedStateKey, ) -> Option { let schema_key = index_key.file_id?; Some(TrackedStateKey { schema_key, file_id: 
file_id_from_component(&index_key.schema_key)?, entity_id: index_key.entity_id, }) } pub(crate) fn encode_header_value_ref(value: TrackedStateIndexValueRef<'_>) -> Vec { encode_tracked_value_ref(value) } } fn component(file_id: Option<&str>) -> String { match file_id { Some(file_id) => value_component(file_id), None => NULL_COMPONENT.to_string(), } } fn value_component(file_id: &str) -> String { format!("{VALUE_PREFIX}{file_id}") } fn file_id_from_component(component: &str) -> Option> { if component == NULL_COMPONENT { return Some(None); } component .strip_prefix(VALUE_PREFIX) .map(|file_id| Some(file_id.to_string())) } ================================================ FILE: packages/engine/src/tracked_state/codec.rs ================================================ use std::collections::HashMap; use xxhash_rust::xxh3::xxh3_64_with_seed; use crate::commit_store::ChangeLocator; use crate::entity_identity::EntityIdentity; use crate::json_store::JsonRef; use crate::tracked_state::types::{ TrackedStateDeltaEntry, TrackedStateDeltaRef, TrackedStateIndexValue, TrackedStateIndexValueRef, TrackedStateKey, TrackedStateKeyRef, TRACKED_STATE_HASH_BYTES, }; use crate::LixError; const NODE_VERSION: u8 = 2; const VALUE_VERSION: u8 = 7; const VALUE_DELETED_FLAG: u8 = 0b1000_0000; const VALUE_VERSION_MASK: u8 = 0b0111_1111; const DELTA_PACK_VERSION: u8 = 7; const DELTA_LOCATOR_SAME_COMMIT: u8 = 0; const DELTA_LOCATOR_FULL: u8 = 1; const DELTA_JSON_REFS_INLINE: u8 = 0; const DELTA_JSON_REFS_MIXED_PACK_INDEX: u8 = 1; const DELTA_JSON_REF_NONE: u8 = 0; const DELTA_JSON_REF_PACK_INDEX: u8 = 1; const DELTA_JSON_REF_INLINE: u8 = 2; const DELTA_CHANGE_ID_FULL: u8 = 0; const DELTA_CHANGE_ID_COMMIT_SUFFIX: u8 = 1; const TIMESTAMP_UPDATED_SAME: u8 = 0; const TIMESTAMP_UPDATED_DISTINCT: u8 = 1; const NODE_KIND_LEAF: u8 = 1; const NODE_KIND_INTERNAL: u8 = 2; const WEIBULL_K: i32 = 4; const ENTITY_IDENTITY_END: u8 = 0; const ENTITY_IDENTITY_STRING: u8 = 1; #[derive(Debug, Clone, Copy, PartialEq, Eq)] struct DeltaKeyPrefixRef<'a> { schema_key: &'a str, file_id: Option<&'a str>, } #[derive(Debug, Clone, PartialEq, Eq)] struct DeltaKeyPrefix { schema_key: String, file_id: Option, } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct EncodedLeafEntry { pub(crate) key: Vec, pub(crate) value: Vec, } #[derive(Debug, Clone, Copy)] pub(crate) struct EncodedLeafEntryRef<'a> { pub(crate) key: &'a [u8], pub(crate) value: &'a [u8], } impl EncodedLeafEntry { pub(crate) fn as_ref(&self) -> EncodedLeafEntryRef<'_> { EncodedLeafEntryRef { key: &self.key, value: &self.value, } } } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct PendingChunkWrite { pub(crate) hash: [u8; TRACKED_STATE_HASH_BYTES], pub(crate) data: Vec, } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct ChildSummary { pub(crate) first_key: Vec, pub(crate) last_key: Vec, pub(crate) child_hash: [u8; TRACKED_STATE_HASH_BYTES], pub(crate) subtree_count: u64, } #[derive(Debug, Clone, Copy)] pub(crate) struct ChildSummaryRef<'a> { pub(crate) first_key: &'a [u8], pub(crate) last_key: &'a [u8], pub(crate) child_hash: [u8; TRACKED_STATE_HASH_BYTES], pub(crate) subtree_count: u64, } impl ChildSummary { pub(crate) fn as_ref(&self) -> ChildSummaryRef<'_> { ChildSummaryRef { first_key: &self.first_key, last_key: &self.last_key, child_hash: self.child_hash, subtree_count: self.subtree_count, } } } #[derive(Debug, Clone)] pub(crate) enum DecodedNode { Leaf(DecodedLeafNode), Internal(DecodedInternalNode), } #[derive(Debug, Clone)] pub(crate) enum DecodedNodeRef<'a> { 
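    // Borrowed view over an encoded node: leaf entries are decoded lazily from the
    // backing byte slice, while internal child summaries are decoded eagerly.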
Leaf(DecodedLeafNodeRef<'a>), Internal(DecodedInternalNode), } #[derive(Debug, Clone)] pub(crate) struct DecodedLeafNode { entries: Vec, } impl DecodedLeafNode { pub(crate) fn entries(&self) -> &[EncodedLeafEntry] { &self.entries } } #[derive(Debug, Clone)] pub(crate) struct DecodedLeafNodeRef<'a> { bytes: &'a [u8], payload_start: usize, offsets: Vec, } impl<'a> DecodedLeafNodeRef<'a> { pub(crate) fn len(&self) -> usize { self.offsets.len().saturating_sub(1) } pub(crate) fn entry(&self, index: usize) -> Result>, LixError> { if index >= self.len() { return Ok(None); } let start = self.payload_start + self.offsets[index]; let end = self.payload_start + self.offsets[index + 1]; let record = self.bytes.get(start..end).ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state leaf offset points outside node payload", ) })?; let mut cursor = 0usize; let key = read_sized_slice(record, &mut cursor, "leaf key")?; let value = read_sized_slice(record, &mut cursor, "leaf value")?; if cursor != record.len() { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state leaf entry decode found trailing bytes", )); } Ok(Some(EncodedLeafEntryRef { key, value })) } pub(crate) fn key(&self, index: usize) -> Result, LixError> { let Some(entry) = self.entry(index)? else { return Ok(None); }; Ok(Some(entry.key)) } } #[derive(Debug, Clone)] pub(crate) struct DecodedInternalNode { children: Vec, } impl DecodedInternalNode { pub(crate) fn children(&self) -> &[ChildSummary] { &self.children } } pub(crate) fn hash_bytes(bytes: &[u8]) -> [u8; TRACKED_STATE_HASH_BYTES] { *blake3::hash(bytes).as_bytes() } pub(crate) fn encode_key(key: &TrackedStateKey) -> Vec { encode_key_ref(TrackedStateKeyRef { schema_key: &key.schema_key, file_id: key.file_id.as_deref(), entity_id: &key.entity_id, }) } pub(crate) fn encode_key_ref(key: TrackedStateKeyRef<'_>) -> Vec { let mut out = Vec::new(); append_key_ref(&mut out, key); out } fn append_key_ref(out: &mut Vec, key: TrackedStateKeyRef<'_>) { push_sized_bytes(out, key.schema_key.as_bytes()); match key.file_id { Some(file_id) => { out.push(1); push_sized_bytes(out, file_id.as_bytes()); } None => out.push(0), } push_entity_identity(out, key.entity_id); } pub(crate) fn encode_schema_key_prefix(schema_key: &str) -> Vec { let mut out = Vec::new(); push_sized_bytes(&mut out, schema_key.as_bytes()); out } pub(crate) fn encode_schema_file_prefix(schema_key: &str, file_id: Option<&str>) -> Vec { let mut out = encode_schema_key_prefix(schema_key); match file_id { Some(file_id) => { out.push(1); push_sized_bytes(&mut out, file_id.as_bytes()); } None => out.push(0), } out } pub(crate) fn decode_key(bytes: &[u8]) -> Result { let mut cursor = 0usize; let schema_key = read_sized_string(bytes, &mut cursor, "schema_key")?; let file_id = match read_u8(bytes, &mut cursor, "file_id presence")? { 0 => None, 1 => Some(read_sized_string(bytes, &mut cursor, "file_id")?), other => { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("tracked-state tree key has invalid file_id presence byte {other}"), )) } }; let entity_id = read_entity_identity(bytes, &mut cursor)?; if cursor != bytes.len() { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state tree key decode found trailing bytes", )); } Ok(TrackedStateKey { schema_key, file_id, entity_id, }) } /// Decodes a key after the caller has already proven the schema/file prefix. /// /// This is for scan paths that have matched an encoded prefix range and only /// need to materialize the entity suffix plus the known projection fields. 
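///
/// A minimal sketch (not a compiled doc-test) of the intended prefix-then-decode
/// flow, using the encoders defined in this module:
///
/// ```ignore
/// let encoded = encode_key_ref(TrackedStateKeyRef {
///     schema_key: "schema",
///     file_id: Some("file"),
///     entity_id: &EntityIdentity::single("entity"),
/// });
/// let prefix = encode_schema_file_prefix("schema", Some("file"));
/// assert!(encoded.starts_with(&prefix));
/// let key = decode_key_with_trusted_prefix(&encoded, "schema", Some("file"), prefix.len())
///     .expect("prefix was just matched");
/// assert_eq!(key.entity_id, EntityIdentity::single("entity"));
/// ```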
pub(crate) fn decode_key_with_trusted_prefix( bytes: &[u8], schema_key: &str, file_id: Option<&str>, prefix_len: usize, ) -> Result { let mut cursor = prefix_len; let entity_id = read_entity_identity(bytes, &mut cursor)?; if cursor != bytes.len() { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state tree key decode found trailing bytes", )); } Ok(TrackedStateKey { schema_key: schema_key.to_string(), file_id: file_id.map(str::to_string), entity_id, }) } #[cfg(test)] pub(crate) fn encode_value(value: &TrackedStateIndexValue) -> Vec { encode_value_ref(TrackedStateIndexValueRef { change_locator: value.change_locator.as_ref(), deleted: value.deleted, snapshot_ref: value.snapshot_ref.as_ref(), metadata_ref: value.metadata_ref.as_ref(), created_at: &value.created_at, updated_at: &value.updated_at, }) } pub(crate) fn encode_value_ref(value: TrackedStateIndexValueRef<'_>) -> Vec { let mut out = Vec::new(); append_value_ref(&mut out, value); out } fn append_value_ref(out: &mut Vec, value: TrackedStateIndexValueRef<'_>) { out.push(VALUE_VERSION | if value.deleted { VALUE_DELETED_FLAG } else { 0 }); push_sized_bytes(out, value.change_locator.source_commit_id.as_bytes()); out.extend_from_slice(&value.change_locator.source_pack_id.to_be_bytes()); out.extend_from_slice(&value.change_locator.source_ordinal.to_be_bytes()); push_sized_bytes(out, value.change_locator.change_id.as_bytes()); push_timestamp_pair(out, value.created_at, value.updated_at); push_optional_json_ref(out, value.snapshot_ref); push_optional_json_ref(out, value.metadata_ref); } #[cfg(test)] pub(crate) fn encoded_value_len(value: &TrackedStateIndexValue) -> usize { 1 + sized_bytes_len(value.change_locator.source_commit_id.as_bytes()) + 4 + 4 + sized_bytes_len(value.change_locator.change_id.as_bytes()) + timestamp_pair_len(&value.created_at, &value.updated_at) + optional_json_ref_len(value.snapshot_ref.as_ref()) + optional_json_ref_len(value.metadata_ref.as_ref()) } pub(crate) fn decode_value(bytes: &[u8]) -> Result { let mut cursor = 0usize; let value_header = read_u8(bytes, &mut cursor, "value header")?; let deleted = decode_value_header(value_header)?; decode_value_after_header(bytes, cursor, deleted) } pub(crate) fn decode_visible_value( bytes: &[u8], include_tombstones: bool, ) -> Result, LixError> { let mut cursor = 0usize; let value_header = read_u8(bytes, &mut cursor, "value header")?; let deleted = decode_value_header(value_header)?; if deleted && !include_tombstones { return Ok(None); } decode_value_after_header(bytes, cursor, deleted).map(Some) } fn decode_value_header(value_header: u8) -> Result { let version = value_header & VALUE_VERSION_MASK; let deleted = value_header & VALUE_DELETED_FLAG != 0; if version != VALUE_VERSION { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("unsupported tracked-state tree value version {version}"), )); } Ok(deleted) } fn decode_value_after_header( bytes: &[u8], mut cursor: usize, deleted: bool, ) -> Result { let source_commit_id = read_sized_string(bytes, &mut cursor, "source_commit_id")?; let source_pack_id = u32::try_from(read_u32(bytes, &mut cursor, "source_pack_id")?).map_err(|_| { LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked-state source_pack_id exceeds u32", ) })?; let source_ordinal = u32::try_from(read_u32(bytes, &mut cursor, "source_ordinal")?).map_err(|_| { LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked-state source_ordinal exceeds u32", ) })?; let change_id = read_sized_string(bytes, &mut cursor, "change_id")?; let (created_at, updated_at) = 
read_timestamp_pair(bytes, &mut cursor)?; let snapshot_ref = read_optional_json_ref(bytes, &mut cursor, "snapshot_ref")?; let metadata_ref = read_optional_json_ref(bytes, &mut cursor, "metadata_ref")?; if cursor != bytes.len() { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state tree value decode found trailing bytes", )); } Ok(TrackedStateIndexValue { change_locator: ChangeLocator { source_commit_id, source_pack_id, source_ordinal, change_id, }, deleted, snapshot_ref, metadata_ref, created_at, updated_at, }) } pub(crate) fn encode_delta_pack_refs( commit_id: &str, deltas: &[TrackedStateDeltaRef<'_>], ) -> Result, LixError> { encode_delta_pack_refs_with_json_pack_indexes(commit_id, deltas, None) } pub(crate) fn encode_delta_pack_refs_with_json_pack_indexes( commit_id: &str, deltas: &[TrackedStateDeltaRef<'_>], json_pack_indexes: Option<&HashMap<[u8; TRACKED_STATE_HASH_BYTES], usize>>, ) -> Result, LixError> { let json_pack_indexes = json_pack_indexes.filter(|indexes| !indexes.is_empty()); let mut out = Vec::new(); out.extend_from_slice(b"LXTD"); out.push(DELTA_PACK_VERSION); push_var_sized_bytes(&mut out, commit_id.as_bytes(), "delta pack commit_id")?; let (key_prefixes, delta_prefix_indexes) = delta_key_prefixes(deltas); push_var_u32(&mut out, key_prefixes.len(), "delta key prefix count")?; for prefix in &key_prefixes { append_delta_key_prefix_ref(&mut out, *prefix)?; } push_var_u32(&mut out, deltas.len(), "delta pack entry count")?; out.push(if json_pack_indexes.is_some() { DELTA_JSON_REFS_MIXED_PACK_INDEX } else { DELTA_JSON_REFS_INLINE }); for (delta, prefix_index) in deltas.iter().zip(delta_prefix_indexes) { append_delta_key_ref( &mut out, &key_prefixes, prefix_index, TrackedStateKeyRef { schema_key: delta.change.schema_key, file_id: delta.change.file_id, entity_id: delta.change.entity_id, }, )?; append_delta_value_ref( &mut out, commit_id, json_pack_indexes, TrackedStateIndexValueRef { change_locator: delta.locator, deleted: delta.change.snapshot_ref.is_none(), snapshot_ref: delta.change.snapshot_ref, metadata_ref: delta.change.metadata_ref, created_at: delta.created_at, updated_at: delta.updated_at, }, )?; } Ok(out) } fn delta_key_prefixes<'a>( deltas: &'a [TrackedStateDeltaRef<'a>], ) -> (Vec>, Vec) { let mut prefixes = Vec::new(); let mut delta_prefix_indexes = Vec::with_capacity(deltas.len()); for delta in deltas { let prefix = DeltaKeyPrefixRef { schema_key: delta.change.schema_key, file_id: delta.change.file_id, }; let prefix_index = match prefixes.iter().position(|candidate| *candidate == prefix) { Some(prefix_index) => prefix_index, None => { let prefix_index = prefixes.len(); prefixes.push(prefix); prefix_index } }; delta_prefix_indexes.push(prefix_index); } (prefixes, delta_prefix_indexes) } fn append_delta_key_prefix_ref( out: &mut Vec, prefix: DeltaKeyPrefixRef<'_>, ) -> Result<(), LixError> { push_var_sized_bytes( out, prefix.schema_key.as_bytes(), "delta key prefix schema_key", )?; match prefix.file_id { Some(file_id) => { out.push(1); push_var_sized_bytes(out, file_id.as_bytes(), "delta key prefix file_id")?; } None => out.push(0), } Ok(()) } fn decode_delta_key_prefix(bytes: &[u8], cursor: &mut usize) -> Result { let schema_key = read_var_sized_string(bytes, cursor, "delta key prefix schema_key")?; let file_id = match read_u8(bytes, cursor, "delta key prefix file_id presence")? 
{ 0 => None, 1 => Some(read_var_sized_string( bytes, cursor, "delta key prefix file_id", )?), other => { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("tracked-state delta key prefix has invalid file_id presence byte {other}"), )) } }; Ok(DeltaKeyPrefix { schema_key, file_id, }) } fn append_delta_key_ref( out: &mut Vec, prefixes: &[DeltaKeyPrefixRef<'_>], prefix_index: usize, key: TrackedStateKeyRef<'_>, ) -> Result<(), LixError> { let prefix = DeltaKeyPrefixRef { schema_key: key.schema_key, file_id: key.file_id, }; debug_assert_eq!(prefixes.get(prefix_index), Some(&prefix)); push_var_u32(out, prefix_index, "delta key prefix index")?; push_var_entity_identity(out, key.entity_id)?; Ok(()) } fn append_delta_value_ref( out: &mut Vec, pack_commit_id: &str, json_pack_indexes: Option<&HashMap<[u8; TRACKED_STATE_HASH_BYTES], usize>>, value: TrackedStateIndexValueRef<'_>, ) -> Result<(), LixError> { out.push(VALUE_VERSION | if value.deleted { VALUE_DELETED_FLAG } else { 0 }); if value.change_locator.source_commit_id == pack_commit_id { out.push(DELTA_LOCATOR_SAME_COMMIT); } else { out.push(DELTA_LOCATOR_FULL); push_var_sized_bytes( out, value.change_locator.source_commit_id.as_bytes(), "source_commit_id", )?; } push_var_u32( out, value.change_locator.source_pack_id as usize, "source_pack_id", )?; push_var_u32( out, value.change_locator.source_ordinal as usize, "source_ordinal", )?; push_var_delta_change_id( out, value.change_locator.source_commit_id, value.change_locator.change_id, )?; push_var_timestamp_pair(out, value.created_at, value.updated_at)?; match json_pack_indexes { Some(indexes) => { push_mixed_optional_json_ref(out, indexes, value.snapshot_ref)?; push_mixed_optional_json_ref(out, indexes, value.metadata_ref)?; } None => { push_optional_json_ref(out, value.snapshot_ref); push_optional_json_ref(out, value.metadata_ref); } } Ok(()) } pub(crate) fn decode_delta_pack( bytes: &[u8], pack_json_refs: Option<&[JsonRef]>, ) -> Result<(String, Vec), LixError> { let mut cursor = 0usize; let magic = bytes.get(0..4).ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state delta pack is truncated before magic", ) })?; if magic != b"LXTD" { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state delta pack has invalid magic", )); } cursor += 4; let version = read_u8(bytes, &mut cursor, "delta pack version")?; if version != DELTA_PACK_VERSION { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("unsupported tracked-state delta pack version {version}"), )); } let commit_id = read_var_sized_string(bytes, &mut cursor, "delta pack commit_id")?; let prefix_count = read_var_u32(bytes, &mut cursor, "delta key prefix count")?; let mut key_prefixes = Vec::new(); for _ in 0..prefix_count { key_prefixes.push(decode_delta_key_prefix(bytes, &mut cursor)?); } let count = read_var_u32(bytes, &mut cursor, "delta pack entry count")?; let json_ref_mode = decode_delta_json_ref_mode(bytes, &mut cursor, pack_json_refs)?; let mut entries = Vec::new(); for _ in 0..count { let key = decode_delta_key(bytes, &mut cursor, &key_prefixes)?; let value = decode_delta_value(bytes, &mut cursor, &commit_id, &json_ref_mode)?; entries.push(TrackedStateDeltaEntry { key, value }); } if cursor != bytes.len() { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state delta pack decode found trailing bytes", )); } Ok((commit_id, entries)) } pub(crate) fn delta_pack_uses_json_pack_indexes(bytes: &[u8]) -> Result { let mut cursor = 0usize; let magic = bytes.get(0..4).ok_or_else(|| { LixError::new( 
"LIX_ERROR_UNKNOWN", "tracked-state delta pack is truncated before magic", ) })?; if magic != b"LXTD" { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state delta pack has invalid magic", )); } cursor += 4; let version = read_u8(bytes, &mut cursor, "delta pack version")?; if version != DELTA_PACK_VERSION { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("unsupported tracked-state delta pack version {version}"), )); } let _commit_id = read_var_sized_string(bytes, &mut cursor, "delta pack commit_id")?; let prefix_count = read_var_u32(bytes, &mut cursor, "delta key prefix count")?; for _ in 0..prefix_count { let _ = decode_delta_key_prefix(bytes, &mut cursor)?; } let _count = read_var_u32(bytes, &mut cursor, "delta pack entry count")?; match read_u8(bytes, &mut cursor, "delta JSON ref mode")? { DELTA_JSON_REFS_INLINE => Ok(false), DELTA_JSON_REFS_MIXED_PACK_INDEX => Ok(true), other => Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("tracked-state delta pack has invalid JSON ref mode {other}"), )), } } fn decode_delta_key( bytes: &[u8], cursor: &mut usize, prefixes: &[DeltaKeyPrefix], ) -> Result { let prefix_index = read_var_u32(bytes, cursor, "delta key prefix index")?; let prefix = prefixes.get(prefix_index).ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", format!("tracked-state delta key prefix index {prefix_index} is out of bounds"), ) })?; let entity_id = read_var_entity_identity(bytes, cursor)?; Ok(TrackedStateKey { schema_key: prefix.schema_key.clone(), file_id: prefix.file_id.clone(), entity_id, }) } enum DeltaJsonRefDecodeMode<'a> { Inline, MixedPackIndex(&'a [JsonRef]), } fn decode_delta_json_ref_mode<'a>( bytes: &[u8], cursor: &mut usize, pack_json_refs: Option<&'a [JsonRef]>, ) -> Result, LixError> { match read_u8(bytes, cursor, "delta JSON ref mode")? { DELTA_JSON_REFS_INLINE => Ok(DeltaJsonRefDecodeMode::Inline), DELTA_JSON_REFS_MIXED_PACK_INDEX => { let refs = pack_json_refs.ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked-state delta pack needs JSON pack refs but none were provided", ) })?; Ok(DeltaJsonRefDecodeMode::MixedPackIndex(refs)) } other => Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("tracked-state delta pack has invalid JSON ref mode {other}"), )), } } fn decode_delta_value( bytes: &[u8], cursor: &mut usize, pack_commit_id: &str, json_ref_mode: &DeltaJsonRefDecodeMode<'_>, ) -> Result { let value_header = read_u8(bytes, cursor, "delta value header")?; let deleted = decode_value_header(value_header)?; let source_commit_id = match read_u8(bytes, cursor, "delta locator tag")? 
{ DELTA_LOCATOR_SAME_COMMIT => pack_commit_id.to_string(), DELTA_LOCATOR_FULL => read_var_sized_string(bytes, cursor, "source_commit_id")?, other => { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("tracked-state delta value has invalid locator tag {other}"), )) } }; let source_pack_id = u32::try_from(read_var_u32(bytes, cursor, "source_pack_id")?).map_err(|_| { LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked-state source_pack_id exceeds u32", ) })?; let source_ordinal = u32::try_from(read_var_u32(bytes, cursor, "source_ordinal")?).map_err(|_| { LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked-state source_ordinal exceeds u32", ) })?; let change_id = read_var_delta_change_id(bytes, cursor, &source_commit_id)?; let (created_at, updated_at) = read_var_timestamp_pair(bytes, cursor)?; let (snapshot_ref, metadata_ref) = match json_ref_mode { DeltaJsonRefDecodeMode::Inline => ( read_optional_json_ref(bytes, cursor, "snapshot_ref")?, read_optional_json_ref(bytes, cursor, "metadata_ref")?, ), DeltaJsonRefDecodeMode::MixedPackIndex(refs) => ( read_mixed_optional_json_ref(bytes, cursor, refs, "snapshot_ref")?, read_mixed_optional_json_ref(bytes, cursor, refs, "metadata_ref")?, ), }; Ok(TrackedStateIndexValue { change_locator: ChangeLocator { source_commit_id, source_pack_id, source_ordinal, change_id, }, deleted, snapshot_ref, metadata_ref, created_at, updated_at, }) } #[cfg(test)] fn sized_bytes_len(bytes: &[u8]) -> usize { 4 + bytes.len() } pub(crate) fn encode_leaf_node(entries: &[EncodedLeafEntry]) -> Vec { let entries = entries .iter() .map(EncodedLeafEntry::as_ref) .collect::>(); encode_leaf_node_refs(&entries) } pub(crate) fn encode_leaf_node_refs(entries: &[EncodedLeafEntryRef<'_>]) -> Vec { let mut out = Vec::new(); out.push(NODE_KIND_LEAF); out.push(NODE_VERSION); push_u32(&mut out, entries.len()); let mut offsets = Vec::with_capacity(entries.len().saturating_add(1)); let mut payload = Vec::new(); offsets.push(0usize); for entry in entries { push_sized_bytes(&mut payload, entry.key); push_sized_bytes(&mut payload, entry.value); offsets.push(payload.len()); } for offset in offsets { push_u32(&mut out, offset); } out.extend_from_slice(&payload); out } pub(crate) fn encode_internal_node(children: &[ChildSummary]) -> Vec { let children = children .iter() .map(ChildSummary::as_ref) .collect::>(); encode_internal_node_refs(&children) } pub(crate) fn encode_internal_node_refs(children: &[ChildSummaryRef<'_>]) -> Vec { let mut out = Vec::new(); out.push(NODE_KIND_INTERNAL); out.push(NODE_VERSION); push_u32(&mut out, children.len()); for child in children { push_sized_bytes(&mut out, child.first_key); push_sized_bytes(&mut out, child.last_key); out.extend_from_slice(&child.child_hash); out.extend_from_slice(&child.subtree_count.to_be_bytes()); } out } pub(crate) fn decode_node(bytes: &[u8]) -> Result { match decode_node_ref(bytes)? 
{ DecodedNodeRef::Leaf(leaf) => { let mut entries = Vec::with_capacity(leaf.len()); for index in 0..leaf.len() { let entry = leaf.entry(index)?.ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state leaf entry disappeared during owned decode", ) })?; entries.push(EncodedLeafEntry { key: entry.key.to_vec(), value: entry.value.to_vec(), }); } Ok(DecodedNode::Leaf(DecodedLeafNode { entries })) } DecodedNodeRef::Internal(internal) => Ok(DecodedNode::Internal(internal)), } } pub(crate) fn decode_node_ref(bytes: &[u8]) -> Result, LixError> { let mut cursor = 0usize; let kind = read_u8(bytes, &mut cursor, "node kind")?; let version = read_u8(bytes, &mut cursor, "node version")?; if version != NODE_VERSION { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("unsupported tracked-state tree node version {version}"), )); } let count = read_u32(bytes, &mut cursor, "entry count")?; let node = match kind { NODE_KIND_LEAF => { let leaf = decode_leaf_node_ref_after_count(bytes, &mut cursor, count)?; DecodedNodeRef::Leaf(leaf) } NODE_KIND_INTERNAL => { let mut children = Vec::with_capacity(count); for _ in 0..count { let first_key = read_sized_bytes(bytes, &mut cursor, "internal first_key")?; let last_key = read_sized_bytes(bytes, &mut cursor, "internal last_key")?; let child_hash = read_fixed_hash(bytes, &mut cursor, "internal child_hash")?; let subtree_count = read_u64(bytes, &mut cursor, "internal subtree_count")?; children.push(ChildSummary { first_key, last_key, child_hash, subtree_count, }); } DecodedNodeRef::Internal(DecodedInternalNode { children }) } other => { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("unknown tracked-state tree node kind {other}"), )) } }; if cursor != bytes.len() { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state tree node decode found trailing bytes", )); } Ok(node) } fn decode_leaf_node_ref_after_count<'a>( bytes: &'a [u8], cursor: &mut usize, count: usize, ) -> Result, LixError> { let mut offsets = Vec::with_capacity(count.saturating_add(1)); for _ in 0..=count { offsets.push(read_u32(bytes, cursor, "leaf entry offset")?); } if offsets.first().copied() != Some(0) { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state leaf offset table must start at zero", )); } for window in offsets.windows(2) { if window[0] > window[1] { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state leaf offsets must be monotonic", )); } } let payload_len = bytes.len().checked_sub(*cursor).ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state leaf payload start is past node end", ) })?; if offsets.last().copied().unwrap_or_default() != payload_len { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state leaf offset table does not cover full payload", )); } let payload_start = *cursor; *cursor = bytes.len(); Ok(DecodedLeafNodeRef { bytes, payload_start, offsets, }) } pub(crate) fn child_summary_from_node( node_bytes: Vec, first_key: Vec, last_key: Vec, subtree_count: u64, ) -> (PendingChunkWrite, ChildSummary) { let hash = hash_bytes(&node_bytes); ( PendingChunkWrite { hash, data: node_bytes, }, ChildSummary { first_key, last_key, child_hash: hash, subtree_count, }, ) } pub(crate) fn boundary_trigger( encoded_key: &[u8], level: usize, chunk_size: usize, item_size: usize, target_chunk_bytes: usize, ) -> bool { if item_size == 0 || target_chunk_bytes == 0 { return false; } let start = weibull_cdf(chunk_size.saturating_sub(item_size) as f64 / target_chunk_bytes as f64); let end = weibull_cdf(chunk_size as f64 / 
target_chunk_bytes as f64); let remaining = 1.0 - start; if remaining <= 0.0 { return true; } let split_probability = ((end - start) / remaining).clamp(0.0, 1.0); let hash = xxh3_64_with_seed(encoded_key, level_salt(level)); (hash as f64) < split_probability * (u64::MAX as f64) } fn weibull_cdf(normalized_size: f64) -> f64 { if normalized_size <= 0.0 { return 0.0; } -f64::exp_m1(-normalized_size.powi(WEIBULL_K)) } fn level_salt(level: usize) -> u64 { let mut value = (level as u64).wrapping_add(0x9e37_79b9_7f4a_7c15); value = (value ^ (value >> 30)).wrapping_mul(0xbf58_476d_1ce4_e5b9); value = (value ^ (value >> 27)).wrapping_mul(0x94d0_49bb_1331_11eb); value ^ (value >> 31) } fn push_entity_identity(out: &mut Vec, identity: &EntityIdentity) { assert!( !identity.parts.is_empty(), "tracked-state key entity identity must contain at least one part" ); for part in &identity.parts { out.push(ENTITY_IDENTITY_STRING); push_sized_bytes(out, part.as_bytes()); } out.push(ENTITY_IDENTITY_END); } fn read_entity_identity(bytes: &[u8], cursor: &mut usize) -> Result { let mut parts = Vec::new(); loop { let tag = read_u8(bytes, cursor, "entity identity part tag")?; match tag { ENTITY_IDENTITY_END => break, ENTITY_IDENTITY_STRING => { parts.push(read_sized_string( bytes, cursor, "entity identity string part", )?); } other => { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("tracked-state tree key has invalid entity identity part tag {other}"), )) } } } if parts.is_empty() { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state tree key entity identity must contain at least one part", )); } Ok(EntityIdentity { parts }) } fn push_sized_bytes(out: &mut Vec, bytes: &[u8]) { push_u32(out, bytes.len()); out.extend_from_slice(bytes); } fn push_var_u32(out: &mut Vec, value: usize, field_name: &str) -> Result<(), LixError> { let (encoded, len) = var_u32_bytes(value, field_name)?; out.extend_from_slice(&encoded[..len]); Ok(()) } fn var_u32_bytes(value: usize, field_name: &str) -> Result<([u8; 5], usize), LixError> { let mut value = u32::try_from(value).map_err(|_| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("tracked-state delta pack field '{field_name}' exceeds u32"), ) })?; let mut encoded = [0_u8; 5]; let mut len = 0usize; while value >= 0x80 { encoded[len] = (value as u8 & 0x7f) | 0x80; len += 1; value >>= 7; } encoded[len] = value as u8; len += 1; Ok((encoded, len)) } fn push_var_sized_bytes(out: &mut Vec, bytes: &[u8], field_name: &str) -> Result<(), LixError> { push_var_u32(out, bytes.len(), field_name)?; out.extend_from_slice(bytes); Ok(()) } fn push_var_entity_identity(out: &mut Vec, identity: &EntityIdentity) -> Result<(), LixError> { assert!( !identity.parts.is_empty(), "tracked-state delta key entity identity must contain at least one part" ); push_var_u32(out, identity.parts.len(), "entity identity part count")?; for part in &identity.parts { push_var_sized_bytes(out, part.as_bytes(), "entity identity string part")?; } Ok(()) } fn push_optional_json_ref(out: &mut Vec, json_ref: Option<&JsonRef>) { match json_ref { Some(json_ref) => { out.push(1); out.extend_from_slice(json_ref.as_hash_bytes()); } None => out.push(0), } } fn push_mixed_optional_json_ref( out: &mut Vec, indexes: &HashMap<[u8; TRACKED_STATE_HASH_BYTES], usize>, json_ref: Option<&JsonRef>, ) -> Result<(), LixError> { let Some(json_ref) = json_ref else { out.push(DELTA_JSON_REF_NONE); return Ok(()); }; if let Some(index) = indexes.get(json_ref.as_hash_array()).copied() { out.push(DELTA_JSON_REF_PACK_INDEX); 
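        // A payload hash that is present in the pack's JSON ref table is encoded as a
        // varint index into that table; unknown hashes fall through to the inline branch.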
        push_var_u32(out, index, "json ref pack index")
    } else {
        out.push(DELTA_JSON_REF_INLINE);
        out.extend_from_slice(json_ref.as_hash_bytes());
        Ok(())
    }
}

fn push_var_delta_change_id(
    out: &mut Vec<u8>,
    source_commit_id: &str,
    change_id: &str,
) -> Result<(), LixError> {
    if let Some(suffix) = change_id.strip_prefix(source_commit_id) {
        out.push(DELTA_CHANGE_ID_COMMIT_SUFFIX);
        push_var_sized_bytes(out, suffix.as_bytes(), "change_id")
    } else {
        out.push(DELTA_CHANGE_ID_FULL);
        push_var_sized_bytes(out, change_id.as_bytes(), "change_id")
    }
}

fn read_var_delta_change_id(
    bytes: &[u8],
    cursor: &mut usize,
    source_commit_id: &str,
) -> Result<String, LixError> {
    let tag = read_u8(bytes, cursor, "delta change_id tag")?;
    let value = read_var_sized_string(bytes, cursor, "change_id")?;
    match tag {
        DELTA_CHANGE_ID_FULL => Ok(value),
        DELTA_CHANGE_ID_COMMIT_SUFFIX => Ok(format!("{source_commit_id}{value}")),
        other => Err(LixError::new(
            "LIX_ERROR_UNKNOWN",
            format!("tracked-state delta value has invalid change_id tag {other}"),
        )),
    }
}

#[cfg(test)]
fn optional_json_ref_len(json_ref: Option<&JsonRef>) -> usize {
    1 + json_ref.map_or(0, |_| TRACKED_STATE_HASH_BYTES)
}

fn push_timestamp_pair(out: &mut Vec<u8>, created_at: &str, updated_at: &str) {
    push_sized_bytes(out, created_at.as_bytes());
    if updated_at == created_at {
        out.push(TIMESTAMP_UPDATED_SAME);
    } else {
        out.push(TIMESTAMP_UPDATED_DISTINCT);
        push_sized_bytes(out, updated_at.as_bytes());
    }
}

fn push_var_timestamp_pair(
    out: &mut Vec<u8>,
    created_at: &str,
    updated_at: &str,
) -> Result<(), LixError> {
    push_var_sized_bytes(out, created_at.as_bytes(), "created_at")?;
    if updated_at == created_at {
        out.push(TIMESTAMP_UPDATED_SAME);
    } else {
        out.push(TIMESTAMP_UPDATED_DISTINCT);
        push_var_sized_bytes(out, updated_at.as_bytes(), "updated_at")?;
    }
    Ok(())
}

#[cfg(test)]
fn timestamp_pair_len(created_at: &str, updated_at: &str) -> usize {
    sized_bytes_len(created_at.as_bytes())
        + 1
        + if updated_at == created_at {
            0
        } else {
            sized_bytes_len(updated_at.as_bytes())
        }
}

fn read_timestamp_pair(bytes: &[u8], cursor: &mut usize) -> Result<(String, String), LixError> {
    let created_at = read_sized_string(bytes, cursor, "created_at")?;
    let updated_at = match read_u8(bytes, cursor, "updated_at tag")? {
        TIMESTAMP_UPDATED_SAME => created_at.clone(),
        TIMESTAMP_UPDATED_DISTINCT => read_sized_string(bytes, cursor, "updated_at")?,
        other => {
            return Err(LixError::new(
                "LIX_ERROR_UNKNOWN",
                format!("tracked-state timestamp pair has invalid updated_at tag {other}"),
            ))
        }
    };
    Ok((created_at, updated_at))
}

fn read_var_timestamp_pair(bytes: &[u8], cursor: &mut usize) -> Result<(String, String), LixError> {
    let created_at = read_var_sized_string(bytes, cursor, "created_at")?;
    let updated_at = match read_u8(bytes, cursor, "updated_at tag")? {
        TIMESTAMP_UPDATED_SAME => created_at.clone(),
        TIMESTAMP_UPDATED_DISTINCT => read_var_sized_string(bytes, cursor, "updated_at")?,
        other => {
            return Err(LixError::new(
                "LIX_ERROR_UNKNOWN",
                format!("tracked-state timestamp pair has invalid updated_at tag {other}"),
            ))
        }
    };
    Ok((created_at, updated_at))
}

fn push_u32(out: &mut Vec<u8>, value: usize) {
    out.extend_from_slice(&(value as u32).to_be_bytes());
}

fn read_sized_string(
    bytes: &[u8],
    cursor: &mut usize,
    field_name: &str,
) -> Result<String, LixError> {
    String::from_utf8(read_sized_bytes(bytes, cursor, field_name)?).map_err(|error| {
        LixError::new(
            "LIX_ERROR_UNKNOWN",
            format!("tracked-state tree field '{field_name}' is invalid UTF-8: {error}"),
        )
    })
}

fn read_sized_bytes(
    bytes: &[u8],
    cursor: &mut usize,
    field_name: &str,
) -> Result<Vec<u8>, LixError> {
    read_sized_slice(bytes, cursor, field_name).map(<[u8]>::to_vec)
}

fn read_sized_slice<'a>(
    bytes: &'a [u8],
    cursor: &mut usize,
    field_name: &str,
) -> Result<&'a [u8], LixError> {
    let len = read_u32(bytes, cursor, field_name)?;
    let end = cursor.checked_add(len).ok_or_else(|| {
        LixError::new(
            "LIX_ERROR_UNKNOWN",
            format!("tracked-state tree field '{field_name}' length overflow"),
        )
    })?;
    let slice = bytes.get(*cursor..end).ok_or_else(|| {
        LixError::new(
            "LIX_ERROR_UNKNOWN",
            format!("tracked-state tree field '{field_name}' is truncated"),
        )
    })?;
    *cursor = end;
    Ok(slice)
}

fn read_var_sized_string(
    bytes: &[u8],
    cursor: &mut usize,
    field_name: &str,
) -> Result<String, LixError> {
    String::from_utf8(read_var_sized_slice(bytes, cursor, field_name)?.to_vec()).map_err(|error| {
        LixError::new(
            "LIX_ERROR_UNKNOWN",
            format!("tracked-state delta pack field '{field_name}' is invalid UTF-8: {error}"),
        )
    })
}

fn read_var_sized_slice<'a>(
    bytes: &'a [u8],
    cursor: &mut usize,
    field_name: &str,
) -> Result<&'a [u8], LixError> {
    let len = read_var_u32(bytes, cursor, field_name)?;
    let end = cursor.checked_add(len).ok_or_else(|| {
        LixError::new(
            "LIX_ERROR_UNKNOWN",
            format!("tracked-state delta pack field '{field_name}' length overflow"),
        )
    })?;
    let slice = bytes.get(*cursor..end).ok_or_else(|| {
        LixError::new(
            "LIX_ERROR_UNKNOWN",
            format!("tracked-state delta pack field '{field_name}' is truncated"),
        )
    })?;
    *cursor = end;
    Ok(slice)
}

fn read_var_entity_identity(bytes: &[u8], cursor: &mut usize) -> Result<EntityIdentity, LixError> {
    let count = read_var_u32(bytes, cursor, "entity identity part count")?;
    let mut parts = Vec::new();
    for _ in 0..count {
        parts.push(read_var_sized_string(
            bytes,
            cursor,
            "entity identity string part",
        )?);
    }
    if parts.is_empty() {
        return Err(LixError::new(
            "LIX_ERROR_UNKNOWN",
            "tracked-state delta key entity identity must contain at least one part",
        ));
    }
    Ok(EntityIdentity { parts })
}

fn read_fixed_hash(
    bytes: &[u8],
    cursor: &mut usize,
    field_name: &str,
) -> Result<[u8; TRACKED_STATE_HASH_BYTES], LixError> {
    let end = *cursor + TRACKED_STATE_HASH_BYTES;
    let slice = bytes.get(*cursor..end).ok_or_else(|| {
        LixError::new(
            "LIX_ERROR_UNKNOWN",
            format!("tracked-state tree field '{field_name}' is truncated"),
        )
    })?;
    let mut out = [0_u8; TRACKED_STATE_HASH_BYTES];
    out.copy_from_slice(slice);
    *cursor = end;
    Ok(out)
}

fn read_optional_json_ref(
    bytes: &[u8],
    cursor: &mut usize,
    field_name: &str,
) -> Result<Option<JsonRef>, LixError> {
    match read_u8(bytes, cursor, field_name)? {
        0 => Ok(None),
        1 => Ok(Some(JsonRef::from_hash_bytes(read_fixed_hash(
            bytes, cursor, field_name,
        )?))),
        other => Err(LixError::new(
            "LIX_ERROR_UNKNOWN",
            format!("tracked-state tree field '{field_name}' has invalid JSON ref tag {other}"),
        )),
    }
}

fn read_mixed_optional_json_ref(
    bytes: &[u8],
    cursor: &mut usize,
    refs: &[JsonRef],
    field_name: &str,
) -> Result<Option<JsonRef>, LixError> {
    match read_u8(bytes, cursor, field_name)? {
        DELTA_JSON_REF_NONE => Ok(None),
        DELTA_JSON_REF_PACK_INDEX => {
            let index = read_var_u32(bytes, cursor, field_name)?;
            refs.get(index).copied().map(Some).ok_or_else(|| {
                LixError::new(
                    "LIX_ERROR_UNKNOWN",
                    format!("tracked-state delta JSON ref index {index} is out of bounds"),
                )
            })
        }
        DELTA_JSON_REF_INLINE => {
            let hash = read_fixed_hash(bytes, cursor, field_name)?;
            Ok(Some(JsonRef::from_hash_bytes(hash)))
        }
        other => Err(LixError::new(
            "LIX_ERROR_UNKNOWN",
            format!("tracked-state tree field '{field_name}' has invalid JSON ref tag {other}"),
        )),
    }
}

fn read_u8(bytes: &[u8], cursor: &mut usize, field_name: &str) -> Result<u8, LixError> {
    let value = *bytes.get(*cursor).ok_or_else(|| {
        LixError::new(
            "LIX_ERROR_UNKNOWN",
            format!("tracked-state tree field '{field_name}' is truncated"),
        )
    })?;
    *cursor += 1;
    Ok(value)
}

fn read_var_u32(bytes: &[u8], cursor: &mut usize, field_name: &str) -> Result<usize, LixError> {
    let mut value = 0u32;
    let mut shift = 0u32;
    for byte_index in 0..5 {
        let byte = read_u8(bytes, cursor, field_name)?;
        if shift == 28 && (byte & 0x80 != 0 || byte & 0x70 != 0) {
            return Err(LixError::new(
                "LIX_ERROR_UNKNOWN",
                format!("tracked-state delta pack field '{field_name}' varint exceeds u32"),
            ));
        }
        if byte_index > 0 && byte & 0x80 == 0 && byte == 0 {
            return Err(LixError::new(
                "LIX_ERROR_UNKNOWN",
                format!("tracked-state delta pack field '{field_name}' has non-canonical varint"),
            ));
        }
        value |= ((byte & 0x7f) as u32) << shift;
        if byte & 0x80 == 0 {
            return Ok(value as usize);
        }
        shift += 7;
    }
    Err(LixError::new(
        "LIX_ERROR_UNKNOWN",
        format!("tracked-state delta pack field '{field_name}' varint exceeds u32"),
    ))
}

fn read_u32(bytes: &[u8], cursor: &mut usize, field_name: &str) -> Result<usize, LixError> {
    let end = *cursor + 4;
    let slice = bytes.get(*cursor..end).ok_or_else(|| {
        LixError::new(
            "LIX_ERROR_UNKNOWN",
            format!("tracked-state tree field '{field_name}' is truncated"),
        )
    })?;
    let mut out = [0_u8; 4];
    out.copy_from_slice(slice);
    *cursor = end;
    Ok(u32::from_be_bytes(out) as usize)
}

fn read_u64(bytes: &[u8], cursor: &mut usize, field_name: &str) -> Result<u64, LixError> {
    let end = *cursor + 8;
    let slice = bytes.get(*cursor..end).ok_or_else(|| {
        LixError::new(
            "LIX_ERROR_UNKNOWN",
            format!("tracked-state tree field '{field_name}' is truncated"),
        )
    })?;
    let mut out = [0_u8; 8];
    out.copy_from_slice(slice);
    *cursor = end;
    Ok(u64::from_be_bytes(out))
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn key_codec_distinguishes_null_and_value_file_id() {
        let null_key = encode_key(&TrackedStateKey {
            schema_key: "schema".to_string(),
            file_id: None,
            entity_id: EntityIdentity::single("entity"),
        });
        let file_key = encode_key(&TrackedStateKey {
            schema_key: "schema".to_string(),
            file_id: Some("file".to_string()),
            entity_id: EntityIdentity::single("entity"),
        });
        assert_ne!(null_key, file_key);
        assert_eq!(
            decode_key(&null_key).expect("null key"),
            TrackedStateKey {
                schema_key: "schema".to_string(),
                file_id: None,
                entity_id: EntityIdentity::single("entity"),
            }
        );
        assert_eq!(
            decode_key(&file_key).expect("file key"),
            TrackedStateKey {
                schema_key: "schema".to_string(),
                file_id: Some("file".to_string()),
                entity_id: EntityIdentity::single("entity"),
            }
); } #[test] fn key_codec_encodes_composite_identity_as_string_tuple_parts() { let key = TrackedStateKey { schema_key: "schema".to_string(), file_id: None, entity_id: EntityIdentity { parts: vec![ "namespace".to_string(), "true".to_string(), "42".to_string(), ], }, }; let encoded = encode_key(&key); assert_eq!(decode_key(&encoded).expect("key should decode"), key); } #[test] fn key_codec_decodes_entity_suffix_with_trusted_prefix() { let key = TrackedStateKey { schema_key: "schema".to_string(), file_id: Some("file".to_string()), entity_id: EntityIdentity { parts: vec!["namespace".to_string(), "id".to_string()], }, }; let encoded = encode_key(&key); let prefix = encode_schema_file_prefix("schema", Some("file")); assert_eq!( decode_key_with_trusted_prefix(&encoded, "schema", Some("file"), prefix.len()) .expect("key suffix should decode"), key ); } #[test] fn key_codec_rejects_non_string_identity_part_tags() { let mut encoded = encode_key(&TrackedStateKey { schema_key: "schema".to_string(), file_id: None, entity_id: EntityIdentity { parts: vec!["true".to_string()], }, }); let schema_key_len = "schema".len(); let file_scope_offset = 4 + schema_key_len; let entity_tag_offset = file_scope_offset + 1; encoded[entity_tag_offset] = 2; let error = decode_key(&encoded).expect_err("non-string identity tag should reject"); assert!(error .to_string() .contains("invalid entity identity part tag 2")); } #[test] fn key_codec_preserves_tuple_prefix_ordering() { let prefix = encode_key(&TrackedStateKey { schema_key: "schema".to_string(), file_id: None, entity_id: EntityIdentity { parts: vec!["a".to_string()], }, }); let extended = encode_key(&TrackedStateKey { schema_key: "schema".to_string(), file_id: None, entity_id: EntityIdentity { parts: vec!["a".to_string(), "b".to_string()], }, }); assert!(prefix < extended); } #[test] fn value_codec_roundtrips_locator_value() { let value = TrackedStateIndexValue { change_locator: ChangeLocator { source_commit_id: "commit".to_string(), source_pack_id: 7, source_ordinal: 11, change_id: "change".to_string(), }, deleted: false, snapshot_ref: Some(JsonRef::from_hash_bytes([1; 32])), metadata_ref: Some(JsonRef::from_hash_bytes([2; 32])), created_at: "2026-01-01T00:00:00Z".to_string(), updated_at: "2026-01-02T00:00:00Z".to_string(), }; let encoded = encode_value(&value); assert_eq!(decode_value(&encoded).expect("value"), value); } #[test] fn value_codec_roundtrips_second_locator_value() { let value = TrackedStateIndexValue { change_locator: ChangeLocator { source_commit_id: "other-commit".to_string(), source_pack_id: 0, source_ordinal: 1, change_id: "other-change".to_string(), }, deleted: true, snapshot_ref: None, metadata_ref: None, created_at: "2026-01-01T00:00:00Z".to_string(), updated_at: "2026-01-02T00:00:00Z".to_string(), }; let encoded = encode_value(&value); assert_eq!(decode_value(&encoded).expect("value"), value); } #[test] fn value_codec_compacts_matching_timestamps() { let mut compact = TrackedStateIndexValue { change_locator: ChangeLocator { source_commit_id: "commit".to_string(), source_pack_id: 0, source_ordinal: 1, change_id: "change".to_string(), }, deleted: false, snapshot_ref: None, metadata_ref: None, created_at: "2026-01-01T00:00:00Z".to_string(), updated_at: "2026-01-01T00:00:00Z".to_string(), }; let compact_len = encode_value(&compact).len(); assert_eq!( decode_value(&encode_value(&compact)).expect("value"), compact ); compact.updated_at = "2026-01-02T00:00:00Z".to_string(); let distinct_len = encode_value(&compact).len(); assert!(compact_len < 
distinct_len); assert_eq!( distinct_len - compact_len, sized_bytes_len(compact.updated_at.as_bytes()) ); } #[test] fn delta_pack_ref_encoder_roundtrips_entries() { let entity_id = EntityIdentity { parts: vec!["entity-a".to_string()], }; let snapshot_ref = JsonRef::from_hash_bytes([1; 32]); let metadata_ref = JsonRef::from_hash_bytes([2; 32]); let live_change = crate::commit_store::ChangeRef { id: "commit-a:change-live", entity_id: &entity_id, schema_key: "schema", file_id: Some("file-a"), snapshot_ref: Some(&snapshot_ref), metadata_ref: Some(&metadata_ref), created_at: "2026-01-01T00:00:00Z", }; let tombstone_change = crate::commit_store::ChangeRef { id: "change-deleted", entity_id: &entity_id, schema_key: "schema", file_id: None, snapshot_ref: None, metadata_ref: None, created_at: "2026-01-01T00:00:00Z", }; let live_locator = crate::commit_store::ChangeLocatorRef { source_commit_id: "commit-a", source_pack_id: 3, source_ordinal: 5, change_id: "commit-a:change-live", }; let tombstone_locator = crate::commit_store::ChangeLocatorRef { source_commit_id: "source-commit", source_pack_id: 3, source_ordinal: 6, change_id: "commit-a:borrowed", }; let encoded = encode_delta_pack_refs( "commit-a", &[ TrackedStateDeltaRef { change: live_change, locator: live_locator, created_at: "2026-01-01T00:00:00Z", updated_at: "2026-01-02T00:00:00Z", }, TrackedStateDeltaRef { change: tombstone_change, locator: tombstone_locator, created_at: "2026-01-03T00:00:00Z", updated_at: "2026-01-04T00:00:00Z", }, ], ) .expect("delta pack should encode"); let mut cursor = 5usize; assert_eq!( read_var_sized_string(&encoded, &mut cursor, "delta pack commit_id") .expect("commit id should decode"), "commit-a" ); assert_eq!( read_var_u32(&encoded, &mut cursor, "delta key prefix count") .expect("prefix count should decode"), 2 ); let (decoded_commit_id, decoded) = decode_delta_pack(&encoded, None).expect("delta pack should decode"); assert_eq!(decoded_commit_id, "commit-a"); assert_eq!( decoded, vec![ TrackedStateDeltaEntry { key: TrackedStateKey { schema_key: "schema".to_string(), file_id: Some("file-a".to_string()), entity_id: entity_id.clone(), }, value: TrackedStateIndexValue { change_locator: ChangeLocator { source_commit_id: "commit-a".to_string(), source_pack_id: 3, source_ordinal: 5, change_id: "commit-a:change-live".to_string(), }, deleted: false, snapshot_ref: Some(snapshot_ref), metadata_ref: Some(metadata_ref), created_at: "2026-01-01T00:00:00Z".to_string(), updated_at: "2026-01-02T00:00:00Z".to_string(), }, }, TrackedStateDeltaEntry { key: TrackedStateKey { schema_key: "schema".to_string(), file_id: None, entity_id, }, value: TrackedStateIndexValue { change_locator: ChangeLocator { source_commit_id: "source-commit".to_string(), source_pack_id: 3, source_ordinal: 6, change_id: "commit-a:borrowed".to_string(), }, deleted: true, snapshot_ref: None, metadata_ref: None, created_at: "2026-01-03T00:00:00Z".to_string(), updated_at: "2026-01-04T00:00:00Z".to_string(), }, }, ] ); } #[test] fn delta_pack_ref_encoder_roundtrips_mixed_json_pack_indexes() { let entity_id = EntityIdentity::single("entity-a"); let snapshot_ref = JsonRef::from_hash_bytes([1; 32]); let metadata_ref = JsonRef::from_hash_bytes([2; 32]); let change = crate::commit_store::ChangeRef { id: "commit-a:change-live", entity_id: &entity_id, schema_key: "schema", file_id: Some("file-a"), snapshot_ref: Some(&snapshot_ref), metadata_ref: Some(&metadata_ref), created_at: "2026-01-01T00:00:00Z", }; let locator = crate::commit_store::ChangeLocatorRef { 
source_commit_id: "commit-a", source_pack_id: 0, source_ordinal: 0, change_id: "commit-a:change-live", }; let delta = TrackedStateDeltaRef { change, locator, created_at: "2026-01-01T00:00:00Z", updated_at: "2026-01-01T00:00:00Z", }; let mut pack_indexes = HashMap::new(); pack_indexes.insert(*snapshot_ref.as_hash_array(), 1); let pack_refs = vec![JsonRef::from_hash_bytes([9; 32]), snapshot_ref]; let inline = encode_delta_pack_refs("commit-a", &[delta]).expect("inline delta pack"); assert!(!delta_pack_uses_json_pack_indexes(&inline).expect("inline mode should peek")); let empty_indexes = HashMap::new(); let empty_index_pack = encode_delta_pack_refs_with_json_pack_indexes( "commit-a", &[delta], Some(&empty_indexes), ) .expect("empty-index delta pack"); assert_eq!(empty_index_pack, inline); assert!(!delta_pack_uses_json_pack_indexes(&empty_index_pack) .expect("empty index mode should peek")); decode_delta_pack(&empty_index_pack, None).expect("empty index pack should decode inline"); let mixed = encode_delta_pack_refs_with_json_pack_indexes( "commit-a", &[delta], Some(&pack_indexes), ) .expect("mixed delta pack"); assert!(delta_pack_uses_json_pack_indexes(&mixed).expect("mixed mode should peek")); assert!( mixed.len() < inline.len(), "pack-index refs should be smaller than inline refs" ); assert!(decode_delta_pack(&mixed, None) .expect_err("mixed refs require JSON pack refs") .to_string() .contains("needs JSON pack refs")); let (_, decoded) = decode_delta_pack(&mixed, Some(&pack_refs)).expect("mixed delta pack should decode"); assert_eq!(decoded[0].value.snapshot_ref, Some(snapshot_ref)); assert_eq!(decoded[0].value.metadata_ref, Some(metadata_ref)); } #[test] fn delta_pack_stream_decoder_rejects_trailing_entry_bytes() { let entity_id = EntityIdentity::single("entity"); let change = crate::commit_store::ChangeRef { id: "commit-a:change-0", entity_id: &entity_id, schema_key: "schema", file_id: None, snapshot_ref: None, metadata_ref: None, created_at: "2026-01-01T00:00:00Z", }; let locator = crate::commit_store::ChangeLocatorRef { source_commit_id: "commit-a", source_pack_id: 0, source_ordinal: 0, change_id: "commit-a:change-0", }; let mut encoded = encode_delta_pack_refs( "commit-a", &[TrackedStateDeltaRef { change, locator, created_at: "2026-01-01T00:00:00Z", updated_at: "2026-01-01T00:00:00Z", }], ) .expect("delta pack should encode"); let mut cursor = 5usize; let _ = read_var_sized_string(&encoded, &mut cursor, "delta pack commit_id") .expect("commit id should decode"); assert_eq!( read_var_u32(&encoded, &mut cursor, "delta key prefix count") .expect("prefix count should decode"), 1 ); let _ = decode_delta_key_prefix(&encoded, &mut cursor).expect("delta key prefix should decode"); encoded[cursor] = 0; let error = decode_delta_pack(&encoded, None).expect_err("trailing entry bytes should reject"); assert!( error.to_string().contains("trailing bytes"), "error should mention trailing bytes: {error}" ); } #[test] fn delta_pack_rejects_overlong_varint() { let mut encoded = Vec::new(); encoded.extend_from_slice(b"LXTD"); encoded.push(DELTA_PACK_VERSION); encoded.extend_from_slice(&[0x80, 0x80, 0x80, 0x80, 0x80]); let error = decode_delta_pack(&encoded, None).expect_err("overlong varint should reject"); assert!( error.to_string().contains("varint exceeds u32"), "error should mention overlong varint: {error}" ); } #[test] fn delta_pack_rejects_varint_above_u32() { let mut encoded = Vec::new(); encoded.extend_from_slice(b"LXTD"); encoded.push(DELTA_PACK_VERSION); encoded.extend_from_slice(&[0xff, 0xff, 
0xff, 0xff, 0x1f]); let error = decode_delta_pack(&encoded, None).expect_err("too-large varint should reject"); assert!( error.to_string().contains("varint exceeds u32"), "error should mention oversized varint: {error}" ); } #[test] fn delta_pack_rejects_non_canonical_varint() { let mut encoded = Vec::new(); encoded.extend_from_slice(b"LXTD"); encoded.push(DELTA_PACK_VERSION); encoded.extend_from_slice(&[0x80, 0x00]); let error = decode_delta_pack(&encoded, None).expect_err("non-canonical varint should reject"); assert!( error.to_string().contains("non-canonical varint"), "error should mention non-canonical varint: {error}" ); } #[test] fn delta_key_decoder_rejects_out_of_bounds_prefix_index() { let mut encoded_key = Vec::new(); push_var_u32(&mut encoded_key, 1, "delta key prefix index").expect("prefix index"); push_var_entity_identity(&mut encoded_key, &EntityIdentity::single("entity")) .expect("entity identity"); let mut cursor = 0usize; let err = decode_delta_key( &encoded_key, &mut cursor, &[DeltaKeyPrefix { schema_key: "schema".to_string(), file_id: None, }], ) .expect_err("out-of-bounds prefix index should reject"); assert!(err .to_string() .contains("tracked-state delta key prefix index 1 is out of bounds")); } #[test] fn encoded_value_len_matches_encoded_value_bytes() { let values = [ TrackedStateIndexValue { change_locator: ChangeLocator { source_commit_id: "commit".to_string(), source_pack_id: 0, source_ordinal: 0, change_id: "change".to_string(), }, deleted: false, snapshot_ref: None, metadata_ref: None, created_at: "2026-01-01T00:00:00Z".to_string(), updated_at: "2026-01-02T00:00:00Z".to_string(), }, TrackedStateIndexValue { change_locator: ChangeLocator { source_commit_id: "commit".to_string(), source_pack_id: 1, source_ordinal: 2, change_id: "change-2".to_string(), }, deleted: true, snapshot_ref: Some(JsonRef::from_hash_bytes([3; 32])), metadata_ref: None, created_at: "2026-01-01T00:00:00Z".to_string(), updated_at: "2026-01-02T00:00:00Z".to_string(), }, TrackedStateIndexValue { change_locator: ChangeLocator { source_commit_id: "other".to_string(), source_pack_id: 4, source_ordinal: 8, change_id: "change-3".to_string(), }, deleted: false, snapshot_ref: None, metadata_ref: Some(JsonRef::from_hash_bytes([4; 32])), created_at: "2026-01-01T00:00:00Z".to_string(), updated_at: "2026-01-02T00:00:00Z".to_string(), }, ]; for value in values { assert_eq!(encoded_value_len(&value), encode_value(&value).len()); } } #[test] fn leaf_node_codec_uses_indexable_offset_table() { let entries = vec![ EncodedLeafEntry { key: b"alpha".to_vec(), value: b"one".to_vec(), }, EncodedLeafEntry { key: b"bravo".to_vec(), value: b"two-two".to_vec(), }, ]; let encoded = encode_leaf_node(&entries); assert_eq!(encoded[0], NODE_KIND_LEAF); assert_eq!(encoded[1], NODE_VERSION); assert_eq!(&encoded[2..6], 2u32.to_be_bytes().as_slice()); assert_eq!(&encoded[6..10], 0u32.to_be_bytes().as_slice()); let DecodedNodeRef::Leaf(leaf) = decode_node_ref(&encoded).expect("leaf ref") else { panic!("expected leaf node"); }; assert_eq!(leaf.len(), 2); assert_eq!(leaf.key(1).expect("second key"), Some(b"bravo".as_slice())); let second = leaf .entry(1) .expect("second entry") .expect("second entry exists"); assert_eq!(second.key, b"bravo"); assert_eq!(second.value, b"two-two"); let DecodedNode::Leaf(owned) = decode_node(&encoded).expect("owned leaf") else { panic!("expected owned leaf node"); }; assert_eq!(owned.entries(), entries.as_slice()); } #[test] fn leaf_node_codec_roundtrips_empty_leaf() { let encoded = 
encode_leaf_node(&[]); assert_eq!(encoded.len(), 10); let DecodedNodeRef::Leaf(leaf) = decode_node_ref(&encoded).expect("leaf ref") else { panic!("expected leaf node"); }; assert_eq!(leaf.len(), 0); assert!(leaf.entry(0).expect("missing entry").is_none()); } #[test] fn leaf_node_codec_rejects_malformed_offsets() { let entries = vec![ EncodedLeafEntry { key: b"alpha".to_vec(), value: b"one".to_vec(), }, EncodedLeafEntry { key: b"bravo".to_vec(), value: b"two".to_vec(), }, ]; let encoded = encode_leaf_node(&entries); let mut non_zero_first = encoded.clone(); non_zero_first[6..10].copy_from_slice(&1u32.to_be_bytes()); assert!(decode_node_ref(&non_zero_first) .expect_err("non-zero first offset should reject") .to_string() .contains("offset table must start at zero")); let mut non_monotonic = encoded.clone(); non_monotonic[10..14].copy_from_slice(&100u32.to_be_bytes()); assert!(decode_node_ref(&non_monotonic) .expect_err("non-monotonic offsets should reject") .to_string() .contains("offsets must be monotonic")); let mut short_coverage = encoded; let payload_len = short_coverage.len() - 18; short_coverage[14..18].copy_from_slice(&((payload_len - 1) as u32).to_be_bytes()); assert!(decode_node_ref(&short_coverage) .expect_err("short offset coverage should reject") .to_string() .contains("offset table does not cover full payload")); } #[test] fn content_hash_is_blake3() { assert_eq!(hash_bytes(b"abc"), *blake3::hash(b"abc").as_bytes()); } #[test] fn boundary_decisions_are_xxh3_based_and_deterministic() { let left = boundary_trigger(b"key", 0, 4096, 128, 4096); let right = boundary_trigger(b"key", 0, 4096, 128, 4096); assert_eq!(left, right); } } ================================================ FILE: packages/engine/src/tracked_state/context.rs ================================================ use std::collections::{BTreeMap, BTreeSet}; use crate::commit_store::CommitStoreContext; use crate::storage::{StorageReader, StorageWriteSet}; use crate::tracked_state::by_file_index::ByFileIndex; use crate::tracked_state::codec::{encode_key_ref, encode_value_ref}; use crate::tracked_state::diff::{diff_commits, TrackedStateDiff, TrackedStateDiffRequest}; use crate::tracked_state::materialize_index_entries; use crate::tracked_state::merge::{self, TrackedStateMergePlan}; use crate::tracked_state::storage; use crate::tracked_state::storage::DeltaJsonPackIndexesRef; use crate::tracked_state::tree::TrackedStateTree; use crate::tracked_state::types::{ TrackedStateIndexValue, TrackedStateKey, TrackedStateKeyRef, TrackedStateMutation, TrackedStateTreeDiffEntry, TrackedStateTreeScanRequest, }; use crate::tracked_state::{ MaterializedTrackedStateRow, TrackedStateDeltaRef, TrackedStateRowRequest, TrackedStateScanRequest, }; use crate::LixError; /// Factory for tracked-state readers, delta writers, and projection-root materializers. /// /// Tracked state is stored as content-addressed roots. Version refs /// choose which commit/root to read; this context only owns root operations. #[derive(Clone)] pub(crate) struct TrackedStateContext { tree: TrackedStateTree, commit_store: CommitStoreContext, } impl TrackedStateContext { pub(crate) fn new() -> Self { Self { tree: TrackedStateTree::new(), commit_store: CommitStoreContext::new(), } } /// Creates a commit-id-addressed tracked-state reader. 
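    //
    // A minimal usage sketch, mirroring the unit tests later in this file: it
    // assumes `storage` is a value implementing `StorageReader` (the tests use a
    // `StorageContext` over a `UnitTestBackend`) and that `"commit-1"` already has
    // a projection root or staged delta packs.
    //
    //     let tracked_state = TrackedStateContext::new();
    //     let mut reader = tracked_state.reader(storage.clone());
    //     let rows = reader
    //         .scan_rows_at_commit("commit-1", &TrackedStateScanRequest::default())
    //         .await?;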
pub(crate) fn reader(&self, store: S) -> TrackedStateStoreReader where S: StorageReader, { TrackedStateStoreReader { store, tree: self.tree.clone(), commit_store: self.commit_store, } } /// Creates a tracked-state writer over a caller-owned transaction and write set. pub(crate) fn writer<'a, S>( &'a self, store: &'a mut S, writes: &'a mut StorageWriteSet, ) -> TrackedStateWriter<'a, S> where S: StorageReader + ?Sized, { TrackedStateWriter { tree: self.tree.clone(), store, writes, } } /// Creates an explicit tracked-state projection-root materializer. /// /// Normal commits should use `writer(...).stage_delta(...)`. Materializing a /// projection root is a caller-chosen maintenance/read-acceleration step. pub(crate) fn materializer<'a, S>( &'a self, store: &'a mut S, writes: &'a mut StorageWriteSet, commit_store: &'a CommitStoreContext, ) -> TrackedStateMaterializer<'a, S> where S: StorageReader + ?Sized, { TrackedStateMaterializer { tracked_state: self, store, writes, commit_store, } } } /// Store-backed tracked-state reader created by `TrackedStateContext`. pub(crate) struct TrackedStateStoreReader { store: S, tree: TrackedStateTree, commit_store: CommitStoreContext, } impl TrackedStateStoreReader where S: StorageReader, { pub(crate) async fn scan_rows_at_commit( &mut self, commit_id: &str, request: &TrackedStateScanRequest, ) -> Result, LixError> { let root_id = self.tree.load_root(&mut self.store, commit_id).await?; let rows = if let Some(root_id) = root_id { if ByFileIndex::should_use(request) { if let Some(by_file_root_id) = storage::load_by_file_root(&mut self.store, commit_id).await? { self.scan_rows_at_commit_by_file_index(&root_id, &by_file_root_id, request) .await? } else { self.tree .scan( &mut self.store, &root_id, &tree_scan_request_from_tracked(request), ) .await? } } else { self.tree .scan( &mut self.store, &root_id, &tree_scan_request_from_tracked(request), ) .await? } } else { self.projection_entries_at_commit(commit_id, &tree_scan_request_from_tracked(request)) .await? 
}; let projection = crate::tracked_state::TrackedMaterializationProjection::from_columns( &request.projection.columns, ); let mut rows = materialize_index_entries(&mut self.store, rows, &projection).await?; if !request.filter.include_tombstones { rows.retain(|row| !row.deleted); } if let Some(limit) = request.limit { rows.truncate(limit); } Ok(rows) } pub(crate) async fn load_rows_at_commit( &mut self, commit_id: &str, requests: &[TrackedStateRowRequest], ) -> Result>, LixError> { if requests.is_empty() { return Ok(Vec::new()); } let keys = requests .iter() .map(tracked_key_from_request) .collect::, _>>()?; let values = self .projection_values_at_commit_for_keys(commit_id, &keys) .await?; let mut entry_indices = Vec::new(); let mut entries = Vec::new(); for (index, (key, value)) in keys.into_iter().zip(values).enumerate() { if let Some(value) = value { entry_indices.push(index); entries.push((key, value)); } } let materialized = materialize_index_entries( &mut self.store, entries, &crate::tracked_state::TrackedMaterializationProjection::full(), ) .await?; let mut rows = vec![None; requests.len()]; for (index, row) in entry_indices.into_iter().zip(materialized) { rows[index] = Some(row); } Ok(rows) } pub(crate) async fn diff_commits( &mut self, left_commit_id: &str, right_commit_id: &str, request: &TrackedStateDiffRequest, ) -> Result { diff_commits(self, left_commit_id, right_commit_id, request).await } pub(crate) async fn diff_tree_entries_at_commits( &mut self, left_commit_id: &str, right_commit_id: &str, request: &TrackedStateTreeScanRequest, ) -> Result, LixError> { if !self.projection_has_pending_deltas(left_commit_id).await? && !self.projection_has_pending_deltas(right_commit_id).await? && self.projection_root_exists(left_commit_id).await? && self.projection_root_exists(right_commit_id).await? { let left_root = self.tree.load_root(&mut self.store, left_commit_id).await?; let right_root = self .tree .load_root(&mut self.store, right_commit_id) .await?; let entries = self .tree .diff( &mut self.store, left_root.as_ref(), right_root.as_ref(), request, ) .await?; return Ok(entries); } if let Some(entries) = self .diff_pending_delta_suffix(left_commit_id, right_commit_id, request) .await? { return Ok(entries); } let left = self .projection_entries_at_commit(left_commit_id, request) .await? .into_iter() .collect::>(); let right = self .projection_entries_at_commit(right_commit_id, request) .await? 
.into_iter() .collect::>(); let keys = left .keys() .chain(right.keys()) .cloned() .collect::>(); let entries = keys .into_iter() .filter_map(|key| { let before = left.get(&key).cloned().map(|value| (key.clone(), value)); let after = right.get(&key).cloned().map(|value| (key, value)); if before == after { None } else { Some(TrackedStateTreeDiffEntry { before, after }) } }) .collect(); Ok(entries) } async fn diff_pending_delta_suffix( &mut self, left_commit_id: &str, right_commit_id: &str, request: &TrackedStateTreeScanRequest, ) -> Result>, LixError> { let left_delta_ids = self .delta_commit_ids_since_projection_root(left_commit_id) .await?; let right_delta_ids = self .delta_commit_ids_since_projection_root(right_commit_id) .await?; let left_base_commit_id = self .projection_base_commit_id(left_commit_id, &left_delta_ids) .await?; let right_base_commit_id = self .projection_base_commit_id(right_commit_id, &right_delta_ids) .await?; if left_base_commit_id != right_base_commit_id { return Ok(None); } if right_delta_ids.starts_with(&left_delta_ids) { let suffix = &right_delta_ids[left_delta_ids.len()..]; return self .diff_pending_delta_suffix_from_base(left_commit_id, suffix, request, true) .await .map(Some); } if left_delta_ids.starts_with(&right_delta_ids) { let suffix = &left_delta_ids[right_delta_ids.len()..]; return self .diff_pending_delta_suffix_from_base(right_commit_id, suffix, request, false) .await .map(Some); } Ok(None) } async fn diff_pending_delta_suffix_from_base( &mut self, base_commit_id: &str, suffix_commit_ids: &[String], request: &TrackedStateTreeScanRequest, suffix_is_after: bool, ) -> Result, LixError> { if suffix_commit_ids.is_empty() { return Ok(Vec::new()); } let mut changed = BTreeMap::::new(); for commit_id in suffix_commit_ids { let Some(delta_entries) = storage::load_delta_pack(&mut self.store, commit_id).await? 
else { continue; }; for delta in delta_entries { if request.matches_key(&delta.key) { changed.insert(delta.key, delta.value); } } } if changed.is_empty() { return Ok(Vec::new()); } let keys = changed.keys().cloned().collect::>(); let base_values = self .projection_values_at_commit_for_keys(base_commit_id, &keys) .await?; let entries = keys .into_iter() .zip(base_values) .filter_map(|(key, base_value)| { let changed_value = changed.get(&key).cloned(); let (before_value, after_value) = if suffix_is_after { (base_value, changed_value) } else { (changed_value, base_value) }; if before_value == after_value { return None; } Some(TrackedStateTreeDiffEntry { before: before_value.map(|value| (key.clone(), value)), after: after_value.map(|value| (key, value)), }) }) .collect(); Ok(entries) } pub(crate) async fn materialize_tree_values( &mut self, entries: Vec<(TrackedStateKey, TrackedStateIndexValue)>, ) -> Result, LixError> { materialize_index_entries( &mut self.store, entries, &crate::tracked_state::TrackedMaterializationProjection::full(), ) .await } async fn scan_rows_at_commit_by_file_index( &mut self, primary_root_id: &crate::tracked_state::types::TrackedStateRootId, by_file_root_id: &crate::tracked_state::types::TrackedStateRootId, request: &TrackedStateScanRequest, ) -> Result, LixError> { let by_file_request = ByFileIndex::scan_request_from_tracked(request); let index_match_count = self .tree .count_matching_keys(&mut self.store, by_file_root_id, &by_file_request) .await?; let primary_row_count = self .tree .row_count(&mut self.store, primary_root_id) .await?; if index_match_count * 20 > primary_row_count { let rows = self .tree .scan( &mut self.store, primary_root_id, &tree_scan_request_from_tracked(request), ) .await?; return Ok(rows); } let index_rows = self .tree .scan(&mut self.store, by_file_root_id, &by_file_request) .await?; let mut rows = Vec::new(); let tree_request = tree_scan_request_from_tracked(request); let needs_payloads = scan_needs_json_payloads(request); if needs_payloads { let mut primary_keys = Vec::with_capacity(index_rows.len()); for (index_key, _) in index_rows { if let Some(primary_key) = ByFileIndex::primary_key_from_index_key(index_key) { primary_keys.push(primary_key); } } let primary_values = self .tree .get_many(&mut self.store, primary_root_id, &primary_keys) .await?; for (primary_key, value) in primary_keys.into_iter().zip(primary_values) { let Some(value) = value else { continue; }; if !tree_request.matches(&primary_key, &value) { continue; } rows.push((primary_key, value)); } return Ok(rows); } for (index_key, index_value) in index_rows { let Some(primary_key) = ByFileIndex::primary_key_from_index_key(index_key) else { continue; }; let value = index_value; if tree_request.matches(&primary_key, &value) { rows.push((primary_key, value)); } } Ok(rows) } async fn projection_root_exists(&mut self, commit_id: &str) -> Result { Ok(self .tree .load_root(&mut self.store, commit_id) .await? .is_some()) } async fn projection_has_pending_deltas(&mut self, commit_id: &str) -> Result { Ok(!self .delta_commit_ids_since_projection_root(commit_id) .await? 
.is_empty()) } async fn projection_entries_at_commit( &mut self, commit_id: &str, request: &TrackedStateTreeScanRequest, ) -> Result, LixError> { let delta_commit_ids = self .delta_commit_ids_since_projection_root(commit_id) .await?; let base_commit_id = self .projection_base_commit_id(commit_id, &delta_commit_ids) .await?; if base_commit_id.is_none() && delta_commit_ids.len() == 1 { return self .single_delta_pack_entries(&delta_commit_ids[0], request) .await; } let mut entries = if let Some(base_commit_id) = base_commit_id { let root_id = self .tree .load_root(&mut self.store, &base_commit_id) .await? .ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "tracked_state projection base root '{base_commit_id}' disappeared" ), ) })?; self.tree .scan(&mut self.store, &root_id, request) .await? .into_iter() .collect::>() } else { BTreeMap::new() }; self.apply_delta_packs_to_entries(&delta_commit_ids, Some(request), &mut entries) .await?; Ok(entries.into_iter().collect()) } async fn single_delta_pack_entries( &mut self, commit_id: &str, request: &TrackedStateTreeScanRequest, ) -> Result, LixError> { let Some(delta_entries) = storage::load_delta_pack(&mut self.store, commit_id).await? else { return Ok(Vec::new()); }; let mut rows = delta_entries .into_iter() .enumerate() .filter_map(|(ordinal, delta)| { request .matches_key(&delta.key) .then_some((ordinal, delta.key, delta.value)) }) .collect::>(); rows.sort_by(|left, right| left.1.cmp(&right.1).then(left.0.cmp(&right.0))); let mut out = Vec::new(); let mut rows = rows.into_iter().peekable(); while let Some((_, key, mut value)) = rows.next() { while rows.peek().is_some_and(|(_, next_key, _)| next_key == &key) { let (_, _, next_value) = rows .next() .expect("peek confirmed duplicate delta entry exists"); value = next_value; } if !request.include_tombstones && value.deleted { continue; } out.push((key, value)); } Ok(out) } async fn projection_values_at_commit_for_keys( &mut self, commit_id: &str, keys: &[TrackedStateKey], ) -> Result>, LixError> { let delta_commit_ids = self .delta_commit_ids_since_projection_root(commit_id) .await?; let base_commit_id = self .projection_base_commit_id(commit_id, &delta_commit_ids) .await?; let mut entries = if let Some(base_commit_id) = base_commit_id { let root_id = self .tree .load_root(&mut self.store, &base_commit_id) .await? .ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "tracked_state projection base root '{base_commit_id}' disappeared" ), ) })?; let values = self.tree.get_many(&mut self.store, &root_id, keys).await?; keys.iter() .cloned() .zip(values) .filter_map(|(key, value)| value.map(|value| (key, value))) .collect::>() } else { BTreeMap::new() }; let key_filter = keys.iter().cloned().collect::>(); self.apply_delta_packs_to_entries_for_keys(&delta_commit_ids, &key_filter, &mut entries) .await?; Ok(keys.iter().map(|key| entries.get(key).cloned()).collect()) } async fn projection_base_commit_id( &mut self, commit_id: &str, delta_commit_ids: &[String], ) -> Result, LixError> { if delta_commit_ids.is_empty() { return Ok(if self.projection_root_exists(commit_id).await? { Some(commit_id.to_string()) } else { None }); } let Some(first_delta_commit_id) = delta_commit_ids.first() else { return Ok(None); }; let commit = self .commit_store .load_commit_from(&mut self.store, first_delta_commit_id) .await? 
.ok_or_else(|| missing_commit_error(first_delta_commit_id))?; let Some(parent_id) = commit.parent_ids.first() else { return Ok(None); }; Ok(if self.projection_root_exists(parent_id).await? { Some(parent_id.clone()) } else { None }) } async fn delta_commit_ids_since_projection_root( &mut self, commit_id: &str, ) -> Result, LixError> { let mut out = Vec::new(); let mut seen = BTreeSet::new(); let mut current = Some(commit_id.to_string()); while let Some(current_id) = current { if !seen.insert(current_id.clone()) { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!("tracked_state projection found first-parent cycle at '{current_id}'"), )); } if self .tree .load_root(&mut self.store, ¤t_id) .await? .is_some() { break; } if storage::delta_pack_exists(&mut self.store, ¤t_id).await? { out.push(current_id.clone()); } let commit = self .commit_store .load_commit_from(&mut self.store, ¤t_id) .await? .ok_or_else(|| missing_commit_error(¤t_id))?; current = commit.parent_ids.first().cloned(); } out.reverse(); Ok(out) } async fn apply_delta_packs_to_entries( &mut self, commit_ids: &[String], request: Option<&TrackedStateTreeScanRequest>, entries: &mut BTreeMap, ) -> Result<(), LixError> { for commit_id in commit_ids { let Some(delta_entries) = storage::load_delta_pack(&mut self.store, commit_id).await? else { continue; }; for delta in delta_entries { if let Some(request) = request { if !request.matches_key(&delta.key) { continue; } if !request.include_tombstones && delta.value.deleted { entries.remove(&delta.key); continue; } entries.insert(delta.key, delta.value); } else { entries.insert(delta.key, delta.value); } } } Ok(()) } async fn apply_delta_packs_to_entries_for_keys( &mut self, commit_ids: &[String], keys: &BTreeSet, entries: &mut BTreeMap, ) -> Result<(), LixError> { for commit_id in commit_ids { let Some(delta_entries) = storage::load_delta_pack(&mut self.store, commit_id).await? else { continue; }; for delta in delta_entries { if keys.contains(&delta.key) { entries.insert(delta.key, delta.value); } } } Ok(()) } /// Plans a three-way merge by diffing both heads against the same base. /// /// `target_commit_id` is the destination root that should keep its own /// changes. `source_commit_id` is the incoming root whose non-conflicting /// changes should be applied. #[allow(dead_code)] pub(crate) async fn plan_merge( &mut self, base_commit_id: &str, target_commit_id: &str, source_commit_id: &str, request: &TrackedStateDiffRequest, ) -> Result { let target_diff = self .diff_commits(base_commit_id, target_commit_id, request) .await?; let source_diff = self .diff_commits(base_commit_id, source_commit_id, request) .await?; merge::plan_merge(&target_diff, &source_diff) } } /// Writer for commit-store-backed tracked-state projection roots. pub(crate) struct TrackedStateWriter<'a, S: ?Sized> { tree: TrackedStateTree, store: &'a mut S, writes: &'a mut StorageWriteSet, } /// Explicit projection-root materializer created by `TrackedStateContext`. 
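//
// A sketch of the materialization flow, following the shape of the tests at the
// bottom of this file; `transaction` is assumed to be an open
// `StorageWriteTransaction` and `writes` a fresh `StorageWriteSet`:
//
//     let mut writes = StorageWriteSet::new();
//     tracked_state
//         .materializer(transaction.as_mut(), &mut writes, &CommitStoreContext::new())
//         .materialize_root_at("base")
//         .await?;
//     writes.apply(transaction.as_mut()).await?;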
pub(crate) struct TrackedStateMaterializer<'a, S: ?Sized> { pub(super) tracked_state: &'a TrackedStateContext, pub(super) store: &'a mut S, pub(super) writes: &'a mut StorageWriteSet, pub(super) commit_store: &'a CommitStoreContext, } impl TrackedStateMaterializer<'_, S> where S: StorageReader + ?Sized, { pub(crate) async fn materialize_root_at( &mut self, commit_id: &str, ) -> Result { crate::tracked_state::materializer::materialize_root_at(self, commit_id).await } } impl TrackedStateWriter<'_, S> where S: StorageReader + ?Sized, { /// Stages one tracked-state projection delta for `commit_id`. pub(crate) async fn stage_delta( &mut self, commit_id: &str, _parent_commit_id: Option<&str>, deltas: &[TrackedStateDeltaRef<'_>], ) -> Result { storage::stage_delta_pack_refs(self.writes, commit_id, deltas)?; Ok(TrackedStateWriteReport { commit_id: commit_id.to_string(), changed_rows: deltas.len(), primary_chunk_puts: 0, by_file_chunk_puts: 0, }) } pub(crate) async fn stage_delta_with_json_pack_indexes( &mut self, commit_id: &str, _parent_commit_id: Option<&str>, deltas: &[TrackedStateDeltaRef<'_>], json_pack_indexes: DeltaJsonPackIndexesRef<'_>, ) -> Result { storage::stage_delta_pack_refs_with_json_pack_indexes( self.writes, commit_id, deltas, json_pack_indexes, )?; Ok(TrackedStateWriteReport { commit_id: commit_id.to_string(), changed_rows: deltas.len(), primary_chunk_puts: 0, by_file_chunk_puts: 0, }) } pub(crate) async fn stage_projection_root<'a, I>( &mut self, commit_id: &str, parent_commit_id: Option<&str>, deltas: I, ) -> Result where I: IntoIterator>, { let deltas = deltas.into_iter().collect::>(); let base_root = match parent_commit_id { Some(parent_commit_id) => { let Some(root) = self.tree.load_root(self.store, parent_commit_id).await? else { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "tracked-state parent root for commit '{parent_commit_id}' is missing" ), )); }; Some(root) } None => None, }; let mut mutations = Vec::with_capacity(deltas.len()); for delta in &deltas { let key = TrackedStateKeyRef { schema_key: delta.change.schema_key, file_id: delta.change.file_id, entity_id: delta.change.entity_id, }; let value = crate::tracked_state::types::TrackedStateIndexValueRef { change_locator: delta.locator, deleted: delta.change.snapshot_ref.is_none(), snapshot_ref: delta.change.snapshot_ref, metadata_ref: delta.change.metadata_ref, created_at: delta.created_at, updated_at: delta.updated_at, }; mutations.push(TrackedStateMutation::put_encoded( encode_key_ref(key), encode_value_ref(value), )); } let result = self .tree .apply_mutations( self.store, self.writes, base_root.as_ref(), mutations, Some(commit_id), ) .await?; let by_file_base_root = match parent_commit_id { Some(parent_commit_id) => { storage::load_by_file_root(self.store, parent_commit_id).await? 
} None => None, }; let concrete_file_deltas = deltas .iter() .filter(|delta| delta.change.file_id.is_some()) .collect::>(); let by_file_chunk_puts = if concrete_file_deltas.is_empty() { if let Some(by_file_base_root) = by_file_base_root.as_ref() { storage::stage_by_file_root(self.writes, commit_id, by_file_base_root); } 0 } else { let mut by_file_mutations = Vec::with_capacity(concrete_file_deltas.len()); for delta in concrete_file_deltas { let key = TrackedStateKeyRef { schema_key: delta.change.schema_key, file_id: delta.change.file_id, entity_id: delta.change.entity_id, }; let header_value = crate::tracked_state::types::TrackedStateIndexValueRef { change_locator: delta.locator, deleted: delta.change.snapshot_ref.is_none(), snapshot_ref: None, metadata_ref: None, created_at: delta.created_at, updated_at: delta.updated_at, }; by_file_mutations.push(TrackedStateMutation::put_encoded( ByFileIndex::encode_key_ref(key), ByFileIndex::encode_header_value_ref(header_value), )); } let by_file_result = self .tree .apply_mutations( self.store, self.writes, by_file_base_root.as_ref(), by_file_mutations, None, ) .await?; storage::stage_by_file_root(self.writes, commit_id, &by_file_result.root_id); by_file_result.chunk_count }; Ok(TrackedStateWriteReport { commit_id: commit_id.to_string(), changed_rows: deltas.len(), primary_chunk_puts: result.chunk_count, by_file_chunk_puts, }) } } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct TrackedStateWriteReport { pub(crate) commit_id: String, pub(crate) changed_rows: usize, pub(crate) primary_chunk_puts: usize, pub(crate) by_file_chunk_puts: usize, } fn missing_commit_error(commit_id: &str) -> LixError { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("tracked_state projection references missing commit '{commit_id}'"), ) } fn tree_scan_request_from_tracked( request: &TrackedStateScanRequest, ) -> TrackedStateTreeScanRequest { TrackedStateTreeScanRequest { schema_keys: request.filter.schema_keys.clone(), entity_ids: request.filter.entity_ids.clone(), file_ids: request.filter.file_ids.clone(), include_tombstones: request.filter.include_tombstones, // User limits belong above delta overlay and tombstone visibility. // Pushing them into the physical tree can stop on rows that are later // hidden, returning too few live rows. 
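        // Concretely: with two matching rows where the first is a tombstone, a
        // pushed-down `limit: Some(1)` would return only the tombstone, the
        // overlay would then hide it, and the caller would see zero rows instead
        // of the one live row (see `scan_limit_applies_after_tombstone_visibility`).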
limit: None, } } fn scan_needs_json_payloads(request: &TrackedStateScanRequest) -> bool { if request.projection.columns.is_empty() { return true; } request .projection .columns .iter() .any(|column| column == "snapshot_content" || column == "metadata") } fn tracked_key_from_request(request: &TrackedStateRowRequest) -> Result { let file_id = match &request.file_id { crate::NullableKeyFilter::Null => None, crate::NullableKeyFilter::Value(value) => Some(value.clone()), crate::NullableKeyFilter::Any => { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state tree exact lookup requires a concrete file_id filter", )) } }; Ok(TrackedStateKey { schema_key: request.schema_key.clone(), file_id, entity_id: request.entity_id.clone(), }) } #[cfg(test)] mod tests { use std::sync::Arc; use super::*; use crate::backend::{testing::UnitTestBackend, Backend}; use crate::storage::{StorageContext, StorageWriteTransaction}; use crate::NullableKeyFilter; #[tokio::test] async fn stage_delta_does_not_require_parent_projection_root() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let tracked_state = TrackedStateContext::new(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); write_root_for_test( transaction.as_mut(), &tracked_state, "commit-child", Some("missing-parent"), &[row("entity-child", "change-child", "commit-child")], ) .await .expect("delta pack staging should not require a parent projection root"); } #[tokio::test] async fn plan_merge_from_roots_applies_source_only_change() { let (storage, tracked_state) = seed_merge_roots( &[row_with_value("entity-a", "change-base", "base", "base")], &[row_with_value("entity-a", "change-base", "base", "base")], &[row_with_value( "entity-a", "change-source", "source", "source", )], ) .await; let plan = tracked_state .reader(storage.clone()) .plan_merge( "base", "target", "source", &TrackedStateDiffRequest::default(), ) .await .expect("merge should plan"); assert_eq!(merge_patch_ids(&plan), vec!["entity-a"]); assert!(plan.conflicts.is_empty()); } #[tokio::test] async fn plan_merge_from_roots_keeps_target_only_change() { let (storage, tracked_state) = seed_merge_roots( &[row("entity-a", "change-base", "base")], &[row("entity-a", "change-target", "target")], &[row("entity-a", "change-base", "base")], ) .await; let plan = tracked_state .reader(storage.clone()) .plan_merge( "base", "target", "source", &TrackedStateDiffRequest::default(), ) .await .expect("merge should plan"); assert!(plan.patches.is_empty()); assert!(plan.conflicts.is_empty()); } #[tokio::test] async fn plan_merge_from_roots_reports_divergent_modification_conflict() { let (storage, tracked_state) = seed_merge_roots( &[row_with_value("entity-a", "change-base", "base", "base")], &[row_with_value( "entity-a", "change-target", "target", "target", )], &[row_with_value( "entity-a", "change-source", "source", "source", )], ) .await; let plan = tracked_state .reader(storage.clone()) .plan_merge( "base", "target", "source", &TrackedStateDiffRequest::default(), ) .await .expect("merge should plan"); assert!(plan.patches.is_empty()); assert_eq!(merge_conflict_ids(&plan), vec!["entity-a"]); } #[tokio::test] async fn plan_merge_from_roots_applies_source_tombstone() { let (storage, tracked_state) = seed_merge_roots( &[row("entity-a", "change-base", "base")], &[row("entity-a", "change-base", "base")], &[tombstone("entity-a", "change-source-delete", "source")], ) .await; let plan = tracked_state 
.reader(storage.clone()) .plan_merge( "base", "target", "source", &TrackedStateDiffRequest::default(), ) .await .expect("merge should plan"); assert_eq!(merge_patch_ids(&plan), vec!["entity-a"]); assert_eq!(plan.patches[0].projected_row().snapshot_content, None); assert_eq!(plan.patches[0].change_id(), "change-source-delete"); } #[tokio::test] async fn scan_rows_by_file_uses_file_index_shape() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let tracked_state = TrackedStateContext::new(); let mut file_a = row("entity-a", "change-a", "commit-1"); file_a.file_id = Some("file-a.json".to_string()); let mut file_b = row("entity-b", "change-b", "commit-1"); file_b.file_id = Some("file-b.json".to_string()); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); write_root_for_test( transaction.as_mut(), &tracked_state, "commit-1", None, &[file_a, file_b], ) .await .expect("root should write"); transaction .commit() .await .expect("transaction should commit"); let rows = tracked_state .reader(storage.clone()) .scan_rows_at_commit( "commit-1", &TrackedStateScanRequest { filter: crate::tracked_state::TrackedStateFilter { file_ids: vec![NullableKeyFilter::Value("file-a.json".to_string())], ..Default::default() }, ..Default::default() }, ) .await .expect("file scan should read through index"); assert_eq!(rows.len(), 1); assert_eq!( rows[0] .entity_id .as_single_string_owned() .expect("entity id"), "entity-a" ); assert_eq!(rows[0].file_id.as_deref(), Some("file-a.json")); } #[tokio::test] async fn by_file_header_index_fetches_primary_payload_only_when_requested() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let tracked_state = TrackedStateContext::new(); let mut row = row("entity-a", "change-a", "commit-1"); row.file_id = Some("file-a.json".to_string()); let expected_snapshot = row.snapshot_content.clone(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); write_root_for_test( transaction.as_mut(), &tracked_state, "commit-1", None, std::slice::from_ref(&row), ) .await .expect("root should write"); transaction .commit() .await .expect("transaction should commit"); let mut reader = tracked_state.reader(storage.clone()); let header_rows = reader .scan_rows_at_commit( "commit-1", &TrackedStateScanRequest { filter: crate::tracked_state::TrackedStateFilter { file_ids: vec![NullableKeyFilter::Value("file-a.json".to_string())], ..Default::default() }, projection: crate::tracked_state::TrackedStateProjection { columns: vec!["entity_id".to_string()], }, ..Default::default() }, ) .await .expect("header scan should read through by-file index"); let full_rows = reader .scan_rows_at_commit( "commit-1", &TrackedStateScanRequest { filter: crate::tracked_state::TrackedStateFilter { file_ids: vec![NullableKeyFilter::Value("file-a.json".to_string())], ..Default::default() }, ..Default::default() }, ) .await .expect("full scan should fetch primary payload"); assert_eq!(header_rows[0].snapshot_content, None); assert_eq!(full_rows[0].snapshot_content, expected_snapshot); } #[tokio::test] async fn null_file_rows_do_not_stage_by_file_index() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let tracked_state = TrackedStateContext::new(); let row = row("entity-a", "change-a", "commit-1"); let mut transaction = storage .begin_write_transaction() .await .expect("transaction 
should open"); write_root_for_test( transaction.as_mut(), &tracked_state, "commit-1", None, std::slice::from_ref(&row), ) .await .expect("root should write"); transaction .commit() .await .expect("transaction should commit"); let by_file_root = storage::load_by_file_root(&mut storage.clone(), "commit-1") .await .expect("by-file root lookup should load"); assert!(by_file_root.is_none()); let rows = tracked_state .reader(storage.clone()) .scan_rows_at_commit( "commit-1", &TrackedStateScanRequest { filter: crate::tracked_state::TrackedStateFilter { file_ids: vec![NullableKeyFilter::Null], ..Default::default() }, ..Default::default() }, ) .await .expect("null file scan should fall back to primary tree"); assert_eq!(rows.len(), 1); assert_eq!( rows[0] .entity_id .as_single_string_owned() .expect("entity id"), "entity-a" ); } #[tokio::test] async fn mixed_null_and_concrete_file_scan_uses_primary_tree() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let tracked_state = TrackedStateContext::new(); let null_row = row("entity-null", "change-null", "commit-1"); let mut file_row = row("entity-file", "change-file", "commit-2"); file_row.file_id = Some("file-a.json".to_string()); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); write_root_for_test( transaction.as_mut(), &tracked_state, "commit-1", None, std::slice::from_ref(&null_row), ) .await .expect("parent root should write"); write_root_for_test( transaction.as_mut(), &tracked_state, "commit-2", Some("commit-1"), std::slice::from_ref(&file_row), ) .await .expect("child root should write"); transaction .commit() .await .expect("transaction should commit"); let rows = tracked_state .reader(storage.clone()) .scan_rows_at_commit( "commit-2", &TrackedStateScanRequest { filter: crate::tracked_state::TrackedStateFilter { file_ids: vec![ NullableKeyFilter::Null, NullableKeyFilter::Value("file-a.json".to_string()), ], ..Default::default() }, ..Default::default() }, ) .await .expect("mixed scan should use primary tree"); let mut entity_ids = rows .iter() .map(|row| row.entity_id.as_single_string_owned().expect("entity id")) .collect::>(); entity_ids.sort(); assert_eq!(entity_ids, vec!["entity-file", "entity-null"]); } #[tokio::test] async fn by_file_header_index_filters_tombstones_without_payload_sentinel() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let tracked_state = TrackedStateContext::new(); let mut live = row("entity-live", "change-live", "commit-1"); live.file_id = Some("file-a.json".to_string()); let mut deleted = tombstone("entity-deleted", "change-delete", "commit-1"); deleted.file_id = Some("file-a.json".to_string()); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); write_root_for_test( transaction.as_mut(), &tracked_state, "commit-1", None, &[live, deleted], ) .await .expect("root should write"); transaction .commit() .await .expect("transaction should commit"); let rows = tracked_state .reader(storage.clone()) .scan_rows_at_commit( "commit-1", &TrackedStateScanRequest { filter: crate::tracked_state::TrackedStateFilter { file_ids: vec![NullableKeyFilter::Value("file-a.json".to_string())], ..Default::default() }, projection: crate::tracked_state::TrackedStateProjection { columns: vec!["entity_id".to_string()], }, ..Default::default() }, ) .await .expect("file scan should read through index"); assert_eq!(rows.len(), 1); assert_eq!( 
rows[0] .entity_id .as_single_string_owned() .expect("entity id"), "entity-live" ); } #[tokio::test] async fn pending_tombstone_delta_hides_materialized_base_row() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let tracked_state = TrackedStateContext::new(); let base = row("entity-a", "change-base", "base"); let delete = tombstone("entity-a", "change-delete", "child"); let mut transaction = storage .begin_write_transaction() .await .expect("base transaction should open"); write_root_for_test( transaction.as_mut(), &tracked_state, "base", None, std::slice::from_ref(&base), ) .await .expect("base delta should write"); transaction.commit().await.expect("base should commit"); let mut transaction = storage .begin_write_transaction() .await .expect("materialize transaction should open"); let mut writes = StorageWriteSet::new(); tracked_state .materializer( transaction.as_mut(), &mut writes, &CommitStoreContext::new(), ) .materialize_root_at("base") .await .expect("base projection root should materialize"); writes .apply(transaction.as_mut()) .await .expect("base root writes should apply"); transaction .commit() .await .expect("materialized base should commit"); let mut transaction = storage .begin_write_transaction() .await .expect("child transaction should open"); write_root_for_test( transaction.as_mut(), &tracked_state, "child", Some("base"), std::slice::from_ref(&delete), ) .await .expect("child tombstone delta should write"); transaction.commit().await.expect("child should commit"); let rows = tracked_state .reader(storage.clone()) .scan_rows_at_commit("child", &TrackedStateScanRequest::default()) .await .expect("child scan should apply pending tombstone over base root"); assert!(rows.is_empty(), "pending tombstone must hide base row"); } #[tokio::test] async fn single_delta_pack_scan_keeps_last_delta_for_duplicate_key() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let tracked_state = TrackedStateContext::new(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); write_root_for_test( transaction.as_mut(), &tracked_state, "commit-1", None, &[ row_with_value("entity-a", "change-a1", "commit-1", "first"), row_with_value("entity-b", "change-b", "commit-1", "middle"), row_with_value("entity-a", "change-a2", "commit-1", "second"), tombstone("entity-c", "change-c1", "commit-1"), ], ) .await .expect("delta pack should write"); transaction .commit() .await .expect("transaction should commit"); let rows = tracked_state .reader(storage.clone()) .scan_rows_at_commit("commit-1", &TrackedStateScanRequest::default()) .await .expect("single delta pack should scan"); assert_eq!(rows.len(), 2); assert_eq!( rows.iter() .map(|row| ( row.entity_id.as_single_string_owned().expect("entity id"), row.snapshot_content.clone() )) .collect::>(), vec![ ( "entity-a".to_string(), Some("{\"value\":\"second\"}".to_string()) ), ( "entity-b".to_string(), Some("{\"value\":\"middle\"}".to_string()) ), ] ); } #[tokio::test] async fn scan_limit_applies_after_tombstone_visibility() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let tracked_state = TrackedStateContext::new(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); write_root_for_test( transaction.as_mut(), &tracked_state, "commit-1", None, &[ tombstone("entity-a", "change-delete", "commit-1"), row("entity-b", 
"change-live", "commit-1"), ], ) .await .expect("root should write"); transaction .commit() .await .expect("transaction should commit"); let rows = tracked_state .reader(storage.clone()) .scan_rows_at_commit( "commit-1", &TrackedStateScanRequest { limit: Some(1), ..Default::default() }, ) .await .expect("limited scan should apply visibility before limit"); assert_eq!(rows.len(), 1); assert_eq!( rows[0] .entity_id .as_single_string_owned() .expect("entity id"), "entity-b" ); } #[tokio::test] async fn by_file_scan_limit_applies_after_tombstone_visibility() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let tracked_state = TrackedStateContext::new(); let mut deleted = tombstone("entity-a", "change-delete", "commit-1"); deleted.file_id = Some("file-a.json".to_string()); let mut live = row("entity-b", "change-live", "commit-1"); live.file_id = Some("file-a.json".to_string()); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); write_root_for_test( transaction.as_mut(), &tracked_state, "commit-1", None, &[deleted, live], ) .await .expect("root should write"); transaction .commit() .await .expect("transaction should commit"); let rows = tracked_state .reader(storage.clone()) .scan_rows_at_commit( "commit-1", &TrackedStateScanRequest { filter: crate::tracked_state::TrackedStateFilter { file_ids: vec![NullableKeyFilter::Value("file-a.json".to_string())], ..Default::default() }, projection: crate::tracked_state::TrackedStateProjection { columns: vec!["entity_id".to_string()], }, limit: Some(1), }, ) .await .expect("limited by-file scan should apply visibility before limit"); assert_eq!(rows.len(), 1); assert_eq!( rows[0] .entity_id .as_single_string_owned() .expect("entity id"), "entity-b" ); } #[tokio::test] async fn reads_resolve_json_snapshot_refs() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let tracked_state = TrackedStateContext::new(); let large_value = "x".repeat(1536); let row = row_with_value("entity-a", "change-a", "commit-1", &large_value); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); write_root_for_test( transaction.as_mut(), &tracked_state, "commit-1", None, std::slice::from_ref(&row), ) .await .expect("root should write"); transaction .commit() .await .expect("transaction should commit"); let mut reader = tracked_state.reader(storage.clone()); let loaded = reader .load_rows_at_commit( "commit-1", &[TrackedStateRowRequest { schema_key: row.schema_key.clone(), entity_id: row.entity_id.clone(), file_id: NullableKeyFilter::Null, }], ) .await .expect("row should load") .pop() .flatten() .expect("row should exist"); let scanned = reader .scan_rows_at_commit("commit-1", &TrackedStateScanRequest::default()) .await .expect("rows should scan"); assert_eq!(loaded.snapshot_content, row.snapshot_content); assert_eq!(scanned[0].snapshot_content, row.snapshot_content); } #[tokio::test] async fn projection_cache_uses_seen_updated_at_not_change_created_at() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let tracked_state = TrackedStateContext::new(); let mut row = row("entity-a", "change-a", "commit-1"); row.created_at = "2026-01-01T00:00:00Z".to_string(); row.updated_at = "2026-01-02T00:00:00Z".to_string(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); write_root_for_test( 
transaction.as_mut(), &tracked_state, "commit-1", None, std::slice::from_ref(&row), ) .await .expect("root should write"); transaction .commit() .await .expect("transaction should commit"); let loaded = tracked_state .reader(storage.clone()) .load_rows_at_commit( "commit-1", &[TrackedStateRowRequest { schema_key: row.schema_key.clone(), entity_id: row.entity_id.clone(), file_id: NullableKeyFilter::Null, }], ) .await .expect("row should load") .pop() .flatten() .expect("row should exist"); assert_eq!(loaded.created_at, "2026-01-01T00:00:00Z"); assert_eq!(loaded.updated_at, "2026-01-02T00:00:00Z"); } #[tokio::test] async fn projected_scans_do_not_materialize_snapshot_when_snapshot_content_is_omitted() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let tracked_state = TrackedStateContext::new(); let large_value = "x".repeat(1536); let row = row_with_value("entity-a", "change-a", "commit-1", &large_value); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); write_root_for_test( transaction.as_mut(), &tracked_state, "commit-1", None, std::slice::from_ref(&row), ) .await .expect("root should write"); transaction .commit() .await .expect("transaction should commit"); let rows = tracked_state .reader(storage.clone()) .scan_rows_at_commit( "commit-1", &TrackedStateScanRequest { projection: crate::tracked_state::TrackedStateProjection { columns: vec!["entity_id".to_string()], }, ..Default::default() }, ) .await .expect("rows should scan"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].snapshot_content, None); } async fn seed_merge_roots( base_rows: &[MaterializedTrackedStateRow], target_rows: &[MaterializedTrackedStateRow], source_rows: &[MaterializedTrackedStateRow], ) -> (StorageContext, TrackedStateContext) { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let tracked_state = TrackedStateContext::new(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); write_root_for_test( transaction.as_mut(), &tracked_state, "base", None, base_rows, ) .await .expect("base root should write"); write_root_for_test( transaction.as_mut(), &tracked_state, "target", None, target_rows, ) .await .expect("target root should write"); write_root_for_test( transaction.as_mut(), &tracked_state, "source", None, source_rows, ) .await .expect("source root should write"); transaction .commit() .await .expect("transaction should commit"); (storage, tracked_state) } fn merge_patch_ids(plan: &TrackedStateMergePlan) -> Vec { plan.patches .iter() .map(|entry| { entry .identity() .entity_id .as_single_string_owned() .expect("identity") }) .collect() } fn merge_conflict_ids(plan: &TrackedStateMergePlan) -> Vec { plan.conflicts .iter() .map(|entry| { entry .identity .entity_id .as_single_string_owned() .expect("identity") }) .collect() } async fn write_root_for_test( transaction: &mut dyn StorageWriteTransaction, tracked_state: &TrackedStateContext, commit_id: &str, parent_commit_id: Option<&str>, rows: &[MaterializedTrackedStateRow], ) -> Result<(), LixError> { crate::test_support::stage_tracked_root_from_materialized( transaction, tracked_state, commit_id, parent_commit_id, rows, ) .await } fn tombstone(entity_id: &str, change_id: &str, commit_id: &str) -> MaterializedTrackedStateRow { let mut row = row(entity_id, change_id, commit_id); row.snapshot_content = None; row } fn row(entity_id: &str, change_id: &str, commit_id: &str) -> 
MaterializedTrackedStateRow { row_with_value(entity_id, change_id, commit_id, "value") } fn row_with_value( entity_id: &str, change_id: &str, commit_id: &str, value: &str, ) -> MaterializedTrackedStateRow { MaterializedTrackedStateRow { entity_id: crate::entity_identity::EntityIdentity::single(entity_id), schema_key: "test_schema".to_string(), file_id: None, snapshot_content: Some(format!("{{\"value\":\"{value}\"}}")), metadata: None, deleted: false, created_at: "2026-01-01T00:00:00Z".to_string(), updated_at: "2026-01-01T00:00:00Z".to_string(), change_id: change_id.to_string(), commit_id: commit_id.to_string(), } } }
================================================ FILE: packages/engine/src/tracked_state/diff.rs ================================================
use crate::entity_identity::EntityIdentity; use crate::tracked_state::types::TrackedStateTreeScanRequest; use crate::tracked_state::{ MaterializedTrackedStateRow, TrackedStateFilter, TrackedStateStoreReader, }; use crate::LixError;
/// Filter for comparing two tracked-state commit roots. #[derive(Debug, Clone, PartialEq, Eq, Default)] pub(crate) struct TrackedStateDiffRequest { pub(crate) filter: TrackedStateFilter, }
/// Changed tracked-state rows between two commit roots. #[derive(Debug, Clone, PartialEq, Eq, Default)] pub(crate) struct TrackedStateDiff { pub(crate) entries: Vec<TrackedStateDiffEntry>, }
/// One changed identity between two commit roots. #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct TrackedStateDiffEntry { pub(crate) identity: TrackedStateDiffIdentity, pub(crate) kind: TrackedStateDiffKind, /// Raw row in the left root. /// /// This can be a tombstone. Callers that need user-visible semantics /// should use `visible_before()` instead of inspecting this directly. pub(crate) before: Option<MaterializedTrackedStateRow>, /// Raw row in the right root. /// /// This can be a tombstone. Keeping the raw tombstone is what lets merge /// apply deletes without reloading the source root. pub(crate) after: Option<MaterializedTrackedStateRow>, }
/// Root-local tracked-state identity. /// /// Entity identity used by merge/diff logic. #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)] pub(crate) struct TrackedStateDiffIdentity { pub(crate) schema_key: String, pub(crate) entity_id: EntityIdentity, pub(crate) file_id: Option<String>, }
#[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(crate) enum TrackedStateDiffKind { Added, Modified, Removed, }
/// Diffs two tracked-state commit roots.
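///
/// A minimal sketch of the call shape, mirroring the tests in this module; the
/// commit ids and the surrounding reader setup are illustrative only, and the
/// snippet is intentionally not a doctest:
///
/// ```text
/// let mut reader = tracked_state.reader(storage.clone());
/// let diff = reader
///     .diff_commits("left", "right", &TrackedStateDiffRequest::default())
///     .await?;
/// // `before`/`after` on each entry may carry raw tombstones; the diff kind
/// // already encodes the user-visible Added/Modified/Removed semantics.
/// ```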
/// pub(crate) async fn diff_commits( reader: &mut TrackedStateStoreReader, left_commit_id: &str, right_commit_id: &str, request: &TrackedStateDiffRequest, ) -> Result where S: crate::storage::StorageReader, { let scan_request = scan_request_for_diff(request); let tree_entries = reader .diff_tree_entries_at_commits(left_commit_id, right_commit_id, &scan_request) .await?; let mut before_entries = Vec::new(); let mut after_entries = Vec::new(); let mut pending_entries = Vec::with_capacity(tree_entries.len()); for tree_entry in tree_entries { let before_index = tree_entry.before.map(|entry| { let index = before_entries.len(); before_entries.push(entry); index }); let after_index = tree_entry.after.map(|entry| { let index = after_entries.len(); after_entries.push(entry); index }); pending_entries.push(PendingDiffEntry { before_index, after_index, }); } let before_rows = reader.materialize_tree_values(before_entries).await?; let after_rows = reader.materialize_tree_values(after_entries).await?; let mut entries = Vec::new(); for pending_entry in pending_entries { let before = materialized_row_at(pending_entry.before_index, &before_rows)?; let after = materialized_row_at(pending_entry.after_index, &after_rows)?; let identity = match before.as_ref().or(after.as_ref()) { Some(row) => TrackedStateDiffIdentity::from_row(row)?, None => continue, }; let Some(entry) = classify_diff(identity, before, after) else { continue; }; entries.push(entry); } Ok(TrackedStateDiff { entries }) } fn materialized_row_at( index: Option, rows: &[MaterializedTrackedStateRow], ) -> Result, LixError> { let Some(index) = index else { return Ok(None); }; rows.get(index).cloned().map(Some).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked_state diff materialization returned fewer rows than planned", ) }) } struct PendingDiffEntry { before_index: Option, after_index: Option, } fn scan_request_for_diff(request: &TrackedStateDiffRequest) -> TrackedStateTreeScanRequest { let mut filter = request.filter.clone(); filter.include_tombstones = true; TrackedStateTreeScanRequest { schema_keys: filter.schema_keys, entity_ids: filter.entity_ids, file_ids: filter.file_ids, include_tombstones: true, limit: None, } } fn classify_diff( identity: TrackedStateDiffIdentity, before: Option, after: Option, ) -> Option { match (is_live_row(before.as_ref()), is_live_row(after.as_ref())) { (None, None) => None, (None, Some(_)) => Some(TrackedStateDiffEntry { identity, kind: TrackedStateDiffKind::Added, before, after, }), (Some(_), None) => Some(TrackedStateDiffEntry { identity, kind: TrackedStateDiffKind::Removed, before, after, }), (Some(before), Some(after)) if tracked_row_payload_eq(before, after) => None, (Some(_), Some(_)) => Some(TrackedStateDiffEntry { identity, kind: TrackedStateDiffKind::Modified, before, after, }), } } fn is_live_row(row: Option<&MaterializedTrackedStateRow>) -> Option<&MaterializedTrackedStateRow> { row.filter(|row| row.snapshot_content.is_some()) } fn tracked_row_payload_eq( left: &MaterializedTrackedStateRow, right: &MaterializedTrackedStateRow, ) -> bool { left.snapshot_content == right.snapshot_content && left.metadata == right.metadata } impl TrackedStateDiffIdentity { fn from_row(row: &MaterializedTrackedStateRow) -> Result { Ok(Self { schema_key: row.schema_key.clone(), entity_id: row.entity_id.clone(), file_id: row.file_id.clone(), }) } } impl TrackedStateDiffEntry { #[cfg(test)] pub(crate) fn before_is_live(&self) -> bool { self.visible_before().is_some() } #[cfg(test)] pub(crate) fn 
after_is_live(&self) -> bool { self.visible_after().is_some() } #[cfg(test)] pub(crate) fn visible_before(&self) -> Option<&MaterializedTrackedStateRow> { self.before .as_ref() .filter(|row| row.snapshot_content.is_some()) } #[cfg(test)] pub(crate) fn visible_after(&self) -> Option<&MaterializedTrackedStateRow> { self.after .as_ref() .filter(|row| row.snapshot_content.is_some()) } } #[cfg(test)] mod tests { use std::sync::Arc; use super::*; use crate::backend::testing::UnitTestBackend; use crate::storage::{StorageContext, StorageWriteTransaction}; use crate::tracked_state::TrackedStateContext; use crate::NullableKeyFilter; #[tokio::test] async fn diff_commits_reports_added_rows() { let (storage, tracked_state) = seed_roots(&[], &[row("entity-a", None, "after")]).await; let diff = diff(storage.clone(), &tracked_state).await; assert_eq!( kinds(&diff), vec![("entity-a".to_string(), TrackedStateDiffKind::Added)] ); assert!(diff.entries[0].before.is_none()); assert_eq!( diff.entries[0] .after .as_ref() .map(|row| row.change_id.as_str()), Some("after") ); assert!(!diff.entries[0].before_is_live()); assert!(diff.entries[0].after_is_live()); } #[tokio::test] async fn diff_commits_reports_removed_rows_when_right_side_is_absent() { let (storage, tracked_state) = seed_roots(&[row("entity-a", None, "before")], &[]).await; let diff = diff(storage.clone(), &tracked_state).await; assert_eq!( kinds(&diff), vec![("entity-a".to_string(), TrackedStateDiffKind::Removed)] ); assert_eq!( diff.entries[0] .before .as_ref() .map(|row| row.change_id.as_str()), Some("before") ); assert!(diff.entries[0].after.is_none()); assert!(diff.entries[0].before_is_live()); assert!(!diff.entries[0].after_is_live()); } #[tokio::test] async fn diff_commits_reports_removed_rows_when_right_side_is_tombstone() { let (storage, tracked_state) = seed_roots( &[row("entity-a", None, "before")], &[tombstone("entity-a", None, "delete")], ) .await; let diff = diff(storage.clone(), &tracked_state).await; assert_eq!( kinds(&diff), vec![("entity-a".to_string(), TrackedStateDiffKind::Removed)] ); let entry = &diff.entries[0]; assert_eq!( entry.after.as_ref().map(|row| row.change_id.as_str()), Some("delete") ); assert!( entry .after .as_ref() .is_some_and(|row| row.snapshot_content.is_none()), "removed diff should preserve the right-side tombstone for merge" ); assert!(entry.before_is_live()); assert!(!entry.after_is_live()); } #[tokio::test] async fn diff_commits_reports_added_rows_when_left_side_is_tombstone() { let (storage, tracked_state) = seed_roots( &[tombstone("entity-a", None, "delete")], &[row("entity-a", None, "after")], ) .await; let diff = diff(storage.clone(), &tracked_state).await; assert_eq!( kinds(&diff), vec![("entity-a".to_string(), TrackedStateDiffKind::Added)] ); let entry = &diff.entries[0]; assert_eq!( entry.before.as_ref().map(|row| row.change_id.as_str()), Some("delete") ); assert!( entry .before .as_ref() .is_some_and(|row| row.snapshot_content.is_none()), "added diff should preserve the left-side tombstone for merge" ); assert!(!entry.before_is_live()); assert!(entry.after_is_live()); } #[tokio::test] async fn diff_commits_reports_modified_rows_for_changed_payload() { let (storage, tracked_state) = seed_roots( &[row_with_value("entity-a", None, "before", "one")], &[row_with_value("entity-a", None, "after", "two")], ) .await; let diff = diff(storage.clone(), &tracked_state).await; assert_eq!( kinds(&diff), vec![("entity-a".to_string(), TrackedStateDiffKind::Modified)] ); assert!(diff.entries[0].before_is_live()); 
assert!(diff.entries[0].after_is_live()); } #[tokio::test] async fn diff_commits_omits_unchanged_rows_even_when_metadata_differs_only_by_commit() { let (storage, tracked_state) = seed_roots( &[row_with_value("entity-a", None, "before", "same")], &[row_with_value("entity-a", None, "after", "same")], ) .await; let diff = diff(storage.clone(), &tracked_state).await; assert!(diff.entries.is_empty()); } #[tokio::test] async fn diff_commits_distinguishes_same_entity_with_different_file_id() { let (storage, tracked_state) = seed_roots( &[row("entity-a", Some("file-a"), "before-a")], &[ row("entity-a", Some("file-a"), "before-a"), row("entity-a", Some("file-b"), "after-b"), ], ) .await; let diff = diff(storage.clone(), &tracked_state).await; assert_eq!(diff.entries.len(), 1); assert_eq!(diff.entries[0].identity.file_id.as_deref(), Some("file-b")); assert_eq!(diff.entries[0].kind, TrackedStateDiffKind::Added); } #[tokio::test] async fn diff_commits_filters_by_schema_entity_and_file_id() { let (storage, tracked_state) = seed_roots( &[], &[ row_with_schema("entity-a", Some("file-a"), "schema-a", "change-a"), row_with_schema("entity-b", Some("file-b"), "schema-b", "change-b"), ], ) .await; let mut reader = tracked_state.reader(storage.clone()); let diff = reader .diff_commits( "left", "right", &TrackedStateDiffRequest { filter: TrackedStateFilter { schema_keys: vec!["schema-b".to_string()], entity_ids: vec![crate::entity_identity::EntityIdentity::single( "entity-b", )], file_ids: vec![NullableKeyFilter::Value("file-b".to_string())], ..Default::default() }, }, ) .await .expect("diff should load"); assert_eq!( kinds(&diff), vec![("entity-b".to_string(), TrackedStateDiffKind::Added)] ); } #[tokio::test] async fn diff_commits_between_delta_parent_and_child_reports_suffix_rows() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let tracked_state = TrackedStateContext::new(); let mut tx = storage .begin_write_transaction() .await .expect("transaction should open"); write_root_for_test( tx.as_mut(), &tracked_state, "parent", None, &[ row_with_value("entity-a", None, "parent-a", "before"), row_with_value("entity-b", None, "parent-b", "same"), ], ) .await .expect("parent should write"); write_root_for_test( tx.as_mut(), &tracked_state, "child", Some("parent"), &[row_with_value("entity-a", None, "child-a", "after")], ) .await .expect("child should write"); tx.commit().await.expect("transaction should commit"); let diff = tracked_state .reader(storage) .diff_commits("parent", "child", &TrackedStateDiffRequest::default()) .await .expect("diff should load"); assert_eq!( kinds(&diff), vec![("entity-a".to_string(), TrackedStateDiffKind::Modified)] ); assert_eq!( diff.entries[0] .before .as_ref() .and_then(|row| row.snapshot_content.as_deref()), Some("{\"value\":\"before\"}") ); assert_eq!( diff.entries[0] .after .as_ref() .and_then(|row| row.snapshot_content.as_deref()), Some("{\"value\":\"after\"}") ); } #[tokio::test] async fn diff_commits_between_delta_child_and_parent_reports_reverse_suffix_rows() { let (storage, tracked_state) = seed_parent_child_delta( &[ row_with_value("entity-a", None, "parent-a", "before"), row_with_value("entity-b", None, "parent-b", "same"), ], &[row_with_value("entity-a", None, "child-a", "after")], ) .await; let diff = tracked_state .reader(storage) .diff_commits("child", "parent", &TrackedStateDiffRequest::default()) .await .expect("diff should load"); assert_eq!( kinds(&diff), vec![("entity-a".to_string(), 
TrackedStateDiffKind::Modified)] ); assert_eq!( diff.entries[0] .before .as_ref() .and_then(|row| row.snapshot_content.as_deref()), Some("{\"value\":\"after\"}") ); assert_eq!( diff.entries[0] .after .as_ref() .and_then(|row| row.snapshot_content.as_deref()), Some("{\"value\":\"before\"}") ); } #[tokio::test] async fn diff_commits_between_delta_parent_and_child_preserves_suffix_tombstones() { let (storage, tracked_state) = seed_parent_child_delta( &[ row_with_value("entity-a", None, "parent-a", "before"), row_with_value("entity-b", None, "parent-b", "same"), ], &[tombstone("entity-a", None, "child-delete")], ) .await; let diff = tracked_state .reader(storage) .diff_commits("parent", "child", &TrackedStateDiffRequest::default()) .await .expect("diff should load"); assert_eq!( kinds(&diff), vec![("entity-a".to_string(), TrackedStateDiffKind::Removed)] ); assert!(diff.entries[0].before_is_live()); assert!(!diff.entries[0].after_is_live()); assert_eq!( diff.entries[0] .after .as_ref() .map(|row| row.change_id.as_str()), Some("child-delete") ); } async fn diff( storage: StorageContext, tracked_state: &TrackedStateContext, ) -> TrackedStateDiff { tracked_state .reader(storage) .diff_commits("left", "right", &TrackedStateDiffRequest::default()) .await .expect("diff should load") } async fn seed_roots( left_rows: &[MaterializedTrackedStateRow], right_rows: &[MaterializedTrackedStateRow], ) -> (StorageContext, TrackedStateContext) { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let tracked_state = TrackedStateContext::new(); let mut tx = storage .begin_write_transaction() .await .expect("transaction should open"); write_root_for_test(tx.as_mut(), &tracked_state, "left", None, left_rows) .await .expect("left root should write"); write_root_for_test(tx.as_mut(), &tracked_state, "right", None, right_rows) .await .expect("right root should write"); tx.commit().await.expect("transaction should commit"); (storage, tracked_state) } async fn seed_parent_child_delta( parent_rows: &[MaterializedTrackedStateRow], child_rows: &[MaterializedTrackedStateRow], ) -> (StorageContext, TrackedStateContext) { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let tracked_state = TrackedStateContext::new(); let mut tx = storage .begin_write_transaction() .await .expect("transaction should open"); write_root_for_test(tx.as_mut(), &tracked_state, "parent", None, parent_rows) .await .expect("parent should write"); write_root_for_test( tx.as_mut(), &tracked_state, "child", Some("parent"), child_rows, ) .await .expect("child should write"); tx.commit().await.expect("transaction should commit"); (storage, tracked_state) } async fn write_root_for_test( tx: &mut dyn StorageWriteTransaction, tracked_state: &TrackedStateContext, commit_id: &str, parent_commit_id: Option<&str>, rows: &[MaterializedTrackedStateRow], ) -> Result<(), LixError> { crate::test_support::stage_tracked_root_from_materialized( tx, tracked_state, commit_id, parent_commit_id, rows, ) .await } fn kinds(diff: &TrackedStateDiff) -> Vec<(String, TrackedStateDiffKind)> { diff.entries .iter() .map(|entry| { ( entry .identity .entity_id .as_single_string_owned() .expect("identity"), entry.kind, ) }) .collect() } fn tombstone( entity_id: &str, file_id: Option<&str>, change_id: &str, ) -> MaterializedTrackedStateRow { let mut row = row(entity_id, file_id, change_id); row.snapshot_content = None; row.deleted = true; row } fn row(entity_id: &str, file_id: 
Option<&str>, change_id: &str) -> MaterializedTrackedStateRow { row_with_schema(entity_id, file_id, "test_schema", change_id) } fn row_with_schema( entity_id: &str, file_id: Option<&str>, schema_key: &str, change_id: &str, ) -> MaterializedTrackedStateRow { row_with_schema_and_value(entity_id, file_id, schema_key, change_id, "value") } fn row_with_value( entity_id: &str, file_id: Option<&str>, change_id: &str, value: &str, ) -> MaterializedTrackedStateRow { row_with_schema_and_value(entity_id, file_id, "test_schema", change_id, value) } fn row_with_schema_and_value( entity_id: &str, file_id: Option<&str>, schema_key: &str, change_id: &str, value: &str, ) -> MaterializedTrackedStateRow { MaterializedTrackedStateRow { entity_id: EntityIdentity::single(entity_id), schema_key: schema_key.to_string(), file_id: file_id.map(str::to_string), snapshot_content: Some(format!("{{\"value\":\"{value}\"}}")), metadata: None, deleted: false, created_at: "2026-01-01T00:00:00Z".to_string(), updated_at: "2026-01-01T00:00:00Z".to_string(), change_id: change_id.to_string(), commit_id: change_id.replace("change", "commit"), } } } ================================================ FILE: packages/engine/src/tracked_state/materialization.rs ================================================ use crate::entity_identity::EntityIdentity; use crate::json_store::JsonRef; use crate::json_store::{JsonLoadRequestRef, JsonReadScopeRef, JsonStoreContext}; use crate::storage::StorageReader; use crate::tracked_state::types::{TrackedStateIndexValue, TrackedStateKey}; use crate::tracked_state::MaterializedTrackedStateRow; use crate::LixError; use std::collections::BTreeMap; /// Materializes tracked-state index entries. /// /// The durable tracked_state value is authoritative for scalar projection /// fields and stores the JSON refs needed for payload projections. Snapshot and /// metadata bytes are hydrated from grouped json_store loads only when the /// requested projection needs them. 
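///
/// A hedged sketch of how the projection gates JSON hydration (the assertions
/// mirror `TrackedMaterializationProjection::from_columns` below; not a doctest):
///
/// ```text
/// // Only scalar columns requested: no json_store loads are issued.
/// let scalar_only =
///     TrackedMaterializationProjection::from_columns(&["entity_id".to_string()]);
/// assert!(!scalar_only.snapshot_content && !scalar_only.metadata);
///
/// // An empty column list means the full projection: both payloads hydrate.
/// let full = TrackedMaterializationProjection::from_columns(&[]);
/// assert!(full.snapshot_content && full.metadata);
/// ```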
pub(crate) async fn materialize_index_entries( store: &mut S, entries: Vec<(TrackedStateKey, TrackedStateIndexValue)>, projection: &TrackedMaterializationProjection, ) -> Result, LixError> where S: StorageReader, { if !projection.snapshot_content && !projection.metadata { return Ok(entries .into_iter() .map(materialize_entry_without_json) .collect()); } let json_slots_per_row = usize::from(projection.snapshot_content) + usize::from(projection.metadata); let json_ref_capacity = entries.len().saturating_mul(json_slots_per_row); let mut row_plans = Vec::with_capacity(entries.len()); let mut json_refs = Vec::with_capacity(json_ref_capacity); let mut json_ref_localities = Vec::with_capacity(json_ref_capacity); for (key, value) in entries { let row_index = row_plans.len(); let snapshot_ref_index = projected_json_ref_index( projection.snapshot_content, value.snapshot_ref, row_index, value.change_locator.source_pack_id, &mut json_refs, &mut json_ref_localities, ); let metadata_ref_index = projected_json_ref_index( projection.metadata, value.metadata_ref, row_index, value.change_locator.source_pack_id, &mut json_refs, &mut json_ref_localities, ); row_plans.push(MaterializedTrackedStateRowPlan { entity_id: key.entity_id, schema_key: key.schema_key, file_id: key.file_id, deleted: value.deleted, created_at: value.created_at, updated_at: value.updated_at, change_id: value.change_locator.change_id, commit_id: value.change_locator.source_commit_id, snapshot_ref_index, metadata_ref_index, }); } let mut json_values = load_projection_json_values(store, &json_refs, &json_ref_localities, &row_plans).await?; row_plans .into_iter() .map(|plan| materialize_row_plan(plan, &json_refs, &mut json_values)) .collect() } fn materialize_entry_without_json( (key, value): (TrackedStateKey, TrackedStateIndexValue), ) -> MaterializedTrackedStateRow { MaterializedTrackedStateRow { entity_id: key.entity_id, schema_key: key.schema_key, file_id: key.file_id, snapshot_content: None, metadata: None, deleted: value.deleted, created_at: value.created_at, updated_at: value.updated_at, change_id: value.change_locator.change_id, commit_id: value.change_locator.source_commit_id, } } struct MaterializedTrackedStateRowPlan { entity_id: EntityIdentity, schema_key: String, file_id: Option, deleted: bool, created_at: String, updated_at: String, change_id: String, commit_id: String, snapshot_ref_index: Option, metadata_ref_index: Option, } fn projected_json_ref_index( include: bool, json_ref: Option, row_index: usize, pack_id: u32, json_refs: &mut Vec, json_ref_localities: &mut Vec, ) -> Option { if !include { return None; } let index = json_refs.len(); json_refs.push(json_ref?); json_ref_localities.push(JsonRefLocality { row_index, pack_id }); Some(index) } struct JsonRefLocality { row_index: usize, pack_id: u32, } async fn load_projection_json_values( store: &mut S, json_refs: &[JsonRef], json_ref_localities: &[JsonRefLocality], row_plans: &[MaterializedTrackedStateRowPlan], ) -> Result>>, LixError> where S: StorageReader, { if json_refs.len() != json_ref_localities.len() { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked_state materialization JSON refs and locality indexes diverged", )); } let json_store = JsonStoreContext::new(); if let Some((commit_id, pack_id)) = single_projection_pack(json_ref_localities, row_plans)? 
{ let pack_ids = [pack_id]; return json_store .load_bytes_many( store, JsonLoadRequestRef { refs: json_refs, scope: JsonReadScopeRef::CommitPacks { commit_id, pack_ids: &pack_ids, }, }, ) .await .map(|batch| batch.into_values()); } let mut json_values = vec![None; json_refs.len()]; let mut refs_by_pack = BTreeMap::<(&str, u32), Vec<(usize, JsonRef)>>::new(); for (index, json_ref) in json_refs.iter().copied().enumerate() { let locality = json_ref_localities.get(index).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked_state materialization lost JSON locality index", ) })?; let row_plan = row_plans.get(locality.row_index).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked_state materialization lost JSON row locality index", ) })?; refs_by_pack .entry((row_plan.commit_id.as_str(), locality.pack_id)) .or_default() .push((index, json_ref)); } for ((commit_id, pack_id), refs) in refs_by_pack { let indexes = refs.iter().map(|(index, _)| *index).collect::>(); let refs = refs .into_iter() .map(|(_, json_ref)| json_ref) .collect::>(); let pack_ids = [pack_id]; let values = json_store .load_bytes_many( store, JsonLoadRequestRef { refs: &refs, scope: JsonReadScopeRef::CommitPacks { commit_id: &commit_id, pack_ids: &pack_ids, }, }, ) .await? .into_values(); for (index, value) in indexes.into_iter().zip(values) { json_values[index] = value; } } Ok(json_values) } fn single_projection_pack<'a>( json_ref_localities: &[JsonRefLocality], row_plans: &'a [MaterializedTrackedStateRowPlan], ) -> Result, LixError> { let Some(first_locality) = json_ref_localities.first() else { return Ok(None); }; let first_plan = row_plans.get(first_locality.row_index).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked_state materialization lost JSON row locality index", ) })?; let commit_id = first_plan.commit_id.as_str(); let pack_id = first_locality.pack_id; for locality in &json_ref_localities[1..] { let row_plan = row_plans.get(locality.row_index).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked_state materialization lost JSON row locality index", ) })?; if row_plan.commit_id != commit_id || locality.pack_id != pack_id { return Ok(None); } } Ok(Some((commit_id, pack_id))) } fn materialize_row_plan( plan: MaterializedTrackedStateRowPlan, json_refs: &[JsonRef], json_values: &mut [Option>], ) -> Result { Ok(MaterializedTrackedStateRow { entity_id: plan.entity_id, schema_key: plan.schema_key, file_id: plan.file_id, snapshot_content: materialized_json_string( plan.snapshot_ref_index, json_refs, json_values, )?, metadata: materialized_json_string(plan.metadata_ref_index, json_refs, json_values)?, deleted: plan.deleted, created_at: plan.created_at, updated_at: plan.updated_at, change_id: plan.change_id, commit_id: plan.commit_id, }) } fn materialized_json_string( index: Option, json_refs: &[JsonRef], json_values: &mut [Option>], ) -> Result, LixError> { let Some(index) = index else { return Ok(None); }; let json_ref = json_refs.get(index).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked_state materialization lost JSON ref index", ) })?; // Each row plan owns its projected JSON slots. If this path starts // deduplicating refs, duplicate consumers must clone intentionally. let bytes = json_values .get_mut(index) .ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked_state materialization lost JSON value index", ) })? 
.take() .ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "tracked_state materialization missing JSON payload '{}'", json_ref.to_hex() ), ) })?; String::from_utf8(bytes).map(Some).map_err(|error| { let utf8_error = error.utf8_error(); LixError::new( LixError::CODE_INTERNAL_ERROR, format!("tracked_state materialized JSON payload is not UTF-8: {utf8_error}"), ) }) } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(crate) struct TrackedMaterializationProjection { pub(crate) snapshot_content: bool, pub(crate) metadata: bool, } impl TrackedMaterializationProjection { pub(crate) fn full() -> Self { Self { snapshot_content: true, metadata: true, } } pub(crate) fn from_columns(columns: &[String]) -> Self { if columns.is_empty() { return Self::full(); } Self { snapshot_content: columns.iter().any(|column| column == "snapshot_content"), metadata: columns.iter().any(|column| column == "metadata"), } } } #[cfg(test)] mod tests { use super::*; fn row_plan(commit_id: &str) -> MaterializedTrackedStateRowPlan { MaterializedTrackedStateRowPlan { entity_id: EntityIdentity::single("entity"), schema_key: "schema".to_string(), file_id: None, deleted: false, created_at: "2024-01-01T00:00:00.000Z".to_string(), updated_at: "2024-01-01T00:00:00.000Z".to_string(), change_id: "change".to_string(), commit_id: commit_id.to_string(), snapshot_ref_index: None, metadata_ref_index: None, } } #[test] fn single_projection_pack_accepts_duplicate_slots_from_same_pack() { let row_plans = vec![row_plan("commit-a")]; let localities = vec![ JsonRefLocality { row_index: 0, pack_id: 7, }, JsonRefLocality { row_index: 0, pack_id: 7, }, ]; assert_eq!( single_projection_pack(&localities, &row_plans).expect("pack detection should succeed"), Some(("commit-a", 7)) ); } #[test] fn single_projection_pack_rejects_mixed_packs() { let row_plans = vec![row_plan("commit-a")]; let localities = vec![ JsonRefLocality { row_index: 0, pack_id: 7, }, JsonRefLocality { row_index: 0, pack_id: 8, }, ]; assert_eq!( single_projection_pack(&localities, &row_plans).expect("pack detection should succeed"), None ); } #[test] fn materialized_json_string_consumes_owned_payload_bytes() { let json = br#"{"value":1}"#.to_vec(); let json_ref = JsonRef::for_content(&json); let mut json_values = vec![Some(json)]; let materialized = materialized_json_string(Some(0), &[json_ref], &mut json_values) .expect("json should materialize"); assert_eq!(materialized, Some(r#"{"value":1}"#.to_string())); assert!(json_values[0].is_none()); } } ================================================ FILE: packages/engine/src/tracked_state/materializer.rs ================================================ use crate::commit_store::{Change, ChangeLocator, Commit, CommitStoreContext}; use crate::storage::StorageReader; use crate::tracked_state::context::{TrackedStateMaterializer, TrackedStateWriteReport}; use crate::tracked_state::types::TrackedStateKey; use crate::tracked_state::TrackedStateDeltaRef; use crate::LixError; use std::collections::{BTreeMap, BTreeSet}; /// Owned materialization delta used only by explicit projection-root hydration. /// /// Normal transaction commits already have borrowed `ChangeRef` and /// `ChangeLocatorRef` values available while staging commit_store. /// Materialization loads those facts back from storage, so it owns the decoded /// data internally and immediately passes a borrowed view into the same /// tracked-state root writer. 
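///
/// A minimal sketch of the owned-to-borrowed handoff (the binding is
/// hypothetical; see `materialize_root_at` for the real flow):
///
/// ```text
/// let delta: MaterializationDelta = /* decoded from commit_store packs */;
/// let delta_ref: TrackedStateDeltaRef<'_> = delta.as_ref();
/// // delta_ref borrows the owned change, locator, and timestamps so the same
/// // root-writer path used by normal transaction commits can consume it.
/// ```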
#[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct MaterializationDelta { pub(crate) change: Change, pub(crate) locator: ChangeLocator, pub(crate) created_at: String, pub(crate) updated_at: String, } impl MaterializationDelta { pub(crate) fn as_ref(&self) -> TrackedStateDeltaRef<'_> { TrackedStateDeltaRef { change: self.change.as_ref(), locator: self.locator.as_ref(), created_at: &self.created_at, updated_at: &self.updated_at, } } } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct MaterializationInput { pub(crate) commit_id: String, pub(crate) parent_commit_id: Option, pub(crate) deltas: Vec, } struct LocatedChange { locator: ChangeLocator, change: Change, } /// Explicit projection-root materialization over commit_store. /// /// Normal transaction commits must use `TrackedStateWriter::stage_delta` with /// already prepared commit_store refs. This path exists for deliberate /// materialization only. pub(crate) async fn materialize_root_at( materializer: &mut TrackedStateMaterializer<'_, S>, commit_id: &str, ) -> Result where S: StorageReader + ?Sized, { let input = build_materialization_input(materializer.store, materializer.commit_store, commit_id) .await?; let delta_refs = input .deltas .iter() .map(MaterializationDelta::as_ref) .collect::>(); materializer .tracked_state .writer(materializer.store, materializer.writes) .stage_projection_root( &input.commit_id, input.parent_commit_id.as_deref(), delta_refs, ) .await } async fn build_materialization_input( store: &mut S, commit_store: &CommitStoreContext, commit_id: &str, ) -> Result where S: StorageReader + ?Sized, { let lineage = load_first_parent_lineage(store, commit_store, commit_id).await?; let mut located_changes = Vec::new(); for commit in lineage { located_changes .append(&mut load_commit_located_changes(store, commit_store, &commit).await?); } let deltas = project_materialization_deltas(located_changes); Ok(MaterializationInput { commit_id: commit_id.to_string(), parent_commit_id: None, deltas, }) } async fn load_first_parent_lineage( store: &mut S, commit_store: &CommitStoreContext, commit_id: &str, ) -> Result, LixError> where S: StorageReader + ?Sized, { let mut lineage = Vec::new(); let mut seen = BTreeSet::new(); let mut current = Some(commit_id.to_string()); while let Some(current_id) = current { if !seen.insert(current_id.clone()) { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "tracked_state materialization found first-parent cycle at commit '{current_id}'" ), )); } let commit = commit_store .load_commit_from(store, ¤t_id) .await? .ok_or_else(|| missing_commit_error(¤t_id))?; current = commit.parent_ids.first().cloned(); lineage.push(commit); } lineage.reverse(); Ok(lineage) } async fn load_commit_located_changes( store: &mut S, commit_store: &CommitStoreContext, commit: &Commit, ) -> Result, LixError> where S: StorageReader + ?Sized, { let mut located_changes = Vec::new(); for pack_id in 0..commit.change_pack_count { let changes = commit_store .load_change_pack_from(store, &commit.id, pack_id) .await? 
.ok_or_else(|| missing_pack_error("change", &commit.id, pack_id))?; for (source_ordinal, change) in changes.into_iter().enumerate() { let locator = ChangeLocator { source_commit_id: commit.id.clone(), source_pack_id: pack_id, source_ordinal: u32::try_from(source_ordinal).map_err(|_| { LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked_state materialization change pack ordinal exceeds u32", ) })?, change_id: change.id.clone(), }; located_changes.push(LocatedChange { locator, change }); } } let mut adopted_locators = Vec::new(); for pack_id in 0..commit.membership_pack_count { let mut locators = commit_store .load_membership_pack_from(store, &commit.id, pack_id) .await? .ok_or_else(|| missing_pack_error("membership", &commit.id, pack_id))?; adopted_locators.append(&mut locators); } let adopted_changes = load_changes_by_locators(store, commit_store, &adopted_locators).await?; located_changes.extend( adopted_locators .into_iter() .zip(adopted_changes) .map(|(locator, change)| LocatedChange { locator, change }), ); Ok(located_changes) } fn project_materialization_deltas( changes: impl IntoIterator, ) -> Vec { let mut projected = BTreeMap::::new(); for LocatedChange { locator, change } in changes { let key = TrackedStateKey { schema_key: change.schema_key.clone(), file_id: change.file_id.clone(), entity_id: change.entity_id.clone(), }; let created_at = projected .get(&key) .map(|delta| delta.created_at.clone()) .unwrap_or_else(|| change.created_at.clone()); let updated_at = change.created_at.clone(); projected.insert( key, MaterializationDelta { change, locator, created_at, updated_at, }, ); } projected.into_values().collect() } async fn load_changes_by_locators( store: &mut (impl StorageReader + ?Sized), commit_store: &CommitStoreContext, locators: &[ChangeLocator], ) -> Result, LixError> { let mut packs = BTreeMap::<(String, u32), Vec>::new(); for locator in locators { let key = (locator.source_commit_id.clone(), locator.source_pack_id); if packs.contains_key(&key) { continue; } let changes = commit_store .load_change_pack_from(store, &locator.source_commit_id, locator.source_pack_id) .await? .ok_or_else(|| { missing_pack_error("change", &locator.source_commit_id, locator.source_pack_id) })?; packs.insert(key, changes); } locators .iter() .map(|locator| change_from_loaded_packs(&packs, locator)) .collect() } fn change_from_loaded_packs( packs: &BTreeMap<(String, u32), Vec>, locator: &ChangeLocator, ) -> Result { let key = (locator.source_commit_id.clone(), locator.source_pack_id); let changes = packs.get(&key).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "tracked_state materialization lost loaded change pack ({}, {})", locator.source_commit_id, locator.source_pack_id ), ) })?; let change = changes .get(usize::try_from(locator.source_ordinal).map_err(|_| { LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked_state materialization locator ordinal does not fit usize", ) })?) 
.ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "tracked_state materialization locator for '{}' points past pack ({}, {})", locator.change_id, locator.source_commit_id, locator.source_pack_id ), ) })?; if change.id != locator.change_id { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "tracked_state materialization locator expected '{}' but found '{}'", locator.change_id, change.id ), )); } Ok(change.clone()) } fn missing_pack_error(label: &str, commit_id: &str, pack_id: u32) -> LixError { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("tracked_state materialization missing {label} pack ({commit_id}, {pack_id})"), ) } fn missing_commit_error(commit_id: &str) -> LixError { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("tracked_state materialization missing commit '{commit_id}'"), ) } #[cfg(test)] mod tests { use super::*; use crate::commit_store::ChangeLocator; use crate::entity_identity::EntityIdentity; #[test] fn materialization_delta_ref_borrows_owned_facts() { let delta = MaterializationDelta { change: Change { id: "change-1".to_string(), entity_id: EntityIdentity::single("entity-1"), schema_key: "schema".to_string(), file_id: Some("file".to_string()), snapshot_ref: None, metadata_ref: None, created_at: "2026-01-01T00:00:00Z".to_string(), }, locator: ChangeLocator { source_commit_id: "commit-1".to_string(), source_pack_id: 7, source_ordinal: 11, change_id: "change-1".to_string(), }, created_at: "2026-01-01T00:00:00Z".to_string(), updated_at: "2026-02-01T00:00:00Z".to_string(), }; let delta_ref = delta.as_ref(); assert_eq!(delta_ref.change.id, "change-1"); assert_eq!(delta_ref.change.schema_key, "schema"); assert_eq!(delta_ref.change.file_id, Some("file")); assert_eq!(delta_ref.locator.source_commit_id, "commit-1"); assert_eq!(delta_ref.locator.source_pack_id, 7); assert_eq!(delta_ref.locator.source_ordinal, 11); assert_eq!(delta_ref.created_at, "2026-01-01T00:00:00Z"); assert_eq!(delta_ref.updated_at, "2026-02-01T00:00:00Z"); } #[test] fn change_from_loaded_packs_resolves_locator_by_pack_and_ordinal() { let mut packs = BTreeMap::new(); packs.insert( ("source-commit".to_string(), 3), vec![change("change-0"), change("change-1"), change("change-2")], ); let locator = ChangeLocator { source_commit_id: "source-commit".to_string(), source_pack_id: 3, source_ordinal: 1, change_id: "change-1".to_string(), }; let resolved = change_from_loaded_packs(&packs, &locator).expect("locator should resolve"); assert_eq!(resolved.id, "change-1"); } #[test] fn change_from_loaded_packs_rejects_locator_change_id_mismatch() { let mut packs = BTreeMap::new(); packs.insert(("source-commit".to_string(), 3), vec![change("actual")]); let locator = ChangeLocator { source_commit_id: "source-commit".to_string(), source_pack_id: 3, source_ordinal: 0, change_id: "expected".to_string(), }; let error = change_from_loaded_packs(&packs, &locator).expect_err("mismatched locator should fail"); assert!(error.message.contains("expected")); assert!(error.message.contains("actual")); } #[test] fn project_materialization_deltas_keeps_first_seen_created_at_and_latest_updated_at() { let deltas = project_materialization_deltas(vec![ located_change( "commit-1", 0, "change-create", "entity-1", "2026-01-01T00:00:00Z", ), located_change( "commit-2", 0, "change-update", "entity-1", "2026-02-01T00:00:00Z", ), ]); assert_eq!(deltas.len(), 1); let delta = &deltas[0]; assert_eq!(delta.change.id, "change-update"); assert_eq!(delta.locator.source_commit_id, "commit-2"); 
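// Projection keeps the first-seen created_at for the entity but takes
// updated_at from the latest adopted change.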
assert_eq!(delta.created_at, "2026-01-01T00:00:00Z"); assert_eq!(delta.updated_at, "2026-02-01T00:00:00Z"); } #[test] fn project_materialization_deltas_uses_adopted_change_time_not_target_commit_time() { let deltas = project_materialization_deltas(vec![located_change( "source-commit", 0, "adopted-change", "entity-1", "2026-01-01T00:00:00Z", )]); assert_eq!(deltas.len(), 1); assert_eq!(deltas[0].created_at, "2026-01-01T00:00:00Z"); assert_eq!(deltas[0].updated_at, "2026-01-01T00:00:00Z"); } #[test] fn project_materialization_deltas_tracks_entities_independently() { let deltas = project_materialization_deltas(vec![ located_change( "commit-1", 0, "entity-a-create", "entity-a", "2026-01-01T00:00:00Z", ), located_change( "commit-1", 1, "entity-b-create", "entity-b", "2026-01-02T00:00:00Z", ), located_change( "commit-2", 0, "entity-a-update", "entity-a", "2026-02-01T00:00:00Z", ), ]); let entity_a = deltas .iter() .find(|delta| delta.change.entity_id == EntityIdentity::single("entity-a")) .expect("entity-a delta"); let entity_b = deltas .iter() .find(|delta| delta.change.entity_id == EntityIdentity::single("entity-b")) .expect("entity-b delta"); assert_eq!(entity_a.change.id, "entity-a-update"); assert_eq!(entity_a.created_at, "2026-01-01T00:00:00Z"); assert_eq!(entity_a.updated_at, "2026-02-01T00:00:00Z"); assert_eq!(entity_b.change.id, "entity-b-create"); assert_eq!(entity_b.created_at, "2026-01-02T00:00:00Z"); assert_eq!(entity_b.updated_at, "2026-01-02T00:00:00Z"); } fn change(id: &str) -> Change { Change { id: id.to_string(), entity_id: EntityIdentity::single("entity-1"), schema_key: "schema".to_string(), file_id: Some("file".to_string()), snapshot_ref: None, metadata_ref: None, created_at: "2026-01-01T00:00:00Z".to_string(), } } fn located_change( commit_id: &str, source_ordinal: u32, change_id: &str, entity_id: &str, created_at: &str, ) -> LocatedChange { LocatedChange { locator: ChangeLocator { source_commit_id: commit_id.to_string(), source_pack_id: 0, source_ordinal, change_id: change_id.to_string(), }, change: Change { id: change_id.to_string(), entity_id: EntityIdentity::single(entity_id), schema_key: "schema".to_string(), file_id: Some("file".to_string()), snapshot_ref: None, metadata_ref: None, created_at: created_at.to_string(), }, } } } ================================================ FILE: packages/engine/src/tracked_state/merge.rs ================================================ use std::collections::{BTreeMap, BTreeSet}; use crate::tracked_state::{ MaterializedTrackedStateRow, TrackedStateDiff, TrackedStateDiffEntry, TrackedStateDiffIdentity, }; use crate::LixError; /// Planned tracked-state merge result. /// /// This is intentionally a pure planner. It does not know about versions, /// sessions, changelog writes, or live-state overlays. Callers provide two /// diffs from the same merge base: /// /// - `base -> target`: what the destination version changed. /// - `base -> source`: what the incoming version changed. /// /// The planner returns source-side patches that can be applied to the target /// root plus first-class conflicts for identities changed differently on both /// sides. #[derive(Debug, Clone, PartialEq, Eq, Default)] pub(crate) struct TrackedStateMergePlan { pub(crate) patches: Vec, pub(crate) conflicts: Vec, } /// One source-side patch to apply to the target root. /// /// Merge patches are expressed as canonical change adoption, not as new row /// writes. 
The projected row carries the target-root materialization shape, /// including tombstones, while `change_id` preserves the source canonical /// change identity. #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) enum TrackedStateMergePatch { Adopt { identity: TrackedStateDiffIdentity, change_id: String, projected_row: MaterializedTrackedStateRow, }, } impl TrackedStateMergePatch { #[cfg(test)] pub(crate) fn identity(&self) -> &TrackedStateDiffIdentity { match self { Self::Adopt { identity, .. } => identity, } } pub(crate) fn change_id(&self) -> &str { match self { Self::Adopt { change_id, .. } => change_id, } } pub(crate) fn projected_row(&self) -> &MaterializedTrackedStateRow { match self { Self::Adopt { projected_row, .. } => projected_row, } } } /// One identity that both sides changed incompatibly. #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct TrackedStateMergeConflict { pub(crate) identity: TrackedStateDiffIdentity, pub(crate) target: TrackedStateDiffEntry, pub(crate) source: TrackedStateDiffEntry, } /// Plans a three-way tracked-state merge from two base-relative diffs. /// /// This follows the same shape as prolly-tree merge systems: compare /// `base -> target` and `base -> source` by identity, emit source-only patches /// for the target root, ignore target-only changes, collapse convergent /// changes, and report divergent same-identity changes as conflicts. pub(crate) fn plan_merge( target_diff: &TrackedStateDiff, source_diff: &TrackedStateDiff, ) -> Result { let target_by_identity = diff_by_identity(target_diff)?; let source_by_identity = diff_by_identity(source_diff)?; let identities = target_by_identity .keys() .chain(source_by_identity.keys()) .cloned() .collect::>(); let mut plan = TrackedStateMergePlan::default(); for identity in identities { match ( target_by_identity.get(&identity), source_by_identity.get(&identity), ) { (None, None) => {} (Some(_target), None) => { // Target already changed this identity. Source did not, so // there is nothing to apply. } (None, Some(source)) => { plan.patches .push(adopt_source_change_patch(identity, source)?); } (Some(target), Some(source)) if same_final_state(target, source) => { // Both sides reached the same visible state. Keep target to // avoid writing duplicate source metadata. } (Some(target), Some(source)) => { plan.conflicts.push(TrackedStateMergeConflict { identity, target: (*target).clone(), source: (*source).clone(), }); } } } Ok(plan) } fn diff_by_identity( diff: &TrackedStateDiff, ) -> Result, LixError> { let mut entries = BTreeMap::new(); for entry in &diff.entries { if entries.insert(entry.identity.clone(), entry).is_some() { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "tracked-state merge received duplicate diff entry for schema '{}' entity '{}'", entry.identity.schema_key, entry.identity.entity_id.as_json_array_text()? ), )); } } Ok(entries) } fn adopt_source_change_patch( identity: TrackedStateDiffIdentity, entry: &TrackedStateDiffEntry, ) -> Result { let Some(row) = entry.after.clone() else { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!( "tracked-state merge cannot apply source removal for schema '{}' entity '{}' without a tombstone row", entry.identity.schema_key, entry.identity.entity_id.as_json_array_text()? 
), )); }; Ok(TrackedStateMergePatch::Adopt { identity, change_id: row.change_id.clone(), projected_row: row, }) } fn same_final_state(target: &TrackedStateDiffEntry, source: &TrackedStateDiffEntry) -> bool { match (target.after.as_ref(), source.after.as_ref()) { (None, None) => true, (Some(target), Some(source)) if !row_is_live(target) && !row_is_live(source) => true, (Some(target), Some(source)) if row_is_live(target) && row_is_live(source) => { tracked_row_payload_eq(target, source) } _ => false, } } fn row_is_live(row: &MaterializedTrackedStateRow) -> bool { row.snapshot_content.is_some() } fn tracked_row_payload_eq( left: &MaterializedTrackedStateRow, right: &MaterializedTrackedStateRow, ) -> bool { left.snapshot_content == right.snapshot_content && left.metadata == right.metadata } #[cfg(test)] mod tests { use super::*; use crate::entity_identity::EntityIdentity; use crate::tracked_state::TrackedStateDiffKind; #[test] fn source_add_applies() { let plan = plan_merge( &TrackedStateDiff::default(), &diff(vec![entry( "entity-a", TrackedStateDiffKind::Added, None, Some(row("entity-a", "source")), )]), ) .expect("merge should plan"); assert_eq!(patch_ids(&plan), vec!["entity-a"]); assert!(plan.conflicts.is_empty()); } #[test] fn source_modify_applies() { let plan = plan_merge( &TrackedStateDiff::default(), &diff(vec![entry( "entity-a", TrackedStateDiffKind::Modified, Some(row_with_value("entity-a", "base", "base")), Some(row_with_value("entity-a", "source", "source")), )]), ) .expect("merge should plan"); assert_eq!(patch_ids(&plan), vec!["entity-a"]); assert_eq!( plan.patches[0].projected_row().snapshot_content.as_deref(), Some("{\"value\":\"source\"}") ); assert_eq!(plan.patches[0].change_id(), "source"); } #[test] fn source_delete_applies_tombstone() { let plan = plan_merge( &TrackedStateDiff::default(), &diff(vec![entry( "entity-a", TrackedStateDiffKind::Removed, Some(row("entity-a", "base")), Some(tombstone("entity-a", "source-delete")), )]), ) .expect("merge should plan"); assert_eq!(patch_ids(&plan), vec!["entity-a"]); assert_eq!(plan.patches[0].projected_row().snapshot_content, None); assert_eq!(plan.patches[0].change_id(), "source-delete"); } #[test] fn target_only_change_is_noop() { let plan = plan_merge( &diff(vec![entry( "entity-a", TrackedStateDiffKind::Modified, Some(row("entity-a", "base")), Some(row("entity-a", "target")), )]), &TrackedStateDiff::default(), ) .expect("merge should plan"); assert!(plan.patches.is_empty()); assert!(plan.conflicts.is_empty()); } #[test] fn both_sides_same_final_value_is_convergent_noop() { let target = entry( "entity-a", TrackedStateDiffKind::Modified, Some(row_with_value("entity-a", "base", "base")), Some(row_with_value("entity-a", "target", "same")), ); let source = entry( "entity-a", TrackedStateDiffKind::Modified, Some(row_with_value("entity-a", "base", "base")), Some(row_with_value("entity-a", "source", "same")), ); let plan = plan_merge(&diff(vec![target]), &diff(vec![source])).expect("merge should plan"); assert!(plan.patches.is_empty()); assert!(plan.conflicts.is_empty()); } #[test] fn both_sides_delete_is_convergent_noop() { let target = entry( "entity-a", TrackedStateDiffKind::Removed, Some(row("entity-a", "base")), Some(tombstone("entity-a", "target-delete")), ); let source = entry( "entity-a", TrackedStateDiffKind::Removed, Some(row("entity-a", "base")), Some(tombstone("entity-a", "source-delete")), ); let plan = plan_merge(&diff(vec![target]), &diff(vec![source])).expect("merge should plan"); assert!(plan.patches.is_empty()); 
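// Convergent deletes on both sides need no patch to adopt and raise no conflict.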
assert!(plan.conflicts.is_empty()); } #[test] fn different_modifications_conflict() { let target = entry( "entity-a", TrackedStateDiffKind::Modified, Some(row_with_value("entity-a", "base", "base")), Some(row_with_value("entity-a", "target", "target")), ); let source = entry( "entity-a", TrackedStateDiffKind::Modified, Some(row_with_value("entity-a", "base", "base")), Some(row_with_value("entity-a", "source", "source")), ); let plan = plan_merge(&diff(vec![target]), &diff(vec![source])).expect("merge should plan"); assert!(plan.patches.is_empty()); assert_eq!(conflict_ids(&plan), vec!["entity-a"]); } #[test] fn delete_modify_conflicts() { let target = entry( "entity-a", TrackedStateDiffKind::Removed, Some(row("entity-a", "base")), Some(tombstone("entity-a", "target-delete")), ); let source = entry( "entity-a", TrackedStateDiffKind::Modified, Some(row("entity-a", "base")), Some(row_with_value("entity-a", "source", "source")), ); let plan = plan_merge(&diff(vec![target]), &diff(vec![source])).expect("merge should plan"); assert_eq!(conflict_ids(&plan), vec!["entity-a"]); } #[test] fn modify_delete_conflicts() { let target = entry( "entity-a", TrackedStateDiffKind::Modified, Some(row("entity-a", "base")), Some(row_with_value("entity-a", "target", "target")), ); let source = entry( "entity-a", TrackedStateDiffKind::Removed, Some(row("entity-a", "base")), Some(tombstone("entity-a", "source-delete")), ); let plan = plan_merge(&diff(vec![target]), &diff(vec![source])).expect("merge should plan"); assert_eq!(conflict_ids(&plan), vec!["entity-a"]); } #[test] fn source_removal_without_tombstone_errors() { let error = plan_merge( &TrackedStateDiff::default(), &diff(vec![entry( "entity-a", TrackedStateDiffKind::Removed, Some(row("entity-a", "base")), None, )]), ) .expect_err("merge should reject impossible source removal"); assert!(error.message.contains("without a tombstone row")); } #[test] fn patch_and_conflict_order_is_deterministic_by_identity() { let target = diff(vec![entry( "entity-b", TrackedStateDiffKind::Modified, Some(row_with_value("entity-b", "base", "base")), Some(row_with_value("entity-b", "target", "target")), )]); let source = diff(vec![ entry( "entity-c", TrackedStateDiffKind::Added, None, Some(row("entity-c", "source-c")), ), entry( "entity-a", TrackedStateDiffKind::Added, None, Some(row("entity-a", "source-a")), ), entry( "entity-b", TrackedStateDiffKind::Modified, Some(row_with_value("entity-b", "base", "base")), Some(row_with_value("entity-b", "source", "source")), ), ]); let plan = plan_merge(&target, &source).expect("merge should plan"); assert_eq!(patch_ids(&plan), vec!["entity-a", "entity-c"]); assert_eq!(conflict_ids(&plan), vec!["entity-b"]); } fn diff(entries: Vec) -> TrackedStateDiff { TrackedStateDiff { entries } } fn entry( entity_id: &str, kind: TrackedStateDiffKind, before: Option, after: Option, ) -> TrackedStateDiffEntry { TrackedStateDiffEntry { identity: TrackedStateDiffIdentity { schema_key: "test_schema".to_string(), entity_id: EntityIdentity::single(entity_id), file_id: None, }, kind, before, after, } } fn patch_ids(plan: &TrackedStateMergePlan) -> Vec { plan.patches .iter() .map(|entry| { entry .identity() .entity_id .as_single_string_owned() .expect("identity") }) .collect() } fn conflict_ids(plan: &TrackedStateMergePlan) -> Vec { plan.conflicts .iter() .map(|entry| { entry .identity .entity_id .as_single_string_owned() .expect("identity") }) .collect() } fn tombstone(entity_id: &str, change_id: &str) -> MaterializedTrackedStateRow { let mut row = 
row(entity_id, change_id); row.snapshot_content = None; row.deleted = true; row } fn row(entity_id: &str, change_id: &str) -> MaterializedTrackedStateRow { row_with_value(entity_id, change_id, "value") } fn row_with_value( entity_id: &str, change_id: &str, value: &str, ) -> MaterializedTrackedStateRow { MaterializedTrackedStateRow { entity_id: EntityIdentity::single(entity_id), schema_key: "test_schema".to_string(), file_id: None, snapshot_content: Some(format!("{{\"value\":\"{value}\"}}")), metadata: None, deleted: false, created_at: "2026-01-01T00:00:00Z".to_string(), updated_at: "2026-01-01T00:00:00Z".to_string(), change_id: change_id.to_string(), commit_id: change_id.replace("change", "commit"), } } } ================================================ FILE: packages/engine/src/tracked_state/mod.rs ================================================ mod by_file_index; mod codec; mod context; mod diff; mod materialization; mod materializer; mod merge; mod storage; mod tree; mod types; #[allow(unused_imports)] pub(crate) use context::{ TrackedStateContext, TrackedStateMaterializer, TrackedStateStoreReader, TrackedStateWriter, }; #[allow(unused_imports)] pub(crate) use diff::{ TrackedStateDiff, TrackedStateDiffEntry, TrackedStateDiffIdentity, TrackedStateDiffKind, TrackedStateDiffRequest, }; pub(crate) use materialization::{materialize_index_entries, TrackedMaterializationProjection}; #[allow(unused_imports)] pub(crate) use merge::{ plan_merge, TrackedStateMergeConflict, TrackedStateMergePatch, TrackedStateMergePlan, }; pub(crate) use storage::{load_delta_pack, DeltaJsonPackIndexesRef}; #[allow(unused_imports)] pub(crate) use types::{ MaterializedTrackedStateRow, TrackedStateDeltaRef, TrackedStateFilter, TrackedStateIndexValueRef, TrackedStateKeyRef, TrackedStateProjection, TrackedStateRowRequest, TrackedStateScanRequest, }; ================================================ FILE: packages/engine/src/tracked_state/storage.rs ================================================ use std::collections::HashMap; use crate::json_store::JsonStoreContext; use crate::storage::{KvGetGroup, KvGetRequest, StorageReader, StorageWriteSet}; use crate::tracked_state::codec::PendingChunkWrite; use crate::tracked_state::types::{ TrackedStateDeltaEntry, TrackedStateDeltaRef, TrackedStateRootId, TRACKED_STATE_HASH_BYTES, }; use crate::LixError; pub(crate) const TRACKED_STATE_CHUNK_NAMESPACE: &'static str = "tracked_state.tree.chunk"; pub(crate) const TRACKED_STATE_ROOT_NAMESPACE: &'static str = "tracked_state.tree.root"; pub(crate) const TRACKED_STATE_BY_FILE_ROOT_NAMESPACE: &'static str = "tracked_state.tree.root.by_file"; pub(crate) const TRACKED_STATE_DELTA_PACK_NAMESPACE: &'static str = "tracked_state.delta_pack"; async fn get_one( store: &mut (impl StorageReader + ?Sized), namespace: &str, key: Vec, ) -> Result>, LixError> { Ok(store .get_values(KvGetRequest { groups: vec![KvGetGroup { namespace: namespace.to_string(), keys: vec![key], }], }) .await? .groups .into_iter() .next() .and_then(|group| group.single_value_owned())) } pub(crate) async fn load_root( store: &mut (impl StorageReader + ?Sized), commit_id: &str, ) -> Result, LixError> { let Some(bytes) = get_one( store, TRACKED_STATE_ROOT_NAMESPACE, commit_id.as_bytes().to_vec(), ) .await? 
else { return Ok(None); }; TrackedStateRootId::from_slice(&bytes).map(Some) } pub(crate) fn stage_root( writes: &mut StorageWriteSet, commit_id: &str, root_id: &TrackedStateRootId, ) { writes.put( TRACKED_STATE_ROOT_NAMESPACE, commit_id.as_bytes().to_vec(), root_id.as_bytes().to_vec(), ); } pub(crate) async fn load_by_file_root( store: &mut (impl StorageReader + ?Sized), commit_id: &str, ) -> Result, LixError> { let Some(bytes) = get_one( store, TRACKED_STATE_BY_FILE_ROOT_NAMESPACE, commit_id.as_bytes().to_vec(), ) .await? else { return Ok(None); }; TrackedStateRootId::from_slice(&bytes).map(Some) } pub(crate) fn stage_by_file_root( writes: &mut StorageWriteSet, commit_id: &str, root_id: &TrackedStateRootId, ) { writes.put( TRACKED_STATE_BY_FILE_ROOT_NAMESPACE, commit_id.as_bytes().to_vec(), root_id.as_bytes().to_vec(), ); } pub(crate) async fn load_delta_pack( store: &mut (impl StorageReader + ?Sized), commit_id: &str, ) -> Result>, LixError> { let json_store = JsonStoreContext::new(); let result = store .get_values(KvGetRequest { groups: vec![ KvGetGroup { namespace: TRACKED_STATE_DELTA_PACK_NAMESPACE.to_string(), keys: vec![commit_id.as_bytes().to_vec()], }, json_store.commit_pack_get_group(commit_id, 0), ], }) .await?; let mut groups = result.groups.into_iter(); let delta_group = groups.next().ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked-state delta pack load returned no delta result group", ) })?; let json_pack_group = groups.next().ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked-state delta pack load returned no JSON pack result group", ) })?; let Some(bytes) = delta_group.single_value_owned() else { return Ok(None); }; let pack_refs = if crate::tracked_state::codec::delta_pack_uses_json_pack_indexes(&bytes)? { json_pack_group .single_value_owned() .map(|bytes| json_store.decode_pack_refs(&bytes)) .transpose()? 
} else { None }; let (stored_commit_id, entries) = crate::tracked_state::codec::decode_delta_pack(&bytes, pack_refs.as_deref())?; if stored_commit_id != commit_id { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "tracked-state delta pack identity mismatch: expected '{commit_id}', got '{stored_commit_id}'" ), )); } Ok(Some(entries)) } pub(crate) async fn delta_pack_exists( store: &mut (impl StorageReader + ?Sized), commit_id: &str, ) -> Result { let result = store .exists_many(KvGetRequest { groups: vec![KvGetGroup { namespace: TRACKED_STATE_DELTA_PACK_NAMESPACE.to_string(), keys: vec![commit_id.as_bytes().to_vec()], }], }) .await?; let group = result.groups.into_iter().next().ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked-state delta pack existence check returned no result group", ) })?; group.exists.into_iter().next().ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked-state delta pack existence check returned no result", ) }) } pub(crate) fn stage_delta_pack_refs( writes: &mut StorageWriteSet, commit_id: &str, deltas: &[TrackedStateDeltaRef<'_>], ) -> Result<(), LixError> { writes.put( TRACKED_STATE_DELTA_PACK_NAMESPACE, commit_id.as_bytes().to_vec(), crate::tracked_state::codec::encode_delta_pack_refs(commit_id, deltas)?, ); Ok(()) } pub(crate) struct DeltaJsonPackIndexesRef<'a> { pub(crate) commit_id: &'a str, pub(crate) pack_id: u32, pub(crate) indexes: &'a std::collections::HashMap<[u8; TRACKED_STATE_HASH_BYTES], usize>, } pub(crate) fn stage_delta_pack_refs_with_json_pack_indexes( writes: &mut StorageWriteSet, commit_id: &str, deltas: &[TrackedStateDeltaRef<'_>], json_pack_indexes: DeltaJsonPackIndexesRef<'_>, ) -> Result<(), LixError> { if json_pack_indexes.commit_id != commit_id { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "tracked-state delta JSON pack indexes for '{}' cannot encode delta pack '{}'", json_pack_indexes.commit_id, commit_id ), )); } if json_pack_indexes.pack_id != 0 { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "tracked-state delta JSON pack indexes only support pack 0, got pack {}", json_pack_indexes.pack_id ), )); } if json_pack_indexes.indexes.is_empty() { return stage_delta_pack_refs(writes, commit_id, deltas); } writes.put( TRACKED_STATE_DELTA_PACK_NAMESPACE, commit_id.as_bytes().to_vec(), crate::tracked_state::codec::encode_delta_pack_refs_with_json_pack_indexes( commit_id, deltas, Some(json_pack_indexes.indexes), )?, ); Ok(()) } pub(crate) async fn read_chunk( store: &mut (impl StorageReader + ?Sized), hash: &[u8; TRACKED_STATE_HASH_BYTES], ) -> Result>, LixError> { get_one(store, TRACKED_STATE_CHUNK_NAMESPACE, hash.to_vec()).await } pub(crate) fn verify_chunk_hash( expected: &[u8; TRACKED_STATE_HASH_BYTES], bytes: &[u8], ) -> Result<(), LixError> { let actual = crate::tracked_state::codec::hash_bytes(bytes); if &actual != expected { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state chunk hash mismatch", )); } Ok(()) } pub(crate) fn stage_chunks(writes: &mut StorageWriteSet, chunks: &[PendingChunkWrite]) { for chunk in chunks { writes.put( TRACKED_STATE_CHUNK_NAMESPACE, chunk.hash.to_vec(), chunk.data.clone(), ); } } #[allow(dead_code)] #[derive(Debug, Default)] pub(crate) struct TrackedStateChunkOverlay { chunks: HashMap<[u8; TRACKED_STATE_HASH_BYTES], Vec>, } impl TrackedStateChunkOverlay { pub(crate) fn new() -> Self { Self::default() } pub(crate) async fn read_chunk( &self, store: &mut (impl StorageReader + ?Sized), hash: &[u8; 
TRACKED_STATE_HASH_BYTES], ) -> Result>, LixError> { if let Some(bytes) = self.chunks.get(hash) { return Ok(Some(bytes.clone())); } read_chunk(store, hash).await } pub(crate) fn stage_chunks( &mut self, writes: &mut StorageWriteSet, chunks: &[PendingChunkWrite], ) { for chunk in chunks { self.chunks.insert(chunk.hash, chunk.data.clone()); } stage_chunks(writes, chunks); } } #[cfg(test)] mod tests { use std::fs; use std::path::{Path, PathBuf}; #[test] fn production_tracked_state_sources_do_not_call_storage_batch_writer() { let tracked_state_dir = Path::new(env!("CARGO_MANIFEST_DIR")).join("src/tracked_state"); let forbidden = ["write", "kv", "batch"].join("_"); for path in rust_sources(&tracked_state_dir) { let source = fs::read_to_string(&path).expect("tracked_state source should be readable"); for (line_number, line) in production_lines(&source) { assert!( !line.contains(&forbidden), "production tracked_state source must stage into StorageWriteSet instead of calling {forbidden}: {}:{}", path.display(), line_number ); } } } fn rust_sources(dir: &Path) -> Vec { let mut sources = Vec::new(); for entry in fs::read_dir(dir).expect("tracked_state source dir should be readable") { let path = entry .expect("tracked_state source entry should be readable") .path(); if path.is_dir() { sources.extend(rust_sources(&path)); } else if path.extension().and_then(|extension| extension.to_str()) == Some("rs") { sources.push(path); } } sources } fn production_lines(source: &str) -> Vec<(usize, &str)> { let mut lines = Vec::new(); let mut skipping_cfg_test_item = false; let mut pending_cfg_test = false; let mut item_started = false; let mut brace_depth = 0i32; for (index, line) in source.lines().enumerate() { let trimmed = line.trim(); if trimmed == "#[cfg(test)]" { pending_cfg_test = true; continue; } if pending_cfg_test || skipping_cfg_test_item { if pending_cfg_test && !item_started && trimmed.ends_with(';') { pending_cfg_test = false; continue; } let opens = line.matches('{').count() as i32; let closes = line.matches('}').count() as i32; if opens > 0 { item_started = true; skipping_cfg_test_item = true; } if item_started { brace_depth += opens - closes; if brace_depth <= 0 { pending_cfg_test = false; skipping_cfg_test_item = false; item_started = false; brace_depth = 0; } } continue; } lines.push((index + 1, line)); } lines } } ================================================ FILE: packages/engine/src/tracked_state/tree.rs ================================================ use std::{ collections::{BTreeMap, VecDeque}, future::Future, ops::Range, pin::Pin, }; use crate::storage::{StorageReader, StorageWriteSet}; use crate::tracked_state::codec::{ boundary_trigger, child_summary_from_node, decode_key, decode_key_with_trusted_prefix, decode_node, decode_node_ref, decode_value, decode_visible_value, encode_internal_node, encode_internal_node_refs, encode_key, encode_leaf_node, encode_leaf_node_refs, encode_schema_file_prefix, encode_schema_key_prefix, ChildSummary, ChildSummaryRef, DecodedLeafNodeRef, DecodedNode, DecodedNodeRef, EncodedLeafEntry, EncodedLeafEntryRef, PendingChunkWrite, }; use crate::tracked_state::storage; use crate::tracked_state::types::{ TrackedStateApplyResult, TrackedStateIndexValue, TrackedStateKey, TrackedStateMutation, TrackedStateRootId, TrackedStateTreeDiffEntry, TrackedStateTreeScanRequest, TRACKED_STATE_HASH_BYTES, }; use crate::{LixError, NullableKeyFilter}; #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct TrackedStateTreeOptions { pub(crate) target_chunk_bytes: usize, 
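/// Floor for content-defined splits: the chunkers below only consult `boundary_trigger` (or the max-size cutoff) once a group has reached this many bytes.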
pub(crate) min_chunk_bytes: usize, pub(crate) max_chunk_bytes: usize, } enum MutationApply { Applied(TrackedStateApplyResult), Fallback(T), } impl Default for TrackedStateTreeOptions { fn default() -> Self { Self { target_chunk_bytes: 4 * 1024, min_chunk_bytes: 512, max_chunk_bytes: 16 * 1024, } } } /// Content-addressed tracked-state tree operations. /// /// This type owns tracked-state tree mechanics only. Version refs, untracked overlay, /// and SQL visibility remain outside the tree. #[derive(Debug, Clone)] pub(crate) struct TrackedStateTree { options: TrackedStateTreeOptions, } impl TrackedStateTree { pub(crate) fn new() -> Self { Self { options: TrackedStateTreeOptions::default(), } } #[allow(dead_code)] pub(crate) fn with_options(options: TrackedStateTreeOptions) -> Self { Self { options } } pub(crate) async fn load_root( &self, store: &mut (impl StorageReader + ?Sized), commit_id: &str, ) -> Result, LixError> { storage::load_root(store, commit_id).await } #[cfg(test)] pub(crate) async fn get( &self, store: &mut impl StorageReader, root_id: &TrackedStateRootId, key: &TrackedStateKey, ) -> Result, LixError> { let encoded_key = encode_key(key); let mut current = *root_id.as_bytes(); loop { match self.load_node(store, ¤t).await? { DecodedNode::Leaf(leaf) => { let entry = leaf .entries() .binary_search_by(|entry| entry.key.as_slice().cmp(&encoded_key)) .ok() .map(|index| &leaf.entries()[index]); return entry.map(|entry| decode_value(&entry.value)).transpose(); } DecodedNode::Internal(internal) => { let child = internal .children() .iter() .find(|child| child.last_key.as_slice() >= encoded_key.as_slice()) .or_else(|| internal.children().last()) .ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state tree internal node has no children", ) })?; current = child.child_hash; } } } } pub(crate) async fn get_many( &self, store: &mut impl StorageReader, root_id: &TrackedStateRootId, keys: &[TrackedStateKey], ) -> Result>, LixError> { if keys.is_empty() { return Ok(Vec::new()); } let mut encoded_keys = keys .iter() .enumerate() .map(|(index, key)| (index, encode_key(key))) .collect::>(); encoded_keys.sort_by(|left, right| left.1.cmp(&right.1)); let mut values = vec![None; keys.len()]; self.get_many_node(store, *root_id.as_bytes(), &encoded_keys, &mut values) .await?; Ok(values) } pub(crate) async fn row_count( &self, store: &mut impl StorageReader, root_id: &TrackedStateRootId, ) -> Result { match self.load_node(store, root_id.as_bytes()).await? 
{ DecodedNode::Leaf(leaf) => Ok(leaf.entries().len()), DecodedNode::Internal(internal) => Ok(internal .children() .iter() .map(|child| child.subtree_count as usize) .sum()), } } pub(crate) async fn scan( &self, store: &mut impl StorageReader, root_id: &TrackedStateRootId, request: &TrackedStateTreeScanRequest, ) -> Result, LixError> { if request.limit == Some(0) { return Ok(Vec::new()); } let ranges = scan_ranges(request); let key_decode_hint = scan_key_decode_hint(request, &ranges); let mut rows = Vec::new(); self.scan_node( store, *root_id.as_bytes(), request, &ranges, key_decode_hint, &mut rows, ) .await?; Ok(rows) } pub(crate) async fn count_matching_keys( &self, store: &mut impl StorageReader, root_id: &TrackedStateRootId, request: &TrackedStateTreeScanRequest, ) -> Result { if request.limit == Some(0) { return Ok(0); } let ranges = scan_ranges(request); self.count_matching_keys_node(store, *root_id.as_bytes(), request, &ranges) .await } pub(crate) async fn diff( &self, store: &mut impl StorageReader, left_root: Option<&TrackedStateRootId>, right_root: Option<&TrackedStateRootId>, request: &TrackedStateTreeScanRequest, ) -> Result, LixError> { match (left_root, right_root) { (None, None) => Ok(Vec::new()), (Some(left), Some(right)) if left == right => Ok(Vec::new()), (Some(left), Some(right)) => { let mut out = Vec::new(); self.diff_nodes( store, *left.as_bytes(), *right.as_bytes(), request, &mut out, ) .await?; Ok(out) } (Some(left), None) => Ok(self .collect_filtered_entries(store, left, request) .await? .into_iter() .map(|(key, value)| TrackedStateTreeDiffEntry { before: Some((key, value)), after: None, }) .collect()), (None, Some(right)) => Ok(self .collect_filtered_entries(store, right, request) .await? .into_iter() .map(|(key, value)| TrackedStateTreeDiffEntry { before: None, after: Some((key, value)), }) .collect()), } } pub(crate) async fn apply_mutations( &self, store: &mut (impl StorageReader + ?Sized), writes: &mut StorageWriteSet, base_root: Option<&TrackedStateRootId>, mut mutations: Vec, commit_id: Option<&str>, ) -> Result { let mut overlay = storage::TrackedStateChunkOverlay::new(); if let Some(root_id) = base_root { if mutations.len() == 1 { let mutation = mutations.pop().expect("single mutation should exist"); match self .apply_single_mutation( store, writes, &mut overlay, root_id, mutation, commit_id, ) .await? { MutationApply::Applied(result) => return Ok(result), MutationApply::Fallback(mutation) => mutations = vec![mutation], } } else if mutations.len() > 1 { match self .apply_sorted_mutations_chunker( store, writes, &mut overlay, root_id, mutations, commit_id, ) .await? { MutationApply::Applied(result) => return Ok(result), MutationApply::Fallback(fallback_mutations) => mutations = fallback_mutations, } } } let mut entries = match base_root { Some(root_id) => self .collect_leaf_entries(store, root_id) .await? .into_iter() .map(|entry| (entry.key, entry.value)) .collect::>(), None => BTreeMap::new(), }; // Apply in caller order so repeated writes to the same key behave like // normal transaction staging: the latest mutation wins. 
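// Full-rebuild fallback: every existing leaf entry is collected and the tree is rebuilt from scratch; this path also covers the case of no base root.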
for mutation in mutations { entries.insert(mutation.encoded_key, mutation.encoded_value); } let built = self.build_tree_from_entries( entries .into_iter() .map(|(key, value)| EncodedLeafEntry { key, value }) .collect(), )?; overlay.stage_chunks(writes, &built.chunks); let persisted_root = if let Some(commit_id) = commit_id { storage::stage_root(writes, commit_id, &built.root_id); true } else { false }; Ok(TrackedStateApplyResult { root_id: built.root_id, row_count: built.row_count, tree_height: built.tree_height, chunk_count: built.chunks.len(), chunk_bytes: built.chunk_bytes, persisted_root, }) } async fn apply_single_mutation( &self, store: &mut (impl StorageReader + ?Sized), writes: &mut StorageWriteSet, overlay: &mut storage::TrackedStateChunkOverlay, root_id: &TrackedStateRootId, mutation: TrackedStateMutation, commit_id: Option<&str>, ) -> Result, LixError> { let mutation = match self .apply_single_mutation_from_seek_path( store, writes, overlay, root_id, mutation, commit_id, ) .await? { MutationApply::Applied(result) => return Ok(MutationApply::Applied(result)), MutationApply::Fallback(mutation) => mutation, }; let TrackedStateMutation { encoded_key, encoded_value, } = mutation; let levels = self .collect_summary_levels_with_overlay(store, overlay, root_id) .await?; let Some(leaves) = levels.first() else { return Ok(MutationApply::Fallback(TrackedStateMutation { encoded_key, encoded_value, })); }; let target_leaf_index = leaves .iter() .position(|leaf| leaf.last_key.as_slice() >= encoded_key.as_slice()) .unwrap_or_else(|| leaves.len().saturating_sub(1)); let Some(target_leaf) = leaves.get(target_leaf_index).cloned() else { return Ok(MutationApply::Fallback(TrackedStateMutation { encoded_key, encoded_value, })); }; let mut entries = self .load_leaf_entries_with_overlay(store, overlay, &target_leaf.child_hash) .await?; let mutation_entry_index = match entries .binary_search_by(|entry| entry.key.as_slice().cmp(encoded_key.as_slice())) { Ok(index) => { if entries[index].value.as_slice() == encoded_value.as_slice() { return Ok(MutationApply::Fallback(TrackedStateMutation { encoded_key, encoded_value, })); } entries[index].value = encoded_value; index } Err(index) => { entries.insert( index, EncodedLeafEntry { key: encoded_key, value: encoded_value, }, ); index } }; let mut chunks = BTreeMap::new(); let mut suffix_entries = entries; let mut next_leaf_index = target_leaf_index + 1; let mut replacement_leaves; let old_leaf_count; // Rechunk from the edited leaf until a generated leaf matches an // existing post-mutation leaf, then reuse the rest of the old suffix. 
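// `first_resync_index` reports where the freshly chunked leaves line up with the old leaf summaries again, so the untouched suffix of the tree is reused verbatim. Illustrative example: an edit in the second of three leaves typically rewrites that leaf (and at most a few neighbours) while the rest of the tree keeps its existing chunks.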
loop { let mut candidate_chunks = BTreeMap::new(); let candidate_summaries = self.build_leaf_level_from_refs( suffix_entries.iter().map(EncodedLeafEntry::as_ref), &mut candidate_chunks, ); if let Some((generated_resync_index, existing_resync_index)) = first_resync_index( &candidate_summaries, &leaves[target_leaf_index..], suffix_entries[mutation_entry_index].key.as_slice(), ) { for summary in &candidate_summaries[..generated_resync_index] { if let Some(chunk) = candidate_chunks.remove(&summary.child_hash) { chunks.entry(chunk.hash).or_insert(chunk); } } replacement_leaves = candidate_summaries .into_iter() .take(generated_resync_index) .collect(); old_leaf_count = existing_resync_index; break; } if next_leaf_index >= leaves.len() { chunks.extend(candidate_chunks); replacement_leaves = candidate_summaries; old_leaf_count = leaves.len() - target_leaf_index; break; } suffix_entries.extend( self.load_leaf_entries_with_overlay( store, overlay, &leaves[next_leaf_index].child_hash, ) .await?, ); next_leaf_index += 1; } let built = self.build_tree_from_leaf_patch( &levels, target_leaf_index, old_leaf_count, std::mem::take(&mut replacement_leaves), chunks, suffix_entries[mutation_entry_index].key.as_slice(), )?; overlay.stage_chunks(writes, &built.chunks); let persisted_root = if let Some(commit_id) = commit_id { storage::stage_root(writes, commit_id, &built.root_id); true } else { false }; Ok(MutationApply::Applied(TrackedStateApplyResult { root_id: built.root_id, row_count: built.row_count, tree_height: built.tree_height, chunk_count: built.chunks.len(), chunk_bytes: built.chunk_bytes, persisted_root, })) } fn diff_nodes<'a, S>( &'a self, store: &'a mut S, left_hash: [u8; TRACKED_STATE_HASH_BYTES], right_hash: [u8; TRACKED_STATE_HASH_BYTES], request: &'a TrackedStateTreeScanRequest, out: &'a mut Vec, ) -> Pin> + 'a>> where S: StorageReader + 'a, { Box::pin(async move { if left_hash == right_hash { return Ok(()); } let left = self.load_node(store, &left_hash).await?; let right = self.load_node(store, &right_hash).await?; match (left, right) { (DecodedNode::Leaf(left), DecodedNode::Leaf(right)) => { self.diff_leaf_entries(left.entries(), right.entries(), request, out)?; } (DecodedNode::Internal(left), DecodedNode::Internal(right)) if internal_boundaries_match(left.children(), right.children()) => { for (left_child, right_child) in left.children().iter().zip(right.children()) { if left_child == right_child { continue; } self.diff_nodes( store, left_child.child_hash, right_child.child_hash, request, out, ) .await?; } } _ => { self.diff_leaf_summary_cursors(store, left_hash, right_hash, request, out) .await?; } } Ok(()) }) } async fn diff_leaf_summary_cursors( &self, store: &mut impl StorageReader, left_hash: [u8; TRACKED_STATE_HASH_BYTES], right_hash: [u8; TRACKED_STATE_HASH_BYTES], request: &TrackedStateTreeScanRequest, out: &mut Vec, ) -> Result<(), LixError> { let mut left = LeafSummaryCursor::new(self, store, left_hash).await?; let mut right = LeafSummaryCursor::new(self, store, right_hash).await?; let mut left_window = Vec::new(); let mut right_window = Vec::new(); loop { match (left.current(), right.current()) { (Some(left_leaf), Some(right_leaf)) if left_leaf == right_leaf => { self.diff_leaf_summary_window(store, &left_window, &right_window, request, out) .await?; left_window.clear(); right_window.clear(); left.advance(self, store).await?; right.advance(self, store).await?; } (Some(left_leaf), Some(right_leaf)) => { match left_leaf.last_key.cmp(&right_leaf.last_key) { std::cmp::Ordering::Less 
=> { left_window.push(left_leaf.clone()); left.advance(self, store).await?; } std::cmp::Ordering::Greater => { right_window.push(right_leaf.clone()); right.advance(self, store).await?; } std::cmp::Ordering::Equal => { left_window.push(left_leaf.clone()); right_window.push(right_leaf.clone()); left.advance(self, store).await?; right.advance(self, store).await?; } } } (Some(left_leaf), None) => { left_window.push(left_leaf.clone()); left.advance(self, store).await?; } (None, Some(right_leaf)) => { right_window.push(right_leaf.clone()); right.advance(self, store).await?; } (None, None) => { self.diff_leaf_summary_window(store, &left_window, &right_window, request, out) .await?; return Ok(()); } } } } async fn diff_leaf_summary_window( &self, store: &mut impl StorageReader, left_leaves: &[ChildSummary], right_leaves: &[ChildSummary], request: &TrackedStateTreeScanRequest, out: &mut Vec, ) -> Result<(), LixError> { if left_leaves.is_empty() && right_leaves.is_empty() { return Ok(()); } let left_entries = self .collect_entries_from_leaf_summaries(store, left_leaves) .await?; let right_entries = self .collect_entries_from_leaf_summaries(store, right_leaves) .await?; self.diff_leaf_entries(&left_entries, &right_entries, request, out) } fn diff_leaf_entries( &self, left: &[EncodedLeafEntry], right: &[EncodedLeafEntry], request: &TrackedStateTreeScanRequest, out: &mut Vec, ) -> Result<(), LixError> { let mut left_index = 0usize; let mut right_index = 0usize; while left_index < left.len() && right_index < right.len() { match left[left_index].key.cmp(&right[right_index].key) { std::cmp::Ordering::Less => { self.push_removed_diff(&left[left_index], request, out)?; left_index += 1; } std::cmp::Ordering::Greater => { self.push_added_diff(&right[right_index], request, out)?; right_index += 1; } std::cmp::Ordering::Equal => { if left[left_index].value != right[right_index].value { self.push_modified_diff( &left[left_index], &right[right_index], request, out, )?; } left_index += 1; right_index += 1; } } } for entry in &left[left_index..] { self.push_removed_diff(entry, request, out)?; } for entry in &right[right_index..] 
{ self.push_added_diff(entry, request, out)?; } Ok(()) } fn push_removed_diff( &self, entry: &EncodedLeafEntry, request: &TrackedStateTreeScanRequest, out: &mut Vec, ) -> Result<(), LixError> { let (key, value) = decode_entry(entry)?; if request.matches(&key, &value) { out.push(TrackedStateTreeDiffEntry { before: Some((key, value)), after: None, }); } Ok(()) } fn push_added_diff( &self, entry: &EncodedLeafEntry, request: &TrackedStateTreeScanRequest, out: &mut Vec, ) -> Result<(), LixError> { let (key, value) = decode_entry(entry)?; if request.matches(&key, &value) { out.push(TrackedStateTreeDiffEntry { before: None, after: Some((key, value)), }); } Ok(()) } fn push_modified_diff( &self, left: &EncodedLeafEntry, right: &EncodedLeafEntry, request: &TrackedStateTreeScanRequest, out: &mut Vec, ) -> Result<(), LixError> { let (left_key, left_value) = decode_entry(left)?; let (right_key, right_value) = decode_entry(right)?; if request.matches(&left_key, &left_value) || request.matches(&right_key, &right_value) { out.push(TrackedStateTreeDiffEntry { before: Some((left_key, left_value)), after: Some((right_key, right_value)), }); } Ok(()) } async fn apply_sorted_mutations_chunker( &self, store: &mut (impl StorageReader + ?Sized), writes: &mut StorageWriteSet, overlay: &mut storage::TrackedStateChunkOverlay, root_id: &TrackedStateRootId, mutations: Vec, commit_id: Option<&str>, ) -> Result>, LixError> { let mut mutation_map = BTreeMap::new(); for mutation in mutations { mutation_map.insert(mutation.encoded_key, mutation.encoded_value); } if mutation_map.is_empty() { return Ok(MutationApply::Fallback(Vec::new())); } let levels = self .collect_summary_levels_with_overlay(store, overlay, root_id) .await?; let Some(leaves) = levels.first() else { return Ok(MutationApply::Fallback( mutation_map .into_iter() .map(|(encoded_key, encoded_value)| TrackedStateMutation { encoded_key, encoded_value, }) .collect(), )); }; let base_row_count = leaves .iter() .map(|leaf| leaf.subtree_count as usize) .sum::(); let first_mutation_key = mutation_map .keys() .next() .expect("non-empty mutation map should have first key"); let append_only = leaves .last() .is_some_and(|leaf| first_mutation_key.as_slice() > leaf.last_key.as_slice()); if !append_only && mutation_map.len() * 2 > base_row_count { return Ok(MutationApply::Fallback( mutation_map .into_iter() .map(|(encoded_key, encoded_value)| TrackedStateMutation { encoded_key, encoded_value, }) .collect(), )); } let mut mutations = mutation_map.into_iter().collect::>(); let mut output_leaves = Vec::new(); let mut chunks = BTreeMap::new(); let mut leaf_index = 0usize; while leaf_index < leaves.len() { let current_leaf_has_mutation = mutations .front() .is_some_and(|(key, _)| key.as_slice() <= leaves[leaf_index].last_key.as_slice()); if !current_leaf_has_mutation { output_leaves.push(leaves[leaf_index].clone()); leaf_index += 1; continue; } let window_start = leaf_index; let mut window_entries = BTreeMap::new(); let mut window_mutation_ceiling = mutations .front() .map(|(key, _)| key.clone()) .expect("window with mutation should have front mutation"); loop { if leaf_index < leaves.len() { let leaf = &leaves[leaf_index]; for entry in self .load_leaf_entries_with_overlay(store, overlay, &leaf.child_hash) .await? 
{ window_entries.insert(entry.key, entry.value); } while mutations .front() .is_some_and(|(key, _)| key.as_slice() <= leaf.last_key.as_slice()) { let (key, value) = mutations .pop_front() .expect("front mutation should be present"); window_mutation_ceiling = key.clone(); window_entries.insert(key, value); } leaf_index += 1; } while let Some((key, _)) = mutations.front() { if leaf_index < leaves.len() && key.as_slice() >= leaves[leaf_index].first_key.as_slice() { break; } let (key, value) = mutations .pop_front() .expect("front mutation should be present"); window_mutation_ceiling = key.clone(); window_entries.insert(key, value); } if leaf_index < leaves.len() && mutations.front().is_some_and(|(key, _)| { key.as_slice() <= leaves[leaf_index].last_key.as_slice() }) { continue; } let mut candidate_chunks = BTreeMap::new(); let candidate_leaves = self.build_leaf_level_from_refs( window_entries .iter() .map(|(key, value)| EncodedLeafEntryRef { key, value }), &mut candidate_chunks, ); if let Some((generated_resync_index, existing_resync_index)) = first_resync_index( &candidate_leaves, &leaves[window_start..], &window_mutation_ceiling, ) { for summary in &candidate_leaves[..generated_resync_index] { if let Some(chunk) = candidate_chunks.remove(&summary.child_hash) { chunks.entry(chunk.hash).or_insert(chunk); } } output_leaves.extend(candidate_leaves.into_iter().take(generated_resync_index)); leaf_index = window_start + existing_resync_index; break; } if leaf_index >= leaves.len() { chunks.extend(candidate_chunks); output_leaves.extend(candidate_leaves); break; } } } if !mutations.is_empty() { let entries = mutations .into_iter() .map(|(key, value)| EncodedLeafEntry { key, value }) .collect(); output_leaves.extend(self.build_leaf_level(entries, &mut chunks)); } let built = self.build_tree_from_leaf_summaries(output_leaves, chunks)?; Ok(MutationApply::Applied( self.persist_built_tree(writes, overlay, built, commit_id) .await?, )) } async fn apply_single_mutation_from_seek_path( &self, store: &mut (impl StorageReader + ?Sized), writes: &mut StorageWriteSet, overlay: &mut storage::TrackedStateChunkOverlay, root_id: &TrackedStateRootId, mutation: TrackedStateMutation, commit_id: Option<&str>, ) -> Result, LixError> { let TrackedStateMutation { encoded_key, encoded_value, } = mutation; let mut current = *root_id.as_bytes(); let mut path = Vec::new(); let mut entries = loop { match self .load_node_with_overlay(store, overlay, ¤t) .await? 
{ DecodedNode::Leaf(leaf) => break leaf.entries().to_vec(), DecodedNode::Internal(internal) => { let children = internal.children().to_vec(); let child_index = children .iter() .position(|child| child.last_key.as_slice() >= encoded_key.as_slice()) .or_else(|| (!children.is_empty()).then_some(children.len() - 1)) .ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state tree internal node has no children", ) })?; current = children[child_index].child_hash; path.push(SeekPathFrame { children, child_index, }); } } }; let mutation_entry_index = match entries .binary_search_by(|entry| entry.key.as_slice().cmp(encoded_key.as_slice())) { Ok(index) => { if entries[index].value.as_slice() == encoded_value.as_slice() { return Ok(MutationApply::Fallback(TrackedStateMutation { encoded_key, encoded_value, })); } entries[index].value = encoded_value; index } Err(index) => { entries.insert( index, EncodedLeafEntry { key: encoded_key, value: encoded_value, }, ); index } }; let mut chunks = BTreeMap::new(); let mut replacement_children; let mut old_child_count; let Some(leaf_parent) = path.pop() else { let built = self.build_tree_from_entries(entries)?; return Ok(MutationApply::Applied( self.persist_built_tree(writes, overlay, built, commit_id) .await?, )); }; let mutation_is_right_edge = leaf_parent.child_index + 1 == leaf_parent.children.len() && path .iter() .all(|frame| frame.child_index + 1 == frame.children.len()); let mut leaf_entries = entries; let mut next_leaf_index = leaf_parent.child_index + 1; loop { let mut candidate_chunks = BTreeMap::new(); let candidate_leaves = self.build_leaf_level_from_refs( leaf_entries.iter().map(EncodedLeafEntry::as_ref), &mut candidate_chunks, ); if let Some((generated_resync_index, existing_resync_index)) = first_resync_index( &candidate_leaves, &leaf_parent.children[leaf_parent.child_index..], leaf_entries[mutation_entry_index].key.as_slice(), ) { for summary in &candidate_leaves[..generated_resync_index] { if let Some(chunk) = candidate_chunks.remove(&summary.child_hash) { chunks.entry(chunk.hash).or_insert(chunk); } } replacement_children = candidate_leaves .into_iter() .take(generated_resync_index) .collect(); old_child_count = existing_resync_index; break; } if next_leaf_index >= leaf_parent.children.len() { if !mutation_is_right_edge { let entry = leaf_entries.remove(mutation_entry_index); return Ok(MutationApply::Fallback(TrackedStateMutation { encoded_key: entry.key, encoded_value: entry.value, })); } chunks.extend(candidate_chunks); replacement_children = candidate_leaves; old_child_count = leaf_parent.children.len() - leaf_parent.child_index; break; } leaf_entries.extend( self.load_leaf_entries_with_overlay( store, overlay, &leaf_parent.children[next_leaf_index].child_hash, ) .await?, ); next_leaf_index += 1; } let mut child_index = leaf_parent.child_index; let mut children = leaf_parent.children; let mut parent_level = 1usize; loop { children.splice( child_index..child_index + old_child_count, replacement_children, ); replacement_children = self.build_internal_level(children, parent_level, &mut chunks); old_child_count = 1; let Some(frame) = path.pop() else { let mut summaries = replacement_children; let mut tree_height = parent_level + 1; while summaries.len() > 1 { summaries = self.build_internal_level(summaries, tree_height, &mut chunks); tree_height += 1; } let root = summaries.pop().ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state seek-path mutation produced no root", ) })?; let chunks = chunks.into_values().collect::>(); 
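// `chunks` now holds every node regenerated along the mutation path; the byte total below feeds the stats reported back in `TrackedStateApplyResult`.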
let chunk_bytes = chunks.iter().map(|chunk| chunk.data.len()).sum(); let built = BuiltTree { root_id: TrackedStateRootId::new(root.child_hash), chunks, row_count: root.subtree_count as usize, tree_height, chunk_bytes, }; return Ok(MutationApply::Applied( self.persist_built_tree(writes, overlay, built, commit_id) .await?, )); }; child_index = frame.child_index; children = frame.children; parent_level += 1; } } async fn persist_built_tree( &self, writes: &mut StorageWriteSet, overlay: &mut storage::TrackedStateChunkOverlay, built: BuiltTree, commit_id: Option<&str>, ) -> Result { overlay.stage_chunks(writes, &built.chunks); let persisted_root = if let Some(commit_id) = commit_id { storage::stage_root(writes, commit_id, &built.root_id); true } else { false }; Ok(TrackedStateApplyResult { root_id: built.root_id, row_count: built.row_count, tree_height: built.tree_height, chunk_count: built.chunks.len(), chunk_bytes: built.chunk_bytes, persisted_root, }) } fn build_tree_from_entries( &self, entries: Vec, ) -> Result { let row_count = entries.len(); let mut chunks = BTreeMap::<[u8; TRACKED_STATE_HASH_BYTES], PendingChunkWrite>::new(); let mut summaries = self.build_leaf_level(entries, &mut chunks); let mut tree_height = 1usize; while summaries.len() > 1 { summaries = self.build_internal_level(summaries, tree_height, &mut chunks); tree_height += 1; } let root = summaries.pop().ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state tree tree build produced no root", ) })?; let chunks = chunks.into_values().collect::>(); let chunk_bytes = chunks.iter().map(|chunk| chunk.data.len()).sum(); Ok(BuiltTree { root_id: TrackedStateRootId::new(root.child_hash), chunks, row_count, tree_height, chunk_bytes, }) } fn build_tree_from_leaf_summaries( &self, leaf_summaries: Vec, mut chunks: BTreeMap<[u8; TRACKED_STATE_HASH_BYTES], PendingChunkWrite>, ) -> Result { let row_count = leaf_summaries .iter() .map(|summary| summary.subtree_count as usize) .sum(); let mut summaries = leaf_summaries; let mut tree_height = 1usize; while summaries.len() > 1 { summaries = self.build_internal_level(summaries, tree_height, &mut chunks); tree_height += 1; } let root = summaries.pop().ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state tree build from leaves produced no root", ) })?; let chunks = chunks.into_values().collect::>(); let chunk_bytes = chunks.iter().map(|chunk| chunk.data.len()).sum(); Ok(BuiltTree { root_id: TrackedStateRootId::new(root.child_hash), chunks, row_count, tree_height, chunk_bytes, }) } fn build_tree_from_leaf_patch( &self, levels: &[Vec], leaf_start: usize, old_leaf_count: usize, replacement_leaves: Vec, mut chunks: BTreeMap<[u8; TRACKED_STATE_HASH_BYTES], PendingChunkWrite>, mutation_key: &[u8], ) -> Result { if levels.len() <= 1 { let mut leaves = levels.first().cloned().unwrap_or_default(); leaves.splice(leaf_start..leaf_start + old_leaf_count, replacement_leaves); return self.build_tree_from_leaf_summaries(leaves, chunks); } let mut child_start = leaf_start; let mut old_child_count = old_leaf_count; let mut replacement_children = replacement_leaves; for level in 0..levels.len() - 1 { let patch = self.patch_parent_level( &levels[level], &levels[level + 1], child_start, old_child_count, replacement_children, level + 1, &mut chunks, mutation_key, )?; child_start = patch.parent_start; old_child_count = patch.old_parent_count; replacement_children = patch.replacement_parents; } let mut summaries = replacement_children; let mut tree_height = levels.len(); while summaries.len() 
> 1 { summaries = self.build_internal_level(summaries, tree_height, &mut chunks); tree_height += 1; } let root = summaries.pop().ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state patched tree produced no root", ) })?; let chunks = chunks.into_values().collect::>(); let chunk_bytes = chunks.iter().map(|chunk| chunk.data.len()).sum(); Ok(BuiltTree { root_id: TrackedStateRootId::new(root.child_hash), chunks, row_count: root.subtree_count as usize, tree_height, chunk_bytes, }) } fn patch_parent_level( &self, old_children: &[ChildSummary], old_parents: &[ChildSummary], child_start: usize, old_child_count: usize, replacement_children: Vec, parent_level: usize, chunks: &mut BTreeMap<[u8; TRACKED_STATE_HASH_BYTES], PendingChunkWrite>, mutation_key: &[u8], ) -> Result { if old_parents.is_empty() { return Ok(ParentLevelPatch { parent_start: 0, old_parent_count: 0, replacement_parents: self.build_internal_level( replacement_children, parent_level, chunks, ), }); } let parent_start = parent_index_for_child_index(old_children, old_parents, child_start); let parent_child_range = child_range_for_parent(old_children, &old_parents[parent_start])?; let old_child_end = child_start + old_child_count; let parent_end = if old_child_count == 0 { parent_start } else { parent_index_for_child_index(old_children, old_parents, old_child_end - 1) }; let parent_end_child_range = child_range_for_parent(old_children, &old_parents[parent_end])?; let mut window_children = Vec::new(); window_children.extend( old_children[parent_child_range.start..child_start] .iter() .map(ChildSummary::as_ref), ); window_children.extend(replacement_children.iter().map(ChildSummary::as_ref)); window_children.extend( old_children[old_child_end..parent_end_child_range.end] .iter() .map(ChildSummary::as_ref), ); let mut next_parent_index = parent_end + 1; loop { let mut candidate_chunks = BTreeMap::new(); let candidate_parents = self.build_internal_level_from_refs( window_children.iter().copied(), parent_level, &mut candidate_chunks, ); if let Some((generated_resync_index, existing_resync_index)) = first_resync_index( &candidate_parents, &old_parents[parent_start..], mutation_key, ) { for summary in &candidate_parents[..generated_resync_index] { if let Some(chunk) = candidate_chunks.remove(&summary.child_hash) { chunks.entry(chunk.hash).or_insert(chunk); } } return Ok(ParentLevelPatch { parent_start, old_parent_count: existing_resync_index, replacement_parents: candidate_parents .into_iter() .take(generated_resync_index) .collect(), }); } if next_parent_index >= old_parents.len() { chunks.extend(candidate_chunks); return Ok(ParentLevelPatch { parent_start, old_parent_count: old_parents.len() - parent_start, replacement_parents: candidate_parents, }); } let next_range = child_range_for_parent(old_children, &old_parents[next_parent_index])?; window_children.extend(old_children[next_range].iter().map(ChildSummary::as_ref)); next_parent_index += 1; } } fn build_leaf_level( &self, entries: Vec, chunks: &mut BTreeMap<[u8; TRACKED_STATE_HASH_BYTES], PendingChunkWrite>, ) -> Vec { let groups = chunk_leaf_entries(entries, &self.options); groups .into_iter() .map(|group| { let subtree_count = group.entries.len() as u64; let first_key = group .entries .first() .map(|entry| entry.key.clone()) .unwrap_or_default(); let last_key = group .entries .last() .map(|entry| entry.key.clone()) .unwrap_or_default(); let node = encode_leaf_node(&group.entries); let (chunk, summary) = child_summary_from_node(node, first_key, last_key, subtree_count); 
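// Chunks are keyed by their content hash, so structurally identical nodes are staged at most once.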
chunks.entry(chunk.hash).or_insert(chunk); summary }) .collect() } fn build_leaf_level_from_refs<'a>( &self, entries: impl IntoIterator>, chunks: &mut BTreeMap<[u8; TRACKED_STATE_HASH_BYTES], PendingChunkWrite>, ) -> Vec { let groups = chunk_leaf_entry_refs(entries, &self.options); groups .into_iter() .map(|group| { let subtree_count = group.entries.len() as u64; let first_key = group .entries .first() .map(|entry| entry.key.to_vec()) .unwrap_or_default(); let last_key = group .entries .last() .map(|entry| entry.key.to_vec()) .unwrap_or_default(); let node = encode_leaf_node_refs(&group.entries); let (chunk, summary) = child_summary_from_node(node, first_key, last_key, subtree_count); chunks.entry(chunk.hash).or_insert(chunk); summary }) .collect() } fn build_internal_level( &self, children: Vec, level: usize, chunks: &mut BTreeMap<[u8; TRACKED_STATE_HASH_BYTES], PendingChunkWrite>, ) -> Vec { let groups = chunk_internal_entries(children, &self.options, level); groups .into_iter() .map(|group| { let subtree_count = group.children.iter().map(|child| child.subtree_count).sum(); let first_key = group .children .first() .map(|child| child.first_key.clone()) .unwrap_or_default(); let last_key = group .children .last() .map(|child| child.last_key.clone()) .unwrap_or_default(); let node = encode_internal_node(&group.children); let (chunk, summary) = child_summary_from_node(node, first_key, last_key, subtree_count); chunks.entry(chunk.hash).or_insert(chunk); summary }) .collect() } fn build_internal_level_from_refs<'a>( &self, children: impl IntoIterator>, level: usize, chunks: &mut BTreeMap<[u8; TRACKED_STATE_HASH_BYTES], PendingChunkWrite>, ) -> Vec { let groups = chunk_internal_entry_refs(children, &self.options, level); groups .into_iter() .map(|group| { let subtree_count = group.children.iter().map(|child| child.subtree_count).sum(); let first_key = group .children .first() .map(|child| child.first_key.to_vec()) .unwrap_or_default(); let last_key = group .children .last() .map(|child| child.last_key.to_vec()) .unwrap_or_default(); let node = encode_internal_node_refs(&group.children); let (chunk, summary) = child_summary_from_node(node, first_key, last_key, subtree_count); chunks.entry(chunk.hash).or_insert(chunk); summary }) .collect() } async fn collect_leaf_entries( &self, store: &mut (impl StorageReader + ?Sized), root_id: &TrackedStateRootId, ) -> Result, LixError> { let mut out = Vec::new(); let mut current = vec![*root_id.as_bytes()]; while !current.is_empty() { let mut next = Vec::new(); for hash in current { match self.load_node(store, &hash).await? { DecodedNode::Leaf(leaf) => out.extend(leaf.entries().iter().cloned()), DecodedNode::Internal(internal) => { next.extend(internal.children().iter().map(|child| child.child_hash)); } } } current = next; } Ok(out) } async fn collect_filtered_entries( &self, store: &mut impl StorageReader, root_id: &TrackedStateRootId, request: &TrackedStateTreeScanRequest, ) -> Result, LixError> { self.scan(store, root_id, request).await } fn scan_node<'a, S>( &'a self, store: &'a mut S, hash: [u8; TRACKED_STATE_HASH_BYTES], request: &'a TrackedStateTreeScanRequest, ranges: &'a [EncodedScanRange], key_decode_hint: Option>, rows: &'a mut Vec<(TrackedStateKey, TrackedStateIndexValue)>, ) -> Pin> + Send + 'a>> where S: StorageReader + Send + 'a, { Box::pin(async move { let bytes = self.load_node_bytes(store, &hash).await?; match decode_node_ref(&bytes)? 
{ DecodedNodeRef::Leaf(leaf) => { for index in 0..leaf.len() { if scan_limit_reached(request, rows.len()) { break; } let entry = leaf.entry(index)?.ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state leaf entry disappeared during scan", ) })?; if !encoded_key_in_scan_ranges(entry.key, ranges) { continue; } let key = match key_decode_hint { Some(hint) => decode_key_with_trusted_prefix( entry.key, hint.schema_key, hint.file_id, hint.prefix_len, )?, None => decode_key(entry.key)?, }; if key_decode_hint.is_none() && !key_matches_scan_filters(request, &key) { continue; } let Some(value) = decode_visible_value(entry.value, request.include_tombstones)? else { continue; }; if key_decode_hint.is_some() || request.matches(&key, &value) { rows.push((key, value)); } } } DecodedNodeRef::Internal(internal) => { for child in internal.children() { if scan_limit_reached(request, rows.len()) { break; } if child_summary_overlaps_scan_ranges(child, ranges) { self.scan_node( store, child.child_hash, request, ranges, key_decode_hint, rows, ) .await?; } } } } Ok(()) }) } fn get_many_node<'a, S>( &'a self, store: &'a mut S, hash: [u8; TRACKED_STATE_HASH_BYTES], encoded_keys: &'a [(usize, Vec)], values: &'a mut [Option], ) -> Pin> + Send + 'a>> where S: StorageReader + Send + 'a, { Box::pin(async move { if encoded_keys.is_empty() { return Ok(()); } let bytes = self.load_node_bytes(store, &hash).await?; match decode_node_ref(&bytes)? { DecodedNodeRef::Leaf(leaf) => { for (original_index, encoded_key) in encoded_keys { if let Some(entry_index) = binary_search_leaf_key(&leaf, encoded_key)? { let entry = leaf.entry(entry_index)?.ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state leaf entry disappeared during get_many", ) })?; values[*original_index] = Some(decode_value(entry.value)?); } } } DecodedNodeRef::Internal(internal) => { let mut start = 0usize; let children = internal.children(); for (child_index, child) in children.iter().enumerate() { if start >= encoded_keys.len() { break; } let mut end = start; if child_index + 1 == children.len() { end = encoded_keys.len(); } else { while end < encoded_keys.len() && encoded_keys[end].1.as_slice() <= child.last_key.as_slice() { end += 1; } } if start < end { self.get_many_node( store, child.child_hash, &encoded_keys[start..end], values, ) .await?; } start = end; } } } Ok(()) }) } fn count_matching_keys_node<'a, S>( &'a self, store: &'a mut S, hash: [u8; TRACKED_STATE_HASH_BYTES], request: &'a TrackedStateTreeScanRequest, ranges: &'a [EncodedScanRange], ) -> Pin> + Send + 'a>> where S: StorageReader + Send + 'a, { Box::pin(async move { let mut count = 0usize; match self.load_node(store, &hash).await? 
{ DecodedNode::Leaf(leaf) => { for entry in leaf.entries() { if !encoded_key_in_scan_ranges(&entry.key, ranges) { continue; } let key = decode_key(&entry.key)?; if key_matches_scan_filters(request, &key) { count += 1; } } } DecodedNode::Internal(internal) => { for child in internal.children() { if child_summary_contained_by_scan_ranges(child, ranges) && request.entity_ids.is_empty() { count += child.subtree_count as usize; } else if child_summary_overlaps_scan_ranges(child, ranges) { count += self .count_matching_keys_node(store, child.child_hash, request, ranges) .await?; } } } } Ok(count) }) } async fn collect_entries_from_leaf_summaries( &self, store: &mut impl StorageReader, leaves: &[ChildSummary], ) -> Result, LixError> { let mut entries = Vec::new(); for leaf in leaves { entries.extend(self.load_leaf_entries(store, &leaf.child_hash).await?); } Ok(entries) } async fn collect_summary_levels_with_overlay( &self, store: &mut (impl StorageReader + ?Sized), overlay: &storage::TrackedStateChunkOverlay, root_id: &TrackedStateRootId, ) -> Result>, LixError> { let mut levels = Vec::new(); self.collect_summary_levels_for_node_with_overlay( store, overlay, *root_id.as_bytes(), &mut levels, ) .await?; Ok(levels) } fn collect_summary_levels_for_node_with_overlay<'a, S>( &'a self, store: &'a mut S, overlay: &'a storage::TrackedStateChunkOverlay, hash: [u8; TRACKED_STATE_HASH_BYTES], levels: &'a mut Vec>, ) -> Pin> + 'a>> where S: StorageReader + ?Sized + 'a, { Box::pin(async move { match self.load_node_with_overlay(store, overlay, &hash).await? { DecodedNode::Leaf(leaf) => { let summary = leaf_summary(hash, leaf.entries()); push_level_summary(levels, 0, summary.clone()); Ok((summary, 0)) } DecodedNode::Internal(internal) => { let children = internal.children().to_vec(); let child_height = match children.first() { Some(child) => match self .load_node_with_overlay(store, overlay, &child.child_hash) .await? { DecodedNode::Leaf(_) => { if levels.is_empty() { levels.push(Vec::new()); } levels[0].extend(children.iter().cloned()); 0 } DecodedNode::Internal(_) => { let mut child_height = None; for child in &children { let (_, height) = self .collect_summary_levels_for_node_with_overlay( store, overlay, child.child_hash, levels, ) .await?; child_height = Some(height); } child_height.unwrap_or(0) } }, None => 0, }; let height = child_height + 1; let summary = internal_summary(hash, &children)?; push_level_summary(levels, height, summary.clone()); Ok((summary, height)) } } }) } async fn load_leaf_entries( &self, store: &mut (impl StorageReader + ?Sized), hash: &[u8; TRACKED_STATE_HASH_BYTES], ) -> Result, LixError> { match self.load_node(store, hash).await? { DecodedNode::Leaf(leaf) => Ok(leaf.entries().to_vec()), DecodedNode::Internal(_) => Err(LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state expected leaf chunk but found internal node", )), } } async fn load_leaf_entries_with_overlay( &self, store: &mut (impl StorageReader + ?Sized), overlay: &storage::TrackedStateChunkOverlay, hash: &[u8; TRACKED_STATE_HASH_BYTES], ) -> Result, LixError> { match self.load_node_with_overlay(store, overlay, hash).await? 
{ DecodedNode::Leaf(leaf) => Ok(leaf.entries().to_vec()), DecodedNode::Internal(_) => Err(LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state expected leaf chunk but found internal node", )), } } async fn load_node( &self, store: &mut (impl StorageReader + ?Sized), hash: &[u8; TRACKED_STATE_HASH_BYTES], ) -> Result { let bytes = self.load_node_bytes(store, hash).await?; decode_node(&bytes) } async fn load_node_bytes( &self, store: &mut (impl StorageReader + ?Sized), hash: &[u8; TRACKED_STATE_HASH_BYTES], ) -> Result, LixError> { let bytes = storage::read_chunk(store, hash).await?.ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "tracked-state tree chunk is missing") })?; storage::verify_chunk_hash(hash, &bytes)?; Ok(bytes) } async fn load_node_with_overlay( &self, store: &mut (impl StorageReader + ?Sized), overlay: &storage::TrackedStateChunkOverlay, hash: &[u8; TRACKED_STATE_HASH_BYTES], ) -> Result { let bytes = overlay.read_chunk(store, hash).await?.ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "tracked-state tree chunk is missing") })?; storage::verify_chunk_hash(hash, &bytes)?; decode_node(&bytes) } } #[derive(Debug)] struct BuiltTree { root_id: TrackedStateRootId, chunks: Vec, row_count: usize, tree_height: usize, chunk_bytes: usize, } struct ParentLevelPatch { parent_start: usize, old_parent_count: usize, replacement_parents: Vec, } struct SeekPathFrame { children: Vec, child_index: usize, } #[derive(Debug, Clone)] struct EncodedScanRange { start: Vec, end: Option>, } #[derive(Debug, Clone, Copy)] struct ScanKeyDecodeHint<'a> { schema_key: &'a str, file_id: Option<&'a str>, prefix_len: usize, } fn binary_search_leaf_key( leaf: &DecodedLeafNodeRef<'_>, encoded_key: &[u8], ) -> Result, LixError> { let mut low = 0usize; let mut high = leaf.len(); while low < high { let mid = low + (high - low) / 2; let key = leaf.key(mid)?.ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state leaf key disappeared during binary search", ) })?; match key.cmp(encoded_key) { std::cmp::Ordering::Less => low = mid + 1, std::cmp::Ordering::Equal => return Ok(Some(mid)), std::cmp::Ordering::Greater => high = mid, } } Ok(None) } struct LeafSummaryCursor { stack: Vec, current: Option, } struct LeafSummaryCursorFrame { children: Vec, next_index: usize, children_are_leaves: bool, } impl LeafSummaryCursor { async fn new( tree: &TrackedStateTree, store: &mut impl StorageReader, root_hash: [u8; TRACKED_STATE_HASH_BYTES], ) -> Result { let mut cursor = Self { stack: Vec::new(), current: None, }; match tree.load_node(store, &root_hash).await? 
{ DecodedNode::Leaf(leaf) => { cursor.current = Some(leaf_summary(root_hash, leaf.entries())); } DecodedNode::Internal(internal) => { let children = internal.children().to_vec(); let children_are_leaves = child_summaries_are_leaves(tree, store, &children).await?; cursor.stack.push(LeafSummaryCursorFrame { children, next_index: 0, children_are_leaves, }); cursor.advance(tree, store).await?; } } Ok(cursor) } fn current(&self) -> Option<&ChildSummary> { self.current.as_ref() } async fn advance( &mut self, tree: &TrackedStateTree, store: &mut impl StorageReader, ) -> Result<(), LixError> { self.current = None; while let Some(frame) = self.stack.last_mut() { if frame.next_index >= frame.children.len() { self.stack.pop(); continue; } let next = frame.children[frame.next_index].clone(); let next_is_leaf = frame.children_are_leaves; frame.next_index += 1; if next_is_leaf { self.current = Some(next); return Ok(()); } self.descend_to_leaf(tree, store, next).await?; return Ok(()); } Ok(()) } async fn descend_to_leaf( &mut self, tree: &TrackedStateTree, store: &mut impl StorageReader, mut summary: ChildSummary, ) -> Result<(), LixError> { loop { match tree.load_node(store, &summary.child_hash).await? { DecodedNode::Leaf(_) => { self.current = Some(summary); return Ok(()); } DecodedNode::Internal(internal) => { let children = internal.children().to_vec(); let children_are_leaves = child_summaries_are_leaves(tree, store, &children).await?; let Some(first_child) = children.first().cloned() else { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state internal node has no children", )); }; self.stack.push(LeafSummaryCursorFrame { children, next_index: 1, children_are_leaves, }); if children_are_leaves { self.current = Some(first_child); return Ok(()); } else { summary = first_child; } } } } } } #[derive(Debug, Default)] struct LeafChunkAccumulator { entries: Vec, key_bytes: usize, value_bytes: usize, } #[derive(Debug, Default)] struct LeafChunkRefAccumulator<'a> { entries: Vec>, key_bytes: usize, value_bytes: usize, } #[derive(Debug, Default)] struct InternalChunkAccumulator { children: Vec, first_key_bytes: usize, last_key_bytes: usize, } #[derive(Debug, Default)] struct InternalChunkRefAccumulator<'a> { children: Vec>, first_key_bytes: usize, last_key_bytes: usize, } fn chunk_leaf_entries( entries: Vec, options: &TrackedStateTreeOptions, ) -> Vec { if entries.is_empty() { return vec![LeafChunkAccumulator::default()]; } let mut groups = Vec::new(); let mut current = LeafChunkAccumulator::default(); for entry in entries { let item_size = estimate_leaf_entry_size(entry.key.len(), entry.value.len()); let projected_size = estimate_leaf_chunk_size( current.entries.len() + 1, current.key_bytes + entry.key.len(), current.value_bytes + entry.value.len(), ); if !current.entries.is_empty() && projected_size > options.max_chunk_bytes { groups.push(std::mem::take(&mut current)); } current.key_bytes += entry.key.len(); current.value_bytes += entry.value.len(); current.entries.push(entry); let current_size = estimate_leaf_chunk_size( current.entries.len(), current.key_bytes, current.value_bytes, ); if current_size >= options.min_chunk_bytes && (current_size >= options.max_chunk_bytes || current.entries.last().is_some_and(|entry| { boundary_trigger( &entry.key, 0, current_size, item_size, options.target_chunk_bytes, ) })) { groups.push(std::mem::take(&mut current)); } } if !current.entries.is_empty() { groups.push(current); } groups } fn chunk_leaf_entry_refs<'a>( entries: impl IntoIterator>, options: 
&TrackedStateTreeOptions, ) -> Vec> { let mut iter = entries.into_iter().peekable(); if iter.peek().is_none() { return vec![LeafChunkRefAccumulator::default()]; } let mut groups = Vec::new(); let mut current = LeafChunkRefAccumulator::default(); for entry in iter { let item_size = estimate_leaf_entry_size(entry.key.len(), entry.value.len()); let projected_size = estimate_leaf_chunk_size( current.entries.len() + 1, current.key_bytes + entry.key.len(), current.value_bytes + entry.value.len(), ); if !current.entries.is_empty() && projected_size > options.max_chunk_bytes { groups.push(std::mem::take(&mut current)); } current.key_bytes += entry.key.len(); current.value_bytes += entry.value.len(); current.entries.push(entry); let current_size = estimate_leaf_chunk_size( current.entries.len(), current.key_bytes, current.value_bytes, ); if current_size >= options.min_chunk_bytes && (current_size >= options.max_chunk_bytes || current.entries.last().is_some_and(|entry| { boundary_trigger( entry.key, 0, current_size, item_size, options.target_chunk_bytes, ) })) { groups.push(std::mem::take(&mut current)); } } if !current.entries.is_empty() { groups.push(current); } groups } fn chunk_internal_entries( children: Vec, options: &TrackedStateTreeOptions, level: usize, ) -> Vec { let mut groups = Vec::new(); let mut current = InternalChunkAccumulator::default(); for child in children { let item_size = child.first_key.len() + child.last_key.len() + TRACKED_STATE_HASH_BYTES + std::mem::size_of::(); let projected_size = estimate_internal_chunk_size( current.children.len() + 1, current.first_key_bytes + child.first_key.len(), current.last_key_bytes + child.last_key.len(), ); if !current.children.is_empty() && projected_size > options.max_chunk_bytes { groups.push(std::mem::take(&mut current)); } current.first_key_bytes += child.first_key.len(); current.last_key_bytes += child.last_key.len(); current.children.push(child); let current_size = estimate_internal_chunk_size( current.children.len(), current.first_key_bytes, current.last_key_bytes, ); if current_size >= options.min_chunk_bytes && (current_size >= options.max_chunk_bytes || current.children.last().is_some_and(|child| { boundary_trigger( &child.first_key, level, current_size, item_size, options.target_chunk_bytes, ) })) { groups.push(std::mem::take(&mut current)); } } if !current.children.is_empty() { groups.push(current); } groups } fn chunk_internal_entry_refs<'a>( children: impl IntoIterator>, options: &TrackedStateTreeOptions, level: usize, ) -> Vec> { let mut groups = Vec::new(); let mut current = InternalChunkRefAccumulator::default(); for child in children { let item_size = child.first_key.len() + child.last_key.len() + TRACKED_STATE_HASH_BYTES + std::mem::size_of::(); let projected_size = estimate_internal_chunk_size( current.children.len() + 1, current.first_key_bytes + child.first_key.len(), current.last_key_bytes + child.last_key.len(), ); if !current.children.is_empty() && projected_size > options.max_chunk_bytes { groups.push(std::mem::take(&mut current)); } current.first_key_bytes += child.first_key.len(); current.last_key_bytes += child.last_key.len(); current.children.push(child); let current_size = estimate_internal_chunk_size( current.children.len(), current.first_key_bytes, current.last_key_bytes, ); if current_size >= options.min_chunk_bytes && (current_size >= options.max_chunk_bytes || current.children.last().is_some_and(|child| { boundary_trigger( child.first_key, level, current_size, item_size, options.target_chunk_bytes, ) })) { 
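// Emit a boundary: the group is at least min_chunk_bytes and has either crossed max_chunk_bytes or hit a boundary_trigger cut point.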
groups.push(std::mem::take(&mut current)); } } if !current.children.is_empty() { groups.push(current); } groups } fn estimate_leaf_chunk_size(entry_count: usize, key_bytes: usize, value_bytes: usize) -> usize { 10 + entry_count * 12 + key_bytes + value_bytes } fn estimate_leaf_entry_size(key_bytes: usize, value_bytes: usize) -> usize { 12 + key_bytes + value_bytes } fn estimate_internal_chunk_size( child_count: usize, first_key_bytes: usize, last_key_bytes: usize, ) -> usize { 16 + child_count * (8 + TRACKED_STATE_HASH_BYTES + std::mem::size_of::()) + first_key_bytes + last_key_bytes } fn first_resync_index( generated: &[ChildSummary], existing: &[ChildSummary], mutation_key: &[u8], ) -> Option<(usize, usize)> { for (generated_index, generated) in generated.iter().enumerate() { // A matching old chunk before the mutation key is only unchanged // prefix; resync is only valid after the mutation has been emitted. if generated.first_key.as_slice() <= mutation_key { continue; } if let Some(existing_index) = existing.iter().position(|existing| generated == existing) { return Some((generated_index, existing_index)); } } None } fn internal_boundaries_match(left: &[ChildSummary], right: &[ChildSummary]) -> bool { left.len() == right.len() && left.iter().zip(right).all(|(left, right)| { left.first_key == right.first_key && left.last_key == right.last_key }) } async fn child_summaries_are_leaves( tree: &TrackedStateTree, store: &mut impl StorageReader, children: &[ChildSummary], ) -> Result { let Some(first_child) = children.first() else { return Ok(false); }; Ok(matches!( tree.load_node(store, &first_child.child_hash).await?, DecodedNode::Leaf(_) )) } fn decode_entry( entry: &EncodedLeafEntry, ) -> Result<(TrackedStateKey, TrackedStateIndexValue), LixError> { Ok((decode_key(&entry.key)?, decode_value(&entry.value)?)) } fn parent_index_for_child_index( old_children: &[ChildSummary], old_parents: &[ChildSummary], child_index: usize, ) -> usize { let key = if child_index < old_children.len() { old_children[child_index].first_key.as_slice() } else { old_children .last() .map(|child| child.last_key.as_slice()) .unwrap_or_default() }; old_parents .iter() .position(|parent| parent.last_key.as_slice() >= key) .unwrap_or_else(|| old_parents.len().saturating_sub(1)) } fn child_range_for_parent( old_children: &[ChildSummary], parent: &ChildSummary, ) -> Result, LixError> { let start = old_children .iter() .position(|child| child.last_key.as_slice() >= parent.first_key.as_slice()) .ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state parent summary does not overlap child summaries", ) })?; let end = old_children[start..] 
.iter() .position(|child| child.last_key == parent.last_key) .map(|offset| start + offset + 1) .ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state parent summary end does not match child summaries", ) })?; Ok(start..end) } fn leaf_summary( hash: [u8; TRACKED_STATE_HASH_BYTES], entries: &[EncodedLeafEntry], ) -> ChildSummary { ChildSummary { first_key: entries .first() .map(|entry| entry.key.clone()) .unwrap_or_default(), last_key: entries .last() .map(|entry| entry.key.clone()) .unwrap_or_default(), child_hash: hash, subtree_count: entries.len() as u64, } } fn internal_summary( hash: [u8; TRACKED_STATE_HASH_BYTES], children: &[ChildSummary], ) -> Result { let first_key = children .first() .map(|child| child.first_key.clone()) .ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state internal node has no children", ) })?; let last_key = children .last() .map(|child| child.last_key.clone()) .ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "tracked-state internal node has no children", ) })?; Ok(ChildSummary { first_key, last_key, child_hash: hash, subtree_count: children.iter().map(|child| child.subtree_count).sum(), }) } fn push_level_summary(levels: &mut Vec>, level: usize, summary: ChildSummary) { while levels.len() <= level { levels.push(Vec::new()); } levels[level].push(summary); } fn scan_ranges(request: &TrackedStateTreeScanRequest) -> Vec { if request.schema_keys.is_empty() { return Vec::new(); } let can_bind_entity = !request.entity_ids.is_empty() && !request.file_ids.is_empty() && request .file_ids .iter() .all(|filter| !matches!(filter, NullableKeyFilter::Any)); let mut ranges = Vec::new(); for schema_key in &request.schema_keys { if can_bind_entity { for file_filter in &request.file_ids { let file_id = match file_filter { NullableKeyFilter::Null => None, NullableKeyFilter::Value(file_id) => Some(file_id.clone()), NullableKeyFilter::Any => unreachable!("filtered above"), }; for entity_id in &request.entity_ids { let key = TrackedStateKey { schema_key: schema_key.clone(), file_id: file_id.clone(), entity_id: entity_id.clone(), }; ranges.push(exact_scan_range(encode_key(&key))); } } continue; } if request.file_ids.is_empty() || request .file_ids .iter() .any(|filter| matches!(filter, NullableKeyFilter::Any)) { ranges.push(prefix_scan_range(encode_schema_key_prefix(schema_key))); continue; } for file_filter in &request.file_ids { let prefix = match file_filter { NullableKeyFilter::Null => encode_schema_file_prefix(schema_key, None), NullableKeyFilter::Value(file_id) => { encode_schema_file_prefix(schema_key, Some(file_id)) } NullableKeyFilter::Any => unreachable!("handled above"), }; ranges.push(prefix_scan_range(prefix)); } } ranges } fn scan_key_decode_hint<'a>( request: &'a TrackedStateTreeScanRequest, ranges: &[EncodedScanRange], ) -> Option> { if ranges.len() != 1 || request.schema_keys.len() != 1 || request.file_ids.len() != 1 { return None; } if !request.entity_ids.is_empty() { return None; } let file_id = match request.file_ids.first()? 
{ NullableKeyFilter::Null => None, NullableKeyFilter::Value(file_id) => Some(file_id.as_str()), NullableKeyFilter::Any => return None, }; Some(ScanKeyDecodeHint { schema_key: request.schema_keys.first()?.as_str(), file_id, prefix_len: ranges.first()?.start.len(), }) } fn prefix_scan_range(prefix: Vec) -> EncodedScanRange { EncodedScanRange { end: lexicographic_successor(&prefix), start: prefix, } } fn exact_scan_range(key: Vec) -> EncodedScanRange { EncodedScanRange { end: lexicographic_successor(&key), start: key, } } fn lexicographic_successor(bytes: &[u8]) -> Option> { let mut out = bytes.to_vec(); for index in (0..out.len()).rev() { if out[index] != u8::MAX { out[index] += 1; out.truncate(index + 1); return Some(out); } } None } fn child_summary_overlaps_scan_ranges(child: &ChildSummary, ranges: &[EncodedScanRange]) -> bool { ranges.is_empty() || ranges.iter().any(|range| { child.last_key.as_slice() >= range.start.as_slice() && range .end .as_ref() .is_none_or(|end| child.first_key.as_slice() < end.as_slice()) }) } fn child_summary_contained_by_scan_ranges( child: &ChildSummary, ranges: &[EncodedScanRange], ) -> bool { ranges.is_empty() || ranges.iter().any(|range| { child.first_key.as_slice() >= range.start.as_slice() && range .end .as_ref() .is_none_or(|end| child.last_key.as_slice() < end.as_slice()) }) } fn encoded_key_in_scan_ranges(key: &[u8], ranges: &[EncodedScanRange]) -> bool { ranges.is_empty() || ranges.iter().any(|range| { key >= range.start.as_slice() && range.end.as_ref().is_none_or(|end| key < end.as_slice()) }) } fn key_matches_scan_filters(request: &TrackedStateTreeScanRequest, key: &TrackedStateKey) -> bool { if !request.schema_keys.is_empty() && !request.schema_keys.contains(&key.schema_key) { return false; } if !request.entity_ids.is_empty() && !request.entity_ids.contains(&key.entity_id) { return false; } if !request.file_ids.is_empty() && !request .file_ids .iter() .any(|filter| filter.matches(key.file_id.as_ref())) { return false; } true } fn scan_limit_reached(request: &TrackedStateTreeScanRequest, row_count: usize) -> bool { request.limit.is_some_and(|limit| row_count >= limit) } #[cfg(test)] mod tests { use std::sync::Arc; use super::*; use crate::backend::testing::UnitTestBackend; use crate::entity_identity::EntityIdentity; use crate::storage::{StorageContext, StorageWriteTransaction}; use crate::tracked_state::codec::encode_value; #[tokio::test] async fn exact_read_roundtrips_from_stored_root() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let tree = TrackedStateTree::new(); let key = key("schema", None, "entity"); let value = value("change-1", Some("{}")); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let result = apply_mutations_for_test( &tree, transaction.as_mut(), None, vec![mutation(&key, &value)], Some("commit-1"), ) .await .expect("mutations should apply"); transaction .commit() .await .expect("transaction should commit"); let mut store = storage.clone(); assert_eq!( tree.load_root(&mut store, "commit-1") .await .expect("root should load"), Some(result.root_id.clone()) ); assert_eq!( tree.get(&mut store, &result.root_id, &key) .await .expect("row should load"), Some(value) ); } #[tokio::test] async fn latest_mutation_for_key_wins() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let tree = TrackedStateTree::new(); let key = key("schema", None, "entity"); let old_value = value("change-old", Some("{\"v\":1}")); let new_value = value("change-new", 
Some("{\"v\":2}")); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let result = apply_mutations_for_test( &tree, transaction.as_mut(), None, vec![mutation(&key, &old_value), mutation(&key, &new_value)], None, ) .await .expect("mutations should apply"); transaction .commit() .await .expect("transaction should commit"); let mut store = storage.clone(); let loaded = tree .get(&mut store, &result.root_id, &key) .await .expect("row should load") .expect("row should exist"); assert_eq!(loaded.change_locator.change_id, "change-new"); assert_eq!(loaded.change_locator.source_commit_id, "commit"); } #[tokio::test] async fn scan_filters_by_index_key_without_materializing_tombstones() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let tree = TrackedStateTree::new(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let result = apply_mutations_for_test( &tree, transaction.as_mut(), None, vec![ mutation_owned(key("schema-a", None, "visible"), value("c1", Some("{}"))), mutation_owned(key("schema-a", None, "deleted"), value("c2", None)), mutation_owned(key("schema-b", None, "other"), value("c3", Some("{}"))), ], None, ) .await .expect("mutations should apply"); transaction .commit() .await .expect("transaction should commit"); let mut store = storage.clone(); let rows = tree .scan( &mut store, &result.root_id, &TrackedStateTreeScanRequest { schema_keys: vec!["schema-a".to_string()], ..Default::default() }, ) .await .expect("scan should succeed"); assert_eq!(rows.len(), 2); let identities = rows .iter() .map(|(key, _)| key.entity_id.as_single_string_owned().expect("identity")) .collect::>(); assert_eq!(identities, vec!["deleted", "visible"]); let live_rows = tree .scan( &mut store, &result.root_id, &TrackedStateTreeScanRequest { schema_keys: vec!["schema-a".to_string()], include_tombstones: false, ..Default::default() }, ) .await .expect("live scan should succeed"); let live_identities = live_rows .iter() .map(|(key, _)| key.entity_id.as_single_string_owned().expect("identity")) .collect::>(); assert_eq!(live_identities, vec!["visible"]); } #[tokio::test] async fn scan_filters_by_schema_entity_and_file() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let tree = TrackedStateTree::new(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let result = apply_mutations_for_test( &tree, transaction.as_mut(), None, vec![ mutation_owned( key("schema-a", Some("file-a"), "entity-a"), value("c1", Some("{}")), ), mutation_owned( key("schema-a", Some("file-b"), "entity-a"), value("c2", Some("{}")), ), mutation_owned( key("schema-a", Some("file-a"), "entity-b"), value("c3", Some("{}")), ), mutation_owned( key("schema-b", Some("file-a"), "entity-a"), value("c4", Some("{}")), ), ], None, ) .await .expect("mutations should apply"); transaction .commit() .await .expect("transaction should commit"); let mut store = storage.clone(); let rows = tree .scan( &mut store, &result.root_id, &TrackedStateTreeScanRequest { schema_keys: vec!["schema-a".to_string()], entity_ids: vec![crate::entity_identity::EntityIdentity::single("entity-a")], file_ids: vec![crate::NullableKeyFilter::Value("file-a".to_string())], ..Default::default() }, ) .await .expect("scan should succeed"); assert_eq!(rows.len(), 1); assert_eq!(rows[0].0.schema_key, "schema-a"); assert_eq!( rows[0] .0 .entity_id .as_single_string_owned() .expect("identity"), "entity-a" 
); assert_eq!(rows[0].0.file_id.as_deref(), Some("file-a")); } #[tokio::test] async fn scan_schema_file_prefix_honors_tombstones_and_limit() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let tree = TrackedStateTree::new(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let result = apply_mutations_for_test( &tree, transaction.as_mut(), None, vec![ mutation_owned( key("schema-a", Some("file-a"), "entity-a"), value("c1", Some("{}")), ), mutation_owned( key("schema-a", Some("file-a"), "entity-b"), value("c2", None), ), mutation_owned( key("schema-a", Some("file-a"), "entity-c"), value("c3", Some("{}")), ), mutation_owned( key("schema-a", Some("file-b"), "entity-d"), value("c4", Some("{}")), ), ], None, ) .await .expect("mutations should apply"); transaction .commit() .await .expect("transaction should commit"); let mut store = storage.clone(); let rows = tree .scan( &mut store, &result.root_id, &TrackedStateTreeScanRequest { schema_keys: vec!["schema-a".to_string()], file_ids: vec![crate::NullableKeyFilter::Value("file-a".to_string())], include_tombstones: false, limit: Some(2), ..Default::default() }, ) .await .expect("scan should succeed"); assert_eq!(rows.len(), 2); assert!(rows.iter().all( |(key, _)| key.schema_key == "schema-a" && key.file_id.as_deref() == Some("file-a") )); assert_eq!( rows.iter() .map(|(key, _)| key.entity_id.as_single_string_owned().expect("identity")) .collect::>(), vec!["entity-a", "entity-c"] ); } #[tokio::test] async fn applying_to_base_root_reuses_existing_rows_and_overwrites_changed_rows() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let tree = TrackedStateTree::new(); let unchanged_key = key("schema", None, "unchanged"); let changed_key = key("schema", None, "changed"); let unchanged_value = value("c1", Some("{}")); let old_changed_value = value("c2", Some("{\"old\":true}")); let new_changed_value = value("c3", Some("{\"new\":true}")); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let base = apply_mutations_for_test( &tree, transaction.as_mut(), None, vec![ mutation(&unchanged_key, &unchanged_value), mutation(&changed_key, &old_changed_value), ], None, ) .await .expect("base should build"); let next = apply_mutations_for_test( &tree, transaction.as_mut(), Some(&base.root_id), vec![mutation(&changed_key, &new_changed_value)], None, ) .await .expect("next should build"); transaction .commit() .await .expect("transaction should commit"); let mut store = storage.clone(); assert_eq!( tree.get(&mut store, &next.root_id, &unchanged_key) .await .expect("unchanged read") .expect("unchanged exists") .change_locator .change_id, "c1" ); assert_eq!( tree.get(&mut store, &next.root_id, &changed_key) .await .expect("changed read") .expect("changed exists") .change_locator .change_id, "c3" ); } #[tokio::test] async fn two_commit_roots_can_share_unchanged_rows() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let tree = TrackedStateTree::new(); let shared_key = key("schema", None, "shared"); let branch_a_key = key("schema", None, "branch-a"); let branch_b_key = key("schema", None, "branch-b"); let shared_value = value("shared-change", Some("{\"shared\":true}")); let branch_a_value = value("branch-a-change", Some("{\"branch\":\"a\"}")); let branch_b_value = value("branch-b-change", Some("{\"branch\":\"b\"}")); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should 
open"); let base = apply_mutations_for_test( &tree, transaction.as_mut(), None, vec![mutation(&shared_key, &shared_value)], Some("commit-base"), ) .await .expect("base root should build"); let branch_a = apply_mutations_for_test( &tree, transaction.as_mut(), Some(&base.root_id), vec![mutation(&branch_a_key, &branch_a_value)], Some("commit-a"), ) .await .expect("branch a root should build"); let branch_b = apply_mutations_for_test( &tree, transaction.as_mut(), Some(&base.root_id), vec![mutation(&branch_b_key, &branch_b_value)], Some("commit-b"), ) .await .expect("branch b root should build"); transaction .commit() .await .expect("transaction should commit"); assert_ne!(branch_a.root_id, branch_b.root_id); let mut store = storage.clone(); assert_eq!( tree.get(&mut store, &branch_a.root_id, &shared_key) .await .expect("branch a shared row should load"), Some(value("shared-change", Some("{\"shared\":true}"))) ); assert_eq!( tree.get(&mut store, &branch_b.root_id, &shared_key) .await .expect("branch b shared row should load"), Some(value("shared-change", Some("{\"shared\":true}"))) ); assert!(tree .get(&mut store, &branch_a.root_id, &branch_b_key) .await .expect("branch a should read") .is_none()); assert!(tree .get(&mut store, &branch_b.root_id, &branch_a_key) .await .expect("branch b should read") .is_none()); } #[tokio::test] async fn single_update_matches_full_canonical_rebuild() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let tree = TrackedStateTree::with_options(TrackedStateTreeOptions { target_chunk_bytes: 128, min_chunk_bytes: 64, max_chunk_bytes: 256, }); let rows = (0..100) .map(|index| { mutation_owned( key("schema", None, &format!("entity-{index:03}")), value(&format!("c-{index}"), Some(&format!("{{\"v\":{index}}}"))), ) }) .collect::>(); let changed_key = key("schema", None, "entity-000"); let changed_value = value("changed", Some("{\"v\":\"changed\"}")); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let base = apply_mutations_for_test(&tree, transaction.as_mut(), None, rows, None) .await .expect("base should build"); let fast = apply_mutations_for_test( &tree, transaction.as_mut(), Some(&base.root_id), vec![mutation(&changed_key, &changed_value)], None, ) .await .expect("fast path should apply"); let mut canonical_entries = tree .collect_leaf_entries(&mut transaction.as_mut(), &base.root_id) .await .expect("base entries should collect"); assert!(canonical_entries .windows(2) .all(|window| window[0].key < window[1].key)); let encoded_changed_key = encode_key(&changed_key); let encoded_changed_value = encode_value(&changed_value); let index = canonical_entries .binary_search_by(|entry| entry.key.as_slice().cmp(&encoded_changed_key)) .expect("changed key should exist"); canonical_entries[index].value = encoded_changed_value; let canonical = tree .build_tree_from_entries(canonical_entries) .expect("canonical root should build"); assert_eq!(fast.root_id, canonical.root_id); } #[tokio::test] async fn single_insert_matches_full_canonical_rebuild() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let tree = TrackedStateTree::with_options(TrackedStateTreeOptions { target_chunk_bytes: 128, min_chunk_bytes: 64, max_chunk_bytes: 256, }); let rows = (0..100) .map(|index| { mutation_owned( key("schema", None, &format!("entity-{index:03}")), value(&format!("c-{index}"), Some(&format!("{{\"v\":{index}}}"))), ) }) .collect::>(); let inserted_key = key("schema", None, "entity-050a"); let 
inserted_value = value("inserted", Some("{\"v\":\"inserted\"}")); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let base = apply_mutations_for_test(&tree, transaction.as_mut(), None, rows, None) .await .expect("base should build"); let fast = apply_mutations_for_test( &tree, transaction.as_mut(), Some(&base.root_id), vec![mutation(&inserted_key, &inserted_value)], None, ) .await .expect("fast path should apply"); let mut canonical_entries = tree .collect_leaf_entries(&mut transaction.as_mut(), &base.root_id) .await .expect("base entries should collect"); let encoded_inserted_key = encode_key(&inserted_key); let encoded_inserted_value = encode_value(&inserted_value); let index = canonical_entries .binary_search_by(|entry| entry.key.as_slice().cmp(&encoded_inserted_key)) .expect_err("inserted key should not exist"); canonical_entries.insert( index, EncodedLeafEntry { key: encoded_inserted_key, value: encoded_inserted_value, }, ); let canonical = tree .build_tree_from_entries(canonical_entries) .expect("canonical root should build"); assert_eq!(fast.root_id, canonical.root_id); } #[tokio::test] async fn batch_update_matches_full_canonical_rebuild() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let tree = TrackedStateTree::with_options(TrackedStateTreeOptions { target_chunk_bytes: 128, min_chunk_bytes: 64, max_chunk_bytes: 256, }); let rows = (0..100) .map(|index| { mutation_owned( key("schema", None, &format!("entity-{index:03}")), value(&format!("c-{index}"), Some(&format!("{{\"v\":{index}}}"))), ) }) .collect::>(); let updates = (10..25) .map(|index| { ( key("schema", None, &format!("entity-{index:03}")), value( &format!("changed-{index}"), Some(&format!("{{\"changed\":{index}}}")), ), ) }) .collect::>(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let base = apply_mutations_for_test(&tree, transaction.as_mut(), None, rows, None) .await .expect("base should build"); let fast = apply_mutations_for_test( &tree, transaction.as_mut(), Some(&base.root_id), updates .iter() .map(|(key, value)| mutation(key, value)) .collect(), None, ) .await .expect("batch path should apply"); let mut canonical_entries = tree .collect_leaf_entries(&mut transaction.as_mut(), &base.root_id) .await .expect("base entries should collect"); for (key, value) in updates { let encoded_key = encode_key(&key); let encoded_value = encode_value(&value); let index = canonical_entries .binary_search_by(|entry| entry.key.as_slice().cmp(&encoded_key)) .expect("updated key should exist"); canonical_entries[index].value = encoded_value; } let canonical = tree .build_tree_from_entries(canonical_entries) .expect("canonical root should build"); assert_eq!(fast.root_id, canonical.root_id); } #[tokio::test] async fn batch_insert_matches_full_canonical_rebuild() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let tree = TrackedStateTree::with_options(TrackedStateTreeOptions { target_chunk_bytes: 128, min_chunk_bytes: 64, max_chunk_bytes: 256, }); let rows = (0..100) .map(|index| { mutation_owned( key("schema", None, &format!("entity-{index:03}")), value(&format!("c-{index}"), Some(&format!("{{\"v\":{index}}}"))), ) }) .collect::>(); let inserts = ["entity-050a", "entity-050b", "entity-050c"] .into_iter() .enumerate() .map(|(index, entity_id)| { ( key("schema", None, entity_id), value( &format!("inserted-{index}"), Some(&format!("{{\"inserted\":{index}}}")), ), ) }) .collect::>(); let 
mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let base = apply_mutations_for_test(&tree, transaction.as_mut(), None, rows, None) .await .expect("base should build"); let fast = apply_mutations_for_test( &tree, transaction.as_mut(), Some(&base.root_id), inserts .iter() .map(|(key, value)| mutation(key, value)) .collect(), None, ) .await .expect("batch path should apply"); let mut canonical_entries = tree .collect_leaf_entries(&mut transaction.as_mut(), &base.root_id) .await .expect("base entries should collect"); for (key, value) in inserts { let encoded_key = encode_key(&key); let encoded_value = encode_value(&value); let index = canonical_entries .binary_search_by(|entry| entry.key.as_slice().cmp(&encoded_key)) .expect_err("inserted key should not exist"); canonical_entries.insert( index, EncodedLeafEntry { key: encoded_key, value: encoded_value, }, ); } let canonical = tree .build_tree_from_entries(canonical_entries) .expect("canonical root should build"); assert_eq!(fast.root_id, canonical.root_id); } async fn apply_mutations_for_test( tree: &TrackedStateTree, transaction: &mut dyn StorageWriteTransaction, base_root: Option<&TrackedStateRootId>, mutations: Vec, commit_id: Option<&str>, ) -> Result { let mut writes = StorageWriteSet::new(); let result = tree .apply_mutations(transaction, &mut writes, base_root, mutations, commit_id) .await?; writes.apply(transaction).await?; Ok(result) } fn mutation(key: &TrackedStateKey, value: &TrackedStateIndexValue) -> TrackedStateMutation { TrackedStateMutation::put_encoded(encode_key(key), encode_value(value)) } fn mutation_owned(key: TrackedStateKey, value: TrackedStateIndexValue) -> TrackedStateMutation { mutation(&key, &value) } fn key(schema_key: &str, file_id: Option<&str>, entity_id: &str) -> TrackedStateKey { TrackedStateKey { schema_key: schema_key.to_string(), file_id: file_id.map(str::to_string), entity_id: EntityIdentity::single(entity_id), } } fn value(change_id: &str, snapshot_content: Option<&str>) -> TrackedStateIndexValue { let source_ordinal = match snapshot_content { Some("{\"v\":1}") => 1, Some("{\"v\":2}") => 2, Some(_) => 3, None => 0, }; TrackedStateIndexValue { change_locator: crate::commit_store::ChangeLocator { source_commit_id: "commit".to_string(), source_pack_id: 0, source_ordinal, change_id: change_id.to_string(), }, deleted: snapshot_content.is_none(), snapshot_ref: snapshot_content .map(|content| crate::json_store::JsonRef::for_content(content.as_bytes())), metadata_ref: None, created_at: "2026-01-01T00:00:00Z".to_string(), updated_at: "2026-01-01T00:00:00Z".to_string(), } } } ================================================ FILE: packages/engine/src/tracked_state/types.rs ================================================ use crate::commit_store::{ChangeLocator, ChangeLocatorRef, ChangeRef}; use crate::entity_identity::EntityIdentity; use crate::json_store::JsonRef; use crate::{LixError, NullableKeyFilter}; pub(crate) const TRACKED_STATE_HASH_BYTES: usize = 32; /// Content-addressed root id for one tracked-state projection tree. 
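/// Because the id is content-addressed, identical projections resolve to the
/// same root, which is what lets commits share unchanged subtrees.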
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub(crate) struct TrackedStateRootId([u8; TRACKED_STATE_HASH_BYTES]);

impl TrackedStateRootId {
    pub(crate) fn new(bytes: [u8; TRACKED_STATE_HASH_BYTES]) -> Self {
        Self(bytes)
    }

    pub(crate) fn from_slice(bytes: &[u8]) -> Result<Self, LixError> {
        if bytes.len() != TRACKED_STATE_HASH_BYTES {
            return Err(LixError::new(
                "LIX_ERROR_UNKNOWN",
                format!(
                    "tracked-state tree root id must be {TRACKED_STATE_HASH_BYTES} bytes, got {}",
                    bytes.len()
                ),
            ));
        }
        let mut out = [0_u8; TRACKED_STATE_HASH_BYTES];
        out.copy_from_slice(bytes);
        Ok(Self(out))
    }

    pub(crate) fn as_bytes(&self) -> &[u8; TRACKED_STATE_HASH_BYTES] {
        &self.0
    }
}

/// Root-independent tracked entity identity.
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub(crate) struct TrackedStateKey {
    pub(crate) schema_key: String,
    pub(crate) file_id: Option<String>,
    pub(crate) entity_id: EntityIdentity,
}

/// Zero-copy view of primary tracked-state key.
#[derive(Debug, Clone, Copy)]
pub(crate) struct TrackedStateKeyRef<'a> {
    pub(crate) schema_key: &'a str,
    pub(crate) file_id: Option<&'a str>,
    pub(crate) entity_id: &'a EntityIdentity,
}

/// Zero-copy tracked-state projection delta prepared from commit_store facts.
#[derive(Debug, Clone, Copy)]
pub(crate) struct TrackedStateDeltaRef<'a> {
    pub(crate) change: ChangeRef<'a>,
    pub(crate) locator: ChangeLocatorRef<'a>,
    pub(crate) created_at: &'a str,
    pub(crate) updated_at: &'a str,
}

/// Owned per-commit projection delta entry.
///
/// Normal commits persist these entries in `tracked_state.delta_pack`. Full
/// projection roots are materialized separately from these deltas.
#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct TrackedStateDeltaEntry {
    pub(crate) key: TrackedStateKey,
    pub(crate) value: TrackedStateIndexValue,
}

/// Projection value stored in tracked-state trees.
#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct TrackedStateIndexValue {
    pub(crate) change_locator: ChangeLocator,
    pub(crate) deleted: bool,
    pub(crate) snapshot_ref: Option<JsonRef>,
    pub(crate) metadata_ref: Option<JsonRef>,
    pub(crate) created_at: String,
    pub(crate) updated_at: String,
}

/// Zero-copy view of a tracked-state projection value.
#[derive(Debug, Clone, Copy)]
pub(crate) struct TrackedStateIndexValueRef<'a> {
    pub(crate) change_locator: ChangeLocatorRef<'a>,
    pub(crate) deleted: bool,
    pub(crate) snapshot_ref: Option<&'a JsonRef>,
    pub(crate) metadata_ref: Option<&'a JsonRef>,
    pub(crate) created_at: &'a str,
    pub(crate) updated_at: &'a str,
}

/// Materialized tracked-state projection row.
///
/// Tracked rows are the projection that can be rebuilt from changelog facts.
/// They intentionally do not carry an `untracked` flag: untracked local overlay
/// data belongs to `untracked_state`, and the serving `live_state` facade is
/// responsible for combining both sources.
#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)]
pub(crate) struct MaterializedTrackedStateRow {
    pub(crate) entity_id: EntityIdentity,
    pub(crate) schema_key: String,
    pub(crate) file_id: Option<String>,
    pub(crate) snapshot_content: Option<String>,
    pub(crate) metadata: Option<String>,
    pub(crate) deleted: bool,
    pub(crate) created_at: String,
    pub(crate) updated_at: String,
    pub(crate) change_id: String,
    pub(crate) commit_id: String,
}

/// Identity-centered filter for tracked-state scans.
#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize, Default)] pub(crate) struct TrackedStateFilter { #[serde(default)] pub(crate) schema_keys: Vec, #[serde(default)] pub(crate) entity_ids: Vec, #[serde(default)] pub(crate) file_ids: Vec>, #[serde(default)] pub(crate) include_tombstones: bool, } /// Requested property set for a tracked-state scan. #[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize, Default)] pub(crate) struct TrackedStateProjection { #[serde(default)] pub(crate) columns: Vec, } /// Scan request for the tracked-state projection. #[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize, Default)] pub(crate) struct TrackedStateScanRequest { #[serde(default)] pub(crate) filter: TrackedStateFilter, #[serde(default)] pub(crate) projection: TrackedStateProjection, #[serde(default)] pub(crate) limit: Option, } /// Point lookup request for one tracked-state row. #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct TrackedStateRowRequest { pub(crate) schema_key: String, pub(crate) entity_id: EntityIdentity, pub(crate) file_id: NullableKeyFilter, } #[derive(Debug, PartialEq, Eq)] pub(crate) struct TrackedStateMutation { pub(crate) encoded_key: Vec, pub(crate) encoded_value: Vec, } impl TrackedStateMutation { pub(crate) fn put_encoded(encoded_key: Vec, encoded_value: Vec) -> Self { Self { encoded_key, encoded_value, } } } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct TrackedStateTreeScanRequest { pub(crate) schema_keys: Vec, pub(crate) entity_ids: Vec, pub(crate) file_ids: Vec>, pub(crate) include_tombstones: bool, pub(crate) limit: Option, } impl Default for TrackedStateTreeScanRequest { fn default() -> Self { Self { schema_keys: Vec::new(), entity_ids: Vec::new(), file_ids: Vec::new(), include_tombstones: true, limit: None, } } } impl TrackedStateTreeScanRequest { pub(crate) fn matches(&self, key: &TrackedStateKey, value: &TrackedStateIndexValue) -> bool { if !self.include_tombstones && value.deleted { return false; } self.matches_key(key) } pub(crate) fn matches_key(&self, key: &TrackedStateKey) -> bool { if !self.schema_keys.is_empty() && !self.schema_keys.contains(&key.schema_key) { return false; } if !self.entity_ids.is_empty() && !self.entity_ids.contains(&key.entity_id) { return false; } if !self.file_ids.is_empty() && !self.file_ids.iter().any(|filter| match filter { NullableKeyFilter::Any => true, NullableKeyFilter::Null => key.file_id.is_none(), NullableKeyFilter::Value(value) => key.file_id.as_ref() == Some(value), }) { return false; } true } } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct TrackedStateApplyResult { pub(crate) root_id: TrackedStateRootId, pub(crate) row_count: usize, pub(crate) tree_height: usize, pub(crate) chunk_count: usize, pub(crate) chunk_bytes: usize, pub(crate) persisted_root: bool, } #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct TrackedStateTreeDiffEntry { pub(crate) before: Option<(TrackedStateKey, TrackedStateIndexValue)>, pub(crate) after: Option<(TrackedStateKey, TrackedStateIndexValue)>, } ================================================ FILE: packages/engine/src/transaction/commit.rs ================================================ use crate::binary_cas::BinaryCasContext; use crate::commit_store::{ChangeRef, CommitDraftRef, CommitStoreContext, StagedCommitStoreCommit}; use crate::functions::FunctionContext; use crate::json_store::{JsonStoreContext, JsonWritePlacementRef, NormalizedJsonRef}; use crate::storage::{StorageReader, StorageWriteSet, 
StorageWriteTransaction}; use crate::tracked_state::{TrackedStateContext, TrackedStateDeltaRef}; use crate::transaction::prepare_version_ref_row; use crate::transaction::staging::PreparedWriteSet; use crate::transaction::types::{PreparedAdoptedStateRow, PreparedStateRow, StagedCommitMembers}; use crate::untracked_state::{ UntrackedStateContext, UntrackedStateIdentity, UntrackedStateIdentityRef, UntrackedStateRowRef, }; use crate::version::{VersionContext, VersionRefReader}; use crate::LixError; use std::collections::BTreeMap; type RowIndex = usize; type AdoptedRowIndex = usize; /// Commits prepared transaction rows into durable tracked and untracked stores. /// /// Providers decode DataFusion DML into hydrated `PreparedStateRow`s. Untracked /// rows are durable local overlay state and bypass commit-store rows. Tracked /// rows stage canonical commit-store facts, then update the live-state serving /// projection. The tracked side of that projection is a prolly root keyed by /// the new commit id. pub(crate) async fn commit_prepared_writes( binary_cas: &BinaryCasContext, commit_store: &CommitStoreContext, version_ctx: &VersionContext, runtime_functions: Option<&FunctionContext>, transaction: &mut (impl StorageWriteTransaction + ?Sized), prepared_writes: PreparedWriteSet, ) -> Result<(), LixError> { let mut writes = StorageWriteSet::new(); let mut json_writer = JsonStoreContext::new().writer(); if !prepared_writes.file_data_writes.is_empty() { let mut blob_writer = binary_cas.writer(&mut writes); for write in &prepared_writes.file_data_writes { blob_writer.stage_bytes(&write.data)?; } } let state_rows = prepared_writes.state_rows; let adopted_rows = prepared_writes.adopted_rows; let finalized = finalize_commit_rows( prepared_writes.commit_members_by_version, prepared_writes.extra_commit_parents_by_version, version_ctx, transaction, ) .await?; let commit_rows = finalized.commit_rows; let version_heads = finalized.version_heads; let tracked_roots = finalized.tracked_roots; let row_index = index_prepared_rows(&state_rows)?; let adopted_index = index_adopted_rows(&adopted_rows); if let Some(runtime_functions) = runtime_functions { runtime_functions .stage_persist_if_needed(&mut writes) .await?; } if state_rows.is_empty() && adopted_rows.is_empty() && commit_rows.is_empty() && version_heads.is_empty() && writes.is_empty() { return Ok(()); } let staged_commits = stage_commit_store_commits( commit_store, transaction, &mut writes, &state_rows, &row_index.tracked_row_indices_by_commit, &adopted_rows, &adopted_index.tracked_row_indices_by_commit, &commit_rows, ) .await?; let json_pack_indexes_by_commit = stage_prepared_json_payloads( &mut json_writer, &mut writes, &state_rows, &row_index.tracked_row_indices_by_commit, &staged_commits, &row_index.untracked_row_indices, )?; // The serving projection is updated in the same backend transaction as the // commit-store append. Tracked rows become prolly mutations under their owning // commit root; untracked rows remain in the separate local overlay store. 
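// Overlay deletes are computed up front so that a tracked (or adopted) write for
// an identity also removes any stale untracked overlay row in the same batch.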
{ let untracked_overlay_delete_identities = existing_untracked_overlay_delete_identities( transaction, row_index .canonical_row_indices .iter() .map(|&row_index| untracked_identity_ref_from_state_row(&state_rows[row_index])) .chain( adopted_rows .iter() .map(untracked_identity_ref_from_adopted_row), ), ) .await?; UntrackedStateContext::new() .writer(&mut writes) .stage_rows( row_index .untracked_row_indices .iter() .map(|&row_index| untracked_row_ref_from_state_row(&state_rows[row_index])), )?; UntrackedStateContext::new() .writer(&mut writes) .stage_delete_rows( untracked_overlay_delete_identities .iter() .map(UntrackedStateIdentity::as_ref), ); stage_tracked_roots( transaction, &mut writes, &state_rows, row_index.tracked_row_indices_by_commit, &adopted_rows, adopted_index.tracked_row_indices_by_commit, tracked_roots, staged_commits, json_pack_indexes_by_commit, ) .await?; } for version_head in version_heads { let canonical_row = prepare_version_ref_row( &version_head.version_id, &version_head.commit_id, &version_head.timestamp, )?; version_ctx.stage_canonical_ref_rows(&mut writes, &[canonical_row.row])?; } writes.apply(transaction).await?; Ok(()) } fn stage_prepared_json_payloads( json_writer: &mut crate::json_store::JsonStoreWriter, writes: &mut StorageWriteSet, state_rows: &[PreparedStateRow], tracked_row_indices_by_commit: &BTreeMap>, staged_commits: &BTreeMap, untracked_row_indices: &[RowIndex], ) -> Result>>, LixError> { let mut pack_indexes_by_commit = BTreeMap::new(); for (commit_id, row_indices) in tracked_row_indices_by_commit { let staged_commit = staged_commits.get(commit_id).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!("commit '{commit_id}' has tracked JSON rows but no staged commit-store locators"), ) })?; if row_indices.len() != staged_commit.authored_locators.len() { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "commit '{commit_id}' has {} tracked JSON rows but {} authored locators", row_indices.len(), staged_commit.authored_locators.len() ), )); } let mut row_indices_by_pack = BTreeMap::>::new(); for (&row_index, locator) in row_indices.iter().zip(&staged_commit.authored_locators) { row_indices_by_pack .entry(locator.source_pack_id) .or_default() .push(row_index); } for (pack_id, pack_row_indices) in row_indices_by_pack { let report = json_writer.stage_batch_report( writes, JsonWritePlacementRef::CommitPack { commit_id, pack_id }, pack_row_indices .iter() .flat_map(|&row_index| json_payloads_from_state_row(&state_rows[row_index])), )?; pack_indexes_by_commit .entry(commit_id.clone()) .or_insert_with(BTreeMap::new) .insert(pack_id, report.pack_indexes); } } json_writer.stage_batch( writes, JsonWritePlacementRef::OutOfBand, untracked_row_indices .iter() .flat_map(|&row_index| json_payloads_from_state_row(&state_rows[row_index])), )?; Ok(pack_indexes_by_commit) } fn json_payloads_from_state_row( row: &PreparedStateRow, ) -> impl Iterator> { row.snapshot .iter() .chain(row.metadata.iter()) .map(|json| NormalizedJsonRef::trusted_prehashed(json.normalized.as_ref(), json.json_ref)) } async fn existing_untracked_overlay_delete_identities<'a>( transaction: &mut (impl StorageReader + ?Sized), identities: impl IntoIterator>, ) -> Result, LixError> { UntrackedStateContext::new() .reader(transaction) .existing_identities(identities) .await } struct PreparedRowIndex { canonical_row_indices: Vec, untracked_row_indices: Vec, tracked_row_indices_by_commit: BTreeMap>, } struct PreparedAdoptedRowIndex { tracked_row_indices_by_commit: BTreeMap>, 
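// Maps each commit id to indices into `adopted_rows`, mirroring
// `PreparedRowIndex::tracked_row_indices_by_commit` for authored rows.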
} fn index_prepared_rows(rows: &[PreparedStateRow]) -> Result { let mut canonical_row_indices = Vec::new(); let mut untracked_row_indices = Vec::new(); let mut tracked_row_indices_by_commit = BTreeMap::>::new(); for (row_index, row) in rows.iter().enumerate() { if row.untracked { untracked_row_indices.push(row_index); continue; } let Some(commit_id) = row.commit_id.as_ref() else { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, "tracked prepared row is missing commit_id before commit indexing", )); }; canonical_row_indices.push(row_index); tracked_row_indices_by_commit .entry(commit_id.clone()) .or_default() .push(row_index); } Ok(PreparedRowIndex { canonical_row_indices, untracked_row_indices, tracked_row_indices_by_commit, }) } fn index_adopted_rows(rows: &[PreparedAdoptedStateRow]) -> PreparedAdoptedRowIndex { let mut tracked_row_indices_by_commit = BTreeMap::>::new(); for (row_index, row) in rows.iter().enumerate() { tracked_row_indices_by_commit .entry(row.commit_id.clone()) .or_default() .push(row_index); } PreparedAdoptedRowIndex { tracked_row_indices_by_commit, } } async fn stage_commit_store_commits( commit_store: &CommitStoreContext, transaction: &mut (impl StorageReader + ?Sized), writes: &mut StorageWriteSet, state_rows: &[PreparedStateRow], tracked_row_indices_by_commit: &BTreeMap>, adopted_rows: &[PreparedAdoptedStateRow], adopted_row_indices_by_commit: &BTreeMap>, commit_rows: &[FinalizedCommitRow], ) -> Result, LixError> { let mut commits = Vec::with_capacity(commit_rows.len()); let mut commit_ids = Vec::with_capacity(commit_rows.len()); for commit_row in commit_rows { let state_row_indices = tracked_row_indices_by_commit .get(&commit_row.commit_id) .map(Vec::as_slice) .unwrap_or_default(); let adopted_row_indices = adopted_row_indices_by_commit .get(&commit_row.commit_id) .map(Vec::as_slice) .unwrap_or_default(); let mut authored_changes = Vec::with_capacity(state_row_indices.len()); for &row_index in state_row_indices { authored_changes.push(change_ref_from_state_row(&state_rows[row_index])?); } let mut adopted_changes = Vec::with_capacity(adopted_row_indices.len()); for &row_index in adopted_row_indices { adopted_changes.push(change_ref_from_adopted_row(&adopted_rows[row_index])); } let commit = CommitDraftRef { id: &commit_row.commit_id, change_id: &commit_row.change_id, parent_ids: &commit_row.parent_commit_ids, author_account_ids: &[], created_at: &commit_row.created_at, }; commit_ids.push(commit_row.commit_id.clone()); commits.push((commit, authored_changes, adopted_changes)); } let staged = commit_store .writer(transaction, writes) .stage_tracked_commit_drafts(commits) .await?; if staged.len() != commit_ids.len() { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "commit-store staged {} commits for {} finalized commit rows", staged.len(), commit_ids.len() ), )); } Ok(commit_ids.into_iter().zip(staged).collect()) } fn change_ref_from_state_row(row: &PreparedStateRow) -> Result, LixError> { let Some(change_id) = row.change_id.as_deref() else { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "tracked staged row is missing change_id before commit-store append", )); }; Ok(ChangeRef { id: change_id, entity_id: &row.entity_id, schema_key: &row.schema_key, file_id: row.file_id.as_deref(), snapshot_ref: row.snapshot.as_ref().map(|snapshot| &snapshot.json_ref), metadata_ref: row.metadata.as_ref().map(|metadata| &metadata.json_ref), created_at: &row.updated_at, }) } fn change_ref_from_adopted_row(row: &PreparedAdoptedStateRow) -> ChangeRef<'_> { 
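// Adopted rows always carry a change_id, so this conversion is infallible,
// unlike `change_ref_from_state_row` above.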
ChangeRef { id: &row.change_id, entity_id: &row.entity_id, schema_key: &row.schema_key, file_id: row.file_id.as_deref(), snapshot_ref: row.snapshot.as_ref().map(|snapshot| &snapshot.json_ref), metadata_ref: row.metadata.as_ref().map(|metadata| &metadata.json_ref), created_at: &row.updated_at, } } async fn stage_tracked_roots( transaction: &mut (impl StorageReader + ?Sized), writes: &mut StorageWriteSet, state_rows: &[PreparedStateRow], mut tracked_row_indices_by_commit: BTreeMap>, adopted_rows: &[PreparedAdoptedStateRow], mut adopted_row_indices_by_commit: BTreeMap>, tracked_roots: Vec, mut staged_commits: BTreeMap, json_pack_indexes_by_commit: BTreeMap< String, BTreeMap>, >, ) -> Result<(), LixError> { let tracked_state = TrackedStateContext::new(); let mut writer = tracked_state.writer(transaction, writes); for root in tracked_roots { let staged = staged_commits.remove(&root.commit_id).ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "tracked-state root for commit '{}' has no staged commit-store locators", root.commit_id ), ) })?; let state_row_indices = tracked_row_indices_by_commit .remove(&root.commit_id) .unwrap_or_default(); let adopted_row_indices = adopted_row_indices_by_commit .remove(&root.commit_id) .unwrap_or_default(); if state_row_indices.len() != staged.authored_locators.len() { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "commit '{}' has {} tracked authored rows but {} commit-store authored locators", root.commit_id, state_row_indices.len(), staged.authored_locators.len() ), )); } if adopted_row_indices.len() != staged.adopted_locators.len() { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "commit '{}' has {} tracked adopted rows but {} commit-store adopted locators", root.commit_id, adopted_row_indices.len(), staged.adopted_locators.len() ), )); } let authored_changes = state_row_indices .iter() .map(|&row_index| change_ref_from_state_row(&state_rows[row_index])) .collect::, _>>()?; let adopted_changes = adopted_row_indices .iter() .map(|&row_index| change_ref_from_adopted_row(&adopted_rows[row_index])) .collect::>(); let authored_updated_at = state_row_indices .iter() .map(|&row_index| state_rows[row_index].updated_at.as_str()) .collect::>(); let authored_created_at = state_row_indices .iter() .map(|&row_index| state_rows[row_index].created_at.as_str()) .collect::>(); let adopted_updated_at = adopted_row_indices .iter() .map(|&row_index| adopted_rows[row_index].updated_at.as_str()) .collect::>(); let adopted_created_at = adopted_row_indices .iter() .map(|&row_index| adopted_rows[row_index].created_at.as_str()) .collect::>(); let mut deltas = Vec::with_capacity(authored_changes.len() + adopted_changes.len()); deltas.extend( authored_changes .iter() .zip(&staged.authored_locators) .zip(authored_created_at) .zip(authored_updated_at) .map( |(((change, locator), created_at), updated_at)| TrackedStateDeltaRef { change: *change, locator: locator.as_ref(), created_at, updated_at, }, ), ); deltas.extend( adopted_changes .iter() .zip(&staged.adopted_locators) .zip(adopted_created_at) .zip(adopted_updated_at) .map( |(((change, locator), created_at), updated_at)| TrackedStateDeltaRef { change: *change, locator: locator.as_ref(), created_at, updated_at, }, ), ); if let Some(indexes) = json_pack_indexes_by_commit .get(&root.commit_id) .and_then(|packs| packs.get(&0)) { writer .stage_delta_with_json_pack_indexes( &root.commit_id, root.parent_commit_id.as_deref(), &deltas, crate::tracked_state::DeltaJsonPackIndexesRef { 
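// `indexes` are the pack indexes that stage_prepared_json_payloads reported
// for pack 0 of this commit's authored JSON payloads.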
commit_id: &root.commit_id, pack_id: 0, indexes, }, ) .await?; } else { writer .stage_delta(&root.commit_id, root.parent_commit_id.as_deref(), &deltas) .await?; } } if !tracked_row_indices_by_commit.is_empty() || !adopted_row_indices_by_commit.is_empty() { let mut commit_ids = tracked_row_indices_by_commit .keys() .chain(adopted_row_indices_by_commit.keys()) .cloned() .collect::>(); commit_ids.sort(); commit_ids.dedup(); return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "tracked live_state rows have no finalized root metadata for commit ids: {}", commit_ids.join(", ") ), )); } if !staged_commits.is_empty() { let commit_ids = staged_commits.keys().cloned().collect::>(); return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "commit-store staged commits without tracked root metadata: {}", commit_ids.join(", ") ), )); } Ok(()) } fn untracked_row_ref_from_state_row(row: &PreparedStateRow) -> UntrackedStateRowRef<'_> { UntrackedStateRowRef { entity_id: &row.entity_id, schema_key: &row.schema_key, file_id: row.file_id.as_deref(), snapshot_content: row .snapshot .as_ref() .map(|snapshot| snapshot.normalized.as_ref()), metadata: row .metadata .as_ref() .map(|metadata| metadata.normalized.as_ref()), created_at: &row.created_at, updated_at: &row.updated_at, global: row.global, version_id: &row.version_id, } } fn untracked_identity_ref_from_state_row(row: &PreparedStateRow) -> UntrackedStateIdentityRef<'_> { UntrackedStateIdentityRef { version_id: &row.version_id, schema_key: &row.schema_key, entity_id: &row.entity_id, file_id: row.file_id.as_deref(), } } fn untracked_identity_ref_from_adopted_row( row: &PreparedAdoptedStateRow, ) -> UntrackedStateIdentityRef<'_> { UntrackedStateIdentityRef { version_id: &row.version_id, schema_key: &row.schema_key, entity_id: &row.entity_id, file_id: row.file_id.as_deref(), } } /// Materializes tracked staged membership into commit-store commits. /// /// Staging only accumulates `version_id -> change_ids` because commit ids, /// parent heads, and commit-row timestamps belong to transaction finalization. /// The `change_ids` list is the ordered set of canonical changes whose effects /// the commit introduces relative to its first parent; merge commits may later /// populate this list with existing source-parent changes instead of copied /// change payloads. /// This function turns those membership sets into finalized commit facts. /// /// Commit finalization output split by durability target. /// /// `commit_rows` are canonical commit-store facts. live_state later projects /// commit SQL surfaces from commit_store; tracked_state roots do not store /// commit graph facts. /// /// `version_heads` are moving refs. They are written through `VersionContext`, /// not the canonical commit store. 
struct FinalizedCommitRows { commit_rows: Vec, version_heads: Vec, tracked_roots: Vec, } struct FinalizedCommitRow { commit_id: String, parent_commit_ids: Vec, created_at: String, change_id: String, } struct PendingVersionHead { version_id: String, commit_id: String, timestamp: String, } struct PendingTrackedRoot { commit_id: String, parent_commit_id: Option, } async fn finalize_commit_rows( commit_members_by_version: BTreeMap, extra_commit_parents_by_version: BTreeMap>, version_ctx: &VersionContext, transaction: &mut (impl StorageReader + ?Sized), ) -> Result { let mut commit_rows = Vec::new(); let mut version_heads = Vec::new(); let mut tracked_roots = Vec::new(); for (version_id, members) in commit_members_by_version { if members.is_empty() && !members.allow_empty { continue; } let commit_id = members.commit_id; let commit_change_id = members.commit_change_id; let timestamp = members.created_at; let _change_ids = members.change_ids; let parent_commit_ids = version_ctx .ref_reader(&mut *transaction) .load_head_commit_id(&version_id) .await? .into_iter() .collect::>(); let parent_commit_ids = merge_parent_commit_ids( parent_commit_ids, extra_commit_parents_by_version .get(&version_id) .cloned() .unwrap_or_default(), ); let parent_commit_id = parent_commit_ids.first().cloned(); commit_rows.push(FinalizedCommitRow { commit_id: commit_id.clone(), parent_commit_ids: parent_commit_ids.clone(), created_at: timestamp.clone(), change_id: commit_change_id, }); version_heads.push(PendingVersionHead { version_id: version_id.clone(), commit_id: commit_id.clone(), timestamp, }); tracked_roots.push(PendingTrackedRoot { commit_id, parent_commit_id, }); } Ok(FinalizedCommitRows { commit_rows, version_heads, tracked_roots, }) } fn merge_parent_commit_ids(mut base: Vec, extra: Vec) -> Vec { for parent in extra { if !base.contains(&parent) { base.push(parent); } } base } #[cfg(test)] mod tests { use std::collections::BTreeMap; use std::sync::{ atomic::{AtomicUsize, Ordering}, Arc, }; use super::*; use crate::backend::{ testing::UnitTestBackend, Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRequest, BackendKvValueBatch, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction, }; use crate::catalog::SchemaPlanId; use crate::commit_store::{ChangeIndexEntry, ChangeLocator}; use crate::live_state::{LiveStateContext, LiveStateRowRequest}; use crate::storage::StorageContext; use crate::transaction::types::PreparedRowFacts; use crate::untracked_state::{ MaterializedUntrackedStateRow, UntrackedStateContext, UntrackedStateRowRequest, }; use crate::version::VersionContext; use crate::NullableKeyFilter; use crate::GLOBAL_VERSION_ID; use async_trait::async_trait; const DETERMINISTIC_MODE_KEY: &str = "lix_deterministic_mode"; const DETERMINISTIC_SEQUENCE_KEY: &str = "lix_deterministic_sequence_number"; fn live_state_context() -> LiveStateContext { LiveStateContext::new( crate::tracked_state::TrackedStateContext::new(), crate::untracked_state::UntrackedStateContext::new(), crate::commit_graph::CommitGraphContext::new(), ) } #[tokio::test] async fn commit_staged_writes_appends_commit_store_and_updates_serving_projection() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let binary_cas = BinaryCasContext::new(); let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new())); let mut transaction = storage .begin_write_transaction() .await 
.expect("transaction should open"); let state_rows = vec![tracked_global_row("change-1")]; commit_prepared_writes( &binary_cas, &crate::commit_store::CommitStoreContext::new(), &version_ctx, None, transaction.as_mut(), PreparedWriteSet { insert_identities: BTreeMap::new(), state_rows, adopted_rows: Vec::new(), commit_members_by_version: BTreeMap::from([( GLOBAL_VERSION_ID.to_string(), members(["change-1"]), )]), extra_commit_parents_by_version: BTreeMap::new(), file_data_writes: Vec::new(), }, ) .await .expect("commit should flush staged rows"); transaction .commit() .await .expect("commit should persist kv"); let commit_reader = crate::commit_store::CommitStoreContext::new().reader(storage.clone()); let commit = commit_reader .load_commit("test-uuid-1") .await .expect("commit-store commit should load") .expect("commit-store commit should exist"); assert_eq!(commit.change_id, "test-uuid-2"); assert_eq!(commit.change_pack_count, 1); assert_eq!(commit.membership_pack_count, 0); let index_entries = commit_reader .load_change_index_entries(&["change-1".to_string(), "test-uuid-2".to_string()]) .await .expect("commit-store change index should load"); assert_eq!( index_entries, vec![ Some(ChangeIndexEntry::PackedChange { locator: ChangeLocator { source_commit_id: "test-uuid-1".to_string(), source_pack_id: 0, source_ordinal: 0, change_id: "change-1".to_string(), }, }), Some(ChangeIndexEntry::CommitHeader { commit_id: "test-uuid-1".to_string(), change_id: "test-uuid-2".to_string(), }), ] ); let change_pack = commit_reader .load_change_pack("test-uuid-1", 0) .await .expect("commit-store change pack should load") .expect("commit-store change pack should exist"); assert_eq!(change_pack.len(), 1); assert_eq!(change_pack[0].id, "change-1"); assert_eq!(change_pack[0].schema_key, "test_schema"); let loaded_head = version_ctx .ref_reader(storage.clone()) .load_head_commit_id(GLOBAL_VERSION_ID) .await .expect("version ref load should succeed"); assert_eq!(loaded_head.as_deref(), Some("test-uuid-1")); } #[tokio::test] async fn commit_with_only_untracked_writes_does_not_create_lix_commit() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let binary_cas = BinaryCasContext::new(); let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new())); let untracked_state = UntrackedStateContext::new(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let state_rows = vec![untracked_global_row("change-untracked")]; commit_prepared_writes( &binary_cas, &crate::commit_store::CommitStoreContext::new(), &version_ctx, None, transaction.as_mut(), PreparedWriteSet { insert_identities: BTreeMap::new(), state_rows, adopted_rows: Vec::new(), commit_members_by_version: BTreeMap::new(), extra_commit_parents_by_version: BTreeMap::new(), file_data_writes: Vec::new(), }, ) .await .expect("commit should flush untracked row"); transaction .commit() .await .expect("commit should persist kv"); let commit_reader = crate::commit_store::CommitStoreContext::new().reader(storage.clone()); let index_entries = commit_reader .load_change_index_entries(&["change-untracked".to_string()]) .await .expect("commit-store change index should load"); assert_eq!(index_entries, vec![None]); let loaded = { let mut untracked_reader = untracked_state.reader(storage.clone()); untracked_reader .load_row(&UntrackedStateRowRequest { schema_key: "test_schema".to_string(), version_id: GLOBAL_VERSION_ID.to_string(), entity_id: 
crate::entity_identity::EntityIdentity::single("entity-1"), file_id: NullableKeyFilter::Null, }) .await } .expect("untracked row load should succeed") .expect("untracked row should be persisted"); assert_eq!( loaded.snapshot_content.as_deref(), Some("{\"value\":\"untracked\"}") ); } #[tokio::test] async fn tracked_write_deletes_matching_untracked_overlay() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let binary_cas = BinaryCasContext::new(); let untracked_state = UntrackedStateContext::new(); let live_state = Arc::new(live_state_context()); let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new())); let mut seed_transaction = storage .begin_write_transaction() .await .expect("seed transaction should open"); let mut writes = StorageWriteSet::new(); let staged_row = untracked_global_row("change-untracked"); let canonical_row = crate::test_support::untracked_state_row_from_materialized( &mut writes, &MaterializedUntrackedStateRow::from(staged_row), ) .expect("untracked seed should canonicalize"); untracked_state .writer(&mut writes) .stage_rows(std::iter::once(canonical_row.as_ref())) .expect("untracked seed should write"); writes .apply(&mut seed_transaction.as_mut()) .await .expect("untracked seed should apply"); seed_transaction .commit() .await .expect("seed transaction should persist"); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let state_rows = vec![tracked_global_row("change-tracked")]; commit_prepared_writes( &binary_cas, &crate::commit_store::CommitStoreContext::new(), &version_ctx, None, transaction.as_mut(), PreparedWriteSet { insert_identities: BTreeMap::new(), state_rows, adopted_rows: Vec::new(), commit_members_by_version: BTreeMap::from([( GLOBAL_VERSION_ID.to_string(), members(["change-tracked"]), )]), extra_commit_parents_by_version: BTreeMap::new(), file_data_writes: Vec::new(), }, ) .await .expect("tracked commit should flush"); transaction .commit() .await .expect("commit should persist kv"); let untracked = { let mut untracked_reader = untracked_state.reader(storage.clone()); untracked_reader.load_row(&untracked_request()).await } .expect("untracked load should succeed"); assert_eq!(untracked, None); let visible = live_state .reader(storage.clone()) .load_row(&live_state_request()) .await .expect("live-state load should succeed") .expect("tracked row should be visible"); assert!(!visible.untracked); assert_eq!(visible.change_id.as_deref(), Some("change-tracked")); assert_eq!(visible.snapshot_content.as_deref(), Some("{\"value\":1}")); } #[tokio::test] async fn commit_staged_writes_applies_cross_subsystem_rows_as_one_backend_batch() { let counting_backend = Arc::new(CountingBackend::new()); let write_batches = counting_backend.write_batches(); let backend: Arc = counting_backend; let storage = StorageContext::new(backend); let binary_cas = BinaryCasContext::new(); let live_state = Arc::new(live_state_context()); let untracked_state = UntrackedStateContext::new(); let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new())); crate::test_support::seed_global_version_head(storage.clone()).await; { let mut seed_transaction = storage .begin_write_transaction() .await .expect("seed transaction should open"); let mut writes = StorageWriteSet::new(); let mode_snapshot = serde_json::to_string(&serde_json::json!({ "key": DETERMINISTIC_MODE_KEY, "value": { "enabled": true }, })) .expect("mode snapshot should serialize"); 
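        // Seed the deterministic-mode key before the commit under test: the JSON
        // snapshot is staged out-of-band in the JSON store and mirrored as an
        // untracked `lix_key_value` row, so the runtime function provider sees
        // deterministic mode enabled and produces the stable ids and timestamps
        // asserted below.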
JsonStoreContext::new() .writer() .stage_batch( &mut writes, JsonWritePlacementRef::OutOfBand, [NormalizedJsonRef::new(mode_snapshot.as_str())], ) .expect("deterministic mode snapshot should stage"); let row = crate::untracked_state::UntrackedStateRow { entity_id: crate::entity_identity::EntityIdentity::single(DETERMINISTIC_MODE_KEY), schema_key: "lix_key_value".to_string(), file_id: None, snapshot_content: Some(mode_snapshot.to_string()), metadata: None, created_at: "2026-01-01T00:00:00Z".to_string(), updated_at: "2026-01-01T00:00:00Z".to_string(), global: true, version_id: GLOBAL_VERSION_ID.to_string(), }; UntrackedStateContext::new() .writer(&mut writes) .stage_rows(std::iter::once(row.as_ref())) .expect("deterministic mode should stage"); writes .apply(&mut seed_transaction.as_mut()) .await .expect("deterministic mode should apply"); seed_transaction .commit() .await .expect("seed transaction should persist"); } write_batches.store(0, Ordering::SeqCst); let runtime_functions = { let reader = live_state.reader(storage.clone()); FunctionContext::prepare(&reader) .await .expect("runtime context should prepare") }; runtime_functions.provider().call_uuid_v7(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let tracked_row = tracked_global_row("change-tracked"); let mut untracked_row = untracked_global_row("change-untracked"); untracked_row.entity_id = crate::entity_identity::EntityIdentity::single("entity-2"); commit_prepared_writes( &binary_cas, &crate::commit_store::CommitStoreContext::new(), &version_ctx, Some(&runtime_functions), transaction.as_mut(), PreparedWriteSet { insert_identities: BTreeMap::new(), state_rows: vec![tracked_row, untracked_row], adopted_rows: Vec::new(), commit_members_by_version: BTreeMap::from([( GLOBAL_VERSION_ID.to_string(), members(["change-tracked"]), )]), extra_commit_parents_by_version: BTreeMap::new(), file_data_writes: Vec::new(), }, ) .await .expect("cross-subsystem commit should stage and apply"); assert_eq!( write_batches.load(Ordering::SeqCst), 1, "tracked, json, untracked, commit-store, and version refs must apply as one backend write batch" ); transaction .commit() .await .expect("commit should persist kv"); assert_eq!(write_batches.load(Ordering::SeqCst), 1); let commit_reader = crate::commit_store::CommitStoreContext::new().reader(storage.clone()); let commit = commit_reader .load_commit("test-uuid-1") .await .expect("commit-store commit should load") .expect("commit-store commit should exist"); assert_eq!(commit.change_id, "test-uuid-2"); let index_entries = commit_reader .load_change_index_entries(&["change-tracked".to_string()]) .await .expect("commit-store change index should load"); assert!(matches!( index_entries.as_slice(), [Some(ChangeIndexEntry::PackedChange { .. 
})] )); let loaded_head = version_ctx .ref_reader(storage.clone()) .load_head_commit_id(GLOBAL_VERSION_ID) .await .expect("version ref load should succeed"); assert_eq!(loaded_head.as_deref(), Some("test-uuid-1")); let untracked = { let mut untracked_reader = untracked_state.reader(storage.clone()); untracked_reader .load_row(&UntrackedStateRowRequest { schema_key: "test_schema".to_string(), version_id: GLOBAL_VERSION_ID.to_string(), entity_id: crate::entity_identity::EntityIdentity::single("entity-2"), file_id: NullableKeyFilter::Null, }) .await } .expect("untracked row load should succeed") .expect("untracked row should persist"); assert_eq!( untracked.snapshot_content.as_deref(), Some("{\"value\":\"untracked\"}") ); let sequence_row = live_state .reader(storage.clone()) .load_row(&LiveStateRowRequest { schema_key: "lix_key_value".to_string(), version_id: GLOBAL_VERSION_ID.to_string(), entity_id: crate::entity_identity::EntityIdentity::single( DETERMINISTIC_SEQUENCE_KEY, ), file_id: NullableKeyFilter::Null, }) .await .expect("deterministic sequence should load") .expect("deterministic sequence should persist"); assert_eq!( sequence_row.snapshot_content.as_deref(), Some("{\"key\":\"lix_deterministic_sequence_number\",\"value\":0}") ); } #[tokio::test] async fn non_global_tracked_write_creates_one_commit_and_advances_only_touched_version() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let binary_cas = BinaryCasContext::new(); let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new())); crate::test_support::seed_version_head(storage.clone(), GLOBAL_VERSION_ID, "global-before") .await; crate::test_support::seed_version_head(storage.clone(), "version-a", "version-a-before") .await; let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let state_rows = vec![tracked_version_row("version-a", "change-version-a")]; commit_prepared_writes( &binary_cas, &crate::commit_store::CommitStoreContext::new(), &version_ctx, None, transaction.as_mut(), PreparedWriteSet { insert_identities: BTreeMap::new(), state_rows, adopted_rows: Vec::new(), commit_members_by_version: BTreeMap::from([( "version-a".to_string(), members(["change-version-a"]), )]), extra_commit_parents_by_version: BTreeMap::new(), file_data_writes: Vec::new(), }, ) .await .expect("version commit should flush"); transaction .commit() .await .expect("commit should persist kv"); let commit_reader = crate::commit_store::CommitStoreContext::new().reader(storage.clone()); let commit = commit_reader .load_commit("test-uuid-1") .await .expect("commit-store commit should load") .expect("commit-store commit should exist"); assert_eq!(commit.change_id, "test-uuid-2"); assert_eq!(commit.parent_ids, vec!["version-a-before"]); let index_entries = commit_reader .load_change_index_entries(&["change-version-a".to_string()]) .await .expect("commit-store change index should load"); assert!(matches!( index_entries.as_slice(), [Some(ChangeIndexEntry::PackedChange { .. 
})] )); let global_head = version_ctx .ref_reader(storage.clone()) .load_head_commit_id(GLOBAL_VERSION_ID) .await .expect("global head should load"); let version_head = version_ctx .ref_reader(storage.clone()) .load_head_commit_id("version-a") .await .expect("version head should load"); assert_eq!(global_head.as_deref(), Some("global-before")); assert_eq!(version_head.as_deref(), Some("test-uuid-1")); } #[tokio::test] async fn finalize_commit_rows_parents_global_commit_to_existing_version_ref() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new())); crate::test_support::seed_version_head( storage.clone(), GLOBAL_VERSION_ID, "initial-commit", ) .await; let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let rows = finalize_commit_rows( BTreeMap::from([( GLOBAL_VERSION_ID.to_string(), members(["change-a", "change-b"]), )]), BTreeMap::new(), &version_ctx, transaction.as_mut(), ) .await .expect("global commit row should finalize"); assert_eq!(rows.commit_rows.len(), 1); assert_eq!(rows.version_heads.len(), 1); let row = &rows.commit_rows[0]; assert_eq!(row.commit_id, "test-uuid-1"); assert_eq!(row.change_id, "test-uuid-2"); assert_eq!(row.created_at, "test-timestamp-1"); assert_eq!(row.parent_commit_ids, vec!["initial-commit"]); let version_head = &rows.version_heads[0]; assert_eq!(version_head.version_id, GLOBAL_VERSION_ID); assert_eq!(version_head.commit_id, "test-uuid-1"); } #[tokio::test] async fn finalize_commit_rows_skips_empty_members() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new())); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let rows = finalize_commit_rows( BTreeMap::from([( GLOBAL_VERSION_ID.to_string(), StagedCommitMembers::default(), )]), BTreeMap::new(), &version_ctx, transaction.as_mut(), ) .await .expect("empty members should be ignored"); assert!(rows.commit_rows.is_empty()); assert!(rows.version_heads.is_empty()); } #[tokio::test] async fn finalize_commit_rows_uses_existing_version_ref_as_parent() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new())); crate::test_support::seed_version_head(storage.clone(), GLOBAL_VERSION_ID, "global-before") .await; crate::test_support::seed_version_head(storage.clone(), "version-a", "previous-commit") .await; let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let rows = finalize_commit_rows( BTreeMap::from([("version-a".to_string(), members(["change-a"]))]), BTreeMap::new(), &version_ctx, transaction.as_mut(), ) .await .expect("active-version commit finalization should resolve parent"); assert_eq!( rows.commit_rows[0].parent_commit_ids, vec!["previous-commit"] ); assert_eq!(rows.version_heads[0].version_id, "version-a"); } #[tokio::test] async fn finalize_commit_rows_appends_extra_merge_parent_after_target_head() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let version_ctx = VersionContext::new(Arc::new(UntrackedStateContext::new())); crate::test_support::seed_version_head(storage.clone(), "version-a", 
"target-head").await; let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let rows = finalize_commit_rows( BTreeMap::from([("version-a".to_string(), members(["change-a"]))]), BTreeMap::from([("version-a".to_string(), vec!["source-head".to_string()])]), &version_ctx, transaction.as_mut(), ) .await .expect("merge commit finalization should resolve parents"); assert_eq!( rows.commit_rows[0].parent_commit_ids, vec!["target-head", "source-head"] ); } fn members(change_ids: [&str; N]) -> StagedCommitMembers { let mut members = StagedCommitMembers::new( "test-uuid-1".to_string(), "test-uuid-2".to_string(), "test-timestamp-1".to_string(), ); for change_id in change_ids { members.add_change_id(change_id.to_string()); } members } fn tracked_global_row(change_id: &str) -> PreparedStateRow { tracked_version_row(GLOBAL_VERSION_ID, change_id) } fn tracked_version_row(version_id: &str, change_id: &str) -> PreparedStateRow { PreparedStateRow { schema_plan_id: SchemaPlanId::for_test(0), facts: PreparedRowFacts::default(), entity_id: crate::entity_identity::EntityIdentity::single("entity-1"), schema_key: "test_schema".to_string(), file_id: None, snapshot: Some( crate::transaction::types::stage_json_from_value( crate::transaction::types::TransactionJson::from_value_for_test( serde_json::json!({ "value": 1 }), ), "test tracked row snapshot", ) .expect("test snapshot should stage"), ), metadata: None, origin: None, created_at: "2026-01-01T00:00:00Z".to_string(), updated_at: "2026-01-01T00:00:00Z".to_string(), global: version_id == GLOBAL_VERSION_ID, change_id: Some(change_id.to_string()), commit_id: Some("test-uuid-1".to_string()), untracked: false, version_id: version_id.to_string(), } } fn untracked_global_row(change_id: &str) -> PreparedStateRow { let mut row = tracked_global_row(change_id); row.snapshot = Some( crate::transaction::types::stage_json_from_value( crate::transaction::types::TransactionJson::from_value_for_test( serde_json::json!({ "value": "untracked" }), ), "test untracked row snapshot", ) .expect("test snapshot should stage"), ); PreparedStateRow { change_id: None, commit_id: None, untracked: true, ..row } } fn untracked_request() -> UntrackedStateRowRequest { UntrackedStateRowRequest { schema_key: "test_schema".to_string(), version_id: GLOBAL_VERSION_ID.to_string(), entity_id: crate::entity_identity::EntityIdentity::single("entity-1"), file_id: NullableKeyFilter::Null, } } fn live_state_request() -> LiveStateRowRequest { LiveStateRowRequest { schema_key: "test_schema".to_string(), version_id: GLOBAL_VERSION_ID.to_string(), entity_id: crate::entity_identity::EntityIdentity::single("entity-1"), file_id: NullableKeyFilter::Null, } } struct CountingBackend { inner: UnitTestBackend, write_batches: Arc, } impl CountingBackend { fn new() -> Self { Self { inner: UnitTestBackend::new(), write_batches: Arc::new(AtomicUsize::new(0)), } } fn write_batches(&self) -> Arc { Arc::clone(&self.write_batches) } } #[async_trait] impl Backend for CountingBackend { async fn begin_read_transaction( &self, ) -> Result, LixError> { self.inner.begin_read_transaction().await } async fn begin_write_transaction( &self, ) -> Result, LixError> { Ok(Box::new(CountingWriteTransaction { inner: self.inner.begin_write_transaction().await?, write_batches: Arc::clone(&self.write_batches), })) } } struct CountingWriteTransaction { inner: Box, write_batches: Arc, } #[async_trait] impl BackendReadTransaction for CountingWriteTransaction { async fn get_values( &mut self, request: 
BackendKvGetRequest, ) -> Result { self.inner.get_values(request).await } async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result { self.inner.exists_many(request).await } async fn scan_keys( &mut self, request: BackendKvScanRequest, ) -> Result { self.inner.scan_keys(request).await } async fn scan_values( &mut self, request: BackendKvScanRequest, ) -> Result { self.inner.scan_values(request).await } async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> Result { self.inner.scan_entries(request).await } async fn rollback(self: Box) -> Result<(), LixError> { let Self { inner, .. } = *self; inner.rollback().await } } #[async_trait] impl BackendWriteTransaction for CountingWriteTransaction { async fn write_kv_batch( &mut self, batch: BackendKvWriteBatch, ) -> Result { self.write_batches.fetch_add(1, Ordering::SeqCst); self.inner.write_kv_batch(batch).await } async fn commit(self: Box) -> Result<(), LixError> { let Self { inner, .. } = *self; inner.commit().await } } } ================================================ FILE: packages/engine/src/transaction/context.rs ================================================ use std::collections::{BTreeMap, BTreeSet}; use std::sync::Arc; use async_trait::async_trait; use serde_json::Value as JsonValue; use crate::binary_cas::{BinaryCasContext, BlobBytesBatch, BlobHash}; use crate::catalog::CatalogContext; use crate::commit_graph::{CommitGraphContext, CommitGraphStoreReader}; use crate::commit_store::CommitStoreContext; use crate::domain::Domain; use crate::entity_identity::EntityIdentity; use crate::functions::{FunctionContext, FunctionProviderHandle}; use crate::live_state::{ LiveStateContext, LiveStateRowRequest, LiveStateScanRequest, MaterializedLiveStateRow, }; use crate::session::{SessionMode, WORKSPACE_VERSION_KEY}; use crate::sql2::SqlWriteExecutionContext; use crate::storage::{StorageContext, StorageWriteSet, StorageWriteTransaction}; use crate::tracked_state::{TrackedStateContext, TrackedStateStoreReader}; use crate::transaction::commit; use crate::transaction::live_state_overlay::overlay_scan_rows; use crate::transaction::normalization::{ normalize_transaction_write_row, remember_pending_registered_schema, NormalizedTransactionWriteRow, REGISTERED_SCHEMA_KEY, }; use crate::transaction::prepare_version_ref_row; use crate::transaction::schema_resolver::TransactionSchemaResolver; use crate::transaction::staging::{PreparedWriteSet, TransactionWriteBuffer}; use crate::transaction::types::{ stage_json_from_value, PreparedAdoptedStateRow, PreparedRowFacts, PreparedStateRow, PreparedTransactionWrite, TransactionAdoptedChange, TransactionFileData, TransactionJson, TransactionWrite, TransactionWriteMode, TransactionWriteOutcome, TransactionWriteRow, }; use crate::transaction::validation::{validate_prepared_writes, TransactionValidationInput}; use crate::version::{VersionContext, VersionRefReader}; use crate::GLOBAL_VERSION_ID; use crate::{LixError, NullableKeyFilter}; #[derive(Debug, Clone, PartialEq, Eq, Default)] pub(crate) struct TransactionCommitOutcome; /// One execution-scoped transaction capability for engine write paths. /// /// This is intentionally not a session-wide kitchen sink. It owns the backend /// write transaction for one `SessionContext::execute(...)` call and projects /// accepted SQL/provider writes back into the SQL DAG through an engine-local live-state /// overlay. /// /// Transaction invariant: this is the capability for engine operations /// that may write. 
Write-relevant reads must be exposed from this transaction, /// after the backend write transaction has begun, rather than from session-level /// helpers. pub(crate) struct Transaction { active_version_id: String, live_state: Arc, tracked_state: Arc, binary_cas: Arc, commit_store: Arc, version_ctx: Arc, schema_resolver: TransactionSchemaResolver, staged_writes: Arc, storage_transaction: Box, visible_schemas: Vec, functions: FunctionProviderHandle, } impl Transaction { /// Opens a backend write transaction and creates an execution-scoped /// staging area for SQL/provider hooks. async fn open( mode: &SessionMode, storage: StorageContext, live_state: Arc, tracked_state: Arc, binary_cas: Arc, commit_store: Arc, version_ctx: Arc, catalog_context: Arc, ) -> Result { let mut storage_transaction = storage.begin_write_transaction().await?; let setup_result = async { let active_version_id = resolve_active_version_id( mode, live_state.as_ref(), version_ctx.as_ref(), storage_transaction.as_mut(), ) .await?; let runtime_functions = { let runtime_live_state = live_state.reader(storage_transaction.as_mut()); FunctionContext::prepare(&runtime_live_state).await? }; let functions = runtime_functions.provider(); let visible_schemas = { let visible_live_state = live_state.reader(storage_transaction.as_mut()); catalog_context .schema_jsons_for_sql_read_planning(&visible_live_state, &active_version_id) .await? }; let schema_facts = { let visible_live_state = live_state.reader(storage_transaction.as_mut()); catalog_context .schema_facts_for_domain( &visible_live_state, &Domain::schema_catalog(active_version_id.clone(), true), ) .await? }; Ok::<_, LixError>(( active_version_id, runtime_functions, functions, visible_schemas, schema_facts, )) } .await; let (active_version_id, runtime_functions, functions, visible_schemas, schema_facts) = match setup_result { Ok(result) => result, Err(error) => { let _ = storage_transaction.rollback().await; return Err(error); } }; let mut schema_resolver = TransactionSchemaResolver::new(catalog_context); schema_resolver.remember_schema_facts( &Domain::schema_catalog(active_version_id.clone(), true), schema_facts, ); let staged_writes = Arc::new(TransactionWriteBuffer::new(functions.clone())); Ok(OpenTransaction { transaction: Self { active_version_id, live_state, tracked_state, binary_cas, commit_store, version_ctx, schema_resolver, staged_writes, storage_transaction, visible_schemas, functions, }, runtime_functions, }) } /// Commits prepared writes, runtime function state, and the backend transaction. /// /// Commit owns the execution boundary: prepared rows become commit-store /// facts, version-ref updates, and visible live_state rows before the /// backend transaction is committed. 
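    /// A minimal sketch of the open/stage/commit flow, mirroring the unit tests in
    /// this module; the concrete context values and the `rows` variable are
    /// assumptions for illustration, not a fixed API recipe.
    ///
    /// ```ignore
    /// let opened = open_transaction(
    ///     &SessionMode::Pinned { version_id: GLOBAL_VERSION_ID.to_string() },
    ///     storage.clone(),
    ///     live_state,
    ///     tracked_state,
    ///     binary_cas,
    ///     commit_store,
    ///     version_ctx,
    ///     catalog_context,
    /// )
    /// .await?;
    /// let mut transaction = opened.transaction;
    /// transaction.stage_rows(rows).await?;
    /// transaction.commit(&opened.runtime_functions).await?;
    /// ```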
pub(crate) async fn commit( mut self, runtime_functions: &FunctionContext, ) -> Result { let prepared_writes = match self.staged_writes.drain() { Ok(prepared_writes) => prepared_writes, Err(error) => { let _ = self.storage_transaction.rollback().await; return Err(error); } }; if let Err(error) = self .validate_prepared_writes_by_version(&prepared_writes) .await { let _ = self.storage_transaction.rollback().await; return Err(error); } if let Err(error) = commit::commit_prepared_writes( &self.binary_cas, &self.commit_store, self.version_ctx.as_ref(), Some(runtime_functions), self.storage_transaction.as_mut(), prepared_writes, ) .await { let _ = self.storage_transaction.rollback().await; return Err(error); } self.storage_transaction.commit().await?; Ok(TransactionCommitOutcome::default()) } /// Rolls back the backend transaction. /// /// This is the explicit failure path for a write execution. Dropping the /// buffered transaction without commit is not the API we want callers to /// rely on. #[allow(dead_code)] pub(crate) async fn rollback(self) -> Result<(), LixError> { self.storage_transaction.rollback().await } /// Stages one decoded write batch into this transaction. /// /// This is the programmatic write entrypoint used by non-SQL APIs. The /// transaction still owns preparation from `TransactionWriteRow` into /// `PreparedStateRow`, so generated timestamps, change ids, commit ids, and /// commit membership stay in one place. #[allow(dead_code)] pub(crate) async fn stage_write( &mut self, write: TransactionWrite, ) -> Result { require_valid_transaction_write_storage_scopes(&write)?; #[cfg(feature = "storage-benches")] { crate::storage_bench::record_transaction_rows_staged(transaction_write_row_count( &write, )); crate::storage_bench::record_transaction_untracked_rows( transaction_write_untracked_row_count(&write), ); } self.require_existing_transaction_write_version_ids(&write) .await?; let write = self.prepare_transaction_write(write).await?; self.staged_writes.stage_write(write) } async fn prepare_transaction_write( &mut self, write: TransactionWrite, ) -> Result { Ok(match write { TransactionWrite::Rows { mode, rows } => PreparedTransactionWrite::Rows { mode, rows: self.prepare_transaction_rows(rows).await?, }, TransactionWrite::RowsWithFileData { mode, rows, file_data, count, } => PreparedTransactionWrite::RowsWithFileData { mode, rows: self.prepare_transaction_rows(rows).await?, file_data, count, }, TransactionWrite::AdoptedChanges { changes } => { PreparedTransactionWrite::AdoptedChanges { rows: self.prepare_adopted_changes(changes).await?, } } }) } async fn prepare_transaction_rows( &mut self, rows: Vec, ) -> Result, LixError> { let row_count = rows.len(); let staged = self.staged_writes.staging_overlay()?; let live_state = self.live_state.reader(self.storage_transaction.as_mut()); let mut rows_by_scope = BTreeMap::>::new(); for (index, row) in rows.into_iter().enumerate() { rows_by_scope .entry(Domain::schema_catalog( row.schema_scope_version_id().to_string(), row.untracked, )) .or_default() .push((index, row)); } let mut prepared_rows = Vec::with_capacity(row_count); prepared_rows.resize_with(row_count, || None); for (domain, rows) in rows_by_scope { let functions = self.functions.clone(); let catalog = self .schema_resolver .catalog_for_row_normalization(&live_state, &staged, &domain) .await?; for (_, row) in &rows { if row.schema_key != REGISTERED_SCHEMA_KEY { continue; } if row.file_id.is_some() { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, 
"lix_registered_schema rows must not be scoped to a file", ) .with_hint("Schema definitions are scoped by version and durability only; write them with null file_id.")); } remember_pending_registered_schema( row.snapshot.as_ref().map(TransactionJson::value), Domain::schema_catalog( row.schema_scope_version_id().to_string(), row.untracked, ), catalog, )?; } let normalized_rows = rows .into_iter() .map(|(index, row)| { normalize_transaction_write_row(row, catalog, functions.clone()) .map(|row| (index, row)) }) .collect::, _>>()?; for (index, row) in normalized_rows { prepared_rows[index] = Some(prepare_state_row(row, &functions)?); } } Ok(prepared_rows .into_iter() .map(|row| { row.expect("every row should be prepared exactly once by schema scope grouping") }) .collect()) } async fn prepare_adopted_changes( &mut self, changes: Vec, ) -> Result, LixError> { let change_count = changes.len(); let staged = self.staged_writes.staging_overlay()?; let live_state = self.live_state.reader(self.storage_transaction.as_mut()); let mut changes_by_scope = BTreeMap::>::new(); for (index, change) in changes.into_iter().enumerate() { let schema_scope_version_id = if change.version_id == GLOBAL_VERSION_ID { GLOBAL_VERSION_ID } else { change.version_id.as_str() }; changes_by_scope .entry(Domain::schema_catalog( schema_scope_version_id.to_string(), false, )) .or_default() .push((index, change)); } let mut prepared_rows = Vec::with_capacity(change_count); prepared_rows.resize_with(change_count, || None); for (domain, changes) in changes_by_scope { let catalog = self .schema_resolver .catalog_for_row_normalization(&live_state, &staged, &domain) .await?; for (_, change) in &changes { let row = &change.projected_row; if row.schema_key != REGISTERED_SCHEMA_KEY { continue; } if row.file_id.is_some() { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, "lix_registered_schema rows must not be scoped to a file", ) .with_hint("Schema definitions are scoped by version and durability only; write them with null file_id.")); } remember_adopted_registered_schema( Domain::schema_catalog(change.version_id.clone(), false), row.snapshot_content.as_deref(), catalog, )?; } let mut planned_changes = Vec::with_capacity(changes.len()); for (index, change) in changes { let row = &change.projected_row; let Some((schema_plan_id, _)) = catalog.plan_for_key(&row.schema_key) else { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "schema '{}' is not visible to this transaction", row.schema_key ), )); }; if row.schema_key == REGISTERED_SCHEMA_KEY { if row.file_id.is_some() { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, "lix_registered_schema rows must not be scoped to a file", ) .with_hint("Schema definitions are scoped by version and durability only; write them with null file_id.")); } remember_adopted_registered_schema( Domain::schema_catalog(change.version_id.clone(), false), row.snapshot_content.as_deref(), catalog, )?; } planned_changes.push((index, change, schema_plan_id)); } for (index, change, schema_plan_id) in planned_changes { prepared_rows[index] = Some(prepare_adopted_state_row(change, schema_plan_id)?); } } Ok(prepared_rows .into_iter() .map(|row| row.expect("every adopted row should be prepared exactly once")) .collect()) } async fn validate_prepared_writes_by_version( &mut self, prepared_writes: &PreparedWriteSet, ) -> Result<(), LixError> { let validation_index = prepared_writes.validation_index(); for scope in validation_index.schema_scopes() { #[cfg(feature = "storage-benches")] 
crate::storage_bench::record_transaction_validation_version(); let version_prepared_writes = validation_index.validation_set_for_schema_scope(scope); let live_state = self.live_state.reader(self.storage_transaction.as_mut()); let schema_catalog = self .schema_resolver .catalog_for_validation(&live_state, scope) .await?; validate_prepared_writes(TransactionValidationInput::new( &version_prepared_writes, &schema_catalog, &live_state, )) .await?; } Ok(()) } /// Convenience helper for programmatic APIs that only stage state rows. #[allow(dead_code)] pub(crate) async fn stage_rows( &mut self, rows: Vec, ) -> Result { self.stage_write(TransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows, }) .await } async fn require_existing_transaction_write_version_ids( &mut self, write: &TransactionWrite, ) -> Result<(), LixError> { let version_ids = transaction_write_version_ids(write); let reader = self .version_ctx .ref_reader(self.storage_transaction.as_mut()); for version_id in version_ids { if version_id == GLOBAL_VERSION_ID { continue; } if reader.load_head_commit_id(&version_id).await?.is_none() { return Err(LixError::version_not_found( version_id, "stage_write", "target", )); } } Ok(()) } /// Returns the active version resolved inside this write transaction. pub(crate) fn active_version_id(&self) -> &str { &self.active_version_id } /// Returns this transaction's prepared runtime functions. pub(crate) fn functions(&self) -> FunctionProviderHandle { self.functions.clone() } /// Adds an extra parent to the commit generated for `version_id`. /// /// Merge uses this to preserve source-branch ancestry. Ordinary writes do /// not call this because commit finalization already parents to the /// version's previous head. pub(crate) fn add_commit_parent( &self, version_id: String, parent_commit_id: String, ) -> Result<(), LixError> { self.staged_writes .add_commit_parent(version_id, parent_commit_id) } /// Advances a version ref without staging tracked rows. /// /// Fast-forward merges use this path because the commit graph already /// contains the source head; the target ref only needs to move to it. pub(crate) async fn advance_version_ref( &mut self, version_id: &str, commit_id: &str, ) -> Result<(), LixError> { let timestamp = self.functions.call_timestamp(); let mut writes = StorageWriteSet::new(); let canonical_row = prepare_version_ref_row(version_id, commit_id, ×tamp)?; self.version_ctx .stage_canonical_ref_rows(&mut writes, &[canonical_row.row])?; writes .apply(&mut self.storage_transaction.as_mut()) .await .map(|_| ()) } /// Returns the commit id currently staged for `version_id`, if tracked rows /// have been staged for that version. pub(crate) fn staged_commit_id(&self, version_id: &str) -> Result, LixError> { self.staged_writes.staged_commit_id(version_id) } /// Stages a commit for `version_id` even if no tracked rows changed. pub(crate) fn stage_empty_commit(&self, version_id: String) -> Result { self.staged_writes.stage_empty_commit(version_id) } /// Creates a version-ref reader scoped to this write transaction. pub(crate) fn version_ref_reader(&mut self) -> impl VersionRefReader + '_ { self.version_ctx .ref_reader(self.storage_transaction.as_mut()) } /// Creates a tracked-state reader scoped to this write transaction. pub(crate) fn tracked_state_reader( &mut self, ) -> TrackedStateStoreReader<&mut dyn StorageWriteTransaction> { self.tracked_state.reader(self.storage_transaction.as_mut()) } /// Creates a commit-graph reader scoped to this write transaction. 
pub(crate) fn commit_graph_reader( &mut self, ) -> CommitGraphStoreReader<&mut dyn StorageWriteTransaction> { CommitGraphContext::new().reader(self.storage_transaction.as_mut()) } } fn prepare_state_row( normalized: NormalizedTransactionWriteRow, functions: &FunctionProviderHandle, ) -> Result { let NormalizedTransactionWriteRow { row, snapshot, schema_plan_id, facts, } = normalized; let updated_at = row.updated_at.unwrap_or_else(|| functions.call_timestamp()); let snapshot = snapshot .map(|value| stage_json_from_value(value, "prepared row snapshot_content")) .transpose()?; let metadata = row .metadata .map(|value| stage_json_from_value(value, "prepared row metadata")) .transpose()?; Ok(PreparedStateRow { schema_plan_id, facts, entity_id: row.entity_id.ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "normalized transaction write row is missing entity_id", ) })?, schema_key: row.schema_key, file_id: row.file_id, snapshot, metadata, origin: row.origin, created_at: row.created_at.unwrap_or_else(|| updated_at.clone()), updated_at, global: row.global, change_id: if row.untracked { row.change_id } else { Some(row.change_id.unwrap_or_else(|| functions.call_uuid_v7())) }, commit_id: row.commit_id, untracked: row.untracked, version_id: row.version_id, }) } fn remember_adopted_registered_schema( domain: Domain, snapshot_content: Option<&str>, catalog: &mut crate::catalog::CatalogSnapshot, ) -> Result<(), LixError> { let snapshot = snapshot_content .map(|value| { serde_json::from_str::(value).map_err(|error| { LixError::new( LixError::CODE_UNKNOWN, format!("adopted registered schema snapshot_content is invalid JSON: {error}"), ) }) }) .transpose()?; remember_pending_registered_schema(snapshot.as_ref(), domain, catalog) } fn prepare_adopted_state_row( change: TransactionAdoptedChange, schema_plan_id: crate::catalog::SchemaPlanId, ) -> Result { if change.change_id != change.projected_row.change_id { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "adopted change '{}' does not match projected row change_id '{}'", change.change_id, change.projected_row.change_id ), )); } let row = change.projected_row; let snapshot = row .snapshot_content .as_deref() .map(|value| stage_materialized_json_text(value, "adopted row snapshot_content")) .transpose()?; let metadata = row .metadata .as_deref() .map(|value| stage_materialized_json_text(value, "adopted row metadata")) .transpose()?; Ok(PreparedAdoptedStateRow { schema_plan_id, facts: PreparedRowFacts::default(), entity_id: row.entity_id, schema_key: row.schema_key, file_id: row.file_id, snapshot, metadata, created_at: row.created_at, updated_at: row.updated_at, global: change.version_id == GLOBAL_VERSION_ID, change_id: change.change_id, commit_id: String::new(), version_id: change.version_id, }) } fn stage_materialized_json_text( value: &str, context: &str, ) -> Result { let parsed = serde_json::from_str::(value).map_err(|error| { LixError::new( LixError::CODE_UNKNOWN, format!("{context} is invalid JSON: {error}"), ) })?; let prepared = TransactionJson::from_value(parsed, context)?; stage_json_from_value(prepared, context) } pub(crate) struct OpenTransaction { pub(crate) transaction: Transaction, pub(crate) runtime_functions: FunctionContext, } pub(crate) async fn open_transaction( mode: &SessionMode, storage: StorageContext, live_state: Arc, tracked_state: Arc, binary_cas: Arc, commit_store: Arc, version_ctx: Arc, catalog_context: Arc, ) -> Result { Transaction::open( mode, storage, live_state, tracked_state, binary_cas, commit_store, 
version_ctx, catalog_context, ) .await } #[async_trait] impl SqlWriteExecutionContext for Transaction { fn active_version_id(&self) -> &str { &self.active_version_id } fn functions(&self) -> FunctionProviderHandle { self.functions.clone() } fn list_visible_schemas(&self) -> Result, LixError> { Ok(self.visible_schemas.clone()) } async fn load_bytes_many(&mut self, hashes: &[BlobHash]) -> Result { self.binary_cas .reader(self.storage_transaction.as_mut()) .load_bytes_many(hashes) .await } async fn scan_live_state( &mut self, request: &LiveStateScanRequest, ) -> Result, LixError> { let staged = self.staged_writes.staging_overlay()?; let base = self.live_state.reader(self.storage_transaction.as_mut()); overlay_scan_rows(&base, &staged, request).await } async fn load_version_head(&mut self, version_id: &str) -> Result, LixError> { self.version_ctx .ref_reader(self.storage_transaction.as_mut()) .load_head_commit_id(version_id) .await } async fn stage_write( &mut self, write: TransactionWrite, ) -> Result { Transaction::stage_write(self, write).await } } fn transaction_write_version_ids(write: &TransactionWrite) -> BTreeSet { match write { TransactionWrite::Rows { rows, .. } => transaction_write_row_version_ids(rows), TransactionWrite::RowsWithFileData { rows, file_data, .. } => transaction_write_row_version_ids(rows) .into_iter() .chain(stage_file_data_version_ids(file_data)) .collect(), TransactionWrite::AdoptedChanges { changes } => changes .iter() .map(|change| change.version_id.clone()) .collect(), } } #[cfg(feature = "storage-benches")] fn transaction_write_row_count(write: &TransactionWrite) -> usize { match write { TransactionWrite::Rows { rows, .. } => rows.len(), TransactionWrite::RowsWithFileData { rows, .. } => rows.len(), TransactionWrite::AdoptedChanges { changes } => changes.len(), } } #[cfg(feature = "storage-benches")] fn transaction_write_untracked_row_count(write: &TransactionWrite) -> usize { match write { TransactionWrite::Rows { rows, .. } => rows.iter().filter(|row| row.untracked).count(), TransactionWrite::RowsWithFileData { rows, .. } => { rows.iter().filter(|row| row.untracked).count() } TransactionWrite::AdoptedChanges { .. } => 0, } } fn require_valid_transaction_write_storage_scopes( write: &TransactionWrite, ) -> Result<(), LixError> { match write { TransactionWrite::Rows { rows, .. } => { require_valid_transaction_write_row_storage_scopes(rows) } TransactionWrite::RowsWithFileData { rows, .. } => { require_valid_transaction_write_row_storage_scopes(rows) } TransactionWrite::AdoptedChanges { .. 
} => Ok(()), } } fn require_valid_transaction_write_row_storage_scopes( rows: &[TransactionWriteRow], ) -> Result<(), LixError> { for row in rows { require_valid_storage_scope(row.version_id.as_str(), row.global)?; } Ok(()) } fn require_valid_storage_scope(version_id: &str, global: bool) -> Result<(), LixError> { if global != (version_id == GLOBAL_VERSION_ID) { return Err(LixError::new( LixError::CODE_INVALID_STORAGE_SCOPE, format!("invalid storage scope: version_id='{version_id}', global={global}"), )); } Ok(()) } fn transaction_write_row_version_ids(rows: &[TransactionWriteRow]) -> BTreeSet { rows.iter().map(|row| row.version_id.clone()).collect() } fn stage_file_data_version_ids(file_data: &[TransactionFileData]) -> BTreeSet { file_data .iter() .map(|write| write.version_id.clone()) .collect() } async fn resolve_active_version_id( mode: &SessionMode, live_state: &LiveStateContext, version_ctx: &VersionContext, transaction: &mut dyn StorageWriteTransaction, ) -> Result { match mode { SessionMode::Pinned { version_id } => Ok(version_id.clone()), SessionMode::Workspace => { load_workspace_version_id(live_state, version_ctx, transaction).await } } } async fn load_workspace_version_id( live_state: &LiveStateContext, version_ctx: &VersionContext, transaction: &mut dyn StorageWriteTransaction, ) -> Result { let row = live_state .reader(&mut *transaction) .load_row(&LiveStateRowRequest { schema_key: "lix_key_value".to_string(), version_id: GLOBAL_VERSION_ID.to_string(), entity_id: EntityIdentity::single(WORKSPACE_VERSION_KEY), file_id: NullableKeyFilter::Null, }) .await? .ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "workspace version selector is missing lix_key_value:lix_workspace_version_id", ) })?; let snapshot_content = row.snapshot_content.as_deref().ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "workspace version selector is missing snapshot_content", ) })?; let snapshot = serde_json::from_str::(snapshot_content).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("workspace version selector snapshot is invalid JSON: {error}"), ) })?; let version_id = snapshot .get("value") .and_then(JsonValue::as_str) .filter(|value| !value.is_empty()) .ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "workspace version selector value must be a non-empty string", ) })? 
.to_string(); let head = version_ctx .ref_reader(&mut *transaction) .load_head_commit_id(&version_id) .await?; if head.is_none() { return Err(LixError::version_not_found( version_id, "load_workspace_version_id", "workspace_selector", )); } Ok(version_id) } #[cfg(test)] mod tests { use std::sync::Arc; use serde_json::json; use super::*; use crate::backend::testing::UnitTestBackend; use crate::commit_store::{ChangeScanRequest, CommitStoreContext}; use crate::tracked_state::{TrackedStateRowRequest, TrackedStateScanRequest}; use crate::transaction::types::TransactionJson; use crate::untracked_state::{UntrackedStateContext, UntrackedStateRowRequest}; use crate::version::VersionContext; use crate::Backend; use crate::NullableKeyFilter; use crate::GLOBAL_VERSION_ID; fn live_state_context() -> LiveStateContext { LiveStateContext::new( crate::tracked_state::TrackedStateContext::new(), crate::untracked_state::UntrackedStateContext::new(), crate::commit_graph::CommitGraphContext::new(), ) } const SCHEMA_FIXTURE_COMMIT_ID: &str = "schema-fixture-commit"; #[tokio::test] async fn stage_rows_routes_tracked_and_untracked_rows_without_sql() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let live_state = Arc::new(live_state_context()); seed_visible_schema_rows(storage.clone()).await; let binary_cas = Arc::new(BinaryCasContext::new()); let changelog = Arc::new(CommitStoreContext::new()); let commit_store = Arc::new(CommitStoreContext::new()); let version_ctx = Arc::new(VersionContext::new(Arc::new(UntrackedStateContext::new()))); let catalog_context = Arc::new(CatalogContext::new()); let opened = open_transaction( &SessionMode::Pinned { version_id: GLOBAL_VERSION_ID.to_string(), }, storage.clone(), Arc::clone(&live_state), Arc::new(crate::tracked_state::TrackedStateContext::new()), Arc::clone(&binary_cas), Arc::clone(&commit_store), Arc::clone(&version_ctx), Arc::clone(&catalog_context), ) .await .expect("transaction should open"); let mut transaction = opened.transaction; let runtime_functions = opened.runtime_functions; transaction .stage_rows(vec![ key_value_stage_row("tracked-programmatic", "tracked", false), key_value_stage_row("untracked-programmatic", "untracked", true), ]) .await .expect("programmatic rows should stage"); transaction .commit(&runtime_functions) .await .expect("transaction should commit"); let changes = changelog .reader(storage.clone()) .scan_changes(&ChangeScanRequest::default()) .await .expect("changelog should scan"); assert!( changes.iter().any(|change| change .record .entity_id .as_single_string_owned() .as_deref() == Ok("tracked-programmatic")), "tracked staged row should be appended to changelog" ); assert!( !changes.iter().any(|change| change .record .entity_id .as_single_string_owned() .as_deref() == Ok("untracked-programmatic")), "untracked staged row must not be appended to changelog" ); let head_commit_id = version_ctx .ref_reader(storage.clone()) .load_head_commit_id(GLOBAL_VERSION_ID) .await .expect("version ref should load") .expect("tracked commit should advance the global version ref"); let tracked_row = crate::tracked_state::TrackedStateContext::new() .reader(storage.clone()) .load_rows_at_commit( &head_commit_id, &[TrackedStateRowRequest { schema_key: "lix_key_value".to_string(), entity_id: crate::entity_identity::EntityIdentity::single( "tracked-programmatic", ), file_id: NullableKeyFilter::Null, }], ) .await .expect("tracked state should load") .pop() .flatten() .expect("tracked row should be 
present in tracked state"); assert_eq!(tracked_row.commit_id, head_commit_id); assert_eq!( tracked_row.snapshot_content.as_deref(), Some(r#"{"key":"tracked-programmatic","value":"tracked"}"#) ); let untracked_row = crate::untracked_state::UntrackedStateContext::new() .reader(storage.clone()) .load_row(&UntrackedStateRowRequest { schema_key: "lix_key_value".to_string(), version_id: GLOBAL_VERSION_ID.to_string(), entity_id: crate::entity_identity::EntityIdentity::single("untracked-programmatic"), file_id: NullableKeyFilter::Null, }) .await .expect("untracked state should load") .expect("untracked row should be present in untracked state"); assert_eq!( untracked_row.snapshot_content.as_deref(), Some(r#"{"key":"untracked-programmatic","value":"untracked"}"#) ); let live_untracked_row = live_state .reader(storage.clone()) .load_row(&crate::live_state::LiveStateRowRequest { schema_key: "lix_key_value".to_string(), version_id: GLOBAL_VERSION_ID.to_string(), entity_id: crate::entity_identity::EntityIdentity::single("untracked-programmatic"), file_id: NullableKeyFilter::Null, }) .await .expect("live state should load") .expect("untracked row should be visible through live state"); assert!(live_untracked_row.untracked); assert!(live_untracked_row.global); assert_eq!(live_untracked_row.version_id, GLOBAL_VERSION_ID); let tracked_rows = crate::tracked_state::TrackedStateContext::new() .reader(storage.clone()) .scan_rows_at_commit(&head_commit_id, &TrackedStateScanRequest::default()) .await .expect("tracked state should scan"); assert!( tracked_rows .iter() .all(|row| row.entity_id.as_single_string_owned().as_deref() != Ok("untracked-programmatic")), "untracked staged rows should not be written into tracked state" ); } #[tokio::test] async fn commit_validates_staged_rows_before_persistence() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let live_state = Arc::new(live_state_context()); seed_visible_schema_rows(storage.clone()).await; let binary_cas = Arc::new(BinaryCasContext::new()); let changelog = Arc::new(CommitStoreContext::new()); let commit_store = Arc::new(CommitStoreContext::new()); let version_ctx = Arc::new(VersionContext::new(Arc::new(UntrackedStateContext::new()))); let catalog_context = Arc::new(CatalogContext::new()); let opened = open_transaction( &SessionMode::Pinned { version_id: GLOBAL_VERSION_ID.to_string(), }, storage.clone(), Arc::clone(&live_state), Arc::new(crate::tracked_state::TrackedStateContext::new()), Arc::clone(&binary_cas), Arc::clone(&commit_store), Arc::clone(&version_ctx), Arc::clone(&catalog_context), ) .await .expect("transaction should open"); let mut transaction = opened.transaction; let runtime_functions = opened.runtime_functions; let mut invalid_row = key_value_stage_row("invalid-programmatic", "invalid", false); invalid_row.snapshot = Some(TransactionJson::from_value_for_test( json!({"key": "invalid-programmatic"}), )); transaction .stage_rows(vec![invalid_row]) .await .expect("invalid row should still reach commit validation"); let error = transaction .commit(&runtime_functions) .await .expect_err("validation should reject before persistence"); assert!( error.message.contains("snapshot_content validation failed"), "validation error should explain the rejected schema data: {error:?}" ); let changes = changelog .reader(storage.clone()) .scan_changes(&ChangeScanRequest::default()) .await .expect("changelog should scan after failed commit"); assert!( changes.iter().all(|change| change .record 
.entity_id .as_single_string_owned() .as_deref() != Ok("invalid-programmatic")), "validation failure must happen before changelog persistence" ); let head = version_ctx .ref_reader(storage.clone()) .load_head_commit_id(GLOBAL_VERSION_ID) .await .expect("version ref should load after failed commit"); assert_eq!( head.as_deref(), Some(SCHEMA_FIXTURE_COMMIT_ID), "validation failure must not advance the version ref" ); } #[tokio::test] async fn commit_rejects_non_object_metadata_without_sql() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let (live_state, _binary_cas, changelog, version_ref, runtime_functions, mut transaction) = open_test_transaction(&backend).await; let mut row = key_value_stage_row("invalid-metadata", "value", false); row.metadata = Some(TransactionJson::from_value_for_test(json!("not-an-object"))); transaction .stage_rows(vec![row]) .await .expect("row should stage before metadata validation"); let error = transaction .commit(&runtime_functions) .await .expect_err("non-object metadata should fail commit validation"); assert_eq!(error.code, LixError::CODE_SCHEMA_VALIDATION); assert!( error.message.contains("metadata") && error.message.contains("JSON object"), "error should explain metadata object validation: {error:?}" ); assert_no_persistence_after_validation_failure( storage.clone(), &live_state, &changelog, &version_ref, "invalid-metadata", ) .await; } #[tokio::test] async fn stage_rows_rejects_unknown_schema_key_without_sql() { let backend: Arc = Arc::new(UnitTestBackend::new()); let ( _live_state, _binary_cas, _changelog, _version_ref, _runtime_functions, mut transaction, ) = open_test_transaction(&backend).await; let mut row = key_value_stage_row("unknown-schema", "value", false); row.schema_key = "missing_schema".to_string(); let error = transaction .stage_rows(vec![row]) .await .expect_err("unknown schema should be rejected while staging"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); assert!( error .message .contains("schema 'missing_schema' is not visible"), "error should explain missing schema visibility: {error:?}" ); } #[tokio::test] async fn stage_rows_rejects_missing_version_without_sql() { let backend: Arc = Arc::new(UnitTestBackend::new()); let ( _live_state, _binary_cas, _changelog, _version_ref, _runtime_functions, mut transaction, ) = open_test_transaction(&backend).await; let mut row = key_value_stage_row("ghost-version-row", "value", false); row.version_id = "ghost-version".to_string(); row.global = false; let error = transaction .stage_rows(vec![row]) .await .expect_err("missing version should be rejected before staging"); assert_eq!(error.code, LixError::CODE_VERSION_NOT_FOUND); assert!( error .message .contains("version 'ghost-version' was not found"), "error should explain missing version: {error:?}" ); } #[tokio::test] async fn stage_rows_rejects_invalid_storage_scope_without_sql() { let backend: Arc = Arc::new(UnitTestBackend::new()); let ( _live_state, _binary_cas, _changelog, _version_ref, _runtime_functions, mut transaction, ) = open_test_transaction(&backend).await; let mut row = key_value_stage_row("invalid-storage-scope", "value", false); row.version_id = GLOBAL_VERSION_ID.to_string(); row.global = false; let error = transaction .stage_rows(vec![row]) .await .expect_err("invalid storage scope should be rejected before staging"); assert_eq!(error.code, LixError::CODE_INVALID_STORAGE_SCOPE); assert!( error.message.contains("version_id='global', global=false"), "error 
should explain invalid storage scope: {error:?}" ); } #[tokio::test] async fn stage_rows_rejects_invalid_snapshot_json_without_sql() { let backend: Arc = Arc::new(UnitTestBackend::new()); let ( _live_state, _binary_cas, _changelog, _version_ref, _runtime_functions, mut transaction, ) = open_test_transaction(&backend).await; let mut row = key_value_stage_row("invalid-json", "value", false); row.snapshot = Some(TransactionJson::from_value_for_test(json!("not-an-object"))); let error = transaction .stage_rows(vec![row]) .await .expect_err("non-object snapshot should be rejected while staging"); assert_eq!(error.code, LixError::CODE_SCHEMA_VALIDATION); assert!( error.message.contains("must be a JSON object"), "error should explain invalid snapshot shape: {error:?}" ); } #[tokio::test] async fn commit_rejects_snapshot_that_violates_json_schema_without_sql() { let backend: Arc = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(Arc::clone(&backend)); let (live_state, _binary_cas, changelog, version_ref, runtime_functions, mut transaction) = open_test_transaction(&backend).await; let mut row = key_value_stage_row("schema-mismatch", "value", false); row.snapshot = Some(TransactionJson::from_value_for_test( json!({"key": "schema-mismatch"}), )); transaction .stage_rows(vec![row]) .await .expect("row should stage before JSON Schema validation"); let error = transaction .commit(&runtime_functions) .await .expect_err("JSON Schema mismatch should fail commit validation"); assert_eq!(error.code, LixError::CODE_SCHEMA_VALIDATION); assert!( error.message.contains("snapshot_content validation failed"), "error should explain JSON Schema validation: {error:?}" ); assert_no_persistence_after_validation_failure( storage.clone(), &live_state, &changelog, &version_ref, "schema-mismatch", ) .await; } #[tokio::test] async fn stage_rows_rejects_malformed_registered_schema_without_sql() { let backend: Arc = Arc::new(UnitTestBackend::new()); let ( _live_state, _binary_cas, _changelog, _version_ref, _runtime_functions, mut transaction, ) = open_test_transaction(&backend).await; let mut row = key_value_stage_row("malformed-registered-schema", "value", false); row.schema_key = "lix_registered_schema".to_string(); row.snapshot = Some(TransactionJson::from_value_for_test(json!({ "value": { "x-lix-key": "malformed_registered_schema", "x-lix-primary-key": ["id"], "type": "object", "properties": { "id": { "type": "string" } }, "required": ["id"], "additionalProperties": false } }))); row.entity_id = None; let error = transaction .stage_rows(vec![row]) .await .expect_err("malformed registered schema should be rejected while staging"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); assert!( error.message.contains("x-lix-primary-key"), "error should explain malformed registered schema: {error:?}" ); } #[tokio::test] async fn stage_rows_rejects_primary_key_entity_id_mismatch_without_sql() { let backend: Arc = Arc::new(UnitTestBackend::new()); let ( _live_state, _binary_cas, _changelog, _version_ref, _runtime_functions, mut transaction, ) = open_test_transaction(&backend).await; let mut row = key_value_stage_row("right-id", "value", false); row.entity_id = Some(crate::entity_identity::EntityIdentity::single("wrong-id")); let error = transaction .stage_rows(vec![row]) .await .expect_err("entity id mismatch should be rejected while staging"); assert_eq!(error.code, LixError::CODE_SCHEMA_VALIDATION); assert!( error .message .contains("does not match x-lix-primary-key derived entity_id"), "error should 
explain entity id mismatch: {error:?}" ); } async fn open_test_transaction( backend: &Arc, ) -> ( Arc, Arc, Arc, Arc, FunctionContext, Transaction, ) { let storage = StorageContext::new(Arc::clone(backend)); let live_state = Arc::new(live_state_context()); seed_visible_schema_rows(storage.clone()).await; let binary_cas = Arc::new(BinaryCasContext::new()); let changelog = Arc::new(CommitStoreContext::new()); let commit_store = Arc::new(CommitStoreContext::new()); let version_ctx = Arc::new(VersionContext::new(Arc::new(UntrackedStateContext::new()))); let catalog_context = Arc::new(CatalogContext::new()); let opened = open_transaction( &SessionMode::Pinned { version_id: GLOBAL_VERSION_ID.to_string(), }, storage, Arc::clone(&live_state), Arc::new(crate::tracked_state::TrackedStateContext::new()), Arc::clone(&binary_cas), Arc::clone(&commit_store), Arc::clone(&version_ctx), catalog_context, ) .await .expect("transaction should open"); let transaction = opened.transaction; let runtime_functions = opened.runtime_functions; ( live_state, binary_cas, changelog, version_ctx, runtime_functions, transaction, ) } async fn seed_visible_schema_rows(storage: StorageContext) { let mut writes = StorageWriteSet::new(); let rows = crate::schema::seed_schema_definitions() .into_iter() .map(|schema| { let key = crate::schema::schema_key_from_definition(schema) .expect("seed schema key should derive"); let snapshot_content = json!({ "value": schema }).to_string(); crate::tracked_state::MaterializedTrackedStateRow { entity_id: crate::schema::registered_schema_entity_id(&key.schema_key) .expect("registered schema identity should derive"), schema_key: "lix_registered_schema".to_string(), file_id: None, snapshot_content: Some(snapshot_content), metadata: None, deleted: false, created_at: "1970-01-01T00:00:00.000Z".to_string(), updated_at: "1970-01-01T00:00:00.000Z".to_string(), change_id: format!("schema-fixture-{}", key.schema_key), commit_id: SCHEMA_FIXTURE_COMMIT_ID.to_string(), } }) .collect::>(); let version_ref_row = prepare_version_ref_row( GLOBAL_VERSION_ID, SCHEMA_FIXTURE_COMMIT_ID, "1970-01-01T00:00:00.000Z", ) .expect("schema fixture version ref should stage"); let mut storage_transaction = storage .begin_write_transaction() .await .expect("schema fixture transaction should open"); crate::test_support::stage_tracked_root_from_materialized( storage_transaction.as_mut(), &crate::tracked_state::TrackedStateContext::new(), SCHEMA_FIXTURE_COMMIT_ID, None, &rows, ) .await .expect("schema fixture rows should stage"); crate::untracked_state::UntrackedStateContext::new() .writer(&mut writes) .stage_rows([version_ref_row.row.as_ref()]) .expect("schema fixture version ref should stage"); writes .apply(&mut storage_transaction.as_mut()) .await .expect("schema fixture rows should apply"); storage_transaction .commit() .await .expect("schema fixture transaction should commit"); } async fn assert_no_persistence_after_validation_failure( storage: StorageContext, live_state: &LiveStateContext, changelog: &CommitStoreContext, version_ctx: &VersionContext, rejected_entity_id: &str, ) { let changes = changelog .reader(storage.clone()) .scan_changes(&ChangeScanRequest::default()) .await .expect("changelog should scan after failed commit"); assert!( changes.iter().all(|change| change .record .entity_id .as_single_string_owned() .as_deref() != Ok(rejected_entity_id)), "validation failure must happen before changelog persistence" ); let head = version_ctx .ref_reader(storage.clone()) .load_head_commit_id(GLOBAL_VERSION_ID) 
.await .expect("version ref should load after failed commit"); assert_eq!( head.as_deref(), Some(SCHEMA_FIXTURE_COMMIT_ID), "validation failure must not advance the version ref" ); let row = live_state .reader(storage) .load_row(&crate::live_state::LiveStateRowRequest { schema_key: "lix_key_value".to_string(), version_id: GLOBAL_VERSION_ID.to_string(), entity_id: crate::entity_identity::EntityIdentity::single(rejected_entity_id), file_id: NullableKeyFilter::Null, }) .await .expect("live state should load after failed commit"); assert_eq!( row, None, "validation failure must happen before live-state persistence" ); } fn key_value_stage_row(key: &str, value: &str, untracked: bool) -> TransactionWriteRow { TransactionWriteRow { entity_id: Some(crate::entity_identity::EntityIdentity::single(key)), schema_key: "lix_key_value".to_string(), file_id: None, snapshot: Some(TransactionJson::from_value_for_test(json!({ "key": key, "value": value, }))), metadata: None, origin: None, created_at: None, updated_at: None, global: true, change_id: None, commit_id: None, untracked, version_id: GLOBAL_VERSION_ID.to_string(), } } } ================================================ FILE: packages/engine/src/transaction/live_state_overlay.rs ================================================ use std::collections::BTreeSet; use crate::live_state::MaterializedLiveStateRow; use crate::live_state::{LiveStateReader, LiveStateScanRequest}; use crate::transaction::staging::{PreparedStateRowIdentity, PreparedStateRowOverlay}; use crate::LixError; pub(crate) async fn overlay_scan_rows( base: &dyn LiveStateReader, staged: &PreparedStateRowOverlay, request: &LiveStateScanRequest, ) -> Result, LixError> { let staged_parts = staged.scan_parts(request)?; let hidden_identities = staged_parts.hidden_identities; let mut rows = staged_parts.rows; let mut visible_identities = rows .iter() .map(PreparedStateRowIdentity::from) .collect::>(); for row in base.scan_rows(request).await? 
    {
        let identity = PreparedStateRowIdentity::from(&row);
        if hidden_identities.contains(&identity) {
            continue;
        }
        if visible_identities.insert(identity) {
            rows.push(row);
        }
    }
    if let Some(limit) = request.limit {
        rows.truncate(limit);
    }
    Ok(rows)
}


================================================
FILE: packages/engine/src/transaction/mod.rs
================================================
mod commit;
mod context;
mod live_state_overlay;
mod normalization;
mod prep;
mod schema_resolver;
mod staging;
pub(crate) mod types;
mod validation;

pub(crate) use context::open_transaction;
pub(crate) use context::Transaction;
pub(crate) use prep::prepare_version_ref_row;


================================================
FILE: packages/engine/src/transaction/normalization.rs
================================================
use std::sync::Arc;

use serde_json::{Map as JsonMap, Value as JsonValue};

use crate::catalog::{CatalogSnapshot, SchemaPlan, SchemaPlanId};
use crate::common::format_json_pointer;
use crate::common::normalize_path_segment;
use crate::domain::Domain;
use crate::entity_identity::{EntityIdentity, EntityIdentityError};
use crate::functions::FunctionProviderHandle;
use crate::schema::{
    is_seed_schema_key, schema_from_registered_snapshot, validate_lix_schema,
    validate_lix_schema_definition,
};
use crate::transaction::types::{PreparedRowFacts, TransactionJson, TransactionWriteRow};
use crate::LixError;

pub(crate) const REGISTERED_SCHEMA_KEY: &str = "lix_registered_schema";
const DIRECTORY_DESCRIPTOR_SCHEMA_KEY: &str = "lix_directory_descriptor";
const FILE_DESCRIPTOR_SCHEMA_KEY: &str = "lix_file_descriptor";

#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct NormalizedTransactionWriteRow {
    pub(crate) row: TransactionWriteRow,
    pub(crate) snapshot: Option<TransactionJson>,
    pub(crate) schema_plan_id: SchemaPlanId,
    pub(crate) facts: PreparedRowFacts,
}

/// Normalizes one incoming row into a row with final snapshot/entity identity.
///
/// This is the canonical schema-semantics boundary for transaction writes. It owns
/// schema default application, primary-key identity derivation, and explicit
/// identity mismatch validation. SQL providers should not pre-derive primary
/// keys for schemas that can be normalized here; they should pass decoded
/// snapshots and let this layer complete them.
///
/// This function intentionally does not assign timestamps, change ids, or
/// commit ids; those are prepared-row fields assigned after semantic
/// normalization has produced the final identity.
pub(crate) fn normalize_transaction_write_row(
    mut row: TransactionWriteRow,
    schema_catalog: &mut CatalogSnapshot,
    functions: FunctionProviderHandle,
) -> Result<NormalizedTransactionWriteRow, LixError> {
    validate_transaction_write_row_schema_identity(&row)?;
    let Some((schema_plan_id, schema_plan)) = schema_catalog.plan_for_key(&row.schema_key) else {
        return Err(LixError::new(
            LixError::CODE_SCHEMA_DEFINITION,
            format!(
                "schema '{}' is not visible to this transaction",
                row.schema_key
            ),
        ));
    };
    let normalized_snapshot = if let Some(snapshot) = row.snapshot.take() {
        let (mut snapshot, normalized) = snapshot_object_from_transaction_json(snapshot, &row)?;
        let defaults_changed = apply_defaults(&mut snapshot, schema_plan, &row, functions)?;
        let descriptor_changed = normalize_filesystem_descriptor_snapshot(&row, &mut snapshot)?;
        let snapshot = JsonValue::Object(snapshot);
        row.entity_id = Some(resolve_entity_id(&row, schema_plan, &snapshot)?);
        if defaults_changed || descriptor_changed {
            Some(TransactionJson::from_value(
                snapshot,
                "normalized transaction snapshot_content",
            )?)
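// A condensed sketch of this contract, mirroring the unit tests at the bottom of
// this file (snapshot_json, base_stage_row, and functions are test helpers; the
// FixedFunctions provider returns "uuid-default" for lix_uuid_v7()):
//
//     let row = TransactionWriteRow {
//         entity_id: None,
//         schema_key: "normalization_schema".to_string(),
//         snapshot: Some(snapshot_json(r#"{}"#)),
//         ..base_stage_row()
//     };
//     let normalized = normalize_transaction_write_row(row, &mut catalog, functions())?;
//     // Defaults filled "id" and "value"; entity_id was derived from the
//     // defaulted primary-key value.
//     assert_eq!(
//         normalized.row.entity_id.as_ref(),
//         Some(&EntityIdentity::single("uuid-default"))
//     );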
} else { Some(TransactionJson::from_parts(Arc::new(snapshot), normalized)) } } else if row.entity_id.is_none() { return Err(LixError::new( LixError::CODE_SCHEMA_VALIDATION, format!( "tombstone for schema '{}' requires entity_id", row.schema_key ), )); } else { None }; if row.schema_key == REGISTERED_SCHEMA_KEY { if row.file_id.is_some() { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, "lix_registered_schema rows must not be scoped to a file", ) .with_hint("Schema definitions are scoped by version and durability only; write them with null file_id.")); } let schema_domain = Domain::schema_catalog(row.schema_scope_version_id().to_string(), row.untracked); remember_pending_registered_schema( normalized_snapshot.as_ref().map(TransactionJson::value), schema_domain, schema_catalog, )?; } Ok(NormalizedTransactionWriteRow { row, snapshot: normalized_snapshot, schema_plan_id, facts: PreparedRowFacts::default(), }) } fn validate_transaction_write_row_schema_identity( row: &TransactionWriteRow, ) -> Result<(), LixError> { if row.schema_key.is_empty() { return Err(LixError::new( LixError::CODE_UNKNOWN, "engine transaction staging requires non-empty schema_key", )); } Ok(()) } fn snapshot_object_from_transaction_json( snapshot: TransactionJson, row: &TransactionWriteRow, ) -> Result<(JsonMap, Arc), LixError> { let (snapshot, normalized) = snapshot.into_parts(); let snapshot = match Arc::try_unwrap(snapshot) { Ok(snapshot) => snapshot, Err(snapshot) => snapshot.as_ref().clone(), }; match snapshot { JsonValue::Object(snapshot) => Ok((snapshot, normalized)), _ => Err(LixError::new( LixError::CODE_SCHEMA_VALIDATION, format!( "snapshot_content for schema '{}' must be a JSON object", row.schema_key ), )), } } fn apply_defaults( snapshot: &mut JsonMap, schema_plan: &SchemaPlan, row: &TransactionWriteRow, functions: FunctionProviderHandle, ) -> Result { schema_plan .defaults .apply(snapshot, functions, &row.schema_key) } fn normalize_filesystem_descriptor_snapshot( row: &TransactionWriteRow, snapshot: &mut JsonMap, ) -> Result { match row.schema_key.as_str() { DIRECTORY_DESCRIPTOR_SCHEMA_KEY => normalize_directory_descriptor_snapshot(row, snapshot), FILE_DESCRIPTOR_SCHEMA_KEY => normalize_file_descriptor_snapshot(row, snapshot), _ => Ok(false), } } fn normalize_directory_descriptor_snapshot( row: &TransactionWriteRow, snapshot: &mut JsonMap, ) -> Result { let Some(name) = optional_string_field(snapshot, "name", row)? else { return Ok(false); }; let normalized_name = normalize_path_segment(name)?; if name == normalized_name { return Ok(false); } snapshot.insert("name".to_string(), JsonValue::String(normalized_name)); Ok(true) } fn normalize_file_descriptor_snapshot( row: &TransactionWriteRow, snapshot: &mut JsonMap, ) -> Result { let Some(name) = optional_string_field(snapshot, "name", row)? 
else { return Ok(false); }; let normalized_name = normalize_path_segment(name)?; if name == normalized_name { return Ok(false); } snapshot.insert("name".to_string(), JsonValue::String(normalized_name)); Ok(true) } fn optional_string_field<'a>( snapshot: &'a JsonMap, field: &str, row: &TransactionWriteRow, ) -> Result, LixError> { let Some(value) = snapshot.get(field) else { return Ok(None); }; value.as_str().map(Some).ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_VALIDATION, format!( "snapshot_content for schema '{}' field '{}' must be a string", row.schema_key, field ), ) }) } fn resolve_entity_id( row: &TransactionWriteRow, schema_plan: &SchemaPlan, snapshot: &JsonValue, ) -> Result { let Some(primary_key_paths) = schema_plan.primary_key.as_ref() else { return row.entity_id.clone().ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_VALIDATION, format!( "write for schema '{}' requires entity_id because the schema has no x-lix-primary-key", row.schema_key ), ) }); }; let derived = EntityIdentity::from_primary_key_paths(snapshot, primary_key_paths) .map_err(|error| entity_id_derivation_error(row, primary_key_paths, error))?; if let Some(entity_id) = row.entity_id.as_ref() { if entity_id != &derived { return Err(LixError::new( LixError::CODE_SCHEMA_VALIDATION, format!( "entity_id '{}' does not match x-lix-primary-key derived entity_id '{}' for schema '{}'", entity_id.as_json_array_text()?, derived.as_json_array_text()?, row.schema_key ), )); } } Ok(derived) } fn entity_id_derivation_error( row: &TransactionWriteRow, primary_key_paths: &[Vec], error: EntityIdentityError, ) -> LixError { let detail = match error { EntityIdentityError::EmptyPrimaryKey => "empty x-lix-primary-key".to_string(), EntityIdentityError::EmptyPrimaryKeyPath { index } => { format!("empty x-lix-primary-key pointer at index {index}") } EntityIdentityError::EmptyPrimaryKeyValue { index } => { let pointer = primary_key_paths .get(index) .map(|path| format_json_pointer(path)) .unwrap_or_else(|| format!("index {index}")); format!("empty value at primary-key pointer '{pointer}'") } EntityIdentityError::MissingPrimaryKeyValue { index } => { let pointer = format_json_pointer(&primary_key_paths[index]); format!("missing value at primary-key pointer '{pointer}'") } EntityIdentityError::UnsupportedPrimaryKeyValue { index } => { let pointer = format_json_pointer(&primary_key_paths[index]); format!("non-string value at primary-key pointer '{pointer}'") } EntityIdentityError::InvalidEncodedEntityIdentity => { "invalid encoded entity identity".to_string() } }; LixError::new( LixError::CODE_SCHEMA_VALIDATION, format!( "failed to derive entity_id for schema '{}': {detail}", row.schema_key ), ) } pub(crate) fn remember_pending_registered_schema( snapshot: Option<&JsonValue>, domain: Domain, schema_catalog: &mut CatalogSnapshot, ) -> Result<(), LixError> { let Some(snapshot) = snapshot else { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, "lix_registered_schema rows cannot be deleted yet; schema deletion is not supported", )); }; if let Some(schema) = snapshot.get("value") { validate_lix_schema_definition(schema)?; } { let registered_schema_definition = schema_catalog .schema(REGISTERED_SCHEMA_KEY) .ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, "lix_registered_schema schema is not visible to this transaction", ) })?; validate_lix_schema(registered_schema_definition, &snapshot)?; } let (key, schema) = schema_from_registered_snapshot(&snapshot)?; if is_seed_schema_key(&key.schema_key) { return 
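// Registered-schema rows wrap the schema definition under "value", as the
// dynamic_schema_definition test fixture below does, e.g.
//
//     {
//         "value": {
//             "x-lix-key": "dynamic_schema",
//             "x-lix-primary-key": ["/id"],
//             "type": "object",
//             "properties": { "id": { "type": "string" } },
//             "required": ["id"],
//             "additionalProperties": false
//         }
//     }
//
// Seed (system) schema keys are reserved, so re-registering one is rejected here.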
Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "schema '{}' is a system schema and cannot be registered at runtime", key.schema_key ), )); } validate_lix_schema_definition(&schema)?; schema_catalog.insert_schema_for_domain(domain, key, schema)?; Ok(()) } #[cfg(test)] mod tests { use serde_json::json; use super::*; use crate::functions::{FunctionProvider, SharedFunctionProvider}; use crate::schema::seed_schema_definition; #[test] fn normalization_derives_entity_id_from_primary_key() { let mut catalog = catalog_with(vec![schema_with_default_id()]); let row = TransactionWriteRow { entity_id: None, schema_key: "normalization_schema".to_string(), snapshot: Some(snapshot_json( r#"{"id":"entity-from-snapshot","value":"hello"}"#, )), ..base_stage_row() }; let row = normalize_transaction_write_row(row, &mut catalog, functions()).expect("normalize row"); assert_eq!( row.row.entity_id.as_ref(), Some(&crate::entity_identity::EntityIdentity::single( "entity-from-snapshot" )) ); } #[test] fn normalization_applies_json_and_cel_defaults_before_identity_derivation() { let mut catalog = catalog_with(vec![schema_with_default_id()]); let row = TransactionWriteRow { entity_id: None, schema_key: "normalization_schema".to_string(), snapshot: Some(snapshot_json(r#"{}"#)), ..base_stage_row() }; let row = normalize_transaction_write_row(row, &mut catalog, functions()).expect("normalize row"); let snapshot = normalized_snapshot(&row); assert_eq!( row.row.entity_id.as_ref(), Some(&crate::entity_identity::EntityIdentity::single( "uuid-default" )) ); assert_eq!(snapshot["id"], "uuid-default"); assert_eq!(snapshot["value"], "literal-default"); } #[test] fn normalization_applies_cel_defaults_from_snapshot_context() { let mut catalog = catalog_with(vec![schema_with_cel_field_default()]); let row = TransactionWriteRow { entity_id: None, schema_key: "cel_field_default_schema".to_string(), snapshot: Some(snapshot_json(r#"{"id":"entity-1","name":"Sample"}"#)), ..base_stage_row() }; let row = normalize_transaction_write_row(row, &mut catalog, functions()).expect("normalize row"); let snapshot = normalized_snapshot(&row); assert_eq!(snapshot["slug"], "Sample-slug"); } #[test] fn normalization_x_lix_default_overrides_json_default() { let mut catalog = catalog_with(vec![schema_with_overridden_default()]); let row = TransactionWriteRow { entity_id: None, schema_key: "overridden_default_schema".to_string(), snapshot: Some(snapshot_json(r#"{"id":"entity-1"}"#)), ..base_stage_row() }; let row = normalize_transaction_write_row(row, &mut catalog, functions()).expect("normalize row"); let snapshot = normalized_snapshot(&row); assert_eq!(snapshot["status"], "computed"); } #[test] fn normalization_does_not_overwrite_explicit_null_with_default() { let mut catalog = catalog_with(vec![schema_with_nullable_default()]); let row = TransactionWriteRow { entity_id: None, schema_key: "nullable_default_schema".to_string(), snapshot: Some(snapshot_json(r#"{"id":"entity-1","status":null}"#)), ..base_stage_row() }; let row = normalize_transaction_write_row(row, &mut catalog, functions()).expect("normalize row"); let snapshot = normalized_snapshot(&row); assert_eq!(snapshot["status"], JsonValue::Null); } #[test] fn normalization_applies_timestamp_function_default() { let mut catalog = catalog_with(vec![schema_with_timestamp_default()]); let row = TransactionWriteRow { entity_id: None, schema_key: "timestamp_default_schema".to_string(), snapshot: Some(snapshot_json(r#"{"id":"entity-1"}"#)), ..base_stage_row() }; let row = 
normalize_transaction_write_row(row, &mut catalog, functions()).expect("normalize row"); let snapshot = normalized_snapshot(&row); assert_eq!(snapshot["created_at"], "1970-01-01T00:00:00.000Z"); } #[test] fn normalization_surfaces_cel_default_errors() { let mut catalog = catalog_with(vec![schema_with_unknown_cel_default()]); let row = TransactionWriteRow { entity_id: None, schema_key: "unknown_cel_default_schema".to_string(), snapshot: Some(snapshot_json(r#"{"id":"entity-1"}"#)), ..base_stage_row() }; let error = normalize_transaction_write_row(row, &mut catalog, functions()) .expect_err("default should fail"); assert!(error.message.contains("failed to evaluate x-lix-default")); assert!(error.message.contains("unknown_cel_default_schema.slug")); } #[test] fn normalization_rejects_entity_id_that_disagrees_with_primary_key() { let mut catalog = catalog_with(vec![schema_with_default_id()]); let row = TransactionWriteRow { entity_id: Some(crate::entity_identity::EntityIdentity::single("wrong-id")), schema_key: "normalization_schema".to_string(), snapshot: Some(snapshot_json(r#"{"id":"right-id","value":"hello"}"#)), ..base_stage_row() }; let error = normalize_transaction_write_row(row, &mut catalog, functions()) .expect_err("id mismatch fails"); assert!(error .message .contains("does not match x-lix-primary-key derived entity_id")); } #[test] fn normalization_derives_json_array_entity_id_for_composite_primary_key() { let mut catalog = catalog_with(vec![composite_key_schema()]); let row = TransactionWriteRow { entity_id: None, schema_key: "composite_key_schema".to_string(), snapshot: Some(snapshot_json(r#"{"namespace":"a~b","key":"1"}"#)), ..base_stage_row() }; let row = normalize_transaction_write_row(row, &mut catalog, functions()).expect("normalize row"); let entity_id = row.row.entity_id.expect("composite entity id"); let projected_entity_id = entity_id .as_json_array_text() .expect("entity id should project"); assert_eq!(projected_entity_id, "[\"a~b\",\"1\"]"); } #[test] fn normalization_rejects_non_string_primary_key_values() { let mut catalog = catalog_with(vec![composite_key_schema()]); let row = TransactionWriteRow { entity_id: None, schema_key: "composite_key_schema".to_string(), snapshot: Some(snapshot_json(r#"{"namespace":"a~b","key":1}"#)), ..base_stage_row() }; let error = normalize_transaction_write_row(row, &mut catalog, functions()) .expect_err("non-string primary key values should fail"); assert_eq!(error.code, LixError::CODE_SCHEMA_VALIDATION); assert!(error .message .contains("non-string value at primary-key pointer '/key'")); } #[test] fn normalization_validates_explicit_composite_entity_id_against_projection() { let mut catalog = catalog_with(vec![composite_key_schema()]); let snapshot = json!({ "namespace": "a~b", "key": "1", }); let derived = EntityIdentity::from_primary_key_paths( &snapshot, &[vec!["namespace".to_string()], vec!["key".to_string()]], ) .expect("identity should derive"); let row = TransactionWriteRow { entity_id: Some(derived.clone()), schema_key: "composite_key_schema".to_string(), snapshot: Some(transaction_json(snapshot.clone())), ..base_stage_row() }; let row = normalize_transaction_write_row(row, &mut catalog, functions()).expect("normalize row"); assert_eq!(row.row.entity_id.as_ref(), Some(&derived)); } #[test] fn normalization_makes_pending_registered_schema_visible_to_later_rows() { let mut catalog = catalog_with(vec![seed_schema_definition(REGISTERED_SCHEMA_KEY) .expect("registered schema builtin") .clone()]); let registered = TransactionWriteRow 
{ entity_id: None, schema_key: REGISTERED_SCHEMA_KEY.to_string(), snapshot: Some(transaction_json(json!({ "value": dynamic_schema_definition(), }))), ..base_stage_row() }; normalize_transaction_write_row(registered, &mut catalog, functions()) .expect("register schema"); let dynamic = TransactionWriteRow { entity_id: None, schema_key: "dynamic_schema".to_string(), snapshot: Some(snapshot_json(r#"{"id":"dynamic-1"}"#)), ..base_stage_row() }; let dynamic = normalize_transaction_write_row(dynamic, &mut catalog, functions()) .expect("dynamic row"); assert_eq!( dynamic.row.entity_id.as_ref(), Some(&crate::entity_identity::EntityIdentity::single("dynamic-1")) ); } #[test] fn normalization_canonicalizes_filesystem_descriptor_segments() { let mut catalog = catalog_with(vec![ builtin_schema(FILE_DESCRIPTOR_SCHEMA_KEY), builtin_schema(DIRECTORY_DESCRIPTOR_SCHEMA_KEY), ]); let file = TransactionWriteRow { entity_id: None, schema_key: FILE_DESCRIPTOR_SCHEMA_KEY.to_string(), snapshot: Some(transaction_json(json!({ "id": "file-cafe", "directory_id": null, "name": "Cafe\u{301}.txt", }))), global: false, ..base_stage_row() }; let file = normalize_transaction_write_row(file, &mut catalog, functions()) .expect("normalize file"); let file_snapshot = normalized_snapshot(&file); assert_eq!(file_snapshot["name"], "Café.txt"); let directory = TransactionWriteRow { entity_id: None, schema_key: DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string(), snapshot: Some(transaction_json(json!({ "id": "dir-cafe", "parent_id": null, "name": "Cafe\u{301}", }))), global: false, ..base_stage_row() }; let directory = normalize_transaction_write_row(directory, &mut catalog, functions()) .expect("normalize directory"); let directory_snapshot = normalized_snapshot(&directory); assert_eq!(directory_snapshot["name"], "Café"); } #[test] fn normalization_rejects_invalid_filesystem_descriptor_segments() { let mut catalog = catalog_with(vec![ builtin_schema(FILE_DESCRIPTOR_SCHEMA_KEY), builtin_schema(DIRECTORY_DESCRIPTOR_SCHEMA_KEY), ]); let dot_segment = normalize_transaction_write_row( TransactionWriteRow { entity_id: None, schema_key: FILE_DESCRIPTOR_SCHEMA_KEY.to_string(), snapshot: Some(transaction_json(json!({ "id": "file-dotdot", "directory_id": null, "name": "..", }))), global: false, ..base_stage_row() }, &mut catalog, functions(), ) .expect_err("file descriptor name should reject dot segments"); assert_eq!(dot_segment.code, "LIX_ERROR_PATH_DOT_SEGMENT"); let bidi = normalize_transaction_write_row( TransactionWriteRow { entity_id: None, schema_key: FILE_DESCRIPTOR_SCHEMA_KEY.to_string(), snapshot: Some(transaction_json(json!({ "id": "file-bidi", "directory_id": null, "name": "safe\u{202E}txt", }))), global: false, ..base_stage_row() }, &mut catalog, functions(), ) .expect_err("file descriptor name should reject bidi formatting characters"); assert_eq!(bidi.code, "LIX_ERROR_PATH_INVALID_SEGMENT_CODE_POINT"); let zero_width = normalize_transaction_write_row( TransactionWriteRow { entity_id: None, schema_key: DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string(), snapshot: Some(transaction_json(json!({ "id": "dir-zero-width", "parent_id": null, "name": "zero\u{200D}width", }))), global: false, ..base_stage_row() }, &mut catalog, functions(), ) .expect_err("directory descriptor name should reject zero-width characters"); assert_eq!(zero_width.code, "LIX_ERROR_PATH_INVALID_SEGMENT_CODE_POINT"); } #[test] fn normalization_keeps_file_descriptor_name_opaque() { let mut catalog = catalog_with(vec![builtin_schema(FILE_DESCRIPTOR_SCHEMA_KEY)]); let row = 
normalize_transaction_write_row( TransactionWriteRow { entity_id: None, schema_key: FILE_DESCRIPTOR_SCHEMA_KEY.to_string(), snapshot: Some(transaction_json(json!({ "id": "file-opaque-name", "directory_id": null, "name": "foo.bar", }))), global: false, ..base_stage_row() }, &mut catalog, functions(), ) .expect("file descriptor name should be an opaque basename"); let snapshot = normalized_snapshot(&row); assert_eq!(snapshot["name"], "foo.bar"); } fn normalized_snapshot(row: &NormalizedTransactionWriteRow) -> &JsonValue { row.snapshot .as_ref() .expect("normalized test row should have a snapshot") .value() } fn catalog_with(schemas: Vec) -> CatalogSnapshot { let mut visible_schemas = schemas; if visible_schemas.iter().any(|schema| { schema.get("x-lix-key").and_then(JsonValue::as_str) == Some(FILE_DESCRIPTOR_SCHEMA_KEY) }) && !visible_schemas.iter().any(|schema| { schema.get("x-lix-key").and_then(JsonValue::as_str) == Some(DIRECTORY_DESCRIPTOR_SCHEMA_KEY) }) { visible_schemas.push(builtin_schema(DIRECTORY_DESCRIPTOR_SCHEMA_KEY)); } CatalogSnapshot::from_visible_schemas(&visible_schemas).expect("catalog") } fn builtin_schema(schema_key: &str) -> JsonValue { seed_schema_definition(schema_key) .unwrap_or_else(|| panic!("{schema_key} builtin schema should exist")) .clone() } fn transaction_json(value: JsonValue) -> TransactionJson { TransactionJson::from_value_for_test(value) } fn snapshot_json(value: &str) -> TransactionJson { transaction_json(serde_json::from_str(value).expect("test snapshot should parse")) } fn base_stage_row() -> TransactionWriteRow { TransactionWriteRow { entity_id: Some(crate::entity_identity::EntityIdentity::single("entity-1")), schema_key: "normalization_schema".to_string(), file_id: None, snapshot: Some(snapshot_json(r#"{"id":"entity-1","value":"hello"}"#)), metadata: None, origin: None, created_at: None, updated_at: None, global: true, change_id: None, commit_id: None, untracked: false, version_id: crate::GLOBAL_VERSION_ID.to_string(), } } fn schema_with_default_id() -> JsonValue { json!({ "x-lix-key": "normalization_schema", "x-lix-primary-key": ["/id"], "type": "object", "properties": { "id": { "type": "string", "x-lix-default": "lix_uuid_v7()" }, "value": { "type": "string", "default": "literal-default" } }, "required": ["id", "value"], "additionalProperties": false }) } fn schema_with_cel_field_default() -> JsonValue { json!({ "x-lix-key": "cel_field_default_schema", "x-lix-primary-key": ["/id"], "type": "object", "properties": { "id": { "type": "string" }, "name": { "type": "string" }, "slug": { "type": "string", "x-lix-default": "name + '-slug'" } }, "required": ["id", "name"], "additionalProperties": false }) } fn schema_with_overridden_default() -> JsonValue { json!({ "x-lix-key": "overridden_default_schema", "x-lix-primary-key": ["/id"], "type": "object", "properties": { "id": { "type": "string" }, "status": { "type": "string", "default": "literal", "x-lix-default": "'computed'" } }, "required": ["id"], "additionalProperties": false }) } fn schema_with_nullable_default() -> JsonValue { json!({ "x-lix-key": "nullable_default_schema", "x-lix-primary-key": ["/id"], "type": "object", "properties": { "id": { "type": "string" }, "status": { "anyOf": [{ "type": "string" }, { "type": "null" }], "x-lix-default": "'computed'" } }, "required": ["id"], "additionalProperties": false }) } fn schema_with_timestamp_default() -> JsonValue { json!({ "x-lix-key": "timestamp_default_schema", "x-lix-primary-key": ["/id"], "type": "object", "properties": { "id": { "type": "string" 
}, "created_at": { "type": "string", "x-lix-default": "lix_timestamp()" } }, "required": ["id"], "additionalProperties": false }) } fn schema_with_unknown_cel_default() -> JsonValue { json!({ "x-lix-key": "unknown_cel_default_schema", "x-lix-primary-key": ["/id"], "type": "object", "properties": { "id": { "type": "string" }, "slug": { "type": "string", "x-lix-default": "missing_var + '-slug'" } }, "required": ["id"], "additionalProperties": false }) } fn composite_key_schema() -> JsonValue { json!({ "x-lix-key": "composite_key_schema", "x-lix-primary-key": ["/namespace", "/key"], "type": "object", "properties": { "namespace": { "type": "string" }, "key": { "type": "string" } }, "required": ["namespace", "key"], "additionalProperties": false }) } fn dynamic_schema_definition() -> JsonValue { json!({ "x-lix-key": "dynamic_schema", "x-lix-primary-key": ["/id"], "type": "object", "properties": { "id": { "type": "string" } }, "required": ["id"], "additionalProperties": false }) } fn functions() -> FunctionProviderHandle { SharedFunctionProvider::new(Box::new(FixedFunctions) as Box) } struct FixedFunctions; impl FunctionProvider for FixedFunctions { fn uuid_v7(&mut self) -> String { "uuid-default".to_string() } fn timestamp(&mut self) -> String { "1970-01-01T00:00:00.000Z".to_string() } } } ================================================ FILE: packages/engine/src/transaction/prep.rs ================================================ use crate::entity_identity::EntityIdentity; use crate::untracked_state::UntrackedStateRow; use crate::version::VERSION_REF_SCHEMA_KEY; use crate::{LixError, GLOBAL_VERSION_ID}; pub(crate) struct PreparedVersionRefRow { pub(crate) row: UntrackedStateRow, } pub(crate) fn prepare_version_ref_row( version_id: &str, commit_id: &str, timestamp: &str, ) -> Result { let snapshot = serde_json::json!({ "id": version_id, "commit_id": commit_id, }); let snapshot = crate::json_store::NormalizedJson::from_value( &snapshot, "engine version-ref snapshot_content", )?; Ok(PreparedVersionRefRow { row: UntrackedStateRow { entity_id: EntityIdentity::single(version_id), schema_key: VERSION_REF_SCHEMA_KEY.to_string(), file_id: None, snapshot_content: Some(snapshot.as_str().to_string()), metadata: None, created_at: timestamp.to_string(), updated_at: timestamp.to_string(), global: true, version_id: GLOBAL_VERSION_ID.to_string(), }, }) } ================================================ FILE: packages/engine/src/transaction/schema_resolver.rs ================================================ use std::collections::BTreeMap; use std::sync::Arc; use async_trait::async_trait; use crate::catalog::{CatalogContext, CatalogSnapshot, SchemaCatalogFact}; use crate::domain::Domain; use crate::live_state::{ LiveStateReader, LiveStateRowRequest, LiveStateScanRequest, MaterializedLiveStateRow, }; use crate::transaction::live_state_overlay::overlay_scan_rows; use crate::transaction::staging::PreparedStateRowOverlay; use crate::LixError; pub(crate) struct TransactionSchemaResolver { context: Arc, catalogs_by_domain: BTreeMap, } enum CatalogEntry { SchemaFacts(Vec), Catalog(CatalogSnapshot), } impl TransactionSchemaResolver { pub(crate) fn new(context: Arc) -> Self { Self { context, catalogs_by_domain: BTreeMap::new(), } } async fn load_catalog_for_domain( &mut self, live_state: &dyn LiveStateReader, staged: Option<&PreparedStateRowOverlay>, domain: &Domain, ) -> Result<(), LixError> { let domain = domain.schema_catalog_domain(); let needs_load = !self.catalogs_by_domain.contains_key(&domain); if needs_load { 
let facts = if let Some(staged) = staged { let reader = TransactionSchemaLiveStateReader { base: live_state, staged, }; self.context .schema_facts_for_domain(&reader, &domain) .await? } else { self.context .schema_facts_for_domain(live_state, &domain) .await? }; self.catalogs_by_domain .insert(domain.clone(), CatalogEntry::SchemaFacts(facts)); } let should_materialize = self .catalogs_by_domain .get(&domain) .is_some_and(|entry| matches!(entry, CatalogEntry::SchemaFacts(_))); if should_materialize { #[cfg(feature = "storage-benches")] crate::storage_bench::record_transaction_schema_catalog_load(); let entry = self .catalogs_by_domain .remove(&domain) .expect("schema catalog entry should exist after load"); let CatalogEntry::SchemaFacts(facts) = entry else { unreachable!("catalog entry was checked as schema facts"); }; let catalog = CatalogSnapshot::from_schema_facts(&facts)?; self.catalogs_by_domain .insert(domain, CatalogEntry::Catalog(catalog)); } Ok(()) } pub(crate) async fn catalog_for_row_normalization( &mut self, live_state: &dyn LiveStateReader, staged: &PreparedStateRowOverlay, domain: &Domain, ) -> Result<&mut CatalogSnapshot, LixError> { self.load_catalog_for_domain(live_state, Some(staged), domain) .await?; let domain = domain.schema_catalog_domain(); match self .catalogs_by_domain .get_mut(&domain) .expect("catalog cache should contain requested version") { CatalogEntry::Catalog(catalog) => Ok(catalog), CatalogEntry::SchemaFacts(_) => { unreachable!("schema catalog should be materialized before mutable access") } } } pub(crate) async fn catalog_for_validation( &mut self, live_state: &dyn LiveStateReader, domain: &Domain, ) -> Result<&CatalogSnapshot, LixError> { self.load_catalog_for_domain(live_state, None, domain) .await?; let domain = domain.schema_catalog_domain(); match self .catalogs_by_domain .get(&domain) .expect("catalog cache should contain requested version") { CatalogEntry::Catalog(catalog) => Ok(catalog), CatalogEntry::SchemaFacts(_) => { unreachable!("schema catalog should be materialized before validation access") } } } pub(crate) fn remember_schema_facts(&mut self, domain: &Domain, facts: Vec) { self.catalogs_by_domain.insert( domain.schema_catalog_domain(), CatalogEntry::SchemaFacts(facts), ); } } struct TransactionSchemaLiveStateReader<'a> { base: &'a dyn LiveStateReader, staged: &'a PreparedStateRowOverlay, } #[async_trait] impl LiveStateReader for TransactionSchemaLiveStateReader<'_> { async fn scan_rows( &self, request: &LiveStateScanRequest, ) -> Result, LixError> { overlay_scan_rows(self.base, self.staged, request).await } async fn load_row( &self, request: &LiveStateRowRequest, ) -> Result, LixError> { self.base.load_row(request).await } } ================================================ FILE: packages/engine/src/transaction/staging.rs ================================================ use std::collections::{BTreeMap, BTreeSet, HashMap}; use std::sync::{Arc, Mutex}; use crate::catalog::SchemaPlanId; use crate::domain::{Domain, DomainRowIdentity}; use crate::entity_identity::EntityIdentity; use crate::functions::{FunctionProvider, FunctionProviderHandle}; #[cfg(test)] use crate::live_state::LiveStateRowRequest; use crate::live_state::{LiveStateScanRequest, MaterializedLiveStateRow}; #[cfg(test)] use crate::transaction::types::{stage_json_from_value, TransactionJson}; use crate::transaction::types::{ LogicalPrimaryKey, PreparedTransactionWrite, TransactionFileData, TransactionWriteMode, TransactionWriteOperation, TransactionWriteOrigin, 
TransactionWriteOutcome, }; use crate::transaction::types::{PreparedAdoptedStateRow, PreparedStateRow, StagedCommitMembers}; use crate::GLOBAL_VERSION_ID; use crate::{LixError, NullableKeyFilter}; /// Transaction-local write buffer after transaction-boundary preparation. /// /// This is the engine seam between SQL execution and transaction ownership: /// write frontends pass decoded `TransactionWriteRow`s to `Transaction`, the /// transaction prepares them into stable `PreparedStateRow`s, reads build a /// `PreparedStateRowOverlay` from those rows, and commit drains the same rows. pub(crate) struct TransactionWriteBuffer { functions: FunctionProviderHandle, rows: Mutex>>, adopted_rows: Mutex>>, by_identity: Mutex>, insert_identities: Mutex>>, commit_members_by_version: Mutex>, extra_commit_parents_by_version: Mutex>>, file_data_writes: Mutex>, } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(crate) enum RowSlot { State(usize), Adopted(usize), } /// Drained prepared transaction writes ready for commit. pub(crate) struct PreparedWriteSet { pub(crate) state_rows: Vec, pub(crate) adopted_rows: Vec, pub(crate) insert_identities: BTreeMap>, pub(crate) commit_members_by_version: BTreeMap, pub(crate) extra_commit_parents_by_version: BTreeMap>, pub(crate) file_data_writes: Vec, } pub(crate) struct PreparedWriteValidationSet<'a> { rows: Vec>, constraint_rows: Vec>, insert_identities: Vec<( &'a PreparedStateRowIdentity, Option<&'a TransactionWriteOrigin>, )>, } pub(crate) struct PreparedWriteValidationIndex<'a> { rows_by_schema_scope: BTreeMap>>, insert_identities_by_schema_scope: BTreeMap< Domain, Vec<( &'a PreparedStateRowIdentity, Option<&'a TransactionWriteOrigin>, )>, >, } #[derive(Clone, Copy)] pub(crate) enum PreparedValidationRow<'a> { State(&'a PreparedStateRow), Adopted(&'a PreparedAdoptedStateRow), } impl<'a> PreparedValidationRow<'a> { pub(crate) fn entity_id(&self) -> &EntityIdentity { match self { Self::State(row) => &row.entity_id, Self::Adopted(row) => &row.entity_id, } } pub(crate) fn schema_plan_id(&self) -> SchemaPlanId { match self { Self::State(row) => row.schema_plan_id, Self::Adopted(row) => row.schema_plan_id, } } pub(crate) fn schema_key(&self) -> &str { match self { Self::State(row) => &row.schema_key, Self::Adopted(row) => &row.schema_key, } } pub(crate) fn file_id(&self) -> &Option { match self { Self::State(row) => &row.file_id, Self::Adopted(row) => &row.file_id, } } #[cfg(test)] pub(crate) fn snapshot_content(&self) -> Option<&str> { match self { Self::State(row) => row .snapshot .as_ref() .map(|snapshot| snapshot.normalized.as_ref()), Self::Adopted(row) => row .snapshot .as_ref() .map(|snapshot| snapshot.normalized.as_ref()), } } pub(crate) fn snapshot_json(self) -> Option<&'a serde_json::Value> { match self { Self::State(row) => row .snapshot .as_ref() .map(|snapshot| snapshot.value.as_ref()), Self::Adopted(row) => row .snapshot .as_ref() .map(|snapshot| snapshot.value.as_ref()), } } pub(crate) fn metadata_json(self) -> Option<&'a serde_json::Value> { match self { Self::State(row) => row .metadata .as_ref() .map(|metadata| metadata.value.as_ref()), Self::Adopted(row) => row .metadata .as_ref() .map(|metadata| metadata.value.as_ref()), } } pub(crate) fn untracked(&self) -> bool { match self { Self::State(row) => row.untracked, Self::Adopted(_) => false, } } pub(crate) fn version_id(&self) -> &str { match self { Self::State(row) => &row.version_id, Self::Adopted(row) => &row.version_id, } } pub(crate) fn domain(&self) -> Domain { Domain::exact_file( 
self.version_id().to_string(), self.untracked(), self.file_id().clone(), ) } pub(crate) fn domain_row_identity(&self) -> DomainRowIdentity { DomainRowIdentity::in_domain( self.domain(), self.schema_key().to_string(), self.entity_id().clone(), ) } } impl<'a> PreparedWriteValidationIndex<'a> { pub(crate) fn schema_scopes(&self) -> impl Iterator { self.rows_by_schema_scope.keys() } pub(crate) fn validation_set_for_schema_scope( &self, schema_scope: &Domain, ) -> PreparedWriteValidationSet<'a> { let constraint_rows = self .rows_by_schema_scope .iter() .flat_map(|(target_scope, rows)| { rows.iter().copied().filter(move |row| { schema_scope.validation_scope_contains_constraint_domain(target_scope) || (row.snapshot_json().is_none() && target_scope.tombstone_domain_affects_validation_scope(schema_scope)) }) }) .collect(); PreparedWriteValidationSet { rows: self .rows_by_schema_scope .get(schema_scope) .cloned() .unwrap_or_default(), constraint_rows, insert_identities: self .insert_identities_by_schema_scope .get(schema_scope) .cloned() .unwrap_or_default(), } } } impl<'a> PreparedWriteValidationSet<'a> { pub(crate) fn rows(&self) -> impl Iterator> + '_ { self.rows.iter().copied() } pub(crate) fn constraint_rows(&self) -> impl Iterator> + '_ { self.constraint_rows.iter().copied() } pub(crate) fn insert_identities( &self, ) -> impl Iterator)> { self.insert_identities .iter() .map(|(identity, origin)| (*identity, *origin)) } } impl PreparedWriteSet { #[cfg(test)] pub(crate) fn validation_rows(&self) -> impl Iterator> + '_ { self.state_rows .iter() .map(PreparedValidationRow::State) .chain(self.adopted_rows.iter().map(PreparedValidationRow::Adopted)) } pub(crate) fn validation_index(&self) -> PreparedWriteValidationIndex<'_> { let mut rows_by_schema_scope = BTreeMap::>>::new(); for row in &self.state_rows { let row = PreparedValidationRow::State(row); rows_by_schema_scope .entry(row.domain().schema_catalog_domain()) .or_default() .push(row); } for row in &self.adopted_rows { let row = PreparedValidationRow::Adopted(row); rows_by_schema_scope .entry(row.domain().schema_catalog_domain()) .or_default() .push(row); } let mut insert_identities_by_schema_scope = BTreeMap::< Domain, Vec<(&PreparedStateRowIdentity, Option<&TransactionWriteOrigin>)>, >::new(); for (identity, origin) in &self.insert_identities { insert_identities_by_schema_scope .entry(identity.domain().schema_catalog_domain()) .or_default() .push((identity, origin.as_ref())); } PreparedWriteValidationIndex { rows_by_schema_scope, insert_identities_by_schema_scope, } } #[cfg(test)] pub(crate) fn validation_set_for_tests(&self) -> PreparedWriteValidationSet<'_> { let rows: Vec<_> = self.validation_rows().collect(); let insert_identities = self .insert_identities .iter() .map(|(identity, origin)| (identity, origin.as_ref())) .collect(); PreparedWriteValidationSet { constraint_rows: rows.clone(), rows, insert_identities, } } } impl TransactionWriteBuffer { pub(crate) fn new(functions: FunctionProviderHandle) -> Self { Self { functions, rows: Mutex::new(Vec::new()), adopted_rows: Mutex::new(Vec::new()), by_identity: Mutex::new(HashMap::new()), insert_identities: Mutex::new(BTreeMap::new()), commit_members_by_version: Mutex::new(BTreeMap::new()), extra_commit_parents_by_version: Mutex::new(BTreeMap::new()), file_data_writes: Mutex::new(Vec::new()), } } /// Drains staged writes for commit. 
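    /// A condensed sketch of the stage/drain round trip, mirroring the unit tests
    /// at the bottom of this file (state_row is a test helper):
    ///
    ///     buffer.stage_write(PreparedTransactionWrite::Rows {
    ///         mode: TransactionWriteMode::Replace,
    ///         rows: vec![state_row("key-a", "first")],
    ///     })?;
    ///     buffer.stage_write(PreparedTransactionWrite::Rows {
    ///         mode: TransactionWriteMode::Replace,
    ///         rows: vec![state_row("key-a", "second")],
    ///     })?;
    ///     let prepared = buffer.drain()?;
    ///     // Only the latest staged row per identity survives, and the
    ///     // identity index is cleared for subsequent staging.
    ///     assert_eq!(prepared.state_rows.len(), 1);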
pub(crate) fn drain(&self) -> Result { let mut rows_guard = self.rows.lock().map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "failed to acquire transaction staged writes lock", ) })?; let mut adopted_rows_guard = self.adopted_rows.lock().map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "failed to acquire transaction staged adopted writes lock", ) })?; let mut by_identity_guard = self.by_identity.lock().map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "failed to acquire transaction staged identity index lock", ) })?; let mut file_data_guard = self.file_data_writes.lock().map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "failed to acquire transaction staged file data lock", ) })?; let mut insert_identities_guard = self.insert_identities.lock().map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "failed to acquire transaction staged insert identity lock", ) })?; let mut commit_members_guard = self.commit_members_by_version.lock().map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "failed to acquire transaction staged commit membership lock", ) })?; let mut extra_parents_guard = self.extra_commit_parents_by_version.lock().map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "failed to acquire transaction staged extra commit parents lock", ) })?; let result = Ok(PreparedWriteSet { state_rows: std::mem::take(&mut *rows_guard) .into_iter() .flatten() .collect(), adopted_rows: std::mem::take(&mut *adopted_rows_guard) .into_iter() .flatten() .collect(), insert_identities: std::mem::take(&mut *insert_identities_guard), commit_members_by_version: std::mem::take(&mut *commit_members_guard), extra_commit_parents_by_version: std::mem::take(&mut *extra_parents_guard), file_data_writes: std::mem::take(&mut *file_data_guard), }); by_identity_guard.clear(); result } /// Records an additional parent for the commit generated for `version_id`. /// /// Normal writes parent the new commit to the version's previous head. /// Merges add the source version head as an extra parent so the commit graph /// preserves branch ancestry while tracked-state roots still apply source /// rows onto the target root. pub(crate) fn add_commit_parent( &self, version_id: String, parent_commit_id: String, ) -> Result<(), LixError> { let mut guard = self.extra_commit_parents_by_version.lock().map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "failed to acquire transaction staged extra commit parents lock", ) })?; let parents = guard.entry(version_id).or_default(); if !parents.contains(&parent_commit_id) { parents.push(parent_commit_id); } Ok(()) } pub(crate) fn staged_commit_id(&self, version_id: &str) -> Result, LixError> { let guard = self.commit_members_by_version.lock().map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "failed to acquire transaction staged commit membership lock", ) })?; Ok(guard .get(version_id) .map(|members| members.commit_id.clone())) } /// Stages a commit for `version_id` even if no tracked state rows changed. /// /// Merge uses this to record graph ancestry for convergent merges where the /// target already has the same final state as the source, but the source /// head is not reachable from the target head. 
pub(crate) fn stage_empty_commit(&self, version_id: String) -> Result { let mut functions = self.functions.clone(); let mut guard = self.commit_members_by_version.lock().map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "failed to acquire transaction staged commit membership lock", ) })?; let members = guard.entry(version_id).or_insert_with(|| { StagedCommitMembers::new( functions.uuid_v7(), functions.uuid_v7(), functions.timestamp(), ) }); members.allow_empty(); Ok(members.commit_id.clone()) } /// Builds the transaction-local read overlay from currently staged writes. pub(crate) fn staging_overlay(self: &Arc) -> Result { let by_identity_guard = self.by_identity.lock().map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "failed to acquire transaction staged identity index lock", ) })?; let slots = by_identity_guard .iter() .map(|(identity, slot)| (identity.clone(), *slot)) .collect(); Ok(PreparedStateRowOverlay { staged_writes: Arc::clone(self), slots, }) } /// Stages one prepared write batch into this transaction. /// /// Frontends hand raw `TransactionWriteRow`s to `Transaction`; normalization prepares /// stable `PreparedStateRow`s before this method indexes them for transaction- /// local reads and commit routing. pub(crate) fn stage_write( &self, write: PreparedTransactionWrite, ) -> Result { let (mode, count) = match &write { PreparedTransactionWrite::Rows { mode, rows } => (Some(*mode), rows.len() as u64), PreparedTransactionWrite::RowsWithFileData { mode, count, .. } => (Some(*mode), *count), PreparedTransactionWrite::AdoptedChanges { rows } => (None, rows.len() as u64), }; let mut functions = self.functions.clone(); let (rows, adopted_rows, file_data_writes) = self.state_rows_from_stage_write(write)?; for row in &rows { validate_commit_membership_support(row)?; } for row in &adopted_rows { validate_adopted_commit_membership_support(row)?; } reject_duplicate_present_rows_in_batch(&rows)?; let mut guard = self.rows.lock().map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "failed to acquire transaction staged writes lock", ) })?; let mut adopted_guard = self.adopted_rows.lock().map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "failed to acquire transaction staged adopted writes lock", ) })?; let mut by_identity_guard = self.by_identity.lock().map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "failed to acquire transaction staged identity index lock", ) })?; let mut commit_members_guard = self.commit_members_by_version.lock().map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "failed to acquire transaction staged commit membership lock", ) })?; let mut insert_identities_guard = self.insert_identities.lock().map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "failed to acquire transaction staged insert identity lock", ) })?; for mut row in rows { let identity = PreparedStateRowIdentity::from(&row); if mode == Some(TransactionWriteMode::Insert) && by_identity_guard.contains_key(&identity) { return Err(duplicate_insert_identity_error(&row)); } if matches!(by_identity_guard.get(&identity), Some(RowSlot::Adopted(_))) { return Err(conflicting_adopted_identity_error(&row)); } let existing_slot = by_identity_guard.remove(&identity); if let Some(RowSlot::State(index)) = existing_slot { if let Some(previous) = guard.get_mut(index).and_then(Option::take) { remove_row_from_commit_members(&mut commit_members_guard, &previous); } } add_row_to_commit_members(&mut commit_members_guard, &mut row, &mut functions); let identity = PreparedStateRowIdentity::from(&row); if mode == 
Some(TransactionWriteMode::Insert) { insert_identities_guard.insert(identity.clone(), row.origin.clone()); } let slot = match existing_slot { Some(RowSlot::State(index)) => { guard[index] = Some(row); RowSlot::State(index) } _ => { let index = guard.len(); guard.push(Some(row)); RowSlot::State(index) } }; by_identity_guard.insert(identity, slot); } for mut row in adopted_rows { let identity = PreparedStateRowIdentity::from(&row); if by_identity_guard.contains_key(&identity) { return Err(conflicting_adopted_projection_error(&row)); } add_adopted_row_to_commit_members(&mut commit_members_guard, &mut row, &mut functions); let identity = PreparedStateRowIdentity::from(&row); let index = adopted_guard.len(); adopted_guard.push(Some(row)); by_identity_guard.insert(identity, RowSlot::Adopted(index)); } if !file_data_writes.is_empty() { self.file_data_writes .lock() .map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "failed to acquire transaction staged file data lock", ) })? .extend(file_data_writes); } Ok(TransactionWriteOutcome { count }) } fn state_rows_from_stage_write( &self, write: PreparedTransactionWrite, ) -> Result< ( Vec, Vec, Vec, ), LixError, > { let mut state_rows = Vec::new(); let mut adopted_rows = Vec::new(); let mut file_data_writes = Vec::new(); match write { PreparedTransactionWrite::Rows { rows, .. } => { state_rows.extend(rows); } PreparedTransactionWrite::RowsWithFileData { rows, file_data, .. } => { state_rows.extend(rows); file_data_writes.extend(file_data); } PreparedTransactionWrite::AdoptedChanges { rows } => { adopted_rows.extend(rows); } } Ok((state_rows, adopted_rows, file_data_writes)) } } /// Read overlay derived from staged transaction writes. pub(crate) struct PreparedStateRowOverlay { staged_writes: Arc, slots: BTreeMap, } pub(crate) struct StagedScanParts { pub(crate) rows: Vec, pub(crate) hidden_identities: BTreeSet, } impl PreparedStateRowOverlay { /// Returns staged rows visible for a scan request. #[cfg(test)] pub(crate) fn scan( &self, request: &LiveStateScanRequest, ) -> Result, LixError> { Ok(self.scan_parts(request)?.rows) } /// Returns staged rows and base-row identities hidden by staged rows in one pass. /// /// Tombstones hide base rows even when the request does not include /// tombstone rows in the visible result set. 
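    /// A condensed sketch of how the read path consumes the two parts:
    ///
    ///     let parts = overlay.scan_parts(&request)?;
    ///     // parts.rows are the staged rows visible for this request;
    ///     // parts.hidden_identities also lists staged tombstones, so
    ///     // overlay_scan_rows can drop the matching base rows even when
    ///     // include_tombstones is false.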
pub(crate) fn scan_parts( &self, request: &LiveStateScanRequest, ) -> Result { let rows_guard = self.staged_writes.rows.lock().map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "failed to acquire transaction staged writes lock", ) })?; let adopted_guard = self.staged_writes.adopted_rows.lock().map_err(|_| { LixError::new( "LIX_ERROR_UNKNOWN", "failed to acquire transaction staged adopted writes lock", ) })?; let mut rows = Vec::new(); let mut hidden_identities = BTreeSet::new(); for (identity, slot) in &self.slots { match *slot { RowSlot::State(index) => { let Some(row) = rows_guard.get(index).and_then(Option::as_ref) else { continue; }; if !staged_row_identity_matches_scan(row, request) { continue; } hidden_identities.insert(identity.clone()); if row.snapshot.is_some() || request.filter.include_tombstones { rows.push(MaterializedLiveStateRow::from(row)); } } RowSlot::Adopted(index) => { let Some(row) = adopted_guard.get(index).and_then(Option::as_ref) else { continue; }; if !adopted_row_identity_matches_scan(row, request) { continue; } hidden_identities.insert(identity.clone()); if row.snapshot.is_some() || request.filter.include_tombstones { rows.push(MaterializedLiveStateRow::from(row)); } } } } Ok(StagedScanParts { rows, hidden_identities, }) } /// Returns a staged exact-row answer, if this transaction has one. #[cfg(test)] pub(crate) fn load_exact(&self, request: &LiveStateRowRequest) -> Option { let untracked_identity = PreparedStateRowIdentity::from_exact_request(request, true)?; if let Some(row) = self.load_state_slot(&untracked_identity) { return Some(if row.snapshot.is_none() { StagedExactRow::Tombstone } else { StagedExactRow::Row(MaterializedLiveStateRow::from(&row)) }); } let identity = PreparedStateRowIdentity::from_exact_request(request, false)?; if let Some(row) = self.load_state_slot(&identity) { return Some(if row.snapshot.is_none() { StagedExactRow::Tombstone } else { StagedExactRow::Row(MaterializedLiveStateRow::from(&row)) }); } self.load_adopted_slot(&identity).map(|row| { if row.snapshot.is_none() { StagedExactRow::Tombstone } else { StagedExactRow::Row(MaterializedLiveStateRow::from(&row)) } }) } #[cfg(test)] fn load_state_slot(&self, identity: &PreparedStateRowIdentity) -> Option { let Some(RowSlot::State(index)) = self.slots.get(identity).copied() else { return None; }; self.staged_writes .rows .lock() .ok()? .get(index)? .as_ref() .cloned() } #[cfg(test)] fn load_adopted_slot( &self, identity: &PreparedStateRowIdentity, ) -> Option { let Some(RowSlot::Adopted(index)) = self.slots.get(identity).copied() else { return None; }; self.staged_writes .adopted_rows .lock() .ok()? .get(index)? .as_ref() .cloned() } } #[cfg(test)] pub(crate) enum StagedExactRow { Row(MaterializedLiveStateRow), Tombstone, } #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)] pub(crate) struct PreparedStateRowIdentity { untracked: bool, schema_key: String, entity_id: crate::entity_identity::EntityIdentity, file_id: Option, version_id: String, } impl PreparedStateRowIdentity { fn from_staged_row(row: &PreparedStateRow) -> Self { Self { untracked: row.untracked, schema_key: row.schema_key.clone(), entity_id: row.entity_id.clone(), file_id: row.file_id.clone(), version_id: row.version_id.clone(), } } #[cfg(test)] fn from_exact_request(request: &LiveStateRowRequest, untracked: bool) -> Option { let file_id = match &request.file_id { NullableKeyFilter::Null => None, NullableKeyFilter::Value(value) => Some(value.clone()), // Exact overlay lookup requires a concrete row identity. 
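// NullableKeyFilter::Null maps to a concrete `None` file_id and
// NullableKeyFilter::Value(v) to `Some(v)`, but `Any` matches more than one
// possible identity, so there is no single staged slot to consult and the
// lookup reports "no staged answer" instead.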
NullableKeyFilter::Any => return None, }; Some(Self { untracked, schema_key: request.schema_key.clone(), entity_id: request.entity_id.clone(), file_id, version_id: request.version_id.clone(), }) } pub(crate) fn schema_key(&self) -> &str { &self.schema_key } pub(crate) fn entity_id(&self) -> &crate::entity_identity::EntityIdentity { &self.entity_id } pub(crate) fn domain(&self) -> Domain { Domain::exact_file( self.version_id.clone(), self.untracked, self.file_id.clone(), ) } } impl From<&PreparedStateRow> for PreparedStateRowIdentity { fn from(row: &PreparedStateRow) -> Self { Self::from_staged_row(row) } } impl From<&PreparedAdoptedStateRow> for PreparedStateRowIdentity { fn from(row: &PreparedAdoptedStateRow) -> Self { Self { untracked: false, schema_key: row.schema_key.clone(), entity_id: row.entity_id.clone(), file_id: row.file_id.clone(), version_id: row.version_id.clone(), } } } impl From<&MaterializedLiveStateRow> for PreparedStateRowIdentity { fn from(row: &MaterializedLiveStateRow) -> Self { Self { untracked: row.untracked, schema_key: row.schema_key.clone(), entity_id: row.entity_id.clone(), file_id: row.file_id.clone(), version_id: row.version_id.clone(), } } } fn validate_commit_membership_support(row: &PreparedStateRow) -> Result<(), LixError> { if row.global && row.version_id != GLOBAL_VERSION_ID { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "engine global staged rows must use the global version id", )); } Ok(()) } fn validate_adopted_commit_membership_support( row: &PreparedAdoptedStateRow, ) -> Result<(), LixError> { if row.global && row.version_id != GLOBAL_VERSION_ID { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "engine global adopted rows must use the global version id", )); } Ok(()) } fn reject_duplicate_present_rows_in_batch(rows: &[PreparedStateRow]) -> Result<(), LixError> { let mut pending_present_rows = BTreeMap::::new(); for row in rows { let identity = PreparedStateRowIdentity::from(row); if row.snapshot.is_none() { pending_present_rows.remove(&identity); continue; } if let Some(previous) = pending_present_rows.insert(identity, row) { return Err(duplicate_staged_present_row_error(row, previous)); } } Ok(()) } fn duplicate_staged_present_row_error( row: &PreparedStateRow, previous: &PreparedStateRow, ) -> LixError { let message = logical_primary_key_violation_message(row.origin.as_ref()) .unwrap_or_else(|| { format!( "primary-key constraint violation on schema '{}': duplicate staged rows for entity_id '{}' in version '{}'", row.schema_key, previous .entity_id .as_json_array_text() .unwrap_or_else(|_| "".to_string()), row.version_id ) }); LixError::new(LixError::CODE_UNIQUE, message) } pub(crate) fn duplicate_insert_identity_message( schema_key: &str, entity_id: &crate::entity_identity::EntityIdentity, version_id: Option<&str>, origin: Option<&TransactionWriteOrigin>, ) -> String { if let Some(message) = logical_primary_key_violation_message(origin) { return message; } let entity_id = entity_id .as_json_array_text() .unwrap_or_else(|_| "".to_string()); match version_id { Some(version_id) => format!( "primary-key constraint violation on schema '{schema_key}': INSERT would duplicate entity_id '{entity_id}' in version '{version_id}'" ), None => format!( "primary-key constraint violation on schema '{schema_key}': INSERT would duplicate entity_id '{entity_id}'" ), } } fn duplicate_insert_identity_error(row: &PreparedStateRow) -> LixError { let message = duplicate_insert_identity_message( &row.schema_key, &row.entity_id, Some(&row.version_id), row.origin.as_ref(), 
); LixError::new(LixError::CODE_UNIQUE, message) } fn logical_primary_key_violation_message( origin: Option<&TransactionWriteOrigin>, ) -> Option { let origin = origin?; if origin.operation != TransactionWriteOperation::Insert { return None; } let primary_key = origin.primary_key.as_ref()?; Some(format!( "primary-key constraint violation on table '{}': INSERT would duplicate {}", origin.surface, format_logical_primary_key(primary_key) )) } fn format_logical_primary_key(primary_key: &LogicalPrimaryKey) -> String { primary_key .columns .iter() .enumerate() .map(|(index, column)| { let value = primary_key .values .get(index) .map(String::as_str) .unwrap_or(""); format!("{column} '{value}'") }) .collect::>() .join(", ") } fn conflicting_adopted_identity_error(row: &PreparedStateRow) -> LixError { LixError::new( LixError::CODE_UNIQUE, format!( "transaction cannot stage a new row and an adopted projection for schema '{}' entity_id '{}' in version '{}'", row.schema_key, row.entity_id .as_json_array_text() .unwrap_or_else(|_| "".to_string()), row.version_id ), ) } fn conflicting_adopted_projection_error(row: &PreparedAdoptedStateRow) -> LixError { LixError::new( LixError::CODE_UNIQUE, format!( "transaction cannot stage duplicate adopted projections for schema '{}' entity_id '{}' in version '{}'", row.schema_key, row.entity_id .as_json_array_text() .unwrap_or_else(|_| "".to_string()), row.version_id ), ) } fn add_row_to_commit_members( members_by_version: &mut BTreeMap, row: &mut PreparedStateRow, functions: &mut dyn FunctionProvider, ) { if row.untracked { return; } let change_id = row .change_id .clone() .expect("tracked staged rows must carry change_id for commit membership"); let members = members_by_version .entry(row.version_id.clone()) .or_insert_with(|| { StagedCommitMembers::new( functions.uuid_v7(), functions.uuid_v7(), functions.timestamp(), ) }); row.commit_id = Some(members.commit_id.clone()); members.add_change_id(change_id); } fn add_adopted_row_to_commit_members( members_by_version: &mut BTreeMap, row: &mut PreparedAdoptedStateRow, functions: &mut dyn FunctionProvider, ) { let members = members_by_version .entry(row.version_id.clone()) .or_insert_with(|| { StagedCommitMembers::new( functions.uuid_v7(), functions.uuid_v7(), functions.timestamp(), ) }); row.commit_id = members.commit_id.clone(); members.add_change_id(row.change_id.clone()); } fn remove_row_from_commit_members( members_by_version: &mut BTreeMap, row: &PreparedStateRow, ) { if row.untracked { return; } let Some(members) = members_by_version.get_mut(&row.version_id) else { return; }; let Some(change_id) = row.change_id.as_deref() else { return; }; members.remove_change_id(change_id); if members.is_empty() { members_by_version.remove(&row.version_id); } } fn adopted_row_identity_matches_scan( row: &PreparedAdoptedStateRow, request: &LiveStateScanRequest, ) -> bool { if !request.filter.schema_keys.is_empty() && !request.filter.schema_keys.contains(&row.schema_key) { return false; } if !request.filter.entity_ids.is_empty() && !request.filter.entity_ids.contains(&row.entity_id) { return false; } if !request.filter.version_ids.is_empty() && !request.filter.version_ids.contains(&row.version_id) { return false; } if request.filter.untracked == Some(true) { return false; } nullable_key_matches_filters(&row.file_id, &request.filter.file_ids) } fn staged_row_identity_matches_scan( row: &PreparedStateRow, request: &LiveStateScanRequest, ) -> bool { if !request.filter.schema_keys.is_empty() && 
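// Empty filter lists mean "no constraint": only a non-empty schema_keys,
// entity_ids, or version_ids list narrows which staged rows match this scan.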
!request.filter.schema_keys.contains(&row.schema_key) { return false; } if !request.filter.entity_ids.is_empty() && !request.filter.entity_ids.contains(&row.entity_id) { return false; } if !request.filter.version_ids.is_empty() && !request.filter.version_ids.contains(&row.version_id) { return false; } if request .filter .untracked .is_some_and(|untracked| row.untracked != untracked) { return false; } nullable_key_matches_filters(&row.file_id, &request.filter.file_ids) } fn nullable_key_matches_filters( value: &Option, filters: &[NullableKeyFilter], ) -> bool { filters.is_empty() || filters .iter() .any(|filter| nullable_key_matches_filter(value, filter)) } fn nullable_key_matches_filter(value: &Option, filter: &NullableKeyFilter) -> bool { match filter { NullableKeyFilter::Any => true, NullableKeyFilter::Null => value.is_none(), NullableKeyFilter::Value(expected) => value.as_ref() == Some(expected), } } #[cfg(test)] mod tests { use super::*; use crate::functions::SharedFunctionProvider; use crate::live_state::{LiveStateFilter, LiveStateRowRequest}; #[tokio::test] async fn staging_overlay_uses_last_staged_row_for_exact_load() { let staged_writes = test_staged_writes(); staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![state_row("sql2-duplicate-key", "first")], }) .expect("initial row should stage"); staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![state_row("sql2-duplicate-key", "second")], }) .expect("staging rows should succeed"); let overlay = staged_writes .staging_overlay() .expect("overlay should build from staged rows"); let row = overlay .load_exact(&LiveStateRowRequest { schema_key: "lix_key_value".to_string(), version_id: "global".to_string(), entity_id: crate::entity_identity::EntityIdentity::single("sql2-duplicate-key"), file_id: NullableKeyFilter::Null, }) .expect("staged row should be visible"); let StagedExactRow::Row(row) = row else { panic!("latest staged row should not be a tombstone"); }; assert_eq!( row.snapshot_content.as_deref(), Some("{\"key\":\"sql2-duplicate-key\",\"value\":\"second\"}") ); } #[tokio::test] async fn staging_overlay_scan_returns_only_latest_row_per_identity() { let staged_writes = test_staged_writes(); staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![state_row("sql2-duplicate-key", "first")], }) .expect("initial row should stage"); staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![state_row("sql2-duplicate-key", "second")], }) .expect("staging rows should succeed"); let overlay = staged_writes .staging_overlay() .expect("overlay should build from staged rows"); let rows = overlay .scan(&scan_request_for_key("sql2-duplicate-key", false)) .expect("overlay scan should succeed"); assert_eq!(rows.len(), 1); assert_eq!( rows[0].snapshot_content.as_deref(), Some("{\"key\":\"sql2-duplicate-key\",\"value\":\"second\"}") ); } #[tokio::test] async fn staging_overlay_delete_hides_prior_staged_insert() { let staged_writes = test_staged_writes(); staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![ state_row("sql2-delete-key", "visible"), tombstone_row("sql2-delete-key"), ], }) .expect("staging rows should succeed"); let overlay = staged_writes .staging_overlay() .expect("overlay should build from staged rows"); let exact = overlay 
.load_exact(&exact_request_for_key("sql2-delete-key")) .expect("staged tombstone should answer exact load"); assert!(matches!(exact, StagedExactRow::Tombstone)); assert!(overlay .scan(&scan_request_for_key("sql2-delete-key", false)) .expect("overlay scan should succeed") .is_empty()); let tombstones = overlay .scan(&scan_request_for_key("sql2-delete-key", true)) .expect("overlay scan should succeed"); assert_eq!(tombstones.len(), 1); assert_eq!(tombstones[0].snapshot_content, None); } #[tokio::test] async fn staging_overlay_insert_after_delete_resurrects_row() { let staged_writes = test_staged_writes(); staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![ tombstone_row("sql2-resurrect-key"), state_row("sql2-resurrect-key", "visible-again"), ], }) .expect("staging rows should succeed"); let overlay = staged_writes .staging_overlay() .expect("overlay should build from staged rows"); let exact = overlay .load_exact(&exact_request_for_key("sql2-resurrect-key")) .expect("staged row should answer exact load"); let StagedExactRow::Row(row) = exact else { panic!("latest staged row should be visible"); }; assert_eq!( row.snapshot_content.as_deref(), Some("{\"key\":\"sql2-resurrect-key\",\"value\":\"visible-again\"}") ); assert_eq!( overlay .scan(&scan_request_for_key("sql2-resurrect-key", false)) .expect("overlay scan should succeed") .len(), 1 ); } #[tokio::test] async fn staged_writes_drain_returns_coalesced_latest_rows() { let staged_writes = test_staged_writes(); staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![ state_row("sql2-key-a", "first"), state_row("sql2-key-b", "only"), ], }) .expect("initial rows should stage"); staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![state_row("sql2-key-a", "second")], }) .expect("staging rows should succeed"); let drained = staged_writes.drain().expect("drain should succeed"); assert_eq!(drained.state_rows.len(), 2); assert!(drained.state_rows.iter().any(|row| { row.entity_id == crate::entity_identity::EntityIdentity::single("sql2-key-a") && row .snapshot .as_ref() .map(|snapshot| snapshot.normalized.as_ref()) == Some("{\"key\":\"sql2-key-a\",\"value\":\"second\"}") })); assert!(drained.state_rows.iter().any(|row| { row.entity_id == crate::entity_identity::EntityIdentity::single("sql2-key-b") && row .snapshot .as_ref() .map(|snapshot| snapshot.normalized.as_ref()) == Some("{\"key\":\"sql2-key-b\",\"value\":\"only\"}") })); } #[tokio::test] async fn staged_writes_drain_preserves_file_data_payloads() { let staged_writes = test_staged_writes(); staged_writes .stage_write(PreparedTransactionWrite::RowsWithFileData { mode: TransactionWriteMode::Replace, rows: vec![state_row("file-readme", "descriptor")], file_data: vec![TransactionFileData { file_id: "file-readme".to_string(), version_id: "global".to_string(), untracked: true, data: b"hello".to_vec(), }], count: 1, }) .expect("staging rows with file data should succeed"); let drained = staged_writes.drain().expect("drain should succeed"); assert_eq!(drained.state_rows.len(), 1); assert_eq!(drained.file_data_writes.len(), 1); assert_eq!(drained.file_data_writes[0].file_id, "file-readme"); assert_eq!(drained.file_data_writes[0].data, b"hello"); } #[tokio::test] async fn staged_writes_track_commit_members_for_tracked_global_rows() { let staged_writes = test_staged_writes(); staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: 
TransactionWriteMode::Replace, rows: vec![state_row("tracked-key", "value").with_tracked()], }) .expect("tracked global row should stage"); let drained = staged_writes.drain().expect("drain should succeed"); let members = drained .commit_members_by_version .get("global") .expect("global commit members should exist"); assert_eq!( members.change_ids.iter().cloned().collect::>(), vec!["test-change-id".to_string()] ); } #[tokio::test] async fn staged_writes_do_not_track_untracked_rows_as_commit_members() { let staged_writes = test_staged_writes(); staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![state_row("untracked-key", "value")], }) .expect("untracked row should stage"); let drained = staged_writes.drain().expect("drain should succeed"); assert!(drained.commit_members_by_version.is_empty()); } #[tokio::test] async fn staged_writes_replace_commit_member_on_tracked_overwrite() { let staged_writes = test_staged_writes(); staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![state_row("overwrite-key", "first") .with_tracked() .with_change_id("change-first")], }) .expect("initial tracked row should stage"); staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![state_row("overwrite-key", "second") .with_tracked() .with_change_id("change-second")], }) .expect("tracked overwrite should stage"); let drained = staged_writes.drain().expect("drain should succeed"); let members = drained .commit_members_by_version .get("global") .expect("global commit members should exist"); assert_eq!( members.change_ids.iter().cloned().collect::>(), vec!["change-second".to_string()] ); } #[tokio::test] async fn staged_writes_keep_tracked_and_untracked_domains_separate() { let staged_writes = test_staged_writes(); staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![ state_row("tracked-to-untracked-key", "tracked") .with_tracked() .with_change_id("change-tracked"), state_row("tracked-to-untracked-key", "untracked") .with_change_id("change-untracked"), ], }) .expect("untracked overwrite should stage"); let drained = staged_writes.drain().expect("drain should succeed"); assert_eq!(drained.state_rows.len(), 2); assert!(drained .state_rows .iter() .any(|row| { row.change_id.as_deref() == Some("change-tracked") && !row.untracked })); assert!(drained .state_rows .iter() .any(|row| { row.change_id.as_deref() == Some("change-untracked") && row.untracked })); let members = drained .commit_members_by_version .get("global") .expect("tracked commit member should remain in tracked domain"); assert_eq!( members.change_ids.iter().cloned().collect::>(), vec!["change-tracked".to_string()] ); } #[tokio::test] async fn staged_writes_reject_duplicate_present_rows_in_one_batch() { let staged_writes = test_staged_writes(); let error = staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![ state_row("duplicate-present-key", "first"), state_row("duplicate-present-key", "second"), ], }) .expect_err("same-batch duplicate present rows should fail"); assert_eq!(error.code, LixError::CODE_UNIQUE); assert!( error.message.contains("primary-key constraint violation"), "error should explain the duplicate primary key: {error:?}" ); } #[tokio::test] async fn staged_writes_insert_keeps_tracked_and_untracked_rows_as_distinct_identities() { let staged_writes = test_staged_writes(); 
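        // Tracked and untracked rows share the entity id here but live in separate
        // storage domains, so staging both in one Insert batch is not treated as a
        // primary-key conflict; the assertions below check that both survive drain.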
staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Insert, rows: vec![ state_row("shared-domain-key", "tracked").with_tracked(), state_row("shared-domain-key", "untracked"), ], }) .expect("tracked and untracked rows are distinct domain identities"); let drained = staged_writes.drain().expect("drain should succeed"); assert_eq!(drained.state_rows.len(), 2); assert!(drained.state_rows.iter().any(|row| { row.entity_id == crate::entity_identity::EntityIdentity::single("shared-domain-key") && !row.untracked })); assert!(drained.state_rows.iter().any(|row| { row.entity_id == crate::entity_identity::EntityIdentity::single("shared-domain-key") && row.untracked })); } #[tokio::test] async fn staged_writes_track_active_version_members_separately() { let staged_writes = test_staged_writes(); staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![state_row("active-version-key", "value") .with_tracked() .with_version("version-a")], }) .expect("active-version tracked staging should accumulate members"); let drained = staged_writes.drain().expect("drain should succeed"); let members = drained .commit_members_by_version .get("version-a") .expect("active-version commit members should exist"); assert_eq!( members.change_ids.iter().cloned().collect::>(), vec!["test-change-id".to_string()] ); } #[tokio::test] async fn staged_writes_reject_global_rows_with_non_global_version_id() { let staged_writes = test_staged_writes(); let error = staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![{ let mut row = state_row("invalid-global-key", "value"); row.version_id = "version-a".to_string(); row }], }) .expect_err("global row with non-global version should fail"); assert!(error .message .contains("global staged rows must use the global version id")); } #[tokio::test] async fn staging_overlay_identity_matches_live_state_conflict_key() { let staged_writes = test_staged_writes(); staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![state_row("shared-entity", "base")], }) .expect("initial same-identity row should stage"); staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![ state_row("shared-entity", "base"), state_row("shared-entity", "other-version").with_version("version-b"), state_row("shared-entity", "other-schema").with_schema("other_schema"), state_row("shared-entity", "other-file").with_file_id("file-a"), state_row("shared-entity", "tracked").with_tracked(), ], }) .expect("staging rows should succeed"); let overlay = staged_writes .staging_overlay() .expect("overlay should build from staged rows"); let rows = overlay .scan(&LiveStateScanRequest { filter: LiveStateFilter { entity_ids: vec![crate::entity_identity::EntityIdentity::single( "shared-entity", )], include_tombstones: true, ..LiveStateFilter::default() }, ..LiveStateScanRequest::default() }) .expect("overlay scan should succeed"); assert_eq!(rows.len(), 5); assert_eq!( rows.iter() .filter(|row| row.entity_id == crate::entity_identity::EntityIdentity::single("shared-entity") && row.version_id == "global" && row.schema_key == "lix_key_value" && row.file_id.is_none()) .count(), 2 ); assert!(rows.iter().any(|row| { row.snapshot_content.as_deref() == Some("{\"key\":\"shared-entity\",\"value\":\"tracked\"}") })); } #[tokio::test] async fn staged_writes_use_injected_function_provider_for_commit_metadata() { let 
staged_writes = test_staged_writes(); staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![state_row("sql2-functions-key", "value").with_tracked()], }) .expect("staging rows should succeed"); let drained = staged_writes.drain().expect("drain should succeed"); let members = drained .commit_members_by_version .get("global") .expect("global commit members should exist"); assert_eq!(members.commit_id, "test-uuid-1"); assert_eq!(members.commit_change_id, "test-uuid-2"); assert_eq!(members.created_at, "test-timestamp-1"); } #[tokio::test] async fn staged_writes_stamp_tracked_rows_with_commit_id_during_staging() { let staged_writes = test_staged_writes(); staged_writes .stage_write(PreparedTransactionWrite::Rows { mode: TransactionWriteMode::Replace, rows: vec![state_row("tracked-commit-key", "value").with_tracked()], }) .expect("tracked row should stage"); let drained = staged_writes.drain().expect("drain should succeed"); assert_eq!(drained.state_rows.len(), 1); assert_eq!( drained.state_rows[0].commit_id.as_deref(), Some("test-uuid-1") ); assert_eq!( drained .commit_members_by_version .get("global") .expect("global commit members should exist") .commit_id, "test-uuid-1" ); } fn test_staged_writes() -> Arc { Arc::new(TransactionWriteBuffer::new(SharedFunctionProvider::new( Box::new(TestFunctionProvider::default()) as Box, ))) } #[derive(Default)] struct TestFunctionProvider { uuid_count: usize, timestamp_count: usize, } impl FunctionProvider for TestFunctionProvider { fn uuid_v7(&mut self) -> String { self.uuid_count += 1; format!("test-uuid-{}", self.uuid_count) } fn timestamp(&mut self) -> String { self.timestamp_count += 1; format!("test-timestamp-{}", self.timestamp_count) } } fn state_row(key: &str, value: &str) -> PreparedStateRow { let snapshot = stage_json_from_value( TransactionJson::from_value_for_test(serde_json::json!({ "key": key, "value": value })), "test staged row snapshot_content", ) .expect("test snapshot should prepare"); PreparedStateRow { schema_plan_id: SchemaPlanId::for_test(0), facts: crate::transaction::types::PreparedRowFacts::default(), entity_id: crate::entity_identity::EntityIdentity::single(key), schema_key: "lix_key_value".to_string(), file_id: None, snapshot: Some(snapshot), metadata: None, origin: None, created_at: "test-created-at".to_string(), updated_at: "test-updated-at".to_string(), global: true, change_id: None, commit_id: None, untracked: true, version_id: "global".to_string(), } } fn tombstone_row(key: &str) -> PreparedStateRow { let mut row = state_row(key, "deleted"); row.snapshot = None; row } fn exact_request_for_key(key: &str) -> LiveStateRowRequest { LiveStateRowRequest { schema_key: "lix_key_value".to_string(), version_id: "global".to_string(), entity_id: crate::entity_identity::EntityIdentity::single(key), file_id: NullableKeyFilter::Null, } } fn scan_request_for_key(key: &str, include_tombstones: bool) -> LiveStateScanRequest { LiveStateScanRequest { filter: LiveStateFilter { schema_keys: vec!["lix_key_value".to_string()], entity_ids: vec![crate::entity_identity::EntityIdentity::single(key)], version_ids: vec!["global".to_string()], file_ids: vec![NullableKeyFilter::Null], include_tombstones, ..LiveStateFilter::default() }, ..LiveStateScanRequest::default() } } trait StateRowTestExt { fn with_schema(self, schema_key: &str) -> Self; fn with_file_id(self, file_id: &str) -> Self; fn with_tracked(self) -> Self; fn with_version(self, version_id: &str) -> Self; fn with_change_id(self, change_id: 
&str) -> Self; } impl StateRowTestExt for PreparedStateRow { fn with_schema(mut self, schema_key: &str) -> Self { self.schema_key = schema_key.to_string(); self } fn with_file_id(mut self, file_id: &str) -> Self { self.file_id = Some(file_id.to_string()); self } fn with_tracked(mut self) -> Self { self.untracked = false; if self.change_id.is_none() { self.change_id = Some("test-change-id".to_string()); } self } fn with_version(mut self, version_id: &str) -> Self { self.version_id = version_id.to_string(); self.global = version_id == GLOBAL_VERSION_ID; self } fn with_change_id(mut self, change_id: &str) -> Self { self.change_id = Some(change_id.to_string()); self } } } ================================================ FILE: packages/engine/src/transaction/types.rs ================================================ use std::{collections::BTreeSet, fmt, ops::Deref, sync::Arc}; use crate::catalog::SchemaPlanId; use crate::entity_identity::EntityIdentity; use crate::json_store::JsonRef; use crate::live_state::MaterializedLiveStateRow; use crate::tracked_state::MaterializedTrackedStateRow; use crate::untracked_state::MaterializedUntrackedStateRow; use crate::LixError; use serde::{Deserialize, Deserializer, Serialize, Serializer}; use serde_json::Value as JsonValue; #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct TransactionJson { value: Arc, normalized: Arc, } impl TransactionJson { pub(crate) fn from_value(value: JsonValue, context: &str) -> Result { let normalized: Arc = serde_json::to_string(&value) .map_err(|error| { LixError::new( LixError::CODE_UNKNOWN, format!("{context} failed to serialize as normalized JSON: {error}"), ) })? .into(); Ok(Self { value: Arc::new(value), normalized, }) } pub(crate) fn from_value_unchecked(value: JsonValue) -> Self { Self::from_value(value, "transaction JSON") .expect("serializing serde_json::Value should not fail") } #[cfg(test)] pub(crate) fn from_value_for_test(value: JsonValue) -> Self { Self::from_value(value, "test transaction JSON").expect("test JSON should normalize") } pub(crate) fn from_parts(value: Arc, normalized: Arc) -> Self { Self { value, normalized } } pub(crate) fn value(&self) -> &JsonValue { self.value.as_ref() } pub(crate) fn normalized(&self) -> &str { self.normalized.as_ref() } pub(crate) fn into_parts(self) -> (Arc, Arc) { (self.value, self.normalized) } } impl Deref for TransactionJson { type Target = JsonValue; fn deref(&self) -> &Self::Target { self.value() } } impl PartialEq for TransactionJson { fn eq(&self, other: &JsonValue) -> bool { self.value() == other } } impl PartialEq for JsonValue { fn eq(&self, other: &TransactionJson) -> bool { self == other.value() } } impl fmt::Display for TransactionJson { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { f.write_str(self.normalized()) } } impl Serialize for TransactionJson { fn serialize(&self, serializer: S) -> Result where S: Serializer, { self.value.serialize(serializer) } } impl<'de> Deserialize<'de> for TransactionJson { fn deserialize(deserializer: D) -> Result where D: Deserializer<'de>, { let value = JsonValue::deserialize(deserializer)?; Self::from_value(value, "transaction JSON").map_err(serde::de::Error::custom) } } /// State row accepted at the transaction write boundary. /// /// External SQL/provider code must parse any textual JSON before constructing /// this type. 
The transaction receives `TransactionJson`, applies schema /// defaults and identity derivation, then prepares JSON refs in /// `PreparedStateRow` without serializing already-normalized JSON again. /// /// SQL providers stage semantic rows, not final storage rows. INSERT providers /// may omit defaulted snapshot fields and leave `entity_id` unset when the /// target schema has an `x-lix-primary-key`; transaction normalization applies /// schema defaults and derives the final identity. Typed UPDATE providers must /// stage full rewritten snapshots after applying column assignments to the /// existing row. Raw `lix_state` snapshot updates are replacement writes, not /// implicit patches. #[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)] pub(crate) struct TransactionWriteRow { pub(crate) entity_id: Option, pub(crate) schema_key: String, pub(crate) file_id: Option, pub(crate) snapshot: Option, pub(crate) metadata: Option, pub(crate) origin: Option, pub(crate) created_at: Option, pub(crate) updated_at: Option, pub(crate) global: bool, pub(crate) change_id: Option, pub(crate) commit_id: Option, pub(crate) untracked: bool, pub(crate) version_id: String, } impl TransactionWriteRow { pub(crate) fn schema_scope_version_id(&self) -> &str { if self.global { crate::GLOBAL_VERSION_ID } else { self.version_id.as_str() } } } /// User-facing write operation that produced one physical staged row. /// /// Composite SQL surfaces such as `lix_file` lower one logical row into /// multiple state rows. The transaction layer owns final constraint validation, /// but error messages should stay in the vocabulary of the logical operation /// when the caller did not write the physical state schema directly. #[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)] pub(crate) struct TransactionWriteOrigin { pub(crate) surface: String, pub(crate) operation: TransactionWriteOperation, pub(crate) primary_key: Option, } #[derive(Debug, Clone, Copy, PartialEq, Eq, serde::Serialize, serde::Deserialize)] pub(crate) enum TransactionWriteOperation { Insert, Update, Delete, } #[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)] pub(crate) struct LogicalPrimaryKey { pub(crate) columns: Vec, pub(crate) values: Vec, } /// Incoming file payload paired with transaction write rows. #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct TransactionFileData { pub(crate) file_id: String, pub(crate) version_id: String, pub(crate) untracked: bool, pub(crate) data: Vec, } /// Existing canonical change adopted into another version's tracked projection. /// /// Merges use this path when the source side already owns the canonical /// changelog fact. The target commit references that existing change id and /// writes a target-version projection row without appending a copied change. #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct TransactionAdoptedChange { pub(crate) version_id: String, pub(crate) change_id: String, pub(crate) projected_row: MaterializedTrackedStateRow, } /// One decoded write batch accepted by the transaction boundary. #[derive(Debug, Clone, PartialEq, Eq)] pub(crate) enum TransactionWrite { Rows { mode: TransactionWriteMode, rows: Vec, }, RowsWithFileData { mode: TransactionWriteMode, rows: Vec, file_data: Vec, count: u64, }, AdoptedChanges { changes: Vec, }, } /// One decoded write batch after semantic normalization and JSON preparation. 
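///
/// Variant-for-variant this mirrors [`TransactionWrite`]: `Rows` and
/// `RowsWithFileData` now carry [`PreparedStateRow`]s instead of
/// [`TransactionWriteRow`]s, and `AdoptedChanges` carries hydrated
/// [`PreparedAdoptedStateRow`]s in place of [`TransactionAdoptedChange`]s.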
#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) enum PreparedTransactionWrite {
    Rows {
        mode: TransactionWriteMode,
        rows: Vec<PreparedStateRow>,
    },
    RowsWithFileData {
        mode: TransactionWriteMode,
        rows: Vec<PreparedStateRow>,
        file_data: Vec<TransactionFileData>,
        count: u64,
    },
    AdoptedChanges {
        rows: Vec<PreparedAdoptedStateRow>,
    },
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub(crate) enum TransactionWriteMode {
    Insert,
    Replace,
}

/// Result returned after the transaction accepts a write batch.
#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct TransactionWriteOutcome {
    pub(crate) count: u64,
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct StageJson {
    pub(crate) value: Arc<JsonValue>,
    pub(crate) normalized: Arc<str>,
    pub(crate) json_ref: JsonRef,
}

impl StageJson {
    pub(crate) fn materialize(&self) -> String {
        self.normalized.as_ref().to_string()
    }
}

pub(crate) fn stage_json_from_value(
    value: TransactionJson,
    _context: &str,
) -> Result<StageJson, LixError> {
    let (value, normalized) = value.into_parts();
    let json_ref = JsonRef::for_content(normalized.as_bytes());
    Ok(StageJson {
        value,
        normalized,
        json_ref,
    })
}

#[derive(Debug, Clone, Default, PartialEq, Eq)]
pub(crate) struct PreparedRowFacts {
    /// Placeholder for the next cut: row-derived constraint facts will be
    /// computed once during normalization and consumed by validation.
    pub(crate) _sealed: (),
}

/// Prepared state row owned by the transaction write buffer.
///
/// This is the first boundary that owns `StageJson`: JSON has been normalized
/// and assigned a content-addressed `JsonRef`. Durable placement belongs to the
/// JSON store at batch staging time, not row preparation time.
/// Storage owners must receive only the ref-backed row forms derived from this
/// type.
#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct PreparedStateRow {
    pub(crate) schema_plan_id: SchemaPlanId,
    pub(crate) facts: PreparedRowFacts,
    pub(crate) entity_id: EntityIdentity,
    pub(crate) schema_key: String,
    pub(crate) file_id: Option<String>,
    pub(crate) snapshot: Option<StageJson>,
    pub(crate) metadata: Option<StageJson>,
    pub(crate) origin: Option<TransactionWriteOrigin>,
    pub(crate) created_at: String,
    pub(crate) updated_at: String,
    pub(crate) global: bool,
    pub(crate) change_id: Option<String>,
    pub(crate) commit_id: Option<String>,
    pub(crate) untracked: bool,
    pub(crate) version_id: String,
}

/// Transaction-hydrated projection for an adopted canonical change.
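///
/// Unlike [`PreparedStateRow`], an adopted projection always carries a concrete
/// `change_id` and `commit_id` (the canonical change already exists on the source
/// side), and its live-state conversion is always tracked (`untracked: false`).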
#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct PreparedAdoptedStateRow {
    pub(crate) schema_plan_id: SchemaPlanId,
    pub(crate) facts: PreparedRowFacts,
    pub(crate) entity_id: EntityIdentity,
    pub(crate) schema_key: String,
    pub(crate) file_id: Option<String>,
    pub(crate) snapshot: Option<StageJson>,
    pub(crate) metadata: Option<StageJson>,
    pub(crate) created_at: String,
    pub(crate) updated_at: String,
    pub(crate) global: bool,
    pub(crate) change_id: String,
    pub(crate) commit_id: String,
    pub(crate) version_id: String,
}

impl From<PreparedStateRow> for MaterializedLiveStateRow {
    fn from(row: PreparedStateRow) -> Self {
        let deleted = row.snapshot.is_none();
        MaterializedLiveStateRow {
            entity_id: row.entity_id,
            schema_key: row.schema_key,
            file_id: row.file_id,
            snapshot_content: row.snapshot.map(|snapshot| snapshot.materialize()),
            metadata: row.metadata.map(|metadata| metadata.materialize()),
            deleted,
            created_at: row.created_at,
            updated_at: row.updated_at,
            global: row.global,
            change_id: row.change_id,
            commit_id: row.commit_id,
            untracked: row.untracked,
            version_id: row.version_id,
        }
    }
}

impl From<&PreparedStateRow> for MaterializedLiveStateRow {
    fn from(row: &PreparedStateRow) -> Self {
        MaterializedLiveStateRow {
            entity_id: row.entity_id.clone(),
            schema_key: row.schema_key.clone(),
            file_id: row.file_id.clone(),
            snapshot_content: row.snapshot.as_ref().map(StageJson::materialize),
            metadata: row.metadata.as_ref().map(StageJson::materialize),
            deleted: row.snapshot.is_none(),
            created_at: row.created_at.clone(),
            updated_at: row.updated_at.clone(),
            global: row.global,
            change_id: row.change_id.clone(),
            commit_id: row.commit_id.clone(),
            untracked: row.untracked,
            version_id: row.version_id.clone(),
        }
    }
}

impl From<PreparedAdoptedStateRow> for MaterializedLiveStateRow {
    fn from(row: PreparedAdoptedStateRow) -> Self {
        let deleted = row.snapshot.is_none();
        MaterializedLiveStateRow {
            entity_id: row.entity_id,
            schema_key: row.schema_key,
            file_id: row.file_id,
            snapshot_content: row.snapshot.map(|snapshot| snapshot.materialize()),
            metadata: row.metadata.map(|metadata| metadata.materialize()),
            deleted,
            created_at: row.created_at,
            updated_at: row.updated_at,
            global: row.global,
            change_id: Some(row.change_id),
            commit_id: Some(row.commit_id),
            untracked: false,
            version_id: row.version_id,
        }
    }
}

impl From<&PreparedAdoptedStateRow> for MaterializedLiveStateRow {
    fn from(row: &PreparedAdoptedStateRow) -> Self {
        MaterializedLiveStateRow {
            entity_id: row.entity_id.clone(),
            schema_key: row.schema_key.clone(),
            file_id: row.file_id.clone(),
            snapshot_content: row.snapshot.as_ref().map(StageJson::materialize),
            metadata: row.metadata.as_ref().map(StageJson::materialize),
            deleted: row.snapshot.is_none(),
            created_at: row.created_at.clone(),
            updated_at: row.updated_at.clone(),
            global: row.global,
            change_id: Some(row.change_id.clone()),
            commit_id: Some(row.commit_id.clone()),
            untracked: false,
            version_id: row.version_id.clone(),
        }
    }
}

impl From<PreparedStateRow> for MaterializedUntrackedStateRow {
    fn from(row: PreparedStateRow) -> Self {
        let deleted = row.snapshot.is_none();
        MaterializedUntrackedStateRow {
            entity_id: row.entity_id,
            schema_key: row.schema_key,
            file_id: row.file_id,
            snapshot_content: row.snapshot.map(|snapshot| snapshot.materialize()),
            metadata: row.metadata.map(|metadata| metadata.materialize()),
            deleted,
            created_at: row.created_at,
            updated_at: row.updated_at,
            global: row.global,
            version_id: row.version_id,
        }
    }
}

/// Transaction-local introduced-change membership accumulated while rows are staged.
///
/// Final commit row materialization owns commit ids, parent heads, and commit
/// row timestamps.
Staging only tracks which hydrated tracked changes the /// future commit introduces for a version. #[derive(Debug, Clone, Default, PartialEq, Eq)] pub(crate) struct StagedCommitMembers { pub(crate) commit_id: String, pub(crate) commit_change_id: String, pub(crate) created_at: String, pub(crate) change_ids: BTreeSet, pub(crate) allow_empty: bool, } impl StagedCommitMembers { pub(crate) fn new(commit_id: String, commit_change_id: String, created_at: String) -> Self { Self { commit_id, commit_change_id, created_at, change_ids: BTreeSet::new(), allow_empty: false, } } pub(crate) fn add_change_id(&mut self, change_id: String) { self.change_ids.insert(change_id); } pub(crate) fn remove_change_id(&mut self, change_id: &str) { self.change_ids.remove(change_id); } pub(crate) fn is_empty(&self) -> bool { self.change_ids.is_empty() } pub(crate) fn allow_empty(&mut self) { self.allow_empty = true; } } ================================================ FILE: packages/engine/src/transaction/validation.rs ================================================ use std::collections::{BTreeMap, BTreeSet}; use serde_json::Value as JsonValue; use crate::catalog::{ CatalogSnapshot, ForeignKeyPlan, SchemaCatalogKey, SchemaPlan, StateDeleteReferencePlan, StateForeignKeyPlan, }; use crate::common::format_json_pointer; #[cfg(test)] use crate::common::parse_json_pointer; use crate::common::{json_pointer_get, validate_row_metadata}; use crate::domain::{Domain, DomainFileScope, DomainRowIdentity}; use crate::entity_identity::{canonical_json_text, EntityIdentity, EntityIdentityError}; #[cfg(test)] use crate::live_state::LiveStateRowIdentity; use crate::live_state::{ LiveStateFilter, LiveStateReader, LiveStateScanRequest, MaterializedLiveStateRow, }; use crate::schema::{ format_lix_schema_validation_errors, schema_from_registered_snapshot, validate_schema_amendment, }; #[cfg(test)] use crate::schema::{ is_seed_schema_key, validate_lix_schema, validate_lix_schema_definition, SchemaKey, }; use crate::transaction::staging::duplicate_insert_identity_message; #[cfg(test)] use crate::transaction::staging::PreparedWriteSet; use crate::transaction::staging::{PreparedValidationRow, PreparedWriteValidationSet}; #[cfg(test)] use crate::transaction::types::PreparedStateRow; use crate::transaction::types::TransactionWriteOrigin; use crate::version::{VERSION_DESCRIPTOR_SCHEMA_KEY, VERSION_REF_SCHEMA_KEY}; use crate::LixError; const REGISTERED_SCHEMA_KEY: &str = "lix_registered_schema"; const DIRECTORY_DESCRIPTOR_SCHEMA_KEY: &str = "lix_directory_descriptor"; const FILE_DESCRIPTOR_SCHEMA_KEY: &str = "lix_file_descriptor"; const STATE_SURFACE_SCHEMA_KEY: &str = "lix_state"; const MAX_DIRECTORY_PARENT_DEPTH: usize = 1024; /// Immutable view of the final transaction write set before persistence. /// /// Validation intentionally runs after staging has coalesced overwrites and /// hydrated generated fields, but before changelog, tracked-state, untracked /// state, or binary CAS writes are flushed. 
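///
/// A minimal sketch of how a caller assembles this view (variable names are
/// illustrative, not a public API):
///
/// ```ignore
/// let input = TransactionValidationInput::new(&validation_set, &schema_catalog, &live_state);
/// validate_prepared_writes(input).await?;
/// ```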
pub(crate) struct TransactionValidationInput<'a> { staged_writes: &'a PreparedWriteValidationSet<'a>, schema_catalog: &'a CatalogSnapshot, live_state: &'a dyn LiveStateReader, } impl<'a> TransactionValidationInput<'a> { pub(crate) fn new( staged_writes: &'a PreparedWriteValidationSet<'a>, schema_catalog: &'a CatalogSnapshot, live_state: &'a dyn LiveStateReader, ) -> Self { Self { staged_writes, schema_catalog, live_state, } } #[cfg(test)] fn from_visible_schemas_for_tests( staged_writes: &'a PreparedWriteSet, visible_schemas: &'a [JsonValue], live_state: &'a dyn LiveStateReader, ) -> Self { let catalog = Box::leak(Box::new( CatalogSnapshot::from_visible_schemas(visible_schemas) .expect("test schema catalog should build"), )); let validation_set = Box::leak(Box::new(staged_writes.validation_set_for_tests())); Self::new(validation_set, catalog, live_state) } } async fn scan_committed_constraint_rows( live_state: &dyn LiveStateReader, domain: &Domain, schema_keys: Vec, entity_ids: Vec, include_tombstones: bool, ) -> Result, LixError> { let rows = live_state .scan_rows(&LiveStateScanRequest { filter: LiveStateFilter { schema_keys: schema_keys.clone(), entity_ids: entity_ids.clone(), version_ids: vec![domain.version_id().to_string()], file_ids: domain.file_filters(), untracked: Some(domain.untracked()), include_tombstones, ..Default::default() }, ..Default::default() }) .await?; Ok(rows .into_iter() .filter(|row| { domain.contains(row) && (schema_keys.is_empty() || schema_keys.contains(&row.schema_key)) && (entity_ids.is_empty() || entity_ids.contains(&row.entity_id)) }) .collect()) } async fn load_committed_constraint_row( live_state: &dyn LiveStateReader, domain: &Domain, schema_key: &str, entity_id: EntityIdentity, include_tombstones: bool, ) -> Result, LixError> { Ok(scan_committed_constraint_rows( live_state, domain, vec![schema_key.to_string()], vec![entity_id], include_tombstones, ) .await? .into_iter() .next()) } /// Validates the final transaction write set before durable persistence. /// /// The validator owns semantic write correctness for every engine write /// frontend. It builds one transaction-visible schema catalog, validates pending /// schema registrations, checks exact schema existence, and validates each /// non-tombstone snapshot against the compiled JSON Schema for its /// `schema_key`. /// /// Cross-row constraints such as `x-lix-unique` and foreign keys should also /// live here so they can share transaction-local indexes and see the final /// coalesced staged write set. 
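///
/// In order, the current pipeline checks: foreign-key definitions, registered-schema
/// identity, per-row shape/metadata/schema/snapshot validity (including file
/// ownership and primary-key identity), pending then committed foreign keys, delete
/// restrictions (including file-descriptor and version-ref rules), committed insert
/// identities and unique constraints, the directory parent graph, and finally the
/// filesystem namespace.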
pub(crate) async fn validate_prepared_writes( input: TransactionValidationInput<'_>, ) -> Result<(), LixError> { validate_foreign_key_definitions(input.schema_catalog)?; let staged_rows = input.staged_writes.rows().collect::>(); let constraint_rows = input.staged_writes.constraint_rows().collect::>(); let pending_file_descriptors = PendingFileDescriptorIndex::from_rows(&constraint_rows); let pending_schema_domains = PendingSchemaDomains::from_staged_rows(&staged_rows)?; validate_registered_schema_identity_is_canonical(&input, &staged_rows).await?; let mut pending_constraints = PendingConstraintIndexes::default(); let mut staged_snapshots = Vec::new(); for row in &constraint_rows { let row = *row; let Some(snapshot) = row.snapshot_json() else { pending_constraints.remember_tombstone(row); continue; }; let schema_plan = schema_plan_for_row(input.schema_catalog, &pending_schema_domains, row)?; validate_schema_matches_row(row, schema_plan)?; validate_snapshot_content(row, schema_plan)?; pending_constraints.remember_row(row, schema_plan, snapshot)?; } for row in &staged_rows { let row = *row; validate_staged_row_shape(row)?; validate_staged_row_metadata(row)?; let schema_plan = schema_plan_for_row(input.schema_catalog, &pending_schema_domains, row)?; validate_schema_matches_row(row, schema_plan)?; let snapshot = validate_snapshot_content(row, schema_plan)?; if let Some(snapshot) = snapshot { validate_file_owner_reference(&input, &pending_file_descriptors, row).await?; validate_primary_key_identity(row, schema_plan, snapshot)?; pending_constraints.remember_foreign_key_references(row, schema_plan, snapshot)?; staged_snapshots.push((row, schema_plan, snapshot)); } else { pending_constraints.remember_tombstone(row); } } let unresolved_foreign_keys = validate_pending_foreign_keys(&pending_constraints, &staged_snapshots)?; validate_pending_delete_restrictions(input.schema_catalog, &pending_constraints)?; let unresolved_foreign_keys = validate_committed_foreign_keys(&input, &pending_constraints, &unresolved_foreign_keys) .await?; reject_unresolved_foreign_keys(&unresolved_foreign_keys)?; validate_committed_delete_restrictions(&input, input.schema_catalog, &pending_constraints) .await?; validate_file_descriptor_delete_restrictions(&input, &pending_constraints).await?; validate_version_ref_delete_restrictions(&input, &pending_constraints).await?; validate_committed_insert_identities(&input, &pending_constraints).await?; validate_committed_unique_constraints(&input, &pending_constraints).await?; validate_directory_descriptor_parent_graph(&input, &staged_rows, &constraint_rows).await?; validate_filesystem_namespace(&input, &staged_rows).await?; Ok(()) } #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)] struct DirectoryDescriptorScope { domain: Domain, } #[derive(Debug, Clone, serde::Deserialize)] struct DirectoryDescriptorSnapshot { id: String, parent_id: Option, name: String, } #[derive(Debug, Clone, serde::Deserialize)] struct FileDescriptorSnapshot { directory_id: Option, name: String, } async fn validate_directory_descriptor_parent_graph( input: &TransactionValidationInput<'_>, staged_rows: &[PreparedValidationRow<'_>], constraint_rows: &[PreparedValidationRow<'_>], ) -> Result<(), LixError> { let scopes = staged_directory_descriptor_scopes(staged_rows); for scope in scopes { let mut parents = committed_directory_parent_map(input.live_state, &scope).await?; apply_staged_directory_parent_rows(constraint_rows, &scope, &mut parents)?; validate_directory_parent_map(&scope, &parents)?; } Ok(()) } 
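// Illustrative sketch (not part of the engine API; names are hypothetical): the
// parent-graph validation above boils down to walking a `directory id -> parent id`
// map and rejecting cycles, missing parents, and (in the real check) over-deep
// chains. Assuming the same `BTreeMap<String, Option<String>>` shape used for the
// merged committed + staged parent map, a minimal standalone version of that walk:
#[cfg(test)]
mod directory_parent_walk_sketch {
    use std::collections::{BTreeMap, BTreeSet};

    /// Returns true when `start` reaches a root (`None` parent) without
    /// revisiting a directory and without referencing a missing directory.
    fn chain_is_valid(parents: &BTreeMap<String, Option<String>>, start: &str) -> bool {
        let mut current = start;
        let mut seen = BTreeSet::new();
        loop {
            if !seen.insert(current.to_string()) {
                return false; // cycle: an ancestor repeats on the chain
            }
            match parents.get(current) {
                None => return false, // chain references a missing directory
                Some(None) => return true, // reached a root directory
                Some(Some(parent)) => current = parent.as_str(),
            }
        }
    }

    #[test]
    fn detects_cycles_and_accepts_rooted_chains() {
        let mut parents = BTreeMap::new();
        parents.insert("docs".to_string(), None);
        parents.insert("guides".to_string(), Some("docs".to_string()));
        assert!(chain_is_valid(&parents, "guides"));

        parents.insert("a".to_string(), Some("b".to_string()));
        parents.insert("b".to_string(), Some("a".to_string()));
        assert!(!chain_is_valid(&parents, "a"));
    }
}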
async fn validate_registered_schema_identity_is_canonical( input: &TransactionValidationInput<'_>, staged_rows: &[PreparedValidationRow<'_>], ) -> Result<(), LixError> { let pending_schema_rows = staged_rows .iter() .filter(|row| row.schema_key() == REGISTERED_SCHEMA_KEY && row.snapshot_json().is_some()) .collect::>(); if pending_schema_rows.is_empty() { return Ok(()); } for pending_row in pending_schema_rows { let Some(row) = load_committed_constraint_row( input.live_state, &pending_row.domain().with_exact_file_scope(None), REGISTERED_SCHEMA_KEY, pending_row.entity_id().clone(), false, ) .await? else { continue; }; let Some(snapshot_content) = row.snapshot_content.as_deref() else { continue; }; let snapshot = parse_registered_schema_snapshot(snapshot_content)?; let pending_snapshot = pending_row .snapshot_json() .expect("pending registered schema row has snapshot_content"); if &snapshot != pending_snapshot { let (key, pending_schema) = schema_from_registered_snapshot(pending_snapshot)?; let (_, committed_schema) = schema_from_registered_snapshot(&snapshot)?; validate_schema_amendment(&committed_schema, &pending_schema).map_err(|_| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "schema '{}' is already registered with a different definition; schema identity must be canonical", key.schema_key ), ) })?; continue; } } Ok(()) } fn parse_registered_schema_snapshot(snapshot_content: &str) -> Result { serde_json::from_str::(snapshot_content).map_err(|error| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!("registered schema snapshot_content is invalid JSON: {error}"), ) }) } fn staged_directory_descriptor_scopes( staged_rows: &[PreparedValidationRow<'_>], ) -> BTreeSet { staged_rows .iter() .filter(|row| row.schema_key() == DIRECTORY_DESCRIPTOR_SCHEMA_KEY) .map(|row| DirectoryDescriptorScope { domain: row.domain(), }) .collect() } async fn committed_directory_parent_map( live_state: &dyn LiveStateReader, scope: &DirectoryDescriptorScope, ) -> Result>, LixError> { let mut parents = BTreeMap::new(); for domain in scope.domain.directory_parent_domains() { let rows = scan_committed_constraint_rows( live_state, &domain, vec![DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string()], Vec::new(), false, ) .await?; for row in rows { if !committed_directory_row_is_in_domain(&row, scope, &domain) { continue; } let Some(snapshot_content) = row.snapshot_content.as_deref() else { continue; }; let snapshot = parse_directory_descriptor_snapshot(snapshot_content)?; parents.insert(snapshot.id, snapshot.parent_id); } } Ok(parents) } fn committed_directory_row_is_in_domain( row: &MaterializedLiveStateRow, _scope: &DirectoryDescriptorScope, domain: &Domain, ) -> bool { row.schema_key == DIRECTORY_DESCRIPTOR_SCHEMA_KEY && domain.contains(row) } fn apply_staged_directory_parent_rows( staged_rows: &[PreparedValidationRow<'_>], scope: &DirectoryDescriptorScope, parents: &mut BTreeMap>, ) -> Result<(), LixError> { let reachable_domains = scope.domain.directory_parent_domains(); for row in staged_rows { if row.schema_key() != DIRECTORY_DESCRIPTOR_SCHEMA_KEY || !reachable_domains.contains(&row.domain()) { continue; } let id = row.entity_id().as_single_string_owned()?; let Some(snapshot) = row.snapshot_json() else { parents.remove(&id); continue; }; let snapshot = directory_descriptor_snapshot_from_value(snapshot)?; parents.insert(snapshot.id, snapshot.parent_id); } Ok(()) } fn parse_directory_descriptor_snapshot( snapshot_content: &str, ) -> Result { serde_json::from_str::(snapshot_content).map_err(|error| { 
LixError::new( LixError::CODE_SCHEMA_VALIDATION, format!("lix_directory_descriptor snapshot_content is invalid JSON: {error}"), ) }) } fn directory_descriptor_snapshot_from_value( snapshot: &JsonValue, ) -> Result { Ok(DirectoryDescriptorSnapshot { id: required_snapshot_string(snapshot, "lix_directory_descriptor", "id")?, parent_id: optional_snapshot_string(snapshot, "lix_directory_descriptor", "parent_id")?, name: required_snapshot_string(snapshot, "lix_directory_descriptor", "name")?, }) } fn file_descriptor_snapshot_from_value( snapshot: &JsonValue, ) -> Result { Ok(FileDescriptorSnapshot { directory_id: optional_snapshot_string(snapshot, "lix_file_descriptor", "directory_id")?, name: required_snapshot_string(snapshot, "lix_file_descriptor", "name")?, }) } fn required_snapshot_string( snapshot: &JsonValue, schema_key: &str, field: &str, ) -> Result { let Some(value) = snapshot.get(field) else { return Err(LixError::new( LixError::CODE_SCHEMA_VALIDATION, format!("{schema_key} snapshot_content is missing field '{field}'"), )); }; value.as_str().map(str::to_string).ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_VALIDATION, format!("{schema_key} snapshot_content field '{field}' must be a string"), ) }) } fn optional_snapshot_string( snapshot: &JsonValue, schema_key: &str, field: &str, ) -> Result, LixError> { let Some(value) = snapshot.get(field) else { return Ok(None); }; if value.is_null() { return Ok(None); } value .as_str() .map(|value| Some(value.to_string())) .ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_VALIDATION, format!("{schema_key} snapshot_content field '{field}' must be a string or null"), ) }) } #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)] struct FilesystemNamespaceIdentity { schema_key: String, entity_id: EntityIdentity, } #[derive(Debug, Clone, PartialEq, Eq)] enum FilesystemNamespaceOccupant { Directory { entity_id: EntityIdentity, parent_id: Option, name: String, }, File { entity_id: EntityIdentity, directory_id: Option, entry_name: String, }, } impl FilesystemNamespaceOccupant { fn entity_id(&self) -> &EntityIdentity { match self { Self::Directory { entity_id, .. } | Self::File { entity_id, .. } => entity_id, } } fn kind(&self) -> &'static str { match self { Self::Directory { .. } => "directory", Self::File { .. } => "file", } } fn parent_id(&self) -> &Option { match self { Self::Directory { parent_id, .. } => parent_id, Self::File { directory_id, .. } => directory_id, } } fn entry_name(&self) -> &str { match self { Self::Directory { name, .. } => name, Self::File { entry_name, .. } => entry_name, } } } async fn validate_filesystem_namespace( input: &TransactionValidationInput<'_>, staged_rows: &[PreparedValidationRow<'_>], ) -> Result<(), LixError> { // Filesystem namespace constraints are storage-scope local. Global rows are // validated in the global scope and may be projected into version reads, but // projected globals do not participate in version-local constraint checks. 
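    // For example, two staged `lix_file_descriptor` rows only compete for the same
    // (parent directory, name) slot when they land in the same domain: same version,
    // same durability, same storage scope. The same entry name in another version,
    // or a projected global row read from a version, is a separate namespace.
    // (Illustrative restatement of the rule above.)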
let domains = staged_filesystem_namespace_domains(staged_rows); for domain in domains { let mut occupants = committed_filesystem_namespace_occupants(input.live_state, &domain).await?; apply_staged_filesystem_namespace_rows(staged_rows, &domain, &mut occupants)?; validate_filesystem_namespace_occupants(&domain, occupants)?; } Ok(()) } fn staged_filesystem_namespace_domains( staged_rows: &[PreparedValidationRow<'_>], ) -> BTreeSet { staged_rows .iter() .filter(|row| { row.schema_key() == DIRECTORY_DESCRIPTOR_SCHEMA_KEY || row.schema_key() == FILE_DESCRIPTOR_SCHEMA_KEY }) .map(|row| row.domain()) .collect() } async fn committed_filesystem_namespace_occupants( live_state: &dyn LiveStateReader, domain: &Domain, ) -> Result, LixError> { let rows = scan_committed_constraint_rows( live_state, domain, vec![ DIRECTORY_DESCRIPTOR_SCHEMA_KEY.to_string(), FILE_DESCRIPTOR_SCHEMA_KEY.to_string(), ], Vec::new(), false, ) .await?; let mut occupants = BTreeMap::new(); for row in rows { if !committed_filesystem_row_is_in_domain(&row, domain) { continue; } if let Some((identity, occupant)) = filesystem_namespace_occupant_from_live_row(&row)? { occupants.insert(identity, occupant); } } Ok(occupants) } fn committed_filesystem_row_is_in_domain(row: &MaterializedLiveStateRow, domain: &Domain) -> bool { (row.schema_key == DIRECTORY_DESCRIPTOR_SCHEMA_KEY || row.schema_key == FILE_DESCRIPTOR_SCHEMA_KEY) && domain.contains(row) } fn apply_staged_filesystem_namespace_rows( staged_rows: &[PreparedValidationRow<'_>], domain: &Domain, occupants: &mut BTreeMap, ) -> Result<(), LixError> { for row in staged_rows { if (row.schema_key() != DIRECTORY_DESCRIPTOR_SCHEMA_KEY && row.schema_key() != FILE_DESCRIPTOR_SCHEMA_KEY) || row.domain() != *domain { continue; } let identity = FilesystemNamespaceIdentity { schema_key: row.schema_key().to_string(), entity_id: row.entity_id().clone(), }; let Some(snapshot) = row.snapshot_json() else { occupants.remove(&identity); continue; }; occupants.insert( identity, filesystem_namespace_occupant_from_staged_row(*row, snapshot)?, ); } Ok(()) } fn filesystem_namespace_occupant_from_live_row( row: &MaterializedLiveStateRow, ) -> Result, LixError> { let Some(snapshot_content) = row.snapshot_content.as_deref() else { return Ok(None); }; let identity = FilesystemNamespaceIdentity { schema_key: row.schema_key.clone(), entity_id: row.entity_id.clone(), }; let occupant = match row.schema_key.as_str() { DIRECTORY_DESCRIPTOR_SCHEMA_KEY => { directory_namespace_occupant(&row.entity_id, snapshot_content)? 
} FILE_DESCRIPTOR_SCHEMA_KEY => file_namespace_occupant(&row.entity_id, snapshot_content)?, _ => return Ok(None), }; Ok(Some((identity, occupant))) } fn filesystem_namespace_occupant_from_staged_row( row: PreparedValidationRow<'_>, snapshot: &JsonValue, ) -> Result { match row.schema_key() { DIRECTORY_DESCRIPTOR_SCHEMA_KEY => { directory_namespace_occupant_from_value(row.entity_id(), snapshot) } FILE_DESCRIPTOR_SCHEMA_KEY => file_namespace_occupant_from_value(row.entity_id(), snapshot), _ => Err(LixError::new( LixError::CODE_SCHEMA_VALIDATION, format!( "filesystem namespace validation cannot parse schema '{}'", row.schema_key() ), )), } } fn directory_namespace_occupant( entity_id: &EntityIdentity, snapshot_content: &str, ) -> Result { let snapshot = parse_directory_descriptor_snapshot(snapshot_content)?; Ok(FilesystemNamespaceOccupant::Directory { entity_id: entity_id.clone(), parent_id: snapshot.parent_id, name: snapshot.name, }) } fn directory_namespace_occupant_from_value( entity_id: &EntityIdentity, snapshot: &JsonValue, ) -> Result { let snapshot = directory_descriptor_snapshot_from_value(snapshot)?; Ok(FilesystemNamespaceOccupant::Directory { entity_id: entity_id.clone(), parent_id: snapshot.parent_id, name: snapshot.name, }) } fn file_namespace_occupant( entity_id: &EntityIdentity, snapshot_content: &str, ) -> Result { let snapshot = serde_json::from_str::(snapshot_content).map_err(|error| { LixError::new( LixError::CODE_SCHEMA_VALIDATION, format!("lix_file_descriptor snapshot_content is invalid JSON: {error}"), ) })?; Ok(FilesystemNamespaceOccupant::File { entity_id: entity_id.clone(), directory_id: snapshot.directory_id, entry_name: snapshot.name, }) } fn file_namespace_occupant_from_value( entity_id: &EntityIdentity, snapshot: &JsonValue, ) -> Result { let snapshot = file_descriptor_snapshot_from_value(snapshot)?; Ok(FilesystemNamespaceOccupant::File { entity_id: entity_id.clone(), directory_id: snapshot.directory_id, entry_name: snapshot.name, }) } fn validate_filesystem_namespace_occupants( domain: &Domain, occupants: BTreeMap, ) -> Result<(), LixError> { let mut by_parent_and_name = BTreeMap::<(Option, String), FilesystemNamespaceOccupant>::new(); for occupant in occupants.into_values() { let key = ( occupant.parent_id().clone(), occupant.entry_name().to_string(), ); if let Some(existing) = by_parent_and_name.insert(key.clone(), occupant.clone()) { if existing != occupant { return Err(filesystem_namespace_conflict_error( domain, &key.0, &key.1, &existing, &occupant, )); } } } Ok(()) } fn filesystem_namespace_conflict_error( domain: &Domain, parent_id: &Option, entry_name: &str, existing: &FilesystemNamespaceOccupant, conflicting: &FilesystemNamespaceOccupant, ) -> LixError { let parent = parent_id.as_deref().unwrap_or(""); let existing_id = existing .entity_id() .as_single_string_owned() .unwrap_or_else(|_| "".to_string()); let conflicting_id = conflicting .entity_id() .as_single_string_owned() .unwrap_or_else(|_| "".to_string()); LixError::new( LixError::CODE_UNIQUE, format!( "filesystem namespace conflict in version '{}' for parent {parent:?} entry {entry_name:?}: {} '{}' conflicts with {} '{}'", domain.version_id(), existing.kind(), existing_id, conflicting.kind(), conflicting_id ), ) } fn validate_directory_parent_map( scope: &DirectoryDescriptorScope, parents: &BTreeMap>, ) -> Result<(), LixError> { for directory_id in parents.keys() { validate_directory_parent_chain(scope, parents, directory_id)?; } Ok(()) } fn validate_directory_parent_chain( scope: 
&DirectoryDescriptorScope, parents: &BTreeMap>, start_id: &str, ) -> Result<(), LixError> { let mut current_id = start_id; let mut seen = BTreeSet::::new(); for depth in 0..=MAX_DIRECTORY_PARENT_DEPTH { if !seen.insert(current_id.to_string()) { return Err(directory_parent_cycle_error(scope, start_id, current_id)); } let Some(parent_id) = parents.get(current_id) else { return Err(directory_parent_missing_error(scope, start_id, current_id)); }; let Some(parent_id) = parent_id.as_deref() else { return Ok(()); }; current_id = parent_id; if depth == MAX_DIRECTORY_PARENT_DEPTH { return Err(directory_parent_depth_error(scope, start_id)); } } Err(directory_parent_depth_error(scope, start_id)) } fn directory_parent_cycle_error( scope: &DirectoryDescriptorScope, start_id: &str, repeated_id: &str, ) -> LixError { LixError::new( LixError::CODE_CONSTRAINT_VIOLATION, format!( "lix_directory_descriptor parent_id cycle in version '{}': directory '{}' reaches ancestor '{}' twice", scope.domain.version_id(), start_id, repeated_id ), ) .with_hint("Set parent_id to null or to an existing directory outside the directory's descendants.") } fn directory_parent_missing_error( scope: &DirectoryDescriptorScope, start_id: &str, missing_id: &str, ) -> LixError { LixError::new( LixError::CODE_FOREIGN_KEY, format!( "lix_directory_descriptor parent_id chain in version '{}' for directory '{}' references missing directory '{}'", scope.domain.version_id(), start_id, missing_id ), ) } fn directory_parent_depth_error(scope: &DirectoryDescriptorScope, start_id: &str) -> LixError { LixError::new( LixError::CODE_CONSTRAINT_VIOLATION, format!( "lix_directory_descriptor parent_id chain in version '{}' for directory '{}' exceeds maximum depth {}", scope.domain.version_id(), start_id, MAX_DIRECTORY_PARENT_DEPTH ), ) } async fn validate_committed_insert_identities( input: &TransactionValidationInput<'_>, pending_constraints: &PendingConstraintIndexes, ) -> Result<(), LixError> { let pending_identity_targets = pending_constraints .identity_targets .iter() .map(|target| target.identity.clone()) .collect::>(); let mut checks_by_domain_schema = BTreeMap::<(Domain, String), Vec<(EntityIdentity, Option)>>::new(); for (identity, origin) in input.staged_writes.insert_identities() { let pending_identity = DomainRowIdentity::in_domain( identity.domain(), identity.schema_key().to_string(), identity.entity_id().clone(), ); if !pending_identity_targets.contains(&pending_identity) { continue; } checks_by_domain_schema .entry(( pending_identity.domain().clone(), pending_identity.schema_key_owned(), )) .or_default() .push((pending_identity.entity_id_owned(), origin.cloned())); } for ((domain, schema_key), checks) in checks_by_domain_schema { let entity_ids = checks .iter() .map(|(entity_id, _)| entity_id.clone()) .collect::>(); let committed_rows = scan_committed_constraint_rows( input.live_state, &domain, vec![schema_key.clone()], entity_ids, false, ) .await?; let committed_rows_by_entity_id = committed_rows .into_iter() .filter(|row| { row.snapshot_content.is_some() && !pending_constraints.tombstones_identity(row) }) .map(|row| (row.entity_id.clone(), row)) .collect::>(); for (entity_id, origin) in checks { if !committed_rows_by_entity_id.contains_key(&entity_id) { continue; } return Err(LixError::new( LixError::CODE_UNIQUE, duplicate_insert_identity_message(&schema_key, &entity_id, None, origin.as_ref()), )); } } Ok(()) } async fn validate_version_ref_delete_restrictions( input: &TransactionValidationInput<'_>, pending_constraints: 
&PendingConstraintIndexes, ) -> Result<(), LixError> { for tombstone in &pending_constraints.tombstones { if tombstone.identity.schema_key() != VERSION_REF_SCHEMA_KEY { continue; } for source_domain in tombstone .identity .domain() .version_descriptor_domains_for_ref_delete() { let descriptor_identity = DomainRowIdentity::in_domain( source_domain, VERSION_DESCRIPTOR_SCHEMA_KEY, tombstone.identity.entity_id_owned(), ); if pending_constraints.tombstones_target_identity(&descriptor_identity) { continue; } if pending_constraints.has_identity_target(&descriptor_identity) { return Err(version_ref_delete_restriction_error( &tombstone.identity, &descriptor_identity, )?); } let Some(descriptor_row) = load_committed_constraint_row( input.live_state, descriptor_identity.domain(), descriptor_identity.schema_key(), descriptor_identity.entity_id_owned(), false, ) .await? else { continue; }; if descriptor_row.snapshot_content.is_some() && !pending_constraints.tombstones_identity(&descriptor_row) { return Err(version_ref_delete_restriction_error( &tombstone.identity, &descriptor_identity, )?); } } } Ok(()) } fn version_ref_delete_restriction_error( ref_identity: &DomainRowIdentity, descriptor_identity: &DomainRowIdentity, ) -> Result { Ok(LixError::new( LixError::CODE_FOREIGN_KEY, format!( "cannot delete '{}' row '{}' in version '{}' because matching '{}' row '{}' would remain without a version ref", ref_identity.schema_key(), ref_identity.entity_id().as_single_string_owned()?, ref_identity.domain().version_id(), descriptor_identity.schema_key(), descriptor_identity.entity_id().as_single_string_owned()?, ), )) } #[derive(Debug, Clone, Copy, PartialEq, Eq)] enum PendingFileDescriptorState { Present, Tombstone, } #[derive(Debug, Clone, Default)] struct PendingFileDescriptorIndex { by_identity: BTreeMap, } impl PendingFileDescriptorIndex { fn from_rows(staged_rows: &[PreparedValidationRow<'_>]) -> Self { let mut index = Self::default(); for row in staged_rows { if row.schema_key() != FILE_DESCRIPTOR_SCHEMA_KEY || row.file_id().is_some() { continue; } if row.entity_id().as_single_string_owned().is_ok() { let state = if (*row).snapshot_json().is_some() { PendingFileDescriptorState::Present } else { PendingFileDescriptorState::Tombstone }; index.by_identity.insert(row.domain_row_identity(), state); } } index } fn state_in_domain( &self, domain: &Domain, file_id: &str, ) -> Option { self.by_identity .get(&DomainRowIdentity::in_domain( domain.with_exact_file_scope(None), FILE_DESCRIPTOR_SCHEMA_KEY, EntityIdentity::single(file_id), )) .copied() } } async fn validate_file_owner_reference( input: &TransactionValidationInput<'_>, pending_file_descriptors: &PendingFileDescriptorIndex, row: PreparedValidationRow<'_>, ) -> Result<(), LixError> { let Some(file_id) = row.file_id().as_deref() else { return Ok(()); }; let row_domain = row.domain(); let target_domains = row_domain .with_untracked(row.untracked()) .file_owner_domains(); for domain in &target_domains { if pending_file_descriptors.state_in_domain(domain, file_id) == Some(PendingFileDescriptorState::Present) { return Ok(()); } } for domain in &target_domains { if pending_file_descriptors.state_in_domain(domain, file_id) == Some(PendingFileDescriptorState::Tombstone) { continue; } if committed_file_descriptor_exists_in_domain(input.live_state, domain, file_id).await? { return Ok(()); } } Err(missing_file_owner_reference_error(row, file_id)?) 
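    // Note on the lookup order above: a pending file descriptor staged as present in
    // any owning domain satisfies the reference, a pending tombstone vetoes its
    // domain, and committed descriptors are only consulted after that; when nothing
    // matches, the file-not-found error above names the missing file_id.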
} async fn committed_file_descriptor_exists_in_domain( live_state: &dyn LiveStateReader, domain: &Domain, file_id: &str, ) -> Result { let Some(row) = load_committed_constraint_row( live_state, &domain.with_exact_file_scope(None), FILE_DESCRIPTOR_SCHEMA_KEY, EntityIdentity::single(file_id), false, ) .await? else { return Ok(false); }; Ok(row.snapshot_content.is_some() && row.schema_key == FILE_DESCRIPTOR_SCHEMA_KEY && row.entity_id == EntityIdentity::single(file_id) && row.file_id.is_none()) } fn missing_file_owner_reference_error( row: PreparedValidationRow<'_>, file_id: &str, ) -> Result { Ok(LixError::new( LixError::CODE_FILE_NOT_FOUND, format!( "file ownership validation failed for schema '{}': entity '{}' references missing file_id '{}' in effective file scope for version '{}'", row.schema_key(), row.entity_id().as_json_array_text()?, file_id, row.version_id() ), ) .with_hint("Insert a row into lix_file with this id first, or use null for a global entity.")) } fn validate_staged_row_shape(row: PreparedValidationRow<'_>) -> Result<(), LixError> { if row.schema_key().is_empty() { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "engine transaction validation requires non-empty schema_key", )); } if row.schema_key() == REGISTERED_SCHEMA_KEY && row.file_id().is_some() { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, "lix_registered_schema rows must not be scoped to a file", ) .with_hint("Schema definitions are scoped by version and durability only; write them with null file_id.")); } Ok(()) } fn validate_staged_row_metadata(row: PreparedValidationRow<'_>) -> Result<(), LixError> { let Some(metadata) = row.metadata_json() else { return Ok(()); }; validate_row_metadata( metadata, format!("metadata for schema '{}'", row.schema_key()), )?; Ok(()) } #[derive(Default)] struct PendingSchemaDomains { domains_by_key: BTreeMap>, } impl PendingSchemaDomains { fn from_staged_rows(staged_rows: &[PreparedValidationRow<'_>]) -> Result { let mut domains_by_key = BTreeMap::>::new(); for row in staged_rows { if row.schema_key() != REGISTERED_SCHEMA_KEY { continue; } let Some(snapshot) = row.snapshot_json() else { continue; }; let (key, _) = schema_from_registered_snapshot(snapshot)?; domains_by_key .entry(SchemaCatalogKey::from_schema_key(key)) .or_default() .insert(row.domain()); } Ok(Self { domains_by_key }) } fn validate_row_schema_domain(&self, row: PreparedValidationRow<'_>) -> Result<(), LixError> { let key = SchemaCatalogKey { schema_key: row.schema_key().to_string(), }; let Some(domains) = self.domains_by_key.get(&key) else { return Ok(()); }; let row_domain = row.domain(); if domains.contains(&row_domain) { return Ok(()); } Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "schema '{}' is pending in another validation domain", row.schema_key() ), )) } } fn schema_plan_for_row<'a>( schema_catalog: &'a CatalogSnapshot, pending_schema_domains: &PendingSchemaDomains, row: PreparedValidationRow<'_>, ) -> Result<&'a SchemaPlan, LixError> { pending_schema_domains.validate_row_schema_domain(row)?; if let Some(plan) = schema_catalog.plan(row.schema_plan_id()) { if plan.key.schema_key == row.schema_key() { return Ok(plan); } } #[cfg(test)] if let Some((_, plan)) = schema_catalog.plan_for_key(row.schema_key()) { return Ok(plan); } Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "schema plan for schema '{}' is not visible to this transaction", row.schema_key() ), )) } fn validate_schema_matches_row( row: PreparedValidationRow<'_>, schema_plan: &SchemaPlan, ) -> Result<(), 
LixError> { if schema_plan.key.schema_key != row.schema_key() { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "schema plan mismatch: row targets schema '{}' but plan is schema '{}'", row.schema_key(), schema_plan.key.schema_key, ), )); } Ok(()) } fn validate_snapshot_content<'a>( row: PreparedValidationRow<'a>, schema_plan: &SchemaPlan, ) -> Result, LixError> { let Some(snapshot) = row.snapshot_json() else { return Ok(None); }; if let Err(errors) = schema_plan.compiled_schema.validate(&snapshot) { let details = format_lix_schema_validation_errors(errors); return Err(LixError::new( LixError::CODE_SCHEMA_VALIDATION, format!( "snapshot_content validation failed for schema '{}': {details}", row.schema_key() ), )); } Ok(Some(snapshot)) } fn validate_primary_key_identity( row: PreparedValidationRow<'_>, schema_plan: &SchemaPlan, snapshot: &JsonValue, ) -> Result<(), LixError> { let Some(primary_key_paths) = schema_plan.primary_key.as_ref() else { return Ok(()); }; let derived = EntityIdentity::from_primary_key_paths(snapshot, &primary_key_paths) .map_err(|error| primary_key_identity_error(row, &primary_key_paths, error))?; if row.entity_id() != &derived { return Err(LixError::new( LixError::CODE_UNIQUE, format!( "primary-key constraint violation on schema '{}': entity_id '{}' does not match derived primary key '{}'", row.schema_key(), row.entity_id().as_json_array_text()?, derived.as_json_array_text()? ), )); } Ok(()) } #[derive(Default)] struct PendingConstraintIndexes { unique_values: BTreeMap, identity_targets: Vec, fk_targets: BTreeMap>, fk_references: BTreeMap>, tombstones: Vec, } impl PendingConstraintIndexes { fn remember_tombstone(&mut self, row: PreparedValidationRow<'_>) { self.tombstones.push(PendingTombstone { identity: row.domain_row_identity(), }); } fn remember_row( &mut self, row: PreparedValidationRow<'_>, schema_plan: &SchemaPlan, snapshot: &JsonValue, ) -> Result<(), LixError> { self.remember_identity_target(row); self.remember_primary_key_target(row, schema_plan, snapshot); self.remember_unique_targets(row, schema_plan, snapshot)?; Ok(()) } fn remember_identity_target(&mut self, row: PreparedValidationRow<'_>) { self.identity_targets.push(PendingIdentityTarget { identity: row.domain_row_identity(), }); } fn remember_primary_key_target( &mut self, row: PreparedValidationRow<'_>, schema_plan: &SchemaPlan, snapshot: &JsonValue, ) { if let Some(primary_key_paths) = schema_plan.primary_key.as_ref() { self.remember_fk_target(row, &primary_key_paths, snapshot); } } fn remember_unique_targets( &mut self, row: PreparedValidationRow<'_>, schema_plan: &SchemaPlan, snapshot: &JsonValue, ) -> Result<(), LixError> { for unique_paths in &schema_plan.uniques { let Some(value) = UniqueConstraintValue::from_snapshot(snapshot, &unique_paths) else { continue; }; self.remember_fk_target(row, &unique_paths, snapshot); let key = PendingUniqueKey { schema_key: row.schema_key().to_string(), domain: row.domain(), pointer_group: unique_paths.clone(), value, }; if let Some(existing_entity_id) = self .unique_values .insert(key.clone(), row.entity_id().clone()) { if existing_entity_id != *row.entity_id() { return Err(LixError::new( LixError::CODE_UNIQUE, format!( "unique constraint violation on {}.{} for value {}: rows '{}' and '{}' conflict", row.schema_key(), format_pointer_group(&key.pointer_group), key.value.display(), existing_entity_id.as_json_array_text()?, row.entity_id().as_json_array_text()? 
), )); } } } Ok(()) } fn remember_fk_target( &mut self, row: PreparedValidationRow<'_>, pointer_group: &[Vec], snapshot: &JsonValue, ) { let Some(value) = UniqueConstraintValue::from_snapshot(snapshot, pointer_group) else { return; }; self.fk_targets .entry(PendingForeignKeyTargetKey { schema_key: row.schema_key().to_string(), domain: row.domain(), pointer_group: pointer_group.to_vec(), value, }) .or_default() .push(PendingForeignKeyTarget { entity_id: row.entity_id().clone(), }); } fn remember_foreign_key_references( &mut self, row: PreparedValidationRow<'_>, schema_plan: &SchemaPlan, snapshot: &JsonValue, ) -> Result<(), LixError> { for foreign_key in &schema_plan.foreign_keys { let Some(local_value) = UniqueConstraintValue::from_snapshot_non_null( snapshot, &foreign_key.local_properties, ) else { continue; }; let target = PendingForeignKeyReferenceTarget::Key(PendingForeignKeyTargetKey { schema_key: foreign_key.referenced_schema.schema_key.clone(), domain: row.domain(), pointer_group: foreign_key.referenced_properties.clone(), value: local_value, }); self.fk_references .entry(target) .or_default() .push(PendingForeignKeyReference { identity: row.domain_row_identity(), }); } for foreign_key in &schema_plan.state_foreign_keys { let target = PendingForeignKeyReferenceTarget::StateSurfaceIdentity( state_surface_target_identity(row.domain(), foreign_key, snapshot)?, ); self.fk_references .entry(target) .or_default() .push(PendingForeignKeyReference { identity: row.domain_row_identity(), }); } Ok(()) } fn tombstones_identity(&self, row: &MaterializedLiveStateRow) -> bool { let identity = DomainRowIdentity::from_live_row(row); self.tombstones .iter() .any(|tombstone| tombstone.identity == identity) } fn has_identity_target(&self, identity: &DomainRowIdentity) -> bool { self.identity_targets .iter() .any(|target| target.identity == *identity) } fn has_reachable_identity_target(&self, identity: &DomainRowIdentity) -> bool { identity .reachable_target_identities() .iter() .any(|candidate| self.has_identity_target(candidate)) } fn tombstones_target_identity(&self, identity: &DomainRowIdentity) -> bool { self.tombstones .iter() .any(|tombstone| tombstone.identity == *identity) } fn has_fk_target_key(&self, key: &PendingForeignKeyTargetKey) -> bool { self.fk_targets .get(key) .is_some_and(|targets| !targets.is_empty()) } fn has_reachable_fk_target_key(&self, key: &PendingForeignKeyTargetKey) -> bool { key.domain.fk_target_domains().iter().any(|domain| { self.has_fk_target_key(&PendingForeignKeyTargetKey { domain: domain.clone(), ..key.clone() }) }) } fn active_references_to( &self, target: &PendingForeignKeyReferenceTarget, ) -> Vec<&PendingForeignKeyReference> { self.fk_references .get(target) .into_iter() .flat_map(|references| references.iter()) .filter(|reference| !self.tombstones_target_identity(&reference.identity)) .collect() } fn active_references_to_any( &self, targets: &[PendingForeignKeyReferenceTarget], ) -> Vec<&PendingForeignKeyReference> { let mut references = Vec::new(); for target in targets { references.extend(self.active_references_to(target)); } references } #[cfg(test)] fn has_fk_reference_to_key( &self, schema_key: &str, version_id: &str, file_id: Option<&str>, pointer_group: &[&str], value: UniqueConstraintValue, ) -> Result { let pointer_group = pointer_group .iter() .map(|pointer| parse_json_pointer(pointer)) .collect::, _>>()?; let key = PendingForeignKeyReferenceTarget::Key(PendingForeignKeyTargetKey { schema_key: schema_key.to_string(), domain: 
Domain::exact_file(version_id.to_string(), false, file_id.map(str::to_string)),
            pointer_group,
            value,
        });
        Ok(self.fk_references.contains_key(&key))
    }

    #[cfg(test)]
    fn has_fk_reference_to_identity(&self, identity: DomainRowIdentity) -> bool {
        self.fk_references
            .contains_key(&PendingForeignKeyReferenceTarget::StateSurfaceIdentity(
                identity,
            ))
    }

    #[cfg(test)]
    fn has_fk_target(
        &self,
        schema_key: &str,
        version_id: &str,
        file_id: Option<&str>,
        pointer_group: &[&str],
        value: UniqueConstraintValue,
    ) -> Result<bool, LixError> {
        let pointer_group = pointer_group
            .iter()
            .map(|pointer| parse_json_pointer(pointer))
            .collect::<Result<Vec<_>, _>>()?;
        let key = PendingForeignKeyTargetKey {
            schema_key: schema_key.to_string(),
            domain: Domain::exact_file(version_id.to_string(), false, file_id.map(str::to_string)),
            pointer_group,
            value,
        };
        Ok(self.fk_targets.contains_key(&key))
    }
}

#[derive(Debug, Clone, PartialEq, Eq)]
struct PendingTombstone {
    identity: DomainRowIdentity,
}

#[derive(Debug, Clone, PartialEq, Eq)]
struct PendingIdentityTarget {
    identity: DomainRowIdentity,
}

#[derive(Debug, Clone, PartialEq, Eq)]
struct PendingForeignKeyTarget {
    entity_id: EntityIdentity,
}

#[derive(Debug, Clone, PartialEq, Eq)]
struct PendingForeignKeyReference {
    identity: DomainRowIdentity,
}

#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
struct PendingUniqueKey {
    schema_key: String,
    domain: Domain,
    pointer_group: Vec<Vec<String>>,
    value: UniqueConstraintValue,
}

#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
struct PendingUniqueConstraintScope {
    schema_key: String,
    domain: Domain,
    pointer_group: Vec<Vec<String>>,
}

impl From<&PendingUniqueKey> for PendingUniqueConstraintScope {
    fn from(key: &PendingUniqueKey) -> Self {
        Self {
            schema_key: key.schema_key.clone(),
            domain: key.domain.clone(),
            pointer_group: key.pointer_group.clone(),
        }
    }
}

#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
struct PendingForeignKeyTargetKey {
    schema_key: String,
    domain: Domain,
    pointer_group: Vec<Vec<String>>,
    value: UniqueConstraintValue,
}

#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
enum PendingForeignKeyReferenceTarget {
    Key(PendingForeignKeyTargetKey),
    StateSurfaceIdentity(DomainRowIdentity),
}

/// Rejects a staged delete when another staged row still references the
/// deleted row, either via a state-surface address or a declared foreign key.
fn validate_pending_delete_restrictions(
    schema_catalog: &CatalogSnapshot,
    pending_constraints: &PendingConstraintIndexes,
) -> Result<(), LixError> {
    for tombstone in &pending_constraints.tombstones {
        let identity_targets = tombstone
            .identity
            .source_identities_that_can_reach()
            .into_iter()
            .map(PendingForeignKeyReferenceTarget::StateSurfaceIdentity)
            .collect::<Vec<_>>();
        reject_pending_delete_references(
            &tombstone.identity,
            &identity_targets,
            pending_constraints.active_references_to_any(&identity_targets),
        )?;
        let Some((_, schema_plan)) = schema_catalog.plan_for_key(tombstone.identity.schema_key())
        else {
            continue;
        };
        if let Some(primary_key_paths) = schema_plan.primary_key.as_ref() {
            let targets = tombstone
                .identity
                .domain()
                .fk_source_domains_for_target()
                .into_iter()
                .map(|domain| {
                    PendingForeignKeyReferenceTarget::Key(PendingForeignKeyTargetKey {
                        schema_key: tombstone.identity.schema_key_owned(),
                        domain,
                        pointer_group: primary_key_paths.clone(),
                        value: UniqueConstraintValue::from_entity_identity(
                            tombstone.identity.entity_id(),
                        ),
                    })
                })
                .collect::<Vec<_>>();
            reject_pending_delete_references(
                &tombstone.identity,
                &targets,
                pending_constraints.active_references_to_any(&targets),
            )?;
        }
    }
    Ok(())
}

fn reject_pending_delete_references(
    deleted_identity: &DomainRowIdentity,
    targets: &[PendingForeignKeyReferenceTarget],
    references: Vec<&PendingForeignKeyReference>,
) -> Result<(),
LixError> { let Some(reference) = references.first() else { return Ok(()); }; let target = targets .first() .expect("delete restriction callers provide at least one target"); Err(LixError::new( LixError::CODE_FOREIGN_KEY, format!( "cannot delete '{}' row '{}' in version '{}' because pending row '{}' references it{}", deleted_identity.schema_key(), deleted_identity.entity_id().as_json_array_text()?, deleted_identity.domain().version_id(), reference.identity.entity_id().as_json_array_text()?, pending_foreign_key_reference_target_description(target)? ), )) } fn pending_foreign_key_reference_target_description( target: &PendingForeignKeyReferenceTarget, ) -> Result { match target { PendingForeignKeyReferenceTarget::Key(target) => Ok(format!( " through '{}.{}' value {}", target.schema_key, format_pointer_group(&target.pointer_group), target.value.display() )), PendingForeignKeyReferenceTarget::StateSurfaceIdentity(target) => Ok(format!( " through '{}:{}'", target.schema_key(), target.entity_id().as_json_array_text()? )), } } async fn validate_committed_delete_restrictions( input: &TransactionValidationInput<'_>, schema_catalog: &CatalogSnapshot, pending_constraints: &PendingConstraintIndexes, ) -> Result<(), LixError> { let mut state_batches = BTreeMap::>::new(); for tombstone in &pending_constraints.tombstones { let delete_plan = schema_catalog.delete_plan_for_key(tombstone.identity.schema_key()); if !delete_plan.has_committed_checks() { continue; } for reference in delete_plan.foreign_key_references { validate_committed_normal_delete_restriction( input.live_state, pending_constraints, tombstone, &reference.source_key, &reference.foreign_key, ) .await?; } for reference in delete_plan.state_foreign_key_references { for source_domain in tombstone.identity.domain().fk_source_domains_for_target() { state_batches .entry(StateDeleteRestrictionBatchKey { source_key: reference.source_key.clone(), source_domain: source_domain.with_file_scope(DomainFileScope::Any), foreign_key: reference.clone(), }) .or_default() .push(tombstone.identity.clone()); } } } validate_committed_state_surface_delete_restriction_batches( input.live_state, pending_constraints, state_batches, ) .await?; Ok(()) } #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)] struct StateDeleteRestrictionBatchKey { source_key: SchemaCatalogKey, source_domain: Domain, foreign_key: StateDeleteReferencePlan, } async fn validate_file_descriptor_delete_restrictions( input: &TransactionValidationInput<'_>, pending_constraints: &PendingConstraintIndexes, ) -> Result<(), LixError> { for tombstone in &pending_constraints.tombstones { if tombstone.identity.schema_key() != FILE_DESCRIPTOR_SCHEMA_KEY { continue; } if !tombstone.identity.domain().is_exact_file(&None) { continue; } let file_id = tombstone.identity.entity_id().as_single_string_owned()?; for source_domain in tombstone .identity .domain() .file_scoped_row_domains_for_file_descriptor_delete() { let rows = scan_committed_constraint_rows( input.live_state, &source_domain.with_exact_file_scope(Some(file_id.clone())), Vec::new(), Vec::new(), false, ) .await?; for row in rows { if pending_constraints.tombstones_identity(&row) || row.snapshot_content.is_none() { continue; } return Err(LixError::new( LixError::CODE_FOREIGN_KEY, format!( "cannot delete file descriptor '{}' in version '{}' because committed row '{}' in schema '{}' is still scoped to that file", file_id, tombstone.identity.domain().version_id(), row.entity_id.as_json_array_text()?, row.schema_key, ), )); } } } Ok(()) } async fn 
validate_committed_normal_delete_restriction( live_state: &dyn LiveStateReader, pending_constraints: &PendingConstraintIndexes, tombstone: &PendingTombstone, source_key: &SchemaCatalogKey, foreign_key: &ForeignKeyPlan, ) -> Result<(), LixError> { let Some(deleted_value) = committed_deleted_row_value(live_state, tombstone, &foreign_key.referenced_properties) .await? else { return Ok(()); }; for source_domain in tombstone.identity.domain().fk_source_domains_for_target() { let rows = scan_committed_constraint_rows( live_state, &source_domain, vec![source_key.schema_key.clone()], Vec::new(), false, ) .await?; for row in rows { if pending_constraints.tombstones_identity(&row) { continue; } let Some(snapshot_content) = row.snapshot_content.as_deref() else { continue; }; let snapshot = parse_committed_snapshot(&row, snapshot_content)?; if UniqueConstraintValue::from_snapshot_non_null( &snapshot, &foreign_key.local_properties, ) .as_ref() == Some(&deleted_value) { return Err(committed_delete_restriction_error( &tombstone.identity, &row, &foreign_key.local_properties, )?); } } } Ok(()) } async fn validate_committed_state_surface_delete_restriction_batches( live_state: &dyn LiveStateReader, pending_constraints: &PendingConstraintIndexes, batches: BTreeMap>, ) -> Result<(), LixError> { for (batch, tombstones) in batches { let rows = scan_committed_constraint_rows( live_state, &batch.source_domain, vec![batch.source_key.schema_key.clone()], Vec::new(), false, ) .await?; for row in rows { if pending_constraints.tombstones_identity(&row) { continue; } let Some(snapshot_content) = row.snapshot_content.as_deref() else { continue; }; let snapshot = parse_committed_snapshot(&row, snapshot_content)?; let target_identity = state_surface_target_identity( Domain::for_live_row(&row), &batch.foreign_key.foreign_key, &snapshot, )?; let Some(tombstone) = tombstones.iter().find(|tombstone| { target_identity .reachable_target_identities() .contains(*tombstone) }) else { continue; }; return Err(committed_delete_restriction_error( tombstone, &row, &batch.foreign_key.foreign_key.local_properties(), )?); } } Ok(()) } async fn committed_deleted_row_value( live_state: &dyn LiveStateReader, tombstone: &PendingTombstone, referenced_properties: &[Vec], ) -> Result, LixError> { let Some(row) = load_committed_constraint_row( live_state, tombstone.identity.domain(), tombstone.identity.schema_key(), tombstone.identity.entity_id_owned(), true, ) .await? 
else { return Ok(None); }; let Some(snapshot_content) = row.snapshot_content.as_deref() else { return Ok(None); }; let snapshot = parse_committed_snapshot(&row, snapshot_content)?; Ok(UniqueConstraintValue::from_snapshot( &snapshot, referenced_properties, )) } fn committed_delete_restriction_error( deleted_identity: &DomainRowIdentity, referencing_row: &MaterializedLiveStateRow, local_properties: &[Vec], ) -> Result { Ok(LixError::new( LixError::CODE_FOREIGN_KEY, format!( "cannot delete '{}' row '{}' in version '{}' because committed row '{}' references it through {}", deleted_identity.schema_key(), deleted_identity.entity_id().as_json_array_text()?, deleted_identity.domain().version_id(), referencing_row.entity_id.as_json_array_text()?, format_pointer_group(local_properties) ), )) } fn parse_committed_snapshot( row: &MaterializedLiveStateRow, snapshot_content: &str, ) -> Result { serde_json::from_str::(snapshot_content).map_err(|error| { LixError::new( LixError::CODE_SCHEMA_VALIDATION, format!( "committed snapshot_content for schema '{}' is invalid JSON: {error}", row.schema_key ), ) }) } #[derive(Debug, Clone, PartialEq, Eq)] struct UnresolvedForeignKeyCheck { source_identity: DomainRowIdentity, source_schema_key: String, source_pointer_group: Vec>, target: UnresolvedForeignKeyTarget, } #[derive(Debug, Clone, PartialEq, Eq)] enum UnresolvedForeignKeyTarget { Key(PendingForeignKeyTargetKey), StateSurfaceIdentity(DomainRowIdentity), } fn validate_pending_foreign_keys( pending_constraints: &PendingConstraintIndexes, staged_snapshots: &[(PreparedValidationRow<'_>, &SchemaPlan, &JsonValue)], ) -> Result, LixError> { let mut unresolved = Vec::new(); for (row, schema_plan, snapshot) in staged_snapshots { for foreign_key in &schema_plan.foreign_keys { let Some(local_value) = UniqueConstraintValue::from_snapshot_non_null( snapshot, &foreign_key.local_properties, ) else { continue; }; if let Some(check) = validate_pending_normal_foreign_key( *row, foreign_key, local_value, pending_constraints, )? { unresolved.push(check); } } for foreign_key in &schema_plan.state_foreign_keys { if let Some(check) = validate_pending_state_surface_foreign_key( *row, foreign_key, snapshot, pending_constraints, )? 
{ unresolved.push(check); } } } Ok(unresolved) } fn validate_pending_normal_foreign_key( row: PreparedValidationRow<'_>, foreign_key: &ForeignKeyPlan, local_value: UniqueConstraintValue, pending_constraints: &PendingConstraintIndexes, ) -> Result, LixError> { let key = PendingForeignKeyTargetKey { schema_key: foreign_key.referenced_schema.schema_key.clone(), domain: row.domain(), pointer_group: foreign_key.referenced_properties.clone(), value: local_value, }; if pending_constraints.has_reachable_fk_target_key(&key) { return Ok(None); } Ok(Some(UnresolvedForeignKeyCheck { source_identity: row.domain_row_identity(), source_schema_key: row.schema_key().to_string(), source_pointer_group: foreign_key.local_properties.clone(), target: UnresolvedForeignKeyTarget::Key(key), })) } fn validate_pending_state_surface_foreign_key( row: PreparedValidationRow<'_>, foreign_key: &StateForeignKeyPlan, snapshot: &JsonValue, pending_constraints: &PendingConstraintIndexes, ) -> Result, LixError> { let local_properties = foreign_key.local_properties(); let target_identity = state_surface_target_identity(row.domain(), foreign_key, snapshot)?; if pending_constraints.has_reachable_identity_target(&target_identity) { return Ok(None); } Ok(Some(UnresolvedForeignKeyCheck { source_identity: row.domain_row_identity(), source_schema_key: row.schema_key().to_string(), source_pointer_group: local_properties, target: UnresolvedForeignKeyTarget::StateSurfaceIdentity(target_identity), })) } async fn validate_committed_foreign_keys( input: &TransactionValidationInput<'_>, pending_constraints: &PendingConstraintIndexes, unresolved_checks: &[UnresolvedForeignKeyCheck], ) -> Result, LixError> { let mut still_unresolved = Vec::new(); for check in unresolved_checks { let resolved = match &check.target { UnresolvedForeignKeyTarget::Key(target) => { committed_normal_foreign_key_target_exists( input.live_state, pending_constraints, target, ) .await? } UnresolvedForeignKeyTarget::StateSurfaceIdentity(target_identity) => { committed_state_surface_foreign_key_target_exists( input.live_state, pending_constraints, target_identity, ) .await? } }; if !resolved { still_unresolved.push(check.clone()); } } Ok(still_unresolved) } fn reject_unresolved_foreign_keys( unresolved_checks: &[UnresolvedForeignKeyCheck], ) -> Result<(), LixError> { let Some(check) = unresolved_checks.first() else { return Ok(()); }; Err(LixError::new( LixError::CODE_FOREIGN_KEY, format!( "foreign key on schema '{}' row '{}' via {} has no matching target in version '{}'{}", check.source_schema_key, check.source_identity.entity_id().as_json_array_text()?, format_pointer_group(&check.source_pointer_group), check.source_identity.domain().version_id(), unresolved_foreign_key_target_description(&check.target)? ), )) } fn unresolved_foreign_key_target_description( target: &UnresolvedForeignKeyTarget, ) -> Result { match target { UnresolvedForeignKeyTarget::Key(target) => Ok(format!( " for target '{}.{}' value {}", target.schema_key, format_pointer_group(&target.pointer_group), target.value.display() )), UnresolvedForeignKeyTarget::StateSurfaceIdentity(target) => Ok(format!( " for target '{}:{}'", target.schema_key(), target.entity_id().as_json_array_text()? 
)), } } async fn committed_normal_foreign_key_target_exists( live_state: &dyn LiveStateReader, pending_constraints: &PendingConstraintIndexes, target: &PendingForeignKeyTargetKey, ) -> Result { for domain in target.domain.fk_target_domains() { let rows = scan_committed_constraint_rows( live_state, &domain, vec![target.schema_key.clone()], Vec::new(), false, ) .await?; for row in rows { if pending_constraints.tombstones_identity(&row) { continue; } if row.schema_key != target.schema_key { continue; } let Some(snapshot_content) = row.snapshot_content.as_deref() else { continue; }; let snapshot = serde_json::from_str::(snapshot_content).map_err(|error| { LixError::new( LixError::CODE_SCHEMA_VALIDATION, format!( "committed snapshot_content for schema '{}' is invalid JSON: {error}", row.schema_key ), ) })?; if UniqueConstraintValue::from_snapshot(&snapshot, &target.pointer_group).as_ref() == Some(&target.value) { return Ok(true); } } } Ok(false) } async fn committed_state_surface_foreign_key_target_exists( live_state: &dyn LiveStateReader, pending_constraints: &PendingConstraintIndexes, target_identity: &DomainRowIdentity, ) -> Result { for candidate in target_identity.reachable_target_identities() { let rows = scan_committed_constraint_rows( live_state, candidate.domain(), vec![candidate.schema_key_owned()], vec![candidate.entity_id_owned()], false, ) .await?; for row in rows { if pending_constraints.tombstones_identity(&row) { continue; } if candidate.matches_parts(&Domain::for_live_row(&row), &row.schema_key, &row.entity_id) { return Ok(true); } } } Ok(false) } fn state_surface_target_identity( source_domain: Domain, foreign_key: &StateForeignKeyPlan, snapshot: &JsonValue, ) -> Result { let entity_id = state_surface_local_json_value(snapshot, &foreign_key.entity_id_property, "entity_id")?; let schema_key = state_surface_local_value(snapshot, &foreign_key.schema_key_property, "schema_key")?; let file_id = state_surface_nullable_local_value(snapshot, &foreign_key.file_id_property, "file_id")?; Ok(DomainRowIdentity::in_domain( source_domain.with_exact_file_scope(file_id), schema_key, EntityIdentity::from_json_array_value(entity_id).map_err(|error| { LixError::new( LixError::CODE_FOREIGN_KEY, format!("state-surface foreign key entity_id is invalid: {error}"), ) })?, )) } fn state_surface_local_json_value<'a>( snapshot: &'a JsonValue, local_pointer: &[String], state_address_part: &str, ) -> Result<&'a JsonValue, LixError> { state_surface_optional_local_json_value(snapshot, local_pointer)?.ok_or_else(|| { LixError::new( LixError::CODE_FOREIGN_KEY, format!( "state-surface foreign key {state_address_part} at '{}' is missing", format_json_pointer(local_pointer) ), ) }) } fn state_surface_local_value( snapshot: &JsonValue, local_pointer: &[String], state_address_part: &str, ) -> Result { state_surface_nullable_local_value(snapshot, local_pointer, state_address_part)?.ok_or_else( || { LixError::new( LixError::CODE_FOREIGN_KEY, format!( "state-surface foreign key {state_address_part} at '{}' is missing", format_json_pointer(local_pointer) ), ) }, ) } fn state_surface_nullable_local_value( snapshot: &JsonValue, local_pointer: &[String], state_address_part: &str, ) -> Result, LixError> { let Some(value) = json_pointer_get(snapshot, local_pointer) else { return Err(LixError::new( LixError::CODE_FOREIGN_KEY, format!( "state-surface foreign key {state_address_part} at '{}' is missing", format_json_pointer(local_pointer) ), )); }; if value.is_null() { return Ok(None); } value .as_str() .map(|value| 
Some(value.to_string())) .ok_or_else(|| { LixError::new( LixError::CODE_FOREIGN_KEY, format!( "state-surface foreign key {state_address_part} at '{}' must be a string or null", format_json_pointer(local_pointer) ), ) }) } fn state_surface_optional_local_json_value<'a>( snapshot: &'a JsonValue, local_pointer: &[String], ) -> Result, LixError> { let Some(value) = json_pointer_get(snapshot, local_pointer) else { return Ok(None); }; if value.is_null() { return Ok(None); } Ok(Some(value)) } async fn validate_committed_unique_constraints( input: &TransactionValidationInput<'_>, pending_constraints: &PendingConstraintIndexes, ) -> Result<(), LixError> { let mut pending_by_scope = BTreeMap::< PendingUniqueConstraintScope, BTreeMap>, >::new(); for (key, pending_entity_id) in &pending_constraints.unique_values { pending_by_scope .entry(PendingUniqueConstraintScope::from(key)) .or_default() .entry(key.value.clone()) .or_default() .push(pending_entity_id); } for (scope, pending_values) in pending_by_scope { let committed_rows = scan_committed_constraint_rows( input.live_state, &scope.domain, vec![scope.schema_key.clone()], Vec::new(), false, ) .await?; for committed_row in committed_rows { if !committed_row_is_in_exact_unique_scope(&committed_row, &scope) { continue; } if pending_constraints.tombstones_identity(&committed_row) { continue; } let Some(snapshot_content) = committed_row.snapshot_content.as_deref() else { continue; }; let snapshot = serde_json::from_str::(snapshot_content).map_err(|error| { LixError::new( LixError::CODE_SCHEMA_VALIDATION, format!( "committed snapshot_content for schema '{}' is invalid JSON: {error}", committed_row.schema_key ), ) })?; let Some(committed_value) = UniqueConstraintValue::from_snapshot(&snapshot, &scope.pointer_group) else { continue; }; let Some(pending_entity_ids) = pending_values.get(&committed_value) else { continue; }; for pending_entity_id in pending_entity_ids { if committed_row.entity_id == **pending_entity_id { continue; } return Err(LixError::new( LixError::CODE_UNIQUE, format!( "unique constraint violation on {}.{} for value {}: committed row '{}' conflicts with staged row '{}'", scope.schema_key, format_pointer_group(&scope.pointer_group), committed_value.display(), committed_row.entity_id.as_json_array_text()?, pending_entity_id.as_json_array_text()? ), )); } } } Ok(()) } fn committed_row_is_in_exact_unique_scope( row: &MaterializedLiveStateRow, scope: &PendingUniqueConstraintScope, ) -> bool { // LiveStateReader may return serving projections such as global rows // projected into a requested version. Constraint validation is root-local: // only rows authored in the exact version participate. 
scope.domain.contains(row) && row.schema_key == scope.schema_key
}

/// Canonical textual encoding of the values under a unique or foreign-key
/// pointer group, used for equality checks and error messages.
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
struct UniqueConstraintValue(Vec<String>);

impl UniqueConstraintValue {
    #[cfg(test)]
    fn string_values<const N: usize>(values: [&str; N]) -> Self {
        Self(
            values
                .into_iter()
                .map(|value| format!("{value:?}"))
                .collect(),
        )
    }

    fn from_entity_identity(identity: &EntityIdentity) -> Self {
        Self(
            identity
                .parts
                .iter()
                .map(|part| format!("{part:?}"))
                .collect(),
        )
    }

    fn from_snapshot(snapshot: &JsonValue, pointers: &[Vec<String>]) -> Option<Self> {
        let mut values = Vec::with_capacity(pointers.len());
        for pointer in pointers {
            let value = json_pointer_get(snapshot, pointer)?;
            values.push(stable_unique_value(value));
        }
        Some(Self(values))
    }

    fn from_snapshot_non_null(snapshot: &JsonValue, pointers: &[Vec<String>]) -> Option<Self> {
        let mut values = Vec::with_capacity(pointers.len());
        for pointer in pointers {
            let value = json_pointer_get(snapshot, pointer)?;
            if value.is_null() {
                return None;
            }
            values.push(stable_unique_value(value));
        }
        Some(Self(values))
    }

    fn display(&self) -> String {
        if let [value] = self.0.as_slice() {
            return value.clone();
        }
        format!("({})", self.0.join(", "))
    }
}

fn stable_unique_value(value: &JsonValue) -> String {
    match value {
        JsonValue::String(value) => format!("{value:?}"),
        JsonValue::Number(value) => value.to_string(),
        JsonValue::Bool(value) => value.to_string(),
        JsonValue::Null => "null".to_string(),
        JsonValue::Array(_) | JsonValue::Object(_) => {
            canonical_json_text(value).unwrap_or_else(|_| value.to_string())
        }
    }
}

fn format_pointer_group(group: &[Vec<String>]) -> String {
    let pointers = group
        .iter()
        .map(|pointer| format_json_pointer(pointer))
        .collect::<Vec<_>>();
    if let [pointer] = pointers.as_slice() {
        pointer.clone()
    } else {
        format!("({})", pointers.join(", "))
    }
}

fn primary_key_identity_error(
    row: PreparedValidationRow<'_>,
    primary_key_paths: &[Vec<String>],
    error: EntityIdentityError,
) -> LixError {
    let reason = match error {
        EntityIdentityError::EmptyPrimaryKey => "empty x-lix-primary-key".to_string(),
        EntityIdentityError::EmptyPrimaryKeyPath { index } => {
            format!("empty x-lix-primary-key pointer at index {index}")
        }
        EntityIdentityError::EmptyPrimaryKeyValue { index } => {
            let pointer = primary_key_paths
                .get(index)
                .map(|path| format_json_pointer(path))
                .unwrap_or_else(|| format!("index {index}"));
            format!("empty value at primary-key pointer '{pointer}'")
        }
        EntityIdentityError::MissingPrimaryKeyValue { index } => {
            let pointer = format_json_pointer(&primary_key_paths[index]);
            format!("missing value at primary-key pointer '{pointer}'")
        }
        EntityIdentityError::UnsupportedPrimaryKeyValue { index } => {
            let pointer = format_json_pointer(&primary_key_paths[index]);
            format!("non-string value at primary-key pointer '{pointer}'")
        }
        EntityIdentityError::InvalidEncodedEntityIdentity => {
            "invalid encoded entity identity".to_string()
        }
    };
    LixError::new(
        LixError::CODE_UNIQUE,
        format!(
            "primary-key constraint violation on schema '{}': {reason}",
            row.schema_key()
        ),
    )
}

fn validate_foreign_key_definition(
    catalog: &CatalogSnapshot,
    source_key: &SchemaCatalogKey,
    source_schema: &JsonValue,
    foreign_key: &ForeignKeyPlan,
) -> Result<(), LixError> {
    for pointer in &foreign_key.local_properties {
        validate_schema_field_pointer(source_schema, pointer).map_err(|detail| {
            LixError::new(
                LixError::CODE_SCHEMA_DEFINITION,
                format!(
                    "foreign key on schema '{}' references missing local property '{}': {detail}",
                    source_key.schema_key,
                    format_json_pointer(pointer)
                ),
            )
        })?;
    }
    if foreign_key.referenced_schema.schema_key ==
STATE_SURFACE_SCHEMA_KEY { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "foreign key on schema '{}' must not reference schemaKey 'lix_state'; use x-lix-state-foreign-keys with pointers ordered as [entity_id, schema_key, file_id]", source_key.schema_key ), )); } let target_plan = catalog .plan(foreign_key.referenced_plan_id) .ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "foreign key on schema '{}' references missing bound schema plan '{}'", source_key.schema_key, foreign_key.referenced_schema.schema_key, ), ) })?; let target_schema = target_plan.schema.as_ref(); if target_plan.key != foreign_key.referenced_schema { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "foreign key on schema '{}' is bound to schema '{}' but declares schema '{}'", source_key.schema_key, target_plan.key.schema_key, foreign_key.referenced_schema.schema_key, ), )); } for pointer in &foreign_key.referenced_properties { validate_schema_field_pointer(target_schema, pointer).map_err(|detail| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "foreign key on schema '{}' references missing target property '{}.{}': {detail}", source_key.schema_key, foreign_key.referenced_schema.schema_key, format_json_pointer(pointer) ), ) })?; } if !referenced_properties_are_keyed(target_plan, &foreign_key.referenced_properties) { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "foreign key on schema '{}' references '{}.{}', but referenced properties must match the target primary key or a unique constraint", source_key.schema_key, foreign_key.referenced_schema.schema_key, format_pointer_group(&foreign_key.referenced_properties) ), )); } Ok(()) } fn validate_state_foreign_key_definition( source_key: &SchemaCatalogKey, source_schema: &JsonValue, foreign_key: &StateForeignKeyPlan, ) -> Result<(), LixError> { let local_properties = foreign_key.local_properties(); for pointer in &local_properties { validate_schema_field_pointer(source_schema, pointer).map_err(|detail| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "state foreign key on schema '{}' references missing local property '{}': {detail}", source_key.schema_key, format_json_pointer(pointer) ), ) })?; } Ok(()) } fn validate_schema_field_pointer(schema: &JsonValue, pointer: &[String]) -> Result<(), String> { if pointer.is_empty() { return Err("empty pointer does not name a field".to_string()); } let mut current = schema; for segment in pointer { let properties = current .get("properties") .and_then(JsonValue::as_object) .ok_or_else(|| { format!( "schema segment before '{}' has no object properties", segment ) })?; current = properties .get(segment) .ok_or_else(|| format!("property '{}' does not exist", segment))?; } Ok(()) } fn referenced_properties_are_keyed( target_plan: &SchemaPlan, referenced_properties: &[Vec], ) -> bool { if let Some(primary_key) = target_plan.primary_key.as_ref() { if primary_key == referenced_properties { return true; } } target_plan .uniques .iter() .any(|unique_group| unique_group == referenced_properties) } fn validate_foreign_key_definitions(catalog: &CatalogSnapshot) -> Result<(), LixError> { for plan in catalog.plans() { for foreign_key in &plan.foreign_keys { validate_foreign_key_definition(catalog, &plan.key, plan.schema.as_ref(), foreign_key)?; } for foreign_key in &plan.state_foreign_keys { validate_state_foreign_key_definition(&plan.key, plan.schema.as_ref(), foreign_key)?; } } Ok(()) } #[cfg(test)] fn validate_pending_registered_schema( 
row: PreparedValidationRow<'_>, registered_schema_definition: &JsonValue, ) -> Result<(SchemaKey, JsonValue), LixError> { let snapshot_content = row.snapshot_content().ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, "registered schema write requires snapshot_content", ) })?; let snapshot = serde_json::from_str::(snapshot_content).map_err(|error| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!("pending registered schema snapshot_content is invalid JSON: {error}"), ) })?; if !snapshot.get("value").is_some_and(JsonValue::is_object) { validate_lix_schema(registered_schema_definition, &snapshot)?; } // A registered-schema row stores the schema definition under `value`. // Validate both layers: the outer row must match the builtin // `lix_registered_schema` schema, and the inner definition must be a valid // Lix schema before it can extend the transaction-visible catalog. let (key, schema) = schema_from_registered_snapshot(&snapshot)?; reject_seed_schema_registration(&key)?; validate_lix_schema_definition(&schema)?; validate_lix_schema(registered_schema_definition, &snapshot)?; Ok((key, schema)) } #[cfg(test)] fn reject_seed_schema_registration(key: &SchemaKey) -> Result<(), LixError> { if is_seed_schema_key(&key.schema_key) { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "schema '{}' is a system schema and cannot be registered at runtime", key.schema_key ), )); } Ok(()) } #[cfg(test)] mod tests { use std::sync::atomic::{AtomicUsize, Ordering}; use async_trait::async_trait; use serde_json::json; use super::*; use crate::live_state::{LiveStateRowRequest, LiveStateScanRequest, MaterializedLiveStateRow}; use crate::schema::{schema_key_from_definition, seed_schema_definition}; use crate::transaction::types::{StageJson, TransactionJson}; struct EmptyLiveStateReader; fn test_stage_json(value: &str) -> StageJson { let parsed = test_json_text(value).expect("test staged JSON should parse"); crate::transaction::types::stage_json_from_value( TransactionJson::from_value_for_test(parsed), "test staged JSON", ) .expect("test staged JSON should prepare") } fn test_json_text(value: &str) -> Result { serde_json::from_str::(value).map_err(|error| { LixError::new( LixError::CODE_UNKNOWN, format!("test staged JSON is invalid JSON: {error}"), ) }) } fn test_plan_from_schema(schema: JsonValue) -> &'static SchemaPlan { let key = schema_key_from_definition(&schema).expect("test schema should have key"); let visible_schemas = match key.schema_key.as_str() { "fk_child_schema" => vec![fk_parent_schema(), schema], FILE_DESCRIPTOR_SCHEMA_KEY => vec![directory_descriptor_schema(), schema], DIRECTORY_DESCRIPTOR_SCHEMA_KEY => vec![schema], _ => vec![schema], }; let catalog = Box::leak(Box::new( CatalogSnapshot::from_visible_schemas(&visible_schemas) .expect("test schema plan catalog should build"), )); catalog .plan_for_key(&key.schema_key) .expect("test schema key should resolve") .1 } #[async_trait] impl LiveStateReader for EmptyLiveStateReader { async fn scan_rows( &self, request: &LiveStateScanRequest, ) -> Result, LixError> { Ok(test_file_descriptor_rows() .into_iter() .filter(|row| live_state_row_matches_scan(row, request)) .collect()) } async fn load_row( &self, request: &LiveStateRowRequest, ) -> Result, LixError> { Ok(test_file_descriptor_rows() .into_iter() .find(|row| live_state_row_matches_load(row, request))) } } fn validation_input<'a>( staged_writes: &'a PreparedWriteSet, visible_schemas: &'a [JsonValue], ) -> TransactionValidationInput<'a> { let catalog = 
Box::leak(Box::new( catalog_from_transaction_parts_unchecked(staged_writes, visible_schemas) .expect("test schema catalog should build"), )); let validation_set = Box::leak(Box::new(staged_writes.validation_set_for_tests())); TransactionValidationInput::new(validation_set, catalog, &EmptyLiveStateReader) } fn catalog_from_transaction_input<'a>( input: &'a TransactionValidationInput<'a>, ) -> Result<&'a CatalogSnapshot, LixError> { validate_foreign_key_definitions(input.schema_catalog)?; Ok(input.schema_catalog) } fn catalog_from_transaction_parts( staged_writes: &PreparedWriteSet, visible_schemas: &[JsonValue], ) -> Result { let catalog = catalog_from_transaction_parts_unchecked(staged_writes, visible_schemas)?; let mut pending_keys = BTreeMap::::new(); for row in staged_writes .validation_rows() .filter(|row| row.schema_key() == REGISTERED_SCHEMA_KEY) { let snapshot_content = row.snapshot_content().ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, "registered schema write requires snapshot_content", ) })?; let snapshot = serde_json::from_str::(snapshot_content).map_err(|error| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "pending registered schema snapshot_content is invalid JSON: {error}" ), ) })?; let (key, _) = schema_from_registered_snapshot(&snapshot)?; let catalog_key = SchemaCatalogKey::from_schema_key(key); if let Some(existing_entity_id) = pending_keys.insert(catalog_key.clone(), row.entity_id().clone()) { return Err(LixError::new( LixError::CODE_SCHEMA_DEFINITION, format!( "duplicate pending registered schema '{}' in transaction: rows '{}' and '{}'", catalog_key.schema_key, existing_entity_id.as_json_array_text()?, row.entity_id().as_json_array_text()? ), )); } } validate_foreign_key_definitions(&catalog)?; Ok(catalog) } fn catalog_from_transaction_parts_unchecked( staged_writes: &PreparedWriteSet, visible_schemas: &[JsonValue], ) -> Result { let mut catalog = CatalogSnapshot::from_visible_schemas(visible_schemas)?; for row in staged_writes .validation_rows() .filter(|row| row.schema_key() == REGISTERED_SCHEMA_KEY) { let registered_schema_definition = catalog .schema(REGISTERED_SCHEMA_KEY) .cloned() .ok_or_else(|| { LixError::new( LixError::CODE_SCHEMA_DEFINITION, "lix_registered_schema schema is not visible to this transaction", ) })?; let (key, schema) = validate_pending_registered_schema(row, ®istered_schema_definition)?; catalog.insert_schema_for_domain(row.domain(), key, schema)?; } Ok(catalog) } struct StaticLiveStateReader { rows: Vec, } #[async_trait] impl LiveStateReader for StaticLiveStateReader { async fn scan_rows( &self, request: &LiveStateScanRequest, ) -> Result, LixError> { Ok(self .rows .iter() .cloned() .chain(test_file_descriptor_rows()) .filter(|row| live_state_row_matches_scan(row, request)) .collect()) } async fn load_row( &self, request: &LiveStateRowRequest, ) -> Result, LixError> { Ok(self .rows .iter() .cloned() .chain(test_file_descriptor_rows()) .find(|row| { row.schema_key == request.schema_key && row.version_id == request.version_id && row.entity_id == request.entity_id && request.file_id.matches(row.file_id.as_ref()) })) } } struct OverlayingStaticLiveStateReader { rows: Vec, } #[async_trait] impl LiveStateReader for OverlayingStaticLiveStateReader { async fn scan_rows( &self, request: &LiveStateScanRequest, ) -> Result, LixError> { let rows = self .rows .iter() .cloned() .chain(test_file_descriptor_rows()) .filter(|row| live_state_row_matches_scan(row, request)) .collect::>(); if request.filter.untracked.is_some() { 
return Ok(rows); } let tracked_rows = rows .iter() .filter(|row| !row.untracked) .cloned() .collect::>(); let untracked_rows = rows .into_iter() .filter(|row| row.untracked) .collect::>(); Ok(overlay_untracked_rows_for_test( tracked_rows, untracked_rows, )) } async fn load_row( &self, request: &LiveStateRowRequest, ) -> Result, LixError> { Ok(self .scan_rows(&LiveStateScanRequest { filter: LiveStateFilter { schema_keys: vec![request.schema_key.clone()], entity_ids: vec![request.entity_id.clone()], version_ids: vec![request.version_id.clone()], file_ids: vec![request.file_id.clone()], ..Default::default() }, ..Default::default() }) .await? .into_iter() .next()) } } fn overlay_untracked_rows_for_test( tracked_rows: Vec, untracked_rows: Vec, ) -> Vec { let mut rows_by_identity = BTreeMap::new(); for row in tracked_rows { rows_by_identity.insert(LiveStateRowIdentity::from_row(&row), row); } for row in untracked_rows { rows_by_identity.insert(LiveStateRowIdentity::from_row(&row), row); } rows_by_identity.into_values().collect() } struct StrictEmptyLiveStateReader; #[async_trait] impl LiveStateReader for StrictEmptyLiveStateReader { async fn scan_rows( &self, _request: &LiveStateScanRequest, ) -> Result, LixError> { Ok(Vec::new()) } async fn load_row( &self, _request: &LiveStateRowRequest, ) -> Result, LixError> { Ok(None) } } struct StrictStaticLiveStateReader { rows: Vec, } #[async_trait] impl LiveStateReader for StrictStaticLiveStateReader { async fn scan_rows( &self, request: &LiveStateScanRequest, ) -> Result, LixError> { Ok(self .rows .iter() .filter(|row| live_state_row_matches_scan(row, request)) .cloned() .collect()) } async fn load_row( &self, request: &LiveStateRowRequest, ) -> Result, LixError> { Ok(self .rows .iter() .find(|row| live_state_row_matches_load(row, request)) .cloned()) } } struct CountingStaticLiveStateReader { rows: Vec, scan_count: AtomicUsize, } #[async_trait] impl LiveStateReader for CountingStaticLiveStateReader { async fn scan_rows( &self, request: &LiveStateScanRequest, ) -> Result, LixError> { self.scan_count.fetch_add(1, Ordering::Relaxed); Ok(self .rows .iter() .cloned() .chain(test_file_descriptor_rows()) .filter(|row| live_state_row_matches_scan(row, request)) .collect()) } async fn load_row( &self, request: &LiveStateRowRequest, ) -> Result, LixError> { Ok(self .rows .iter() .cloned() .chain(test_file_descriptor_rows()) .find(|row| live_state_row_matches_load(row, request))) } } #[test] fn schema_catalog_indexes_visible_schemas_by_key_and_version() { let visible_schemas = vec![json!({ "x-lix-key": "visible_schema", "type": "object", })]; let staged_writes = empty_staged_write_set(); let input = validation_input(&staged_writes, &visible_schemas); let catalog = catalog_from_transaction_input(&input).expect("schema catalog should build"); assert_eq!(catalog.len(), 1); assert!(catalog.contains("visible_schema")); } #[test] fn schema_catalog_includes_pending_registered_schema_rows() { let visible_schemas = vec![ registered_schema(), json!({ "x-lix-key": "visible_schema", "type": "object", }), ]; let staged_writes = PreparedWriteSet { state_rows: vec![pending_registered_schema_row("pending_schema")], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let input = validation_input(&staged_writes, &visible_schemas); let catalog = catalog_from_transaction_input(&input).expect("schema catalog should build"); assert_eq!(catalog.len(), 3); assert!(catalog.contains("visible_schema")); assert!(catalog.contains("pending_schema")); } #[test] fn 
schema_catalog_rejects_pending_schema_duplicate_of_visible_identity() { let visible_schemas = vec![ registered_schema(), json!({ "x-lix-key": "same_schema", "type": "object", "properties": { "old": { "type": "string" } } }), ]; let staged_writes = PreparedWriteSet { state_rows: vec![pending_registered_schema_row("same_schema")], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let error = catalog_from_transaction_parts_unchecked(&staged_writes, &visible_schemas) .expect_err("pending schema must not override a visible domain fact"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); assert!(error.message.contains("more than one schema domain")); } #[test] fn pending_registered_schema_requires_snapshot_content() { let mut row = pending_registered_schema_row("missing_snapshot"); row.snapshot = None; let error = validate_pending_registered_schema( PreparedValidationRow::State(&row), ®istered_schema(), ) .expect_err("registered schema writes require snapshot_content"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); } #[test] fn pending_registered_schema_rejects_invalid_snapshot_json() { let error = test_json_text("{not-json").expect_err("invalid JSON should fail before validation"); assert_eq!(error.code, LixError::CODE_UNKNOWN); } #[test] fn pending_registered_schema_uses_builtin_schema_for_outer_value_shape() { let mut row = pending_registered_schema_row("missing_value"); row.snapshot = Some(test_stage_json(&json!({}).to_string())); let error = validate_pending_registered_schema( PreparedValidationRow::State(&row), ®istered_schema(), ) .expect_err("builtin lix_registered_schema validation should fail"); assert_eq!(error.code, LixError::CODE_SCHEMA_VALIDATION); } #[test] fn pending_registered_schema_rejects_malformed_nested_lix_schema_definition() { let mut row = pending_registered_schema_row("bad_schema_definition"); row.snapshot = Some(test_stage_json( &json!({ "value": { "x-lix-key": "bad_schema_definition", "x-lix-primary-key": ["id"], "type": "object", "properties": { "id": { "type": "string" } }, "required": ["id"], "additionalProperties": false, } }) .to_string(), )); let error = validate_pending_registered_schema( PreparedValidationRow::State(&row), ®istered_schema(), ) .expect_err("nested Lix schema definition should be rejected"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); } #[test] fn schema_catalog_rejects_duplicate_pending_registered_schema_identity() { let mut duplicate = pending_registered_schema_row("duplicate_schema"); duplicate.entity_id = registered_schema_entity_id("duplicate_schema_duplicate"); let staged_writes = PreparedWriteSet { state_rows: vec![pending_registered_schema_row("duplicate_schema"), duplicate], ..empty_staged_write_set() }; let visible_schemas = vec![registered_schema()]; let error = catalog_from_transaction_parts(&staged_writes, &visible_schemas) .expect_err("duplicate pending schema keys should fail"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); } #[test] fn schema_catalog_allows_pending_foreign_key_to_pending_schema() { let staged_writes = PreparedWriteSet { state_rows: vec![ pending_registered_schema_from_definition(fk_parent_schema()), pending_registered_schema_from_definition(fk_child_schema()), ], ..empty_staged_write_set() }; let visible_schemas = vec![registered_schema()]; let input = validation_input(&staged_writes, &visible_schemas); let catalog = catalog_from_transaction_input(&input) .expect("pending parent schema should satisfy pending child foreign key"); 
assert!(catalog.contains("fk_parent_schema")); assert!(catalog.contains("fk_child_schema")); } #[test] fn schema_catalog_rejects_foreign_key_missing_target_schema() { let staged_writes = PreparedWriteSet { state_rows: vec![pending_registered_schema_from_definition(fk_child_schema())], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let visible_schemas = vec![registered_schema()]; let error = catalog_from_transaction_parts(&staged_writes, &visible_schemas) .expect_err("missing referenced schema should fail"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); } #[test] fn schema_catalog_rejects_foreign_key_missing_local_field() { let mut child = fk_child_schema(); child["x-lix-foreign-keys"][0]["properties"] = json!(["/missing_parent_id"]); let staged_writes = PreparedWriteSet { state_rows: vec![ pending_registered_schema_from_definition(fk_parent_schema()), pending_registered_schema_from_definition(child), ], ..empty_staged_write_set() }; let visible_schemas = vec![registered_schema()]; let error = catalog_from_transaction_parts(&staged_writes, &visible_schemas) .expect_err("missing local FK field should fail"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); } #[test] fn schema_catalog_rejects_foreign_key_missing_referenced_field() { let mut child = fk_child_schema(); child["x-lix-foreign-keys"][0]["references"]["properties"] = json!(["/missing_id"]); let staged_writes = PreparedWriteSet { state_rows: vec![ pending_registered_schema_from_definition(fk_parent_schema()), pending_registered_schema_from_definition(child), ], ..empty_staged_write_set() }; let visible_schemas = vec![registered_schema()]; let error = catalog_from_transaction_parts(&staged_writes, &visible_schemas) .expect_err("missing referenced FK field should fail"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); } #[test] fn schema_catalog_rejects_foreign_key_to_non_unique_target_field() { let mut parent = fk_parent_schema(); parent["properties"]["name"] = json!({ "type": "string" }); let mut child = fk_child_schema(); child["x-lix-foreign-keys"][0]["references"]["properties"] = json!(["/name"]); let staged_writes = PreparedWriteSet { state_rows: vec![ pending_registered_schema_from_definition(parent), pending_registered_schema_from_definition(child), ], ..empty_staged_write_set() }; let visible_schemas = vec![registered_schema()]; let error = catalog_from_transaction_parts(&staged_writes, &visible_schemas) .expect_err("FK target must be primary-key or unique"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); } #[test] fn schema_catalog_allows_state_surface_foreign_key_target() { let staged_writes = PreparedWriteSet { state_rows: vec![pending_registered_schema_from_definition( state_surface_ref_schema(), )], ..empty_staged_write_set() }; let visible_schemas = vec![registered_schema()]; let input = validation_input(&staged_writes, &visible_schemas); let catalog = catalog_from_transaction_input(&input) .expect("x-lix-state-foreign-keys should validate as a state-surface FK target"); assert!(catalog.contains("state_surface_ref_schema")); } #[test] fn schema_catalog_rejects_normal_foreign_key_to_lix_state() { let mut schema = fk_child_schema(); schema["x-lix-foreign-keys"][0]["properties"] = json!(["/parent_id"]); schema["x-lix-foreign-keys"][0]["references"] = json!({ "schemaKey": "lix_state", "properties": ["/entity_id"] }); let staged_writes = PreparedWriteSet { state_rows: vec![pending_registered_schema_from_definition(schema)], adopted_rows: Vec::new(), ..empty_staged_write_set() 
}; let visible_schemas = vec![registered_schema()]; let error = catalog_from_transaction_parts(&staged_writes, &visible_schemas) .expect_err("normal FK must not use fake lix_state schema key"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); assert!( error.message.contains("x-lix-state-foreign-keys"), "unexpected error: {error:?}" ); } #[test] fn schema_catalog_rejects_state_surface_foreign_key_without_full_address_tuple() { let mut schema = state_surface_ref_schema(); schema["x-lix-state-foreign-keys"][0] = json!(["/target_entity_id"]); let staged_writes = PreparedWriteSet { state_rows: vec![pending_registered_schema_from_definition(schema)], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let visible_schemas = vec![registered_schema()]; let error = catalog_from_transaction_parts_unchecked(&staged_writes, &visible_schemas) .expect_err("state FK target must include entity_id, schema_key, and file_id"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); assert!( error.message.contains("[entity_id, schema_key, file_id]"), "unexpected error: {error:?}" ); } #[tokio::test] async fn validation_rejects_unknown_schema_key() { let visible_schemas = vec![key_value_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![staged_row("unknown_schema", Some(json!({}).to_string()))], ..empty_staged_write_set() }; let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect_err("unknown schema_key should fail"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); } #[tokio::test] async fn validation_checks_schema_existence_for_tombstones() { let visible_schemas = vec![key_value_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![staged_row("unknown_schema", None)], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect_err("tombstone with unknown schema should fail"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); } #[tokio::test] async fn validation_allows_pending_registered_schema_to_validate_later_rows() { let visible_schemas = vec![key_value_schema(), registered_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![ pending_registered_schema_row("pending_schema"), staged_row( "pending_schema", Some(json!({ "id": "entity-1" }).to_string()), ), ], ..empty_staged_write_set() }; validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect("pending registered schema should be visible to later staged rows"); } #[tokio::test] async fn validation_rejects_tracked_row_using_pending_untracked_schema_definition() { let visible_schemas = vec![registered_schema()]; let mut untracked_schema = pending_registered_schema_row("untracked_only_schema"); mark_prepared_row_untracked(&mut untracked_schema); let mut tracked_row = staged_row( "untracked_only_schema", Some(json!({ "id": "row-1" }).to_string()), ); tracked_row.entity_id = EntityIdentity::single("row-1"); let staged_writes = PreparedWriteSet { state_rows: vec![untracked_schema, tracked_row], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect_err("tracked rows must not validate against untracked schema definitions"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); } #[tokio::test] async fn validation_validates_snapshot_content_against_schema() { let visible_schemas = vec![key_value_schema()]; 
let staged_writes = PreparedWriteSet { state_rows: vec![staged_row( "lix_key_value", Some(json!({ "key": "k" }).to_string()), )], ..empty_staged_write_set() }; let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect_err("missing required snapshot field should fail"); assert_eq!(error.code, LixError::CODE_SCHEMA_VALIDATION); } #[tokio::test] async fn validation_rejects_invalid_snapshot_json() { let error = test_json_text("{not-json") .expect_err("invalid snapshot JSON should fail before validation"); assert_eq!(error.code, LixError::CODE_UNKNOWN); } #[tokio::test] async fn validation_skips_snapshot_validation_for_tombstones() { let visible_schemas = vec![key_value_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![staged_row("lix_key_value", None)], adopted_rows: Vec::new(), ..empty_staged_write_set() }; validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect("tombstone should only require schema existence"); } #[tokio::test] async fn validation_rejects_missing_file_owner_reference() { let visible_schemas = vec![unique_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![unique_row("post-1", "hello-world", "first")], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let error = validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &StrictEmptyLiveStateReader, )) .await .expect_err("non-null file_id should require a file descriptor"); assert_eq!(error.code, LixError::CODE_FILE_NOT_FOUND); } #[tokio::test] async fn validation_allows_pending_file_owner_reference() { let visible_schemas = vec![ unique_schema(), file_descriptor_schema(), directory_descriptor_schema(), ]; let staged_writes = PreparedWriteSet { state_rows: vec![ staged_file_descriptor_row("file-a", "version-a"), unique_row("post-1", "hello-world", "first"), ], ..empty_staged_write_set() }; validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &StrictEmptyLiveStateReader, )) .await .expect("same-transaction file descriptor should satisfy file ownership"); } #[tokio::test] async fn validation_rejects_tracked_file_owner_reference_pending_only_as_untracked() { let visible_schemas = vec![ unique_schema(), file_descriptor_schema(), directory_descriptor_schema(), ]; let mut untracked_file_descriptor = staged_file_descriptor_row("file-a", "version-a"); mark_prepared_row_untracked(&mut untracked_file_descriptor); let staged_writes = PreparedWriteSet { state_rows: vec![ untracked_file_descriptor, unique_row("post-1", "hello-world", "first"), ], ..empty_staged_write_set() }; let error = validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &StrictEmptyLiveStateReader, )) .await .expect_err("tracked file owner must not resolve through pending untracked descriptor"); assert_eq!(error.code, LixError::CODE_FILE_NOT_FOUND); } #[tokio::test] async fn validation_allows_untracked_file_owner_reference_pending_as_tracked() { let visible_schemas = vec![ unique_schema(), file_descriptor_schema(), directory_descriptor_schema(), ]; let mut untracked_row = unique_row("post-1", "hello-world", "first"); mark_prepared_row_untracked(&mut untracked_row); let staged_writes = PreparedWriteSet { state_rows: vec![ staged_file_descriptor_row("file-a", "version-a"), untracked_row, ], ..empty_staged_write_set() }; 
validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &StrictEmptyLiveStateReader, )) .await .expect("untracked file owner should resolve through pending tracked descriptor"); } #[tokio::test] async fn validation_rejects_file_owner_reference_when_descriptor_tombstoned_in_transaction() { let visible_schemas = vec![ unique_schema(), file_descriptor_schema(), directory_descriptor_schema(), ]; let mut file_descriptor_delete = staged_file_descriptor_row("file-a", "version-a"); file_descriptor_delete.snapshot = None; let staged_writes = PreparedWriteSet { state_rows: vec![ file_descriptor_delete, unique_row("post-1", "hello-world", "first"), ], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let error = validate_prepared_writes( TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &EmptyLiveStateReader, ), ) .await .expect_err("same-transaction file descriptor tombstone must hide committed descriptor"); assert_eq!(error.code, LixError::CODE_FILE_NOT_FOUND); } #[tokio::test] async fn validation_allows_committed_file_owner_reference() { let visible_schemas = vec![unique_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![unique_row("post-1", "hello-world", "first")], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let live_state = StaticLiveStateReader { rows: vec![committed_file_descriptor_row("file-a", "version-a")], }; validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect("committed file descriptor should satisfy file ownership"); } #[tokio::test] async fn validation_rejects_tracked_file_owner_reference_committed_only_as_untracked() { let visible_schemas = vec![unique_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![unique_row("post-1", "hello-world", "first")], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let mut untracked_file_descriptor = committed_file_descriptor_row("file-a", "version-a"); mark_live_row_untracked(&mut untracked_file_descriptor); let live_state = StrictStaticLiveStateReader { rows: vec![untracked_file_descriptor], }; let error = validate_prepared_writes( TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, ), ) .await .expect_err("tracked file owner must not resolve through committed untracked descriptor"); assert_eq!(error.code, LixError::CODE_FILE_NOT_FOUND); } #[tokio::test] async fn validation_allows_untracked_file_owner_reference_committed_as_tracked() { let visible_schemas = vec![unique_schema()]; let mut untracked_row = unique_row("post-1", "hello-world", "first"); mark_prepared_row_untracked(&mut untracked_row); let staged_writes = PreparedWriteSet { state_rows: vec![untracked_row], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let live_state = StrictStaticLiveStateReader { rows: vec![committed_file_descriptor_row("file-a", "version-a")], }; validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect("untracked file owner should resolve through committed tracked descriptor"); } #[tokio::test] async fn validation_allows_tracked_file_owner_reference_committed_behind_untracked_overlay() { let visible_schemas = vec![unique_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![unique_row("post-1", "hello-world", "first")], adopted_rows: Vec::new(), 
..empty_staged_write_set() }; let tracked_file_descriptor = committed_file_descriptor_row("file-a", "version-a"); let mut untracked_tombstone = committed_file_descriptor_row("file-a", "version-a"); untracked_tombstone.snapshot_content = None; mark_live_row_untracked(&mut untracked_tombstone); let live_state = OverlayingStaticLiveStateReader { rows: vec![tracked_file_descriptor, untracked_tombstone], }; validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect("tracked file owner should resolve against tracked descriptor behind overlay"); } #[tokio::test] async fn validation_rejects_deleting_file_descriptor_referenced_by_committed_row() { let visible_schemas = vec![ unique_schema(), file_descriptor_schema(), directory_descriptor_schema(), ]; let mut file_descriptor_delete = staged_file_descriptor_row("file-a", "version-a"); file_descriptor_delete.snapshot = None; let staged_writes = PreparedWriteSet { state_rows: vec![file_descriptor_delete], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let live_state = StaticLiveStateReader { rows: vec![committed_unique_row("post-1", "hello-world", "first")], }; let error = validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect_err("file descriptor delete must be blocked by committed file-owned rows"); assert_eq!(error.code, LixError::CODE_FOREIGN_KEY); } #[tokio::test] async fn validation_rejects_deleting_tracked_file_descriptor_referenced_by_committed_untracked_row( ) { let visible_schemas = vec![ unique_schema(), file_descriptor_schema(), directory_descriptor_schema(), ]; let mut file_descriptor_delete = staged_file_descriptor_row("file-a", "version-a"); file_descriptor_delete.snapshot = None; let staged_writes = PreparedWriteSet { state_rows: vec![file_descriptor_delete], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let mut untracked_row = MaterializedLiveStateRow::from(unique_row("post-1", "hello-world", "first")); mark_live_row_untracked(&mut untracked_row); let live_state = StrictStaticLiveStateReader { rows: vec![ committed_file_descriptor_row("file-a", "version-a"), untracked_row, ], }; let error = validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect_err("tracked file descriptor delete must be blocked by untracked rows"); assert_eq!(error.code, LixError::CODE_FOREIGN_KEY); } #[tokio::test] async fn validation_allows_untracked_directory_parent_to_tracked_directory() { let visible_schemas = vec![directory_descriptor_schema()]; let tracked_parent = directory_descriptor_row("dir-parent", None, "parent", "version-a"); let mut untracked_child = directory_descriptor_row("dir-child", Some("dir-parent"), "child", "version-a"); mark_prepared_row_untracked(&mut untracked_child); let staged_writes = PreparedWriteSet { state_rows: vec![tracked_parent, untracked_child], ..empty_staged_write_set() }; validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect("untracked directory parent_id should resolve through tracked directory"); } #[tokio::test] async fn validation_rejects_file_owner_reference_that_exists_only_in_global() { let visible_schemas = vec![unique_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![unique_row("post-1", "hello-world", "first")], adopted_rows: Vec::new(), ..empty_staged_write_set() }; 
let live_state = StrictStaticLiveStateReader { rows: vec![committed_file_descriptor_row( "file-a", crate::GLOBAL_VERSION_ID, )], }; let error = validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect_err("global file descriptor should not satisfy a version-local row"); assert_eq!(error.code, LixError::CODE_FILE_NOT_FOUND); } #[tokio::test] async fn validation_rejects_primary_key_duplicate_with_different_identity() { let visible_schemas = vec![unique_schema()]; let mut conflicting = unique_row("post-1", "hello-world", "first"); conflicting.entity_id = crate::entity_identity::EntityIdentity::single("post-2"); let staged_writes = PreparedWriteSet { state_rows: vec![unique_row("post-1", "hello-world", "first"), conflicting], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect_err("same primary key under different identity should fail"); assert_eq!(error.code, LixError::CODE_UNIQUE); } #[tokio::test] async fn validation_rejects_pending_unique_value_duplicate() { let visible_schemas = vec![unique_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![ unique_row("post-1", "hello-world", "first"), unique_row("post-2", "hello-world", "second"), ], ..empty_staged_write_set() }; let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect_err("duplicate pending unique value should fail"); assert_eq!(error.code, LixError::CODE_UNIQUE); } #[tokio::test] async fn validation_rejects_pending_unique_duplicate_with_null_component() { let visible_schemas = vec![nullable_unique_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![ nullable_unique_row("row-1", None, "root-name"), nullable_unique_row("row-2", None, "root-name"), ], ..empty_staged_write_set() }; let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect_err("duplicate nullable unique value should fail"); assert_eq!(error.code, LixError::CODE_UNIQUE); } #[tokio::test] async fn validation_rejects_pending_unique_same_value_in_same_version() { let visible_schemas = vec![unique_schema()]; let mut duplicate = unique_row("post-2", "hello-world", "second"); duplicate.version_id = "version-a".to_string(); let staged_writes = PreparedWriteSet { state_rows: vec![unique_row("post-1", "hello-world", "first"), duplicate], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect_err("same unique value in the same version should fail"); assert_eq!(error.code, LixError::CODE_UNIQUE); } #[tokio::test] async fn validation_allows_pending_unique_same_value_in_different_versions() { let visible_schemas = vec![unique_schema()]; let mut version_b = unique_row("post-2", "hello-world", "second"); version_b.version_id = "version-b".to_string(); let staged_writes = PreparedWriteSet { state_rows: vec![unique_row("post-1", "hello-world", "first"), version_b], adopted_rows: Vec::new(), ..empty_staged_write_set() }; validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect("unique values should be scoped to the exact version_id"); } #[tokio::test] async fn validation_allows_pending_unique_overwrite_of_same_identity() { let visible_schemas = vec![unique_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![ 
unique_row("post-1", "hello-world", "first"), unique_row("post-1", "hello-world", "updated"), ], ..empty_staged_write_set() }; validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect("same identity should be treated as replacement, not duplicate"); } #[tokio::test] async fn validation_skips_pending_unique_indexes_for_tombstones() { let visible_schemas = vec![unique_schema()]; let mut tombstone = unique_row("post-1", "hello-world", "deleted"); tombstone.snapshot = None; let staged_writes = PreparedWriteSet { state_rows: vec![tombstone, unique_row("post-2", "hello-world", "second")], adopted_rows: Vec::new(), ..empty_staged_write_set() }; validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect("tombstones should not claim pending unique values"); } #[tokio::test] async fn validation_scopes_pending_unique_values_by_file_and_version() { let visible_schemas = vec![unique_schema()]; let mut different_file = unique_row("post-2", "hello-world", "second"); different_file.file_id = Some("file-b".to_string()); let mut different_version = unique_row("post-3", "hello-world", "third"); different_version.version_id = "version-b".to_string(); let staged_writes = PreparedWriteSet { state_rows: vec![ unique_row("post-1", "hello-world", "first"), different_file, different_version, ], ..empty_staged_write_set() }; validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect("unique values are scoped by file and version"); } #[tokio::test] async fn validation_rejects_committed_visible_unique_value_duplicate() { let visible_schemas = vec![unique_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![unique_row("post-2", "hello-world", "second")], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let live_state = StaticLiveStateReader { rows: vec![committed_unique_row("post-1", "hello-world", "first")], }; let error = validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect_err("committed visible unique value should conflict"); assert_eq!(error.code, LixError::CODE_UNIQUE); } #[tokio::test] async fn validation_rejects_committed_tracked_unique_duplicate_behind_untracked_overlay() { let visible_schemas = vec![unique_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![unique_row("post-2", "hello-world", "second")], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let tracked_duplicate = committed_unique_row("post-1", "hello-world", "first"); let mut untracked_overlay = committed_unique_row("post-1", "draft-slug", "draft"); mark_live_row_untracked(&mut untracked_overlay); let live_state = OverlayingStaticLiveStateReader { rows: vec![tracked_duplicate, untracked_overlay], }; let error = validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect_err("tracked unique duplicate must be detected behind untracked overlay"); assert_eq!(error.code, LixError::CODE_UNIQUE); } #[tokio::test] async fn validation_rejects_committed_unique_duplicate_when_untracked_tombstone_shadows_owner() { let visible_schemas = vec![unique_schema()]; let mut untracked_tombstone = unique_row("post-1", "ignored", "deleted"); untracked_tombstone.snapshot = None; mark_prepared_row_untracked(&mut untracked_tombstone); let staged_writes = PreparedWriteSet { state_rows: vec![ untracked_tombstone, unique_row("post-2", "hello-world", 
"second"), ], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let live_state = StaticLiveStateReader { rows: vec![committed_unique_row("post-1", "hello-world", "first")], }; let error = validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect_err("untracked tombstone must not hide tracked unique owner"); assert_eq!(error.code, LixError::CODE_UNIQUE); } #[tokio::test] async fn validation_rejects_committed_unique_duplicate_with_null_component() { let visible_schemas = vec![nullable_unique_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![nullable_unique_row("row-2", None, "root-name")], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let live_state = StaticLiveStateReader { rows: vec![committed_nullable_unique_row("row-1", None, "root-name")], }; let error = validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect_err("committed duplicate nullable unique value should conflict"); assert_eq!(error.code, LixError::CODE_UNIQUE); } #[tokio::test] async fn validation_rejects_committed_unique_same_value_in_same_version() { let visible_schemas = vec![unique_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![unique_row("post-2", "hello-world", "second")], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let live_state = StaticLiveStateReader { rows: vec![committed_unique_row("post-1", "hello-world", "first")], }; let error = validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect_err("same unique value in the same version should conflict"); assert_eq!(error.code, LixError::CODE_UNIQUE); } #[tokio::test] async fn validation_allows_committed_unique_same_value_in_different_versions() { let visible_schemas = vec![unique_schema()]; let mut version_b = unique_row("post-2", "hello-world", "second"); version_b.version_id = "version-b".to_string(); let staged_writes = PreparedWriteSet { state_rows: vec![version_b], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let live_state = StaticLiveStateReader { rows: vec![committed_unique_row("post-1", "hello-world", "first")], }; validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect("committed unique values should be scoped to the exact version_id"); } #[tokio::test] async fn validation_ignores_projected_live_state_rows_for_unique_constraints() { let visible_schemas = vec![unique_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![unique_row("post-2", "hello-world", "second")], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let mut projected_overlay_row = committed_unique_row("post-1", "hello-world", "first"); projected_overlay_row.version_id = "version-a".to_string(); projected_overlay_row.global = true; let live_state = StaticLiveStateReader { rows: vec![projected_overlay_row], }; validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect("validation should ignore live-state overlay projections"); } #[tokio::test] async fn validation_allows_committed_visible_unique_update_of_same_identity() { let visible_schemas = vec![unique_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![unique_row("post-1", 
"hello-world", "updated")], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let live_state = StaticLiveStateReader { rows: vec![committed_unique_row("post-1", "hello-world", "first")], }; validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect("same identity should update committed unique owner"); } #[tokio::test] async fn validation_batches_committed_unique_scans_by_constraint_group() { let visible_schemas = vec![unique_schema()]; let mut staged_one = unique_row("post-3", "new-slug-3", "third"); staged_one.file_id = None; let mut staged_two = unique_row("post-4", "new-slug-4", "fourth"); staged_two.file_id = None; let mut committed_one = committed_unique_row("post-1", "hello-world", "first"); committed_one.file_id = None; let mut committed_two = committed_unique_row("post-2", "second-slug", "second"); committed_two.file_id = None; let staged_writes = PreparedWriteSet { state_rows: vec![staged_one, staged_two], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let live_state = CountingStaticLiveStateReader { rows: vec![committed_one, committed_two], scan_count: AtomicUsize::new(0), }; validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect("distinct pending unique values should not conflict"); assert_eq!(live_state.scan_count.load(Ordering::Relaxed), 1); } #[tokio::test] async fn validation_ignores_committed_unique_owner_tombstoned_by_transaction() { let visible_schemas = vec![unique_schema()]; let mut tombstone = unique_row("post-1", "hello-world", "deleted"); tombstone.snapshot = None; let staged_writes = PreparedWriteSet { state_rows: vec![tombstone, unique_row("post-2", "hello-world", "second")], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let live_state = StaticLiveStateReader { rows: vec![committed_unique_row("post-1", "hello-world", "first")], }; validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect("tombstoned committed owner should not conflict"); } #[tokio::test] async fn validation_allows_committed_unique_same_value_in_different_file_or_version() { let visible_schemas = vec![unique_schema()]; let mut different_file = unique_row("post-2", "hello-world", "second"); different_file.file_id = Some("file-b".to_string()); let mut different_version = unique_row("post-3", "hello-world", "third"); different_version.version_id = "version-b".to_string(); let staged_writes = PreparedWriteSet { state_rows: vec![different_file, different_version], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let live_state = StaticLiveStateReader { rows: vec![committed_unique_row("post-1", "hello-world", "first")], }; validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect("committed uniqueness is scoped by file and version"); } #[tokio::test] async fn validation_rejects_foreign_key_target_missing_in_same_version() { let visible_schemas = vec![fk_parent_schema(), fk_child_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![fk_child_row("child-1", "parent-1", "version-a")], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect_err("foreign key must resolve in the same version"); 
assert_eq!(error.code, LixError::CODE_FOREIGN_KEY); } #[tokio::test] async fn validation_allows_foreign_key_target_in_same_version() { let visible_schemas = vec![fk_parent_schema(), fk_child_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![ fk_parent_row("parent-1", "version-a"), fk_child_row("child-1", "parent-1", "version-a"), ], ..empty_staged_write_set() }; validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect("foreign key should resolve against pending rows in the same version"); } #[tokio::test] async fn validation_rejects_tracked_foreign_key_target_pending_only_as_untracked() { let visible_schemas = vec![ fk_parent_schema(), fk_child_schema(), file_descriptor_schema(), directory_descriptor_schema(), ]; let mut untracked_parent = fk_parent_row("parent-1", "version-a"); mark_prepared_row_untracked(&mut untracked_parent); let mut untracked_file_descriptor = staged_file_descriptor_row("file-a", "version-a"); mark_prepared_row_untracked(&mut untracked_file_descriptor); let staged_writes = PreparedWriteSet { state_rows: vec![ untracked_file_descriptor, untracked_parent, fk_child_row("child-1", "parent-1", "version-a"), ], ..empty_staged_write_set() }; let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect_err("tracked FK must not resolve through a pending untracked target"); assert_eq!(error.code, LixError::CODE_FOREIGN_KEY); } #[tokio::test] async fn validation_allows_untracked_foreign_key_target_pending_as_tracked() { let visible_schemas = vec![ fk_parent_schema(), fk_child_schema(), file_descriptor_schema(), directory_descriptor_schema(), ]; let tracked_file_descriptor = staged_file_descriptor_row("file-a", "version-a"); let tracked_parent = fk_parent_row("parent-1", "version-a"); let mut untracked_file_descriptor = staged_file_descriptor_row("file-a", "version-a"); mark_prepared_row_untracked(&mut untracked_file_descriptor); let mut untracked_child = fk_child_row("child-1", "parent-1", "version-a"); mark_prepared_row_untracked(&mut untracked_child); let staged_writes = PreparedWriteSet { state_rows: vec![ tracked_file_descriptor, tracked_parent, untracked_file_descriptor, untracked_child, ], ..empty_staged_write_set() }; validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect("untracked FK should be allowed to reference a pending tracked target"); } #[tokio::test] async fn validation_rejects_foreign_key_target_that_exists_only_in_different_version() { let visible_schemas = vec![fk_parent_schema(), fk_child_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![ fk_parent_row("parent-1", "version-b"), fk_child_row("child-1", "parent-1", "version-a"), ], ..empty_staged_write_set() }; let error = validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect_err("foreign key target in another version should not satisfy this version"); assert_eq!(error.code, LixError::CODE_FOREIGN_KEY); } #[tokio::test] async fn validation_allows_foreign_key_target_committed_in_same_version() { let visible_schemas = vec![fk_parent_schema(), fk_child_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![fk_child_row("child-1", "parent-1", "version-a")], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let live_state = StaticLiveStateReader { rows: vec![MaterializedLiveStateRow::from(fk_parent_row( "parent-1", "version-a", ))], }; 
validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect("foreign key should resolve against committed rows in the same version"); } #[tokio::test] async fn validation_rejects_tracked_foreign_key_target_committed_only_as_untracked() { let visible_schemas = vec![fk_parent_schema(), fk_child_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![fk_child_row("child-1", "parent-1", "version-a")], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let mut untracked_parent = MaterializedLiveStateRow::from(fk_parent_row("parent-1", "version-a")); mark_live_row_untracked(&mut untracked_parent); let live_state = StaticLiveStateReader { rows: vec![untracked_parent], }; let error = validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect_err("tracked FK must not resolve through a committed untracked target"); assert_eq!(error.code, LixError::CODE_FOREIGN_KEY); } #[tokio::test] async fn validation_allows_untracked_foreign_key_target_committed_as_tracked() { let visible_schemas = vec![ fk_parent_schema(), fk_child_schema(), file_descriptor_schema(), directory_descriptor_schema(), ]; let mut untracked_file_descriptor = staged_file_descriptor_row("file-a", "version-a"); mark_prepared_row_untracked(&mut untracked_file_descriptor); let mut untracked_child = fk_child_row("child-1", "parent-1", "version-a"); mark_prepared_row_untracked(&mut untracked_child); let staged_writes = PreparedWriteSet { state_rows: vec![untracked_file_descriptor, untracked_child], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let live_state = StaticLiveStateReader { rows: vec![ committed_file_descriptor_row("file-a", "version-a"), MaterializedLiveStateRow::from(fk_parent_row("parent-1", "version-a")), ], }; validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect("untracked FK should be allowed to reference a committed tracked target"); } #[tokio::test] async fn validation_allows_tracked_foreign_key_target_committed_behind_untracked_overlay() { let visible_schemas = vec![fk_parent_schema(), fk_child_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![fk_child_row("child-1", "parent-1", "version-a")], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let tracked_parent = MaterializedLiveStateRow::from(fk_parent_row("parent-1", "version-a")); let mut untracked_overlay = MaterializedLiveStateRow::from(fk_parent_row("parent-1", "version-a")); mark_live_row_untracked(&mut untracked_overlay); let live_state = OverlayingStaticLiveStateReader { rows: vec![tracked_parent, untracked_overlay], }; validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect( "tracked FK should resolve against tracked storage target behind untracked overlay", ); } #[tokio::test] async fn validation_rejects_deleting_tracked_fk_target_referenced_behind_untracked_overlay() { let visible_schemas = vec![fk_parent_schema(), fk_child_schema()]; let mut parent_delete = fk_parent_row("parent-1", "version-a"); parent_delete.snapshot = None; let staged_writes = PreparedWriteSet { state_rows: vec![parent_delete], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let tracked_parent = MaterializedLiveStateRow::from(fk_parent_row("parent-1", "version-a")); let 
tracked_child = MaterializedLiveStateRow::from(fk_child_row("child-1", "parent-1", "version-a")); let mut untracked_child_overlay = MaterializedLiveStateRow::from(fk_child_row("child-1", "other-parent", "version-a")); mark_live_row_untracked(&mut untracked_child_overlay); let live_state = OverlayingStaticLiveStateReader { rows: vec![tracked_parent, tracked_child, untracked_child_overlay], }; let error = validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect_err("tracked referencing row behind overlay must block target delete"); assert_eq!(error.code, LixError::CODE_FOREIGN_KEY); } #[tokio::test] async fn validation_rejects_deleting_tracked_fk_target_referenced_by_committed_untracked_row() { let visible_schemas = vec![fk_parent_schema(), fk_child_schema()]; let mut parent_delete = fk_parent_row("parent-1", "version-a"); parent_delete.snapshot = None; let staged_writes = PreparedWriteSet { state_rows: vec![parent_delete], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let tracked_parent = MaterializedLiveStateRow::from(fk_parent_row("parent-1", "version-a")); let mut untracked_child = MaterializedLiveStateRow::from(fk_child_row("child-1", "parent-1", "version-a")); mark_live_row_untracked(&mut untracked_child); let live_state = StaticLiveStateReader { rows: vec![tracked_parent, untracked_child], }; let error = validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect_err("tracked target delete must be blocked by committed untracked references"); assert_eq!(error.code, LixError::CODE_FOREIGN_KEY); } #[tokio::test] async fn validation_rejects_foreign_key_target_committed_only_in_different_version() { let visible_schemas = vec![fk_parent_schema(), fk_child_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![fk_child_row("child-1", "parent-1", "version-a")], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let live_state = StaticLiveStateReader { rows: vec![MaterializedLiveStateRow::from(fk_parent_row( "parent-1", "version-b", ))], }; let error = validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect_err( "foreign key target in another committed version should not satisfy this version", ); assert_eq!(error.code, LixError::CODE_FOREIGN_KEY); } #[tokio::test] async fn validation_rejects_foreign_key_target_tombstoned_by_transaction() { let visible_schemas = vec![fk_parent_schema(), fk_child_schema()]; let mut parent_delete = fk_parent_row("parent-1", "version-a"); parent_delete.snapshot = None; let staged_writes = PreparedWriteSet { state_rows: vec![ parent_delete, fk_child_row("child-1", "parent-1", "version-a"), ], ..empty_staged_write_set() }; let live_state = StaticLiveStateReader { rows: vec![MaterializedLiveStateRow::from(fk_parent_row( "parent-1", "version-a", ))], }; let error = validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect_err("same-transaction tombstone should hide the committed FK target"); assert_eq!(error.code, LixError::CODE_FOREIGN_KEY); } #[tokio::test] async fn validation_allows_tracked_fk_target_when_untracked_tombstone_shadows_same_identity() { let visible_schemas = vec![fk_parent_schema(), fk_child_schema()]; let mut untracked_parent_delete = fk_parent_row("parent-1", 
"version-a"); untracked_parent_delete.snapshot = None; mark_prepared_row_untracked(&mut untracked_parent_delete); let staged_writes = PreparedWriteSet { state_rows: vec![ untracked_parent_delete, fk_child_row("child-1", "parent-1", "version-a"), ], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let live_state = StaticLiveStateReader { rows: vec![MaterializedLiveStateRow::from(fk_parent_row( "parent-1", "version-a", ))], }; validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect("untracked tombstone must not hide tracked FK target"); } #[tokio::test] async fn validation_rejects_pending_reference_to_deleted_identity() { let visible_schemas = vec![fk_parent_schema(), fk_child_schema()]; let mut parent_delete = fk_parent_row("parent-1", "version-a"); parent_delete.snapshot = None; let staged_writes = PreparedWriteSet { state_rows: vec![ parent_delete, fk_child_row("child-1", "parent-1", "version-a"), ], ..empty_staged_write_set() }; let live_state = StaticLiveStateReader { rows: vec![MaterializedLiveStateRow::from(fk_parent_row( "parent-1", "version-a", ))], }; let error = validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect_err("pending child reference should block parent delete"); assert_eq!(error.code, LixError::CODE_FOREIGN_KEY); } #[tokio::test] async fn validation_allows_delete_with_pending_reference_in_different_version() { let visible_schemas = vec![fk_parent_schema(), fk_child_schema()]; let mut parent_delete = fk_parent_row("parent-1", "version-a"); parent_delete.snapshot = None; let staged_writes = PreparedWriteSet { state_rows: vec![ parent_delete, fk_parent_row("parent-1", "version-b"), fk_child_row("child-1", "parent-1", "version-b"), ], ..empty_staged_write_set() }; validate_prepared_writes(validation_input(&staged_writes, &visible_schemas)) .await .expect("pending references in another version should not block this delete"); } #[tokio::test] async fn validation_allows_state_surface_fk_target_committed_by_exact_identity() { let visible_schemas = vec![fk_parent_schema(), state_surface_ref_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![state_surface_ref_row( "ref-1", "target-1", "fk_parent_schema", "file-a", )], ..empty_staged_write_set() }; let live_state = StaticLiveStateReader { rows: vec![MaterializedLiveStateRow::from(fk_parent_row( "target-1", "version-a", ))], }; validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect("state FK should resolve against exact committed identity"); } #[tokio::test] async fn validation_rejects_tracked_state_surface_fk_target_pending_only_as_untracked() { let visible_schemas = vec![ fk_parent_schema(), state_surface_ref_schema(), file_descriptor_schema(), directory_descriptor_schema(), ]; let mut untracked_target = fk_parent_row("target-1", "version-a"); mark_prepared_row_untracked(&mut untracked_target); let mut untracked_file_descriptor = staged_file_descriptor_row("file-a", "version-a"); mark_prepared_row_untracked(&mut untracked_file_descriptor); let staged_writes = PreparedWriteSet { state_rows: vec![ untracked_file_descriptor, untracked_target, state_surface_ref_row("ref-1", "target-1", "fk_parent_schema", "file-a"), ], ..empty_staged_write_set() }; let error = validate_prepared_writes(validation_input(&staged_writes, 
&visible_schemas)) .await .expect_err( "tracked state-surface FK must not resolve through a pending untracked target", ); assert_eq!(error.code, LixError::CODE_FOREIGN_KEY); } #[tokio::test] async fn validation_rejects_tracked_state_surface_fk_target_committed_only_as_untracked() { let visible_schemas = vec![fk_parent_schema(), state_surface_ref_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![state_surface_ref_row( "ref-1", "target-1", "fk_parent_schema", "file-a", )], ..empty_staged_write_set() }; let mut untracked_target = MaterializedLiveStateRow::from(fk_parent_row("target-1", "version-a")); mark_live_row_untracked(&mut untracked_target); let live_state = StaticLiveStateReader { rows: vec![untracked_target], }; let error = validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect_err( "tracked state-surface FK must not resolve through a committed untracked target", ); assert_eq!(error.code, LixError::CODE_FOREIGN_KEY); } #[tokio::test] async fn validation_allows_untracked_state_surface_fk_target_committed_as_tracked() { let visible_schemas = vec![ fk_parent_schema(), state_surface_ref_schema(), file_descriptor_schema(), directory_descriptor_schema(), ]; let mut untracked_file_descriptor = staged_file_descriptor_row("file-a", "version-a"); mark_prepared_row_untracked(&mut untracked_file_descriptor); let mut untracked_ref = state_surface_ref_row("ref-1", "target-1", "fk_parent_schema", "file-a"); mark_prepared_row_untracked(&mut untracked_ref); let staged_writes = PreparedWriteSet { state_rows: vec![untracked_file_descriptor, untracked_ref], ..empty_staged_write_set() }; let live_state = StaticLiveStateReader { rows: vec![ committed_file_descriptor_row("file-a", "version-a"), MaterializedLiveStateRow::from(fk_parent_row("target-1", "version-a")), ], }; validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect("untracked state-surface FK should reference committed tracked target"); } #[tokio::test] async fn validation_allows_tracked_state_surface_fk_target_committed_behind_untracked_overlay() { let visible_schemas = vec![fk_parent_schema(), state_surface_ref_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![state_surface_ref_row( "ref-1", "target-1", "fk_parent_schema", "file-a", )], ..empty_staged_write_set() }; let tracked_target = MaterializedLiveStateRow::from(fk_parent_row("target-1", "version-a")); let mut untracked_overlay = MaterializedLiveStateRow::from(fk_parent_row("target-1", "version-a")); mark_live_row_untracked(&mut untracked_overlay); let live_state = OverlayingStaticLiveStateReader { rows: vec![tracked_target, untracked_overlay], }; validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect( "tracked state-surface FK should resolve against tracked target behind untracked overlay", ); } #[tokio::test] async fn validation_allows_state_surface_fk_target_with_composite_entity_id() { let visible_schemas = vec![composite_message_schema(), state_surface_ref_schema()]; let staged_writes = PreparedWriteSet { state_rows: vec![state_surface_ref_row_with_target_entity_id( "ref-1", json!(["welcome.title", "en"]), "composite_message_schema", "file-a", )], ..empty_staged_write_set() }; let live_state = StaticLiveStateReader { rows: 
vec![MaterializedLiveStateRow::from(composite_message_row( "welcome.title", "en", "version-a", ))], }; validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect("state FK should resolve composite JSON-array entity ids"); } #[tokio::test] async fn validation_rejects_delete_when_same_version_reference_exists() { let visible_schemas = vec![fk_parent_schema(), fk_child_schema()]; let mut parent_delete = fk_parent_row("parent-1", "version-a"); parent_delete.snapshot = None; let live_state = StaticLiveStateReader { rows: vec![ MaterializedLiveStateRow::from(fk_parent_row("parent-1", "version-a")), MaterializedLiveStateRow::from(fk_child_row("child-1", "parent-1", "version-a")), ], }; let staged_writes = PreparedWriteSet { state_rows: vec![parent_delete], adopted_rows: Vec::new(), ..empty_staged_write_set() }; let error = validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect_err("delete should be restricted by same-version references"); assert_eq!(error.code, LixError::CODE_FOREIGN_KEY); } #[tokio::test] async fn validation_allows_delete_when_only_different_version_reference_exists() { let visible_schemas = vec![fk_parent_schema(), fk_child_schema()]; let mut parent_delete = fk_parent_row("parent-1", "version-a"); parent_delete.snapshot = None; let live_state = StaticLiveStateReader { rows: vec![ MaterializedLiveStateRow::from(fk_parent_row("parent-1", "version-a")), MaterializedLiveStateRow::from(fk_child_row("child-1", "parent-1", "version-b")), ], }; let staged_writes = PreparedWriteSet { state_rows: vec![parent_delete], adopted_rows: Vec::new(), ..empty_staged_write_set() }; validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect("references in another version should not restrict this version"); } #[tokio::test] async fn validation_allows_delete_when_committed_reference_is_also_deleted() { let visible_schemas = vec![fk_parent_schema(), fk_child_schema()]; let mut parent_delete = fk_parent_row("parent-1", "version-a"); parent_delete.snapshot = None; let mut child_delete = fk_child_row("child-1", "parent-1", "version-a"); child_delete.snapshot = None; let live_state = StaticLiveStateReader { rows: vec![ MaterializedLiveStateRow::from(fk_parent_row("parent-1", "version-a")), MaterializedLiveStateRow::from(fk_child_row("child-1", "parent-1", "version-a")), ], }; let staged_writes = PreparedWriteSet { state_rows: vec![parent_delete, child_delete], adopted_rows: Vec::new(), ..empty_staged_write_set() }; validate_prepared_writes(TransactionValidationInput::from_visible_schemas_for_tests( &staged_writes, &visible_schemas, &live_state, )) .await .expect("committed references deleted in the same transaction should not restrict delete"); } #[test] fn schema_catalog_plans_include_compiled_schema() { let visible_schemas = vec![key_value_schema()]; let staged_writes = empty_staged_write_set(); let input = validation_input(&staged_writes, &visible_schemas); let catalog = catalog_from_transaction_input(&input).expect("schema catalog should build"); let plan = catalog .plan_for_key("lix_key_value") .expect("lix_key_value plan should exist"); assert!(plan .1 .compiled_schema .validate(&json!({ "key": "k", "value": "v" })) .is_ok()); } #[test] fn pending_indexes_record_primary_key_fk_targets_by_exact_scope() { let mut indexes = 
        PendingConstraintIndexes::default();
    let row = fk_parent_row("parent-1", "version-a");
    let snapshot = serde_json::from_str::<JsonValue>(
        row.snapshot
            .as_ref()
            .map(|snapshot| snapshot.normalized.as_ref())
            .expect("fixture should have snapshot"),
    )
    .expect("fixture JSON should parse");
    indexes
        .remember_row(
            PreparedValidationRow::State(&row),
            test_plan_from_schema(fk_parent_schema()),
            &snapshot,
        )
        .expect("parent row should index");
    assert!(indexes
        .has_fk_target(
            "fk_parent_schema",
            "version-a",
            Some("file-a"),
            &["/id"],
            UniqueConstraintValue::string_values(["parent-1"]),
        )
        .expect("lookup should build"));
    assert!(!indexes
        .has_fk_target(
            "fk_parent_schema",
            "version-b",
            Some("file-a"),
            &["/id"],
            UniqueConstraintValue::string_values(["parent-1"]),
        )
        .expect("lookup should build"));
}

#[test]
fn pending_indexes_record_unique_fk_targets_by_exact_scope() {
    let mut indexes = PendingConstraintIndexes::default();
    let row = unique_row("post-1", "hello-world", "first");
    let snapshot = serde_json::from_str::<JsonValue>(
        row.snapshot
            .as_ref()
            .map(|snapshot| snapshot.normalized.as_ref())
            .expect("fixture should have snapshot"),
    )
    .expect("fixture JSON should parse");
    indexes
        .remember_row(
            PreparedValidationRow::State(&row),
            test_plan_from_schema(unique_schema()),
            &snapshot,
        )
        .expect("unique row should index");
    assert!(indexes
        .has_fk_target(
            "unique_schema",
            "version-a",
            Some("file-a"),
            &["/slug"],
            UniqueConstraintValue::string_values(["hello-world"]),
        )
        .expect("lookup should build"));
}

#[test]
fn pending_indexes_record_normal_fk_references_by_exact_scope() {
    let mut indexes = PendingConstraintIndexes::default();
    let row = fk_child_row("child-1", "parent-1", "version-a");
    let snapshot = serde_json::from_str::<JsonValue>(
        row.snapshot
            .as_ref()
            .map(|snapshot| snapshot.normalized.as_ref())
            .expect("fixture should have snapshot"),
    )
    .expect("fixture JSON should parse");
    let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];
    let staged_writes = empty_staged_write_set();
    let input = validation_input(&staged_writes, &visible_schemas);
    let _catalog = catalog_from_transaction_input(&input).expect("catalog should build");
    indexes
        .remember_foreign_key_references(
            PreparedValidationRow::State(&row),
            test_plan_from_schema(fk_child_schema()),
            &snapshot,
        )
        .expect("child row should index FK reference");
    assert!(indexes
        .has_fk_reference_to_key(
            "fk_parent_schema",
            "version-a",
            Some("file-a"),
            &["/id"],
            UniqueConstraintValue::string_values(["parent-1"]),
        )
        .expect("lookup should build"));
    assert!(!indexes
        .has_fk_reference_to_key(
            "fk_parent_schema",
            "version-b",
            Some("file-a"),
            &["/id"],
            UniqueConstraintValue::string_values(["parent-1"]),
        )
        .expect("lookup should build"));
}

#[test]
fn pending_indexes_record_state_surface_fk_references_by_exact_identity() {
    let mut indexes = PendingConstraintIndexes::default();
    let row = state_surface_ref_row("ref-1", "target-1", "fk_parent_schema", "file-a");
    let snapshot = serde_json::from_str::<JsonValue>(
        row.snapshot
            .as_ref()
            .map(|snapshot| snapshot.normalized.as_ref())
            .expect("fixture should have snapshot"),
    )
    .expect("fixture JSON should parse");
    let visible_schemas = vec![state_surface_ref_schema()];
    let staged_writes = empty_staged_write_set();
    let input = validation_input(&staged_writes, &visible_schemas);
    let _catalog = catalog_from_transaction_input(&input).expect("catalog should build");
    indexes
        .remember_foreign_key_references(
            PreparedValidationRow::State(&row),
            test_plan_from_schema(state_surface_ref_schema()),
            &snapshot,
        )
        .expect("state-surface row should index FK reference");
    assert!(
        indexes.has_fk_reference_to_identity(DomainRowIdentity::exact(
            "version-a",
            false,
            Some("file-a".to_string()),
            "fk_parent_schema",
            EntityIdentity::single("target-1"),
        ))
    );
}

#[test]
fn pending_delete_restrictions_ignore_tombstoned_referencing_rows() {
    let mut indexes = PendingConstraintIndexes::default();
    let mut parent_delete = fk_parent_row("parent-1", "version-a");
    parent_delete.snapshot = None;
    indexes.remember_tombstone(PreparedValidationRow::State(&parent_delete));
    let child = fk_child_row("child-1", "parent-1", "version-a");
    let child_snapshot = serde_json::from_str::<JsonValue>(
        child
            .snapshot
            .as_ref()
            .map(|snapshot| snapshot.normalized.as_ref())
            .expect("fixture should have snapshot"),
    )
    .expect("fixture JSON should parse");
    let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];
    let staged_writes = empty_staged_write_set();
    let input = validation_input(&staged_writes, &visible_schemas);
    let catalog = catalog_from_transaction_input(&input).expect("catalog should build");
    indexes
        .remember_foreign_key_references(
            PreparedValidationRow::State(&child),
            test_plan_from_schema(fk_child_schema()),
            &child_snapshot,
        )
        .expect("child row should index FK reference");
    let mut child_delete = fk_child_row("child-1", "parent-1", "version-a");
    child_delete.snapshot = None;
    indexes.remember_tombstone(PreparedValidationRow::State(&child_delete));
    validate_pending_delete_restrictions(&catalog, &indexes)
        .expect("a row deleted in the same transaction should not block target delete");
}

#[test]
fn pending_fk_validation_collects_unresolved_normal_fk_check() {
    let indexes = PendingConstraintIndexes::default();
    let row = fk_child_row("child-1", "parent-1", "version-a");
    let snapshot = serde_json::from_str::<JsonValue>(
        row.snapshot
            .as_ref()
            .map(|snapshot| snapshot.normalized.as_ref())
            .expect("fixture should have snapshot"),
    )
    .expect("fixture JSON should parse");
    let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];
    let staged_writes = empty_staged_write_set();
    let input = validation_input(&staged_writes, &visible_schemas);
    let _catalog = catalog_from_transaction_input(&input).expect("catalog should build");
    let unresolved = validate_pending_foreign_keys(
        &indexes,
        &[(
            PreparedValidationRow::State(&row),
            test_plan_from_schema(fk_child_schema()),
            &snapshot,
        )],
    )
    .expect("FK validation should collect unresolved checks");
    assert_eq!(unresolved.len(), 1);
    assert_eq!(
        unresolved[0].source_identity,
        DomainRowIdentity::exact(
            "version-a",
            false,
            Some("file-a".to_string()),
            "fk_child_schema",
            EntityIdentity::single("child-1"),
        )
    );
    assert_eq!(unresolved[0].source_schema_key, "fk_child_schema");
    assert_eq!(
        unresolved[0].source_pointer_group,
        vec![vec!["parent_id".to_string()]]
    );
    let UnresolvedForeignKeyTarget::Key(target) = &unresolved[0].target else {
        panic!("normal FK should produce key target");
    };
    assert_eq!(target.schema_key, "fk_parent_schema");
    assert_eq!(target.domain.version_id(), "version-a");
    assert_eq!(
        target.domain.file_scope(),
        &DomainFileScope::Exact(Some("file-a".to_string()))
    );
    assert_eq!(target.pointer_group, vec![vec!["id".to_string()]]);
    assert_eq!(
        target.value,
        UniqueConstraintValue::string_values(["parent-1"])
    );
}

#[test]
fn pending_fk_validation_resolves_normal_fk_against_pending_target() {
    let mut indexes = PendingConstraintIndexes::default();
    let parent = fk_parent_row("parent-1", "version-a");
    let parent_snapshot = serde_json::from_str::<JsonValue>(
        parent
            .snapshot
            .as_ref()
            .map(|snapshot| snapshot.normalized.as_ref())
            .expect("fixture should have snapshot"),
    )
    .expect("fixture JSON should parse");
    indexes
        .remember_row(
            PreparedValidationRow::State(&parent),
            test_plan_from_schema(fk_parent_schema()),
            &parent_snapshot,
        )
        .expect("parent should index as pending FK target");
    let child = fk_child_row("child-1", "parent-1", "version-a");
    let child_snapshot = serde_json::from_str::<JsonValue>(
        child
            .snapshot
            .as_ref()
            .map(|snapshot| snapshot.normalized.as_ref())
            .expect("fixture should have snapshot"),
    )
    .expect("fixture JSON should parse");
    let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];
    let staged_writes = empty_staged_write_set();
    let input = validation_input(&staged_writes, &visible_schemas);
    let _catalog = catalog_from_transaction_input(&input).expect("catalog should build");
    let unresolved = validate_pending_foreign_keys(
        &indexes,
        &[(
            PreparedValidationRow::State(&child),
            test_plan_from_schema(fk_child_schema()),
            &child_snapshot,
        )],
    )
    .expect("FK validation should inspect pending targets");
    assert!(
        unresolved.is_empty(),
        "same-version pending parent should satisfy the child FK"
    );
}

#[test]
fn pending_fk_validation_keeps_normal_fk_unresolved_across_versions() {
    let mut indexes = PendingConstraintIndexes::default();
    let parent = fk_parent_row("parent-1", "version-b");
    let parent_snapshot = serde_json::from_str::<JsonValue>(
        parent
            .snapshot
            .as_ref()
            .map(|snapshot| snapshot.normalized.as_ref())
            .expect("fixture should have snapshot"),
    )
    .expect("fixture JSON should parse");
    indexes
        .remember_row(
            PreparedValidationRow::State(&parent),
            test_plan_from_schema(fk_parent_schema()),
            &parent_snapshot,
        )
        .expect("parent should index as pending FK target");
    let child = fk_child_row("child-1", "parent-1", "version-a");
    let child_snapshot = serde_json::from_str::<JsonValue>(
        child
            .snapshot
            .as_ref()
            .map(|snapshot| snapshot.normalized.as_ref())
            .expect("fixture should have snapshot"),
    )
    .expect("fixture JSON should parse");
    let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];
    let staged_writes = empty_staged_write_set();
    let input = validation_input(&staged_writes, &visible_schemas);
    let _catalog = catalog_from_transaction_input(&input).expect("catalog should build");
    let unresolved = validate_pending_foreign_keys(
        &indexes,
        &[(
            PreparedValidationRow::State(&child),
            test_plan_from_schema(fk_child_schema()),
            &child_snapshot,
        )],
    )
    .expect("FK validation should inspect pending targets");
    assert_eq!(unresolved.len(), 1);
    let UnresolvedForeignKeyTarget::Key(target) = &unresolved[0].target else {
        panic!("normal FK should produce key target");
    };
    assert_eq!(
        target.domain.version_id(),
        "version-a",
        "FK checks are exact-version scoped, not overlay scoped"
    );
}

#[test]
fn pending_fk_validation_collects_unresolved_state_surface_check() {
    let indexes = PendingConstraintIndexes::default();
    let row = state_surface_ref_row("ref-1", "target-1", "fk_parent_schema", "file-a");
    let snapshot = serde_json::from_str::<JsonValue>(
        row.snapshot
            .as_ref()
            .map(|snapshot| snapshot.normalized.as_ref())
            .expect("fixture should have snapshot"),
    )
    .expect("fixture JSON should parse");
    let visible_schemas = vec![state_surface_ref_schema()];
    let staged_writes = empty_staged_write_set();
    let input = validation_input(&staged_writes, &visible_schemas);
    let _catalog = catalog_from_transaction_input(&input).expect("catalog should build");
    let unresolved = validate_pending_foreign_keys(
        &indexes,
        &[(
            PreparedValidationRow::State(&row),
            test_plan_from_schema(state_surface_ref_schema()),
            &snapshot,
        )],
    )
    .expect("FK validation should collect unresolved checks");
    assert_eq!(unresolved.len(), 1);
    assert_eq!(
        unresolved[0].source_identity,
        DomainRowIdentity::exact(
            "version-a",
            false,
            Some("file-a".to_string()),
            "state_surface_ref_schema",
            EntityIdentity::single("ref-1"),
        )
    );
    assert_eq!(unresolved[0].source_schema_key, "state_surface_ref_schema");
    assert_eq!(
        unresolved[0].source_pointer_group,
        vec![
            vec!["target_entity_id".to_string()],
            vec!["target_schema_key".to_string()],
            vec!["target_file_id".to_string()],
        ]
    );
    let UnresolvedForeignKeyTarget::StateSurfaceIdentity(target) = &unresolved[0].target else {
        panic!("state FK should produce state-surface identity target");
    };
    assert_eq!(target.domain().version_id(), "version-a");
    assert_eq!(target.schema_key(), "fk_parent_schema");
    assert_eq!(target.entity_id(), &EntityIdentity::single("target-1"));
    assert_eq!(
        target.domain().file_scope(),
        &DomainFileScope::Exact(Some("file-a".to_string()))
    );
}

#[tokio::test]
async fn committed_fk_lookup_resolves_normal_fk_in_exact_scope() {
    let indexes = PendingConstraintIndexes::default();
    let child = fk_child_row("child-1", "parent-1", "version-a");
    let child_snapshot = serde_json::from_str::<JsonValue>(
        child
            .snapshot
            .as_ref()
            .map(|snapshot| snapshot.normalized.as_ref())
            .expect("fixture should have snapshot"),
    )
    .expect("fixture JSON should parse");
    let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];
    let staged_writes = empty_staged_write_set();
    let input = validation_input(&staged_writes, &visible_schemas);
    let _catalog = catalog_from_transaction_input(&input).expect("catalog should build");
    let unresolved = validate_pending_foreign_keys(
        &indexes,
        &[(
            PreparedValidationRow::State(&child),
            test_plan_from_schema(fk_child_schema()),
            &child_snapshot,
        )],
    )
    .expect("pending FK validation should collect unresolved check");
    let live_state = StaticLiveStateReader {
        rows: vec![MaterializedLiveStateRow::from(fk_parent_row(
            "parent-1",
            "version-a",
        ))],
    };
    let still_unresolved = validate_committed_foreign_keys(
        &TransactionValidationInput::from_visible_schemas_for_tests(
            &staged_writes,
            &visible_schemas,
            &live_state,
        ),
        &indexes,
        &unresolved,
    )
    .await
    .expect("committed FK lookup should scan live state");
    assert!(
        still_unresolved.is_empty(),
        "same-version committed parent should satisfy unresolved FK"
    );
}

#[tokio::test]
async fn committed_fk_lookup_keeps_normal_fk_unresolved_across_versions() {
    let indexes = PendingConstraintIndexes::default();
    let child = fk_child_row("child-1", "parent-1", "version-a");
    let child_snapshot = serde_json::from_str::<JsonValue>(
        child
            .snapshot
            .as_ref()
            .map(|snapshot| snapshot.normalized.as_ref())
            .expect("fixture should have snapshot"),
    )
    .expect("fixture JSON should parse");
    let visible_schemas = vec![fk_parent_schema(), fk_child_schema()];
    let staged_writes = empty_staged_write_set();
    let input = validation_input(&staged_writes, &visible_schemas);
    let _catalog = catalog_from_transaction_input(&input).expect("catalog should build");
    let unresolved = validate_pending_foreign_keys(
        &indexes,
        &[(
            PreparedValidationRow::State(&child),
            test_plan_from_schema(fk_child_schema()),
            &child_snapshot,
        )],
    )
    .expect("pending FK validation should collect unresolved check");
    let live_state = StaticLiveStateReader {
        rows: vec![MaterializedLiveStateRow::from(fk_parent_row(
            "parent-1",
            "version-b",
        ))],
    };
    let still_unresolved = validate_committed_foreign_keys(
        &TransactionValidationInput::from_visible_schemas_for_tests(
            &staged_writes,
            &visible_schemas,
            &live_state,
        ),
        &indexes,
        &unresolved,
    )
    .await
    .expect("committed FK lookup should scan live state");
    assert_eq!(
        still_unresolved.len(),
        1,
        "committed FK lookup is exact-version scoped"
    );
}

#[tokio::test]
async fn committed_fk_lookup_resolves_state_surface_fk_by_exact_identity() {
    let indexes = PendingConstraintIndexes::default();
    let row = state_surface_ref_row("ref-1", "target-1", "fk_parent_schema", "file-a");
    let snapshot = serde_json::from_str::<JsonValue>(
        row.snapshot
            .as_ref()
            .map(|snapshot| snapshot.normalized.as_ref())
            .expect("fixture should have snapshot"),
    )
    .expect("fixture JSON should parse");
    let visible_schemas = vec![state_surface_ref_schema()];
    let staged_writes = empty_staged_write_set();
    let input = validation_input(&staged_writes, &visible_schemas);
    let _catalog = catalog_from_transaction_input(&input).expect("catalog should build");
    let unresolved = validate_pending_foreign_keys(
        &indexes,
        &[(
            PreparedValidationRow::State(&row),
            test_plan_from_schema(state_surface_ref_schema()),
            &snapshot,
        )],
    )
    .expect("pending FK validation should collect unresolved check");
    let live_state = StaticLiveStateReader {
        rows: vec![MaterializedLiveStateRow::from(fk_parent_row(
            "target-1",
            "version-a",
        ))],
    };
    let still_unresolved = validate_committed_foreign_keys(
        &TransactionValidationInput::from_visible_schemas_for_tests(
            &staged_writes,
            &visible_schemas,
            &live_state,
        ),
        &indexes,
        &unresolved,
    )
    .await
    .expect("committed FK lookup should load exact live-state row");
    assert!(
        still_unresolved.is_empty(),
        "committed state-surface target should satisfy unresolved FK"
    );
}

// Test helper: a PreparedWriteSet with every collection empty; tests override
// the fields they need via struct-update syntax (`..empty_staged_write_set()`).
fn empty_staged_write_set() -> PreparedWriteSet {
    PreparedWriteSet {
        state_rows: Vec::new(),
        adopted_rows: Vec::new(),
        insert_identities: BTreeMap::new(),
        commit_members_by_version: BTreeMap::new(),
        extra_commit_parents_by_version: BTreeMap::new(),
        file_data_writes: Vec::new(),
    }
}

// Test helper: true when a materialized row passes every filter on the scan request
// (untracked flag, schema keys, version ids, and file ids).
fn live_state_row_matches_scan(
    row: &MaterializedLiveStateRow,
    request: &LiveStateScanRequest,
) -> bool {
    if request
        .filter
        .untracked
        .is_some_and(|untracked| row.untracked != untracked)
    {
        return false;
    }
    (request.filter.schema_keys.is_empty()
        || request.filter.schema_keys.contains(&row.schema_key))
        && (request.filter.version_ids.is_empty()
            || request.filter.version_ids.contains(&row.version_id))
        && (request.filter.file_ids.is_empty()
            || request
                .filter
                .file_ids
                .iter()
                .any(|filter| filter.matches(row.file_id.as_ref())))
}

// Test helper: true when a materialized row matches a point lookup by schema key,
// version id, entity id, and file id.
fn live_state_row_matches_load(
    row: &MaterializedLiveStateRow,
    request: &LiveStateRowRequest,
) -> bool {
    row.schema_key == request.schema_key
        && row.version_id == request.version_id
        && row.entity_id == request.entity_id
        && request.file_id.matches(row.file_id.as_ref())
}

fn test_file_descriptor_rows() -> Vec<MaterializedLiveStateRow> {
    vec![
        committed_file_descriptor_row("file-a", "version-a"),
        committed_file_descriptor_row("file-a", "version-b"),
        committed_file_descriptor_row("file-b", "version-a"),
        committed_file_descriptor_row("file-b", "version-b"),
    ]
}

fn pending_registered_schema_row(schema_key: &str) -> PreparedStateRow {
    pending_registered_schema_from_definition(json!({
        "x-lix-key": schema_key,
        "type": "object",
        "properties": { "id": { "type": "string" } },
        "required": ["id"],
        "additionalProperties": false,
    }))
}

fn pending_registered_schema_from_definition(schema: JsonValue) -> PreparedStateRow {
    let key = schema_key_from_definition(&schema).expect("test schema should have a key");
    PreparedStateRow {
        schema_plan_id: crate::catalog::SchemaPlanId::for_test(0),
        facts: crate::transaction::types::PreparedRowFacts::default(),
        entity_id: registered_schema_entity_id(&key.schema_key),
        schema_key: REGISTERED_SCHEMA_KEY.to_string(),
        file_id: None,
        snapshot:
Some(test_stage_json(&json!({ "value": schema }).to_string())), metadata: None, origin: None, created_at: "2026-04-29T00:00:00.000Z".to_string(), updated_at: "2026-04-29T00:00:00.000Z".to_string(), global: true, change_id: Some("change-registered-schema".to_string()), commit_id: Some("commit-registered-schema".to_string()), untracked: false, version_id: crate::GLOBAL_VERSION_ID.to_string(), } } fn registered_schema_entity_id(schema_key: &str) -> crate::entity_identity::EntityIdentity { crate::entity_identity::EntityIdentity::from_primary_key_paths( &serde_json::json!({ "value": { "x-lix-key": schema_key, } }), &[vec!["value".to_string(), "x-lix-key".to_string()]], ) .expect("registered schema identity should derive") } fn key_value_schema() -> JsonValue { seed_schema_definition("lix_key_value") .expect("lix_key_value builtin schema should exist") .clone() } fn registered_schema() -> JsonValue { seed_schema_definition(REGISTERED_SCHEMA_KEY) .expect("lix_registered_schema builtin schema should exist") .clone() } fn file_descriptor_schema() -> JsonValue { seed_schema_definition(FILE_DESCRIPTOR_SCHEMA_KEY) .expect("lix_file_descriptor builtin schema should exist") .clone() } fn directory_descriptor_schema() -> JsonValue { seed_schema_definition(DIRECTORY_DESCRIPTOR_SCHEMA_KEY) .expect("lix_directory_descriptor builtin schema should exist") .clone() } fn unique_schema() -> JsonValue { json!({ "x-lix-key": "unique_schema", "x-lix-primary-key": ["/id"], "x-lix-unique": [["/slug"]], "type": "object", "properties": { "id": { "type": "string" }, "slug": { "type": "string" }, "title": { "type": "string" } }, "required": ["id", "slug", "title"], "additionalProperties": false }) } fn nullable_unique_schema() -> JsonValue { json!({ "x-lix-key": "nullable_unique_schema", "x-lix-primary-key": ["/id"], "x-lix-unique": [["/scope", "/name"]], "type": "object", "properties": { "id": { "type": "string" }, "scope": { "type": ["string", "null"] }, "name": { "type": "string" } }, "required": ["id", "scope", "name"], "additionalProperties": false }) } fn fk_parent_schema() -> JsonValue { json!({ "x-lix-key": "fk_parent_schema", "x-lix-primary-key": ["/id"], "type": "object", "properties": { "id": { "type": "string" } }, "required": ["id"], "additionalProperties": false }) } fn composite_message_schema() -> JsonValue { json!({ "x-lix-key": "composite_message_schema", "x-lix-primary-key": ["/key", "/locale"], "type": "object", "properties": { "key": { "type": "string" }, "locale": { "type": "string" }, "text": { "type": "string" } }, "required": ["key", "locale", "text"], "additionalProperties": false }) } fn fk_child_schema() -> JsonValue { json!({ "x-lix-key": "fk_child_schema", "x-lix-primary-key": ["/id"], "x-lix-foreign-keys": [{ "properties": ["/parent_id"], "references": { "schemaKey": "fk_parent_schema", "properties": ["/id"] } }], "type": "object", "properties": { "id": { "type": "string" }, "parent_id": { "type": "string" } }, "required": ["id", "parent_id"], "additionalProperties": false }) } fn state_surface_ref_schema() -> JsonValue { json!({ "x-lix-key": "state_surface_ref_schema", "x-lix-primary-key": ["/id"], "x-lix-state-foreign-keys": [ ["/target_entity_id", "/target_schema_key", "/target_file_id"] ], "type": "object", "properties": { "id": { "type": "string" }, "target_entity_id": { "type": "array", "items": { "type": "string" }, "minItems": 1 }, "target_schema_key": { "type": "string" }, "target_file_id": { "type": ["string", "null"] } }, "required": ["id", "target_entity_id", 
"target_schema_key", "target_file_id"], "additionalProperties": false }) } fn unique_row(entity_id: &str, slug: &str, title: &str) -> PreparedStateRow { let mut row = staged_row( "unique_schema", Some( json!({ "id": entity_id, "slug": slug, "title": title, }) .to_string(), ), ); row.entity_id = crate::entity_identity::EntityIdentity::single(entity_id); row.file_id = Some("file-a".to_string()); row.version_id = "version-a".to_string(); row.global = false; row } fn nullable_unique_row(entity_id: &str, scope: Option<&str>, name: &str) -> PreparedStateRow { let mut row = staged_row( "nullable_unique_schema", Some( json!({ "id": entity_id, "scope": scope, "name": name, }) .to_string(), ), ); row.entity_id = crate::entity_identity::EntityIdentity::single(entity_id); row.file_id = Some("file-a".to_string()); row.version_id = "version-a".to_string(); row.global = false; row } fn fk_parent_row(entity_id: &str, version_id: &str) -> PreparedStateRow { let mut row = staged_row( "fk_parent_schema", Some(json!({ "id": entity_id }).to_string()), ); row.entity_id = crate::entity_identity::EntityIdentity::single(entity_id); row.file_id = Some("file-a".to_string()); row.version_id = version_id.to_string(); row.global = false; row } fn fk_child_row(entity_id: &str, parent_id: &str, version_id: &str) -> PreparedStateRow { let mut row = staged_row( "fk_child_schema", Some(json!({ "id": entity_id, "parent_id": parent_id }).to_string()), ); row.entity_id = crate::entity_identity::EntityIdentity::single(entity_id); row.file_id = Some("file-a".to_string()); row.version_id = version_id.to_string(); row.global = false; row } fn composite_message_row(key: &str, locale: &str, version_id: &str) -> PreparedStateRow { let snapshot = json!({ "key": key, "locale": locale, "text": "Welcome", }); let mut row = staged_row("composite_message_schema", Some(snapshot.to_string())); row.entity_id = EntityIdentity::from_primary_key_paths( &snapshot, &[vec!["key".to_string()], vec!["locale".to_string()]], ) .expect("composite message identity should derive"); row.file_id = Some("file-a".to_string()); row.version_id = version_id.to_string(); row.global = false; row } fn state_surface_ref_row( entity_id: &str, target_entity_id: &str, target_schema_key: &str, target_file_id: &str, ) -> PreparedStateRow { state_surface_ref_row_with_target_entity_id( entity_id, json!([target_entity_id]), target_schema_key, target_file_id, ) } fn state_surface_ref_row_with_target_entity_id( entity_id: &str, target_entity_id: JsonValue, target_schema_key: &str, target_file_id: &str, ) -> PreparedStateRow { let mut row = staged_row( "state_surface_ref_schema", Some( json!({ "id": entity_id, "target_entity_id": target_entity_id, "target_schema_key": target_schema_key, "target_file_id": target_file_id, }) .to_string(), ), ); row.entity_id = crate::entity_identity::EntityIdentity::single(entity_id); row.file_id = Some("file-a".to_string()); row.version_id = "version-a".to_string(); row.global = false; row } fn mark_prepared_row_untracked(row: &mut PreparedStateRow) { row.untracked = true; row.change_id = None; row.commit_id = None; } fn mark_live_row_untracked(row: &mut MaterializedLiveStateRow) { row.untracked = true; row.change_id = None; row.commit_id = None; } fn staged_file_descriptor_row(file_id: &str, version_id: &str) -> PreparedStateRow { let mut row = staged_row( FILE_DESCRIPTOR_SCHEMA_KEY, Some( json!({ "id": file_id, "directory_id": null, "name": file_id, "hidden": false, }) .to_string(), ), ); row.entity_id = 
crate::entity_identity::EntityIdentity::single(file_id); row.file_id = None; row.version_id = version_id.to_string(); row.global = version_id == crate::GLOBAL_VERSION_ID; row } fn committed_file_descriptor_row(file_id: &str, version_id: &str) -> MaterializedLiveStateRow { MaterializedLiveStateRow::from(staged_file_descriptor_row(file_id, version_id)) } fn directory_descriptor_row( directory_id: &str, parent_id: Option<&str>, name: &str, version_id: &str, ) -> PreparedStateRow { let mut row = staged_row( DIRECTORY_DESCRIPTOR_SCHEMA_KEY, Some( json!({ "id": directory_id, "parent_id": parent_id, "name": name, "hidden": false, }) .to_string(), ), ); row.entity_id = crate::entity_identity::EntityIdentity::single(directory_id); row.file_id = None; row.version_id = version_id.to_string(); row.global = version_id == crate::GLOBAL_VERSION_ID; row } fn committed_unique_row(entity_id: &str, slug: &str, title: &str) -> MaterializedLiveStateRow { let row = unique_row(entity_id, slug, title); MaterializedLiveStateRow { entity_id: row.entity_id, schema_key: row.schema_key, file_id: row.file_id, snapshot_content: row.snapshot.as_ref().map(|snapshot| snapshot.materialize()), metadata: row.metadata.as_ref().map(|metadata| metadata.materialize()), deleted: row.snapshot.is_none(), created_at: row.created_at, updated_at: row.updated_at, global: row.global, change_id: row.change_id, commit_id: row.commit_id, untracked: row.untracked, version_id: row.version_id, } } fn committed_nullable_unique_row( entity_id: &str, scope: Option<&str>, name: &str, ) -> MaterializedLiveStateRow { MaterializedLiveStateRow::from(nullable_unique_row(entity_id, scope, name)) } fn staged_row(schema_key: &str, snapshot_content: Option) -> PreparedStateRow { PreparedStateRow { schema_plan_id: crate::catalog::SchemaPlanId::for_test(0), facts: crate::transaction::types::PreparedRowFacts::default(), entity_id: crate::entity_identity::EntityIdentity::single("entity-1"), schema_key: schema_key.to_string(), file_id: None, snapshot: snapshot_content.as_deref().map(test_stage_json), metadata: None, origin: None, created_at: "2026-04-29T00:00:00.000Z".to_string(), updated_at: "2026-04-29T00:00:00.000Z".to_string(), global: true, change_id: Some("change-1".to_string()), commit_id: Some("commit-1".to_string()), untracked: false, version_id: crate::GLOBAL_VERSION_ID.to_string(), } } } ================================================ FILE: packages/engine/src/untracked_state/codec.rs ================================================ use crate::entity_identity::EntityIdentity; use crate::untracked_state::{UntrackedStateRow, UntrackedStateRowRef}; use crate::LixError; const UNTRACKED_STATE_FILE_IDENTIFIER: &str = "LXUS"; pub(crate) fn encode_row_ref(row: UntrackedStateRowRef<'_>) -> Result, LixError> { let entity_id = row.entity_id.as_json_array_text().map_err(|error| { LixError::unknown(format!( "failed to encode untracked-state entity identity: {error}" )) })?; let mut builder = flatbuffers::FlatBufferBuilder::with_capacity(256); let entity_id = builder.create_string(&entity_id); let schema_key = builder.create_string(row.schema_key); let file_id = row.file_id.map(|value| builder.create_string(value)); let snapshot_content = row .snapshot_content .map(|value| builder.create_string(value)); let metadata = row.metadata.map(|value| builder.create_string(value)); let created_at = builder.create_string(row.created_at); let updated_at = builder.create_string(row.updated_at); let version_id = builder.create_string(row.version_id); let root = 
flatbuffer::create_untracked_state_row( &mut builder, &flatbuffer::UntrackedStateRowArgs { entity_id, schema_key, file_id, snapshot_content, metadata, created_at, updated_at, global: row.global, version_id, }, ); builder.finish(root, Some(UNTRACKED_STATE_FILE_IDENTIFIER)); Ok(builder.finished_data().to_vec()) } pub(crate) fn decode_row(bytes: &[u8]) -> Result { if bytes.len() < flatbuffers::SIZE_UOFFSET + flatbuffers::FILE_IDENTIFIER_LENGTH || !flatbuffers::buffer_has_identifier(bytes, UNTRACKED_STATE_FILE_IDENTIFIER, false) { return Err(LixError::new( "LIX_ERROR_UNKNOWN", "failed to decode untracked-state row: invalid FlatBuffers file identifier", )); } let row = flatbuffer::root_as_untracked_state_row(bytes).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("failed to decode untracked-state row: {error}"), ) })?; let entity_id = required_str(row.entity_id(), "entity_id")?; let entity_id = EntityIdentity::from_json_array_text(entity_id).map_err(|error| { LixError::unknown(format!( "failed to decode untracked-state entity identity: {error}" )) })?; Ok(UntrackedStateRow { entity_id, schema_key: required_str(row.schema_key(), "schema_key")?.to_string(), file_id: row.file_id().map(ToString::to_string), snapshot_content: row.snapshot_content().map(ToString::to_string), metadata: row.metadata().map(ToString::to_string), created_at: required_str(row.created_at(), "created_at")?.to_string(), updated_at: required_str(row.updated_at(), "updated_at")?.to_string(), global: row.global(), version_id: required_str(row.version_id(), "version_id")?.to_string(), }) } fn required_str<'a>(value: Option<&'a str>, field: &str) -> Result<&'a str, LixError> { value.ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", format!("failed to decode untracked-state row: missing required field `{field}`"), ) }) } mod flatbuffer { #[derive(Copy, Clone, PartialEq)] pub(super) struct UntrackedStateRow<'a> { table: flatbuffers::Table<'a>, } impl<'a> flatbuffers::Follow<'a> for UntrackedStateRow<'a> { type Inner = UntrackedStateRow<'a>; #[inline] unsafe fn follow(buf: &'a [u8], loc: usize) -> Self::Inner { Self { table: unsafe { flatbuffers::Table::new(buf, loc) }, } } } impl<'a> UntrackedStateRow<'a> { const VT_ENTITY_ID: flatbuffers::VOffsetT = 4; const VT_SCHEMA_KEY: flatbuffers::VOffsetT = 6; const VT_FILE_ID: flatbuffers::VOffsetT = 8; const VT_SNAPSHOT_CONTENT: flatbuffers::VOffsetT = 10; const VT_METADATA: flatbuffers::VOffsetT = 12; const VT_CREATED_AT: flatbuffers::VOffsetT = 14; const VT_UPDATED_AT: flatbuffers::VOffsetT = 16; const VT_GLOBAL: flatbuffers::VOffsetT = 18; const VT_VERSION_ID: flatbuffers::VOffsetT = 20; #[inline] pub(super) fn entity_id(&self) -> Option<&'a str> { unsafe { self.table .get::>(Self::VT_ENTITY_ID, None) } } #[inline] pub(super) fn schema_key(&self) -> Option<&'a str> { unsafe { self.table .get::>(Self::VT_SCHEMA_KEY, None) } } #[inline] pub(super) fn file_id(&self) -> Option<&'a str> { unsafe { self.table .get::>(Self::VT_FILE_ID, None) } } #[inline] pub(super) fn snapshot_content(&self) -> Option<&'a str> { unsafe { self.table .get::>(Self::VT_SNAPSHOT_CONTENT, None) } } #[inline] pub(super) fn metadata(&self) -> Option<&'a str> { unsafe { self.table .get::>(Self::VT_METADATA, None) } } pub(super) fn created_at(&self) -> Option<&'a str> { unsafe { self.table .get::>(Self::VT_CREATED_AT, None) } } #[inline] pub(super) fn updated_at(&self) -> Option<&'a str> { unsafe { self.table .get::>(Self::VT_UPDATED_AT, None) } } #[inline] pub(super) fn global(&self) -> bool { unsafe { 
self.table.get::(Self::VT_GLOBAL, Some(false)) }.unwrap_or(false) } #[inline] pub(super) fn version_id(&self) -> Option<&'a str> { unsafe { self.table .get::>(Self::VT_VERSION_ID, None) } } } impl flatbuffers::Verifiable for UntrackedStateRow<'_> { #[inline] fn run_verifier( verifier: &mut flatbuffers::Verifier, position: usize, ) -> Result<(), flatbuffers::InvalidFlatbuffer> { verifier .visit_table(position)? .visit_field::>( "entity_id", Self::VT_ENTITY_ID, true, )? .visit_field::>( "schema_key", Self::VT_SCHEMA_KEY, true, )? .visit_field::>( "file_id", Self::VT_FILE_ID, false, )? .visit_field::>( "snapshot_content", Self::VT_SNAPSHOT_CONTENT, false, )? .visit_field::>( "metadata", Self::VT_METADATA, false, )? .visit_field::>( "created_at", Self::VT_CREATED_AT, true, )? .visit_field::>( "updated_at", Self::VT_UPDATED_AT, true, )? .visit_field::("global", Self::VT_GLOBAL, false)? .visit_field::>( "version_id", Self::VT_VERSION_ID, true, )? .finish(); Ok(()) } } pub(super) struct UntrackedStateRowArgs<'a> { pub(super) entity_id: flatbuffers::WIPOffset<&'a str>, pub(super) schema_key: flatbuffers::WIPOffset<&'a str>, pub(super) file_id: Option>, pub(super) snapshot_content: Option>, pub(super) metadata: Option>, pub(super) created_at: flatbuffers::WIPOffset<&'a str>, pub(super) updated_at: flatbuffers::WIPOffset<&'a str>, pub(super) global: bool, pub(super) version_id: flatbuffers::WIPOffset<&'a str>, } pub(super) fn create_untracked_state_row<'bldr: 'args, 'args: 'mut_bldr, 'mut_bldr>( builder: &'mut_bldr mut flatbuffers::FlatBufferBuilder<'bldr>, args: &'args UntrackedStateRowArgs<'args>, ) -> flatbuffers::WIPOffset> { let start = builder.start_table(); builder.push_slot_always::>( UntrackedStateRow::VT_VERSION_ID, args.version_id, ); builder.push_slot::(UntrackedStateRow::VT_GLOBAL, args.global, false); builder.push_slot_always::>( UntrackedStateRow::VT_UPDATED_AT, args.updated_at, ); builder.push_slot_always::>( UntrackedStateRow::VT_CREATED_AT, args.created_at, ); if let Some(metadata) = args.metadata { builder.push_slot_always::>( UntrackedStateRow::VT_METADATA, metadata, ); } if let Some(snapshot_content) = args.snapshot_content { builder.push_slot_always::>( UntrackedStateRow::VT_SNAPSHOT_CONTENT, snapshot_content, ); } if let Some(file_id) = args.file_id { builder.push_slot_always::>( UntrackedStateRow::VT_FILE_ID, file_id, ); } builder.push_slot_always::>( UntrackedStateRow::VT_SCHEMA_KEY, args.schema_key, ); builder.push_slot_always::>( UntrackedStateRow::VT_ENTITY_ID, args.entity_id, ); let offset = builder.end_table(start); flatbuffers::WIPOffset::new(offset.value()) } #[inline] pub(super) fn root_as_untracked_state_row( bytes: &[u8], ) -> Result, flatbuffers::InvalidFlatbuffer> { flatbuffers::root::(bytes) } } ================================================ FILE: packages/engine/src/untracked_state/context.rs ================================================ use crate::storage::{StorageReader, StorageWriteSet}; use crate::untracked_state::{ MaterializedUntrackedStateRow, UntrackedStateIdentity, UntrackedStateIdentityRef, UntrackedStateRowRef, UntrackedStateRowRequest, UntrackedStateScanRequest, }; use crate::LixError; /// Durable local overlay excluded from changelog and commit membership. /// /// Untracked state is not change-controlled, but it is still durable local /// state. It is read alongside tracked live state and can override tracked rows /// with the same identity. 
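///
/// # Example
///
/// A minimal usage sketch (illustrative, not taken from callers in this crate),
/// assuming a `store` that implements `StorageReader`, a transaction-local
/// `StorageWriteSet` named `writes`, and placeholder `request` / `rows` values:
///
/// ```ignore
/// let context = UntrackedStateContext::new();
///
/// // Reads go through a reader bound to whichever KV store supplies visibility.
/// let mut reader = context.reader(store);
/// let row = reader.load_row(&request).await?;
///
/// // Writes are staged into the caller's write set; the caller decides when the
/// // write set is applied, and therefore where the commit/rollback boundary sits.
/// let mut writer = context.writer(&mut writes);
/// writer.stage_rows(rows.iter().map(|row| row.as_ref()))?;
/// ```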
#[derive(Clone, Copy)]
pub(crate) struct UntrackedStateContext;

impl UntrackedStateContext {
    pub(crate) fn new() -> Self {
        Self
    }

    /// Creates a reader over a caller-provided KV store.
    ///
    /// The caller decides which KV store supplies visibility for the read.
    pub(crate) fn reader<S>(&self, store: S) -> UntrackedStateStoreReader<S>
    where
        S: StorageReader,
    {
        UntrackedStateStoreReader { store }
    }

    /// Creates a writer over a transaction-local storage write set.
    ///
    /// The context never opens its own transaction; the caller applies the
    /// write set to choose the durable commit or rollback boundary.
    pub(crate) fn writer<'a>(&self, writes: &'a mut StorageWriteSet) -> UntrackedStateWriter<'a> {
        UntrackedStateWriter { writes }
    }
}

/// Store-backed untracked-state reader created by `UntrackedStateContext`.
pub(crate) struct UntrackedStateStoreReader<S> {
    store: S,
}

impl<S> UntrackedStateStoreReader<S>
where
    S: StorageReader,
{
    pub(crate) async fn scan_rows(
        &mut self,
        request: &UntrackedStateScanRequest,
    ) -> Result<Vec<MaterializedUntrackedStateRow>, LixError> {
        crate::untracked_state::storage::scan_rows(&mut self.store, request).await
    }

    pub(crate) async fn load_row(
        &mut self,
        request: &UntrackedStateRowRequest,
    ) -> Result<Option<MaterializedUntrackedStateRow>, LixError> {
        crate::untracked_state::storage::load_row(&mut self.store, request).await
    }

    pub(crate) async fn existing_identities<'a, I>(
        &mut self,
        identities: I,
    ) -> Result<Vec<UntrackedStateIdentity>, LixError>
    where
        I: IntoIterator<Item = UntrackedStateIdentityRef<'a>>,
    {
        crate::untracked_state::storage::existing_identities(&mut self.store, identities).await
    }
}

/// Untracked-state writer over a transaction-local storage write set.
pub(crate) struct UntrackedStateWriter<'a> {
    writes: &'a mut StorageWriteSet,
}

impl UntrackedStateWriter<'_> {
    /// Stages the latest untracked rows for their identities.
    ///
    /// A row with `snapshot_content = None` is treated as removal because
    /// untracked state keeps only the current local value, not tombstones.
    pub(crate) fn stage_rows<'a, I>(&mut self, rows: I) -> Result<(), LixError>
    where
        I: IntoIterator<Item = UntrackedStateRowRef<'a>>,
    {
        crate::untracked_state::storage::stage_rows(self.writes, rows)
    }

    /// Removes untracked rows by exact identity.
    pub(crate) fn stage_delete_rows<'a, I>(&mut self, identities: I)
    where
        I: IntoIterator<Item = UntrackedStateIdentityRef<'a>>,
    {
        crate::untracked_state::storage::stage_delete_rows(self.writes, identities)
    }
}


================================================
FILE: packages/engine/src/untracked_state/materialization.rs
================================================
use crate::untracked_state::{MaterializedUntrackedStateRow, UntrackedStateRow};
use crate::{parse_row_metadata, LixError};

pub(crate) fn materialize_row(
    row: UntrackedStateRow,
    projection: &UntrackedMaterializationProjection,
) -> Result<MaterializedUntrackedStateRow, LixError> {
    let deleted = row.snapshot_content.is_none();
    let snapshot_content = if projection.snapshot_content {
        row.snapshot_content
    } else {
        None
    };
    let metadata = if projection.metadata {
        load_optional_metadata(row.metadata)?
} else { None }; Ok(MaterializedUntrackedStateRow { entity_id: row.entity_id, schema_key: row.schema_key, file_id: row.file_id, snapshot_content, metadata, deleted, created_at: row.created_at, updated_at: row.updated_at, global: row.global, version_id: row.version_id, }) } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(crate) struct UntrackedMaterializationProjection { pub(crate) snapshot_content: bool, pub(crate) metadata: bool, } impl UntrackedMaterializationProjection { pub(crate) fn full() -> Self { Self { snapshot_content: true, metadata: true, } } pub(crate) fn from_columns(columns: &[String]) -> Self { if columns.is_empty() { return Self::full(); } Self { snapshot_content: columns.iter().any(|column| column == "snapshot_content"), metadata: columns.iter().any(|column| column == "metadata"), } } } fn load_optional_metadata(metadata: Option) -> Result, LixError> { let Some(json) = metadata else { return Ok(None); }; parse_row_metadata(&json, "untracked_state metadata").map(Some) } ================================================ FILE: packages/engine/src/untracked_state/mod.rs ================================================ mod codec; mod context; mod materialization; pub(crate) mod storage; mod types; #[allow(unused_imports)] pub(crate) use context::{UntrackedStateContext, UntrackedStateStoreReader, UntrackedStateWriter}; pub(crate) use materialization::{materialize_row, UntrackedMaterializationProjection}; #[allow(unused_imports)] pub(crate) use types::{ MaterializedUntrackedStateRow, UntrackedStateFilter, UntrackedStateIdentity, UntrackedStateIdentityRef, UntrackedStateProjection, UntrackedStateRow, UntrackedStateRowRef, UntrackedStateRowRequest, UntrackedStateScanRequest, }; ================================================ FILE: packages/engine/src/untracked_state/storage.rs ================================================ use crate::storage::KvScanRange; use crate::storage::{KvGetGroup, KvGetRequest, KvScanRequest, StorageReader, StorageWriteSet}; use crate::untracked_state::{ MaterializedUntrackedStateRow, UntrackedMaterializationProjection, UntrackedStateIdentity, UntrackedStateIdentityRef, UntrackedStateRow, UntrackedStateRowRef, UntrackedStateRowRequest, UntrackedStateScanRequest, }; use crate::{LixError, NullableKeyFilter}; pub(super) const UNTRACKED_STATE_ROW_NAMESPACE: &str = "untracked_state.row"; pub(crate) async fn scan_rows( store: &mut impl StorageReader, request: &UntrackedStateScanRequest, ) -> Result, LixError> { let mut rows = scan_all_canonical_rows(store).await?; rows.retain(|row| row_matches_scan(row, request)); if let Some(limit) = request.limit { rows.truncate(limit); } let projection = UntrackedMaterializationProjection::from_columns(&request.projection.columns); let mut materialized = Vec::with_capacity(rows.len()); for row in rows { materialized.push(crate::untracked_state::materialize_row(row, &projection)?); } Ok(materialized) } pub(crate) async fn load_row( store: &mut impl StorageReader, request: &UntrackedStateRowRequest, ) -> Result, LixError> { let Some(identity) = identity_from_request(request) else { return Ok(None); }; let bytes = store .get_values(KvGetRequest { groups: vec![KvGetGroup { namespace: UNTRACKED_STATE_ROW_NAMESPACE.to_string(), keys: vec![encode_untracked_state_row_key(&identity)], }], }) .await? 
.groups .into_iter() .next() .and_then(|group| group.single_value_owned()); let Some(bytes) = bytes else { return Ok(None); }; let row = crate::untracked_state::codec::decode_row(&bytes)?; crate::untracked_state::materialize_row(row, &UntrackedMaterializationProjection::full()) .map(Some) } pub(super) async fn existing_identities<'a>( store: &mut (impl StorageReader + ?Sized), identities: impl IntoIterator>, ) -> Result, LixError> { let mut candidates = identities .into_iter() .map(|identity| { let owned = UntrackedStateIdentity { version_id: identity.version_id.to_string(), schema_key: identity.schema_key.to_string(), entity_id: identity.entity_id.clone(), file_id: identity.file_id.map(str::to_string), }; let key = encode_untracked_state_row_key_ref(owned.as_ref()); (key, owned) }) .collect::>(); candidates.sort_by(|(left, _), (right, _)| left.cmp(right)); candidates.dedup_by(|(left, _), (right, _)| left == right); if candidates.is_empty() { return Ok(Vec::new()); } let keys = candidates .iter() .map(|(key, _)| key.clone()) .collect::>(); let result = store .exists_many(KvGetRequest { groups: vec![KvGetGroup { namespace: UNTRACKED_STATE_ROW_NAMESPACE.to_string(), keys, }], }) .await?; let group = result.groups.into_iter().next().ok_or_else(|| { LixError::new( LixError::CODE_INTERNAL_ERROR, "untracked identity existence probe returned no result group", ) })?; if group.exists.len() != candidates.len() { return Err(LixError::new( LixError::CODE_INTERNAL_ERROR, format!( "untracked identity existence probe returned {} results for {} requested keys", group.exists.len(), candidates.len() ), )); } Ok(candidates .into_iter() .zip(group.exists) .filter_map(|((_, identity), exists)| exists.then_some(identity)) .collect()) } pub(crate) fn stage_rows<'a, I>(writes: &mut StorageWriteSet, rows: I) -> Result<(), LixError> where I: IntoIterator>, { for row in rows { if row.snapshot_content.is_none() { writes.delete( UNTRACKED_STATE_ROW_NAMESPACE, encode_untracked_state_row_key_ref(row.into()), ); } else { writes.put( UNTRACKED_STATE_ROW_NAMESPACE, encode_untracked_state_row_key_ref(row.into()), crate::untracked_state::codec::encode_row_ref(row)?, ); } } Ok(()) } pub(crate) fn stage_delete_rows<'a, I>(writes: &mut StorageWriteSet, identities: I) where I: IntoIterator>, { for identity in identities { writes.delete( UNTRACKED_STATE_ROW_NAMESPACE, encode_untracked_state_row_key_ref(identity), ); } } async fn scan_all_canonical_rows( store: &mut impl StorageReader, ) -> Result, LixError> { let page = store .scan_values(KvScanRequest { namespace: UNTRACKED_STATE_ROW_NAMESPACE.to_string(), range: KvScanRange::prefix(Vec::new()), after: None, limit: usize::MAX, }) .await?; page.values .iter() .map(crate::untracked_state::codec::decode_row) .collect() } fn row_matches_scan(row: &UntrackedStateRow, request: &UntrackedStateScanRequest) -> bool { (request.filter.schema_keys.is_empty() || request.filter.schema_keys.contains(&row.schema_key)) && (request.filter.entity_ids.is_empty() || request.filter.entity_ids.contains(&row.entity_id)) && (request.filter.version_ids.is_empty() || request.filter.version_ids.contains(&row.version_id)) && nullable_matches_filters(&row.file_id, &request.filter.file_ids) } fn nullable_matches_filters(value: &Option, filters: &[NullableKeyFilter]) -> bool { filters.is_empty() || filters.iter().any(|filter| match filter { NullableKeyFilter::Any => true, NullableKeyFilter::Null => value.is_none(), NullableKeyFilter::Value(expected) => value.as_ref() == Some(expected), }) } fn 
identity_from_request(request: &UntrackedStateRowRequest) -> Option { let file_id = match &request.file_id { NullableKeyFilter::Null => None, NullableKeyFilter::Value(value) => Some(value.clone()), NullableKeyFilter::Any => return None, }; Some(UntrackedStateIdentity { version_id: request.version_id.clone(), schema_key: request.schema_key.clone(), entity_id: request.entity_id.clone(), file_id, }) } fn encode_untracked_state_row_key(identity: &UntrackedStateIdentity) -> Vec { encode_untracked_state_row_key_ref(identity.as_ref()) } pub(super) fn encode_untracked_state_row_key_ref( identity: UntrackedStateIdentityRef<'_>, ) -> Vec { let mut out = Vec::new(); push_component(&mut out, identity.version_id); push_component(&mut out, identity.schema_key); let entity_id = identity .entity_id .as_json_array_text() .expect("untracked-state identity should project"); push_component(&mut out, &entity_id); match identity.file_id { Some(file_id) => { out.push(1); push_component(&mut out, file_id); } None => out.push(0), } out } fn push_component(out: &mut Vec, value: &str) { let bytes = value.as_bytes(); out.extend_from_slice(&(bytes.len() as u32).to_be_bytes()); out.extend_from_slice(bytes); } #[cfg(test)] mod tests { use std::sync::Arc; use super::*; use crate::backend::testing::UnitTestBackend; use crate::storage::{StorageContext, StorageWriteTransaction}; use crate::untracked_state::UntrackedStateContext; async fn write_materialized_rows_to_store( context: &UntrackedStateContext, store: &mut (impl StorageWriteTransaction + ?Sized), rows: &[MaterializedUntrackedStateRow], ) { let mut writes = StorageWriteSet::new(); let canonical_rows = rows .iter() .map(|row| crate::test_support::untracked_state_row_from_materialized(&mut writes, row)) .collect::, _>>() .expect("rows should canonicalize"); context .writer(&mut writes) .stage_rows(canonical_rows.iter().map(|row| row.as_ref())) .expect("rows should write"); writes.apply(store).await.expect("rows should apply"); } #[tokio::test] async fn write_and_load_roundtrips() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let context = UntrackedStateContext::new(); let row = untracked_row("global", "lix_key_value", "ui-tab"); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); write_materialized_rows_to_store( &context, transaction.as_mut(), std::slice::from_ref(&row), ) .await; transaction.commit().await.expect("commit should succeed"); let loaded = { let mut reader = context.reader(storage.clone()); reader .load_row(&UntrackedStateRowRequest { schema_key: "lix_key_value".to_string(), version_id: "global".to_string(), entity_id: crate::entity_identity::EntityIdentity::single("ui-tab"), file_id: NullableKeyFilter::Null, }) .await } .expect("load should succeed"); assert_eq!(loaded, Some(row)); } #[tokio::test] async fn scan_filters_by_schema_and_version() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let context = UntrackedStateContext::new(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); write_materialized_rows_to_store( &context, transaction.as_mut(), &[ untracked_row("global", "lix_key_value", "global-ui"), untracked_row("version-a", "lix_key_value", "version-ui"), untracked_row("version-a", "other_schema", "other"), ], ) .await; transaction.commit().await.expect("commit should succeed"); let rows = { let mut reader = context.reader(storage.clone()); 
reader .scan_rows(&UntrackedStateScanRequest { filter: crate::untracked_state::UntrackedStateFilter { schema_keys: vec!["lix_key_value".to_string()], version_ids: vec!["version-a".to_string()], ..Default::default() }, ..Default::default() }) .await } .expect("scan should succeed"); assert_eq!(rows.len(), 1); assert_eq!( rows[0].entity_id, crate::entity_identity::EntityIdentity::single("version-ui") ); } #[tokio::test] async fn delete_removes_row() { let backend = Arc::new(UnitTestBackend::new()); let storage = StorageContext::new(backend.clone()); let context = UntrackedStateContext::new(); let row = untracked_row("global", "lix_key_value", "ui-tab"); let identity = UntrackedStateIdentity { version_id: row.version_id.clone(), schema_key: row.schema_key.clone(), entity_id: row.entity_id.clone(), file_id: row.file_id.clone(), }; let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); let canonical_row = crate::test_support::untracked_state_row_from_materialized(&mut writes, &row) .expect("row should canonicalize"); let mut writer = context.writer(&mut writes); writer .stage_rows(std::iter::once(canonical_row.as_ref())) .expect("write should succeed"); writer.stage_delete_rows(std::iter::once(identity.as_ref())); writes .apply(&mut transaction.as_mut()) .await .expect("writes should apply"); transaction.commit().await.expect("commit should succeed"); let loaded = { let mut reader = context.reader(storage.clone()); reader .load_row(&UntrackedStateRowRequest { schema_key: "lix_key_value".to_string(), version_id: "global".to_string(), entity_id: crate::entity_identity::EntityIdentity::single("ui-tab"), file_id: NullableKeyFilter::Null, }) .await } .expect("load should succeed"); assert_eq!(loaded, None); } fn untracked_row( version_id: &str, schema_key: &str, entity_id: &str, ) -> MaterializedUntrackedStateRow { MaterializedUntrackedStateRow { entity_id: crate::entity_identity::EntityIdentity::single(entity_id), schema_key: schema_key.to_string(), file_id: None, snapshot_content: Some(format!("{{\"key\":\"{}\",\"value\":\"value\"}}", entity_id)), metadata: None, deleted: false, created_at: "2026-01-01T00:00:00Z".to_string(), updated_at: "2026-01-01T00:00:00Z".to_string(), global: version_id == "global", version_id: version_id.to_string(), } } } ================================================ FILE: packages/engine/src/untracked_state/types.rs ================================================ use crate::entity_identity::EntityIdentity; use crate::NullableKeyFilter; /// Durable local row excluded from changelog and commit membership. /// /// This is the canonical physical shape: identity/header fields are stored /// directly, and mutable JSON payloads are stored inline in the sidecar row. 
#[derive(Debug, Clone, PartialEq, Eq)] pub(crate) struct UntrackedStateRow { pub(crate) entity_id: EntityIdentity, pub(crate) schema_key: String, pub(crate) file_id: Option, pub(crate) snapshot_content: Option, pub(crate) metadata: Option, pub(crate) created_at: String, pub(crate) updated_at: String, pub(crate) global: bool, pub(crate) version_id: String, } impl UntrackedStateRow { pub(crate) fn as_ref(&self) -> UntrackedStateRowRef<'_> { UntrackedStateRowRef { entity_id: &self.entity_id, schema_key: &self.schema_key, file_id: self.file_id.as_deref(), snapshot_content: self.snapshot_content.as_deref(), metadata: self.metadata.as_deref(), created_at: &self.created_at, updated_at: &self.updated_at, global: self.global, version_id: &self.version_id, } } } /// Zero-copy view of untracked-state write row. /// /// Untracked state owns this storage-facing write shape. Callers adapt into it /// without making untracked_state depend on transaction or live-state types. #[derive(Debug, Clone, Copy)] pub(crate) struct UntrackedStateRowRef<'a> { pub(crate) entity_id: &'a EntityIdentity, pub(crate) schema_key: &'a str, pub(crate) file_id: Option<&'a str>, pub(crate) snapshot_content: Option<&'a str>, pub(crate) metadata: Option<&'a str>, pub(crate) created_at: &'a str, pub(crate) updated_at: &'a str, pub(crate) global: bool, pub(crate) version_id: &'a str, } /// Hydrated boundary shape for callers that still work with JSON payloads. #[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)] pub(crate) struct MaterializedUntrackedStateRow { pub(crate) entity_id: EntityIdentity, pub(crate) schema_key: String, pub(crate) file_id: Option, pub(crate) snapshot_content: Option, pub(crate) metadata: Option, pub(crate) deleted: bool, pub(crate) created_at: String, pub(crate) updated_at: String, pub(crate) global: bool, pub(crate) version_id: String, } /// Stable identity for one local untracked overlay row. #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)] pub(crate) struct UntrackedStateIdentity { pub(crate) version_id: String, pub(crate) schema_key: String, pub(crate) entity_id: EntityIdentity, pub(crate) file_id: Option, } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub(crate) struct UntrackedStateIdentityRef<'a> { pub(crate) version_id: &'a str, pub(crate) schema_key: &'a str, pub(crate) entity_id: &'a EntityIdentity, pub(crate) file_id: Option<&'a str>, } impl UntrackedStateIdentity { pub(crate) fn as_ref(&self) -> UntrackedStateIdentityRef<'_> { UntrackedStateIdentityRef { version_id: &self.version_id, schema_key: &self.schema_key, entity_id: &self.entity_id, file_id: self.file_id.as_deref(), } } } impl<'a> From> for UntrackedStateIdentityRef<'a> { fn from(row: UntrackedStateRowRef<'a>) -> Self { Self { version_id: row.version_id, schema_key: row.schema_key, entity_id: row.entity_id, file_id: row.file_id, } } } /// Identity-centered filter for untracked local overlay scans. #[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize, Default)] pub(crate) struct UntrackedStateFilter { #[serde(default)] pub(crate) schema_keys: Vec, #[serde(default)] pub(crate) entity_ids: Vec, #[serde(default)] pub(crate) version_ids: Vec, #[serde(default)] pub(crate) file_ids: Vec>, } /// Requested property set for an untracked-state scan. #[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize, Default)] pub(crate) struct UntrackedStateProjection { #[serde(default)] pub(crate) columns: Vec, } /// Scan request for local untracked overlay rows. 
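///
/// A filter left at its `Default` matches everything; each populated filter list
/// narrows the scan. Illustrative sketch, mirroring the scan test in `storage.rs`:
///
/// ```ignore
/// let request = UntrackedStateScanRequest {
///     filter: UntrackedStateFilter {
///         schema_keys: vec!["lix_key_value".to_string()],
///         version_ids: vec!["version-a".to_string()],
///         ..Default::default()
///     },
///     ..Default::default()
/// };
/// ```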
#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize, Default)]
pub(crate) struct UntrackedStateScanRequest {
    #[serde(default)]
    pub(crate) filter: UntrackedStateFilter,
    #[serde(default)]
    pub(crate) projection: UntrackedStateProjection,
    #[serde(default)]
    pub(crate) limit: Option<usize>,
}

/// Point lookup request for one untracked local overlay row.
#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct UntrackedStateRowRequest {
    pub(crate) schema_key: String,
    pub(crate) version_id: String,
    pub(crate) entity_id: EntityIdentity,
    pub(crate) file_id: NullableKeyFilter,
}


================================================
FILE: packages/engine/src/version/context.rs
================================================
use std::sync::Arc;

use crate::storage::{StorageReader, StorageWriteSet};
use crate::untracked_state::{UntrackedStateContext, UntrackedStateRow};

use super::refs::VersionRefContext;
use super::VersionRefReader;

/// Aggregate entrypoint for version-domain services.
///
/// Today this owns the moving-ref subsystem. Descriptor helpers are re-exported
/// by `version`; future version APIs can grow here without making session or
/// SQL code depend directly on ref storage details.
pub(crate) struct VersionContext {
    refs: Arc<VersionRefContext>,
}

impl VersionContext {
    pub(crate) fn new(untracked_state: Arc<UntrackedStateContext>) -> Self {
        Self {
            refs: Arc::new(VersionRefContext::new(untracked_state)),
        }
    }

    /// Creates a version-ref reader over a caller-provided KV store.
    pub(crate) fn ref_reader<S>(&self, store: S) -> impl VersionRefReader
    where
        S: StorageReader + Send,
    {
        self.refs.reader(store)
    }

    pub(crate) fn stage_canonical_ref_rows(
        &self,
        writes: &mut StorageWriteSet,
        rows: &[UntrackedStateRow],
    ) -> Result<(), crate::LixError> {
        self.refs.writer(writes).stage_rows(rows)
    }
}


================================================
FILE: packages/engine/src/version/lifecycle.rs
================================================
use crate::commit_graph::{CommitGraphCommit, CommitGraphReader};
use crate::common::validate_non_empty_identity_value;
use crate::LixError;

use super::{VersionHead, VersionRefReader};

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub(crate) enum VersionOperation {
    CreateVersion,
    SwitchVersion,
    MergeVersion,
    MergeVersionPreview,
    LoadWorkspaceSelector,
}

impl VersionOperation {
    pub(crate) fn label(self) -> &'static str {
        match self {
            Self::CreateVersion => "create_version",
            Self::SwitchVersion => "switch_version",
            Self::MergeVersion => "merge_version",
            Self::MergeVersionPreview => "merge_version_preview",
            Self::LoadWorkspaceSelector => "load_workspace_version_id",
        }
    }
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub(crate) enum VersionReferenceRole {
    Source,
    Target,
    WorkspaceSelector,
    CommitSource,
}

impl VersionReferenceRole {
    pub(crate) fn label(self) -> &'static str {
        match self {
            Self::Source => "source",
            Self::Target => "target",
            Self::WorkspaceSelector => "workspace_selector",
            Self::CommitSource => "commit_source",
        }
    }
}

/// Shared domain service for resolving public version references.
///
/// Built-in version schemas describe row shape. This service owns semantic
/// ref validation: non-empty ids, global sentinel handling, and missing refs.
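///
/// Illustrative sketch of resolving a target version before a switch, assuming a
/// caller-provided `refs` value that implements `VersionRefReader` (mirrors the
/// `require_existing_ref_returns_head` test below):
///
/// ```ignore
/// let lifecycle = VersionLifecycle::new(&refs);
/// let head = lifecycle
///     .require_existing_ref(
///         "version-a",
///         VersionOperation::SwitchVersion,
///         VersionReferenceRole::Target,
///     )
///     .await?;
/// let commit_id = head.commit_id;
/// ```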
pub(crate) struct VersionLifecycle<'a> { refs: &'a dyn VersionRefReader, } impl<'a> VersionLifecycle<'a> { pub(crate) fn new(refs: &'a dyn VersionRefReader) -> Self { Self { refs } } pub(crate) fn require_non_empty_id( version_id: &str, operation: VersionOperation, role: VersionReferenceRole, ) -> Result<(), LixError> { require_non_empty_public_id("version_id", version_id, operation, role) } pub(crate) async fn require_existing_commit( commit_graph: &mut dyn CommitGraphReader, commit_id: &str, operation: VersionOperation, role: VersionReferenceRole, ) -> Result { require_non_empty_public_id("commit_id", commit_id, operation, role)?; commit_graph .load_commit(commit_id) .await? .ok_or_else(|| LixError::version_not_found(commit_id, operation.label(), role.label())) } pub(crate) async fn require_existing_ref( &self, version_id: &str, operation: VersionOperation, role: VersionReferenceRole, ) -> Result { Self::require_non_empty_id(version_id, operation, role)?; self.require_existing_stored_ref(version_id, operation, role) .await } pub(crate) async fn require_existing_commit_id( &self, version_id: &str, operation: VersionOperation, role: VersionReferenceRole, ) -> Result { Ok(self .require_existing_ref(version_id, operation, role) .await? .commit_id) } async fn require_existing_stored_ref( &self, version_id: &str, operation: VersionOperation, role: VersionReferenceRole, ) -> Result { self.refs .load_head(version_id) .await? .ok_or_else(|| LixError::version_not_found(version_id, operation.label(), role.label())) } } fn require_non_empty_public_id( label: &str, value: &str, operation: VersionOperation, role: VersionReferenceRole, ) -> Result<(), LixError> { validate_non_empty_identity_value(label, value) .map(|_| ()) .map_err(|_| { LixError::new( LixError::CODE_INVALID_PARAM, format!( "{} {} {label} must be non-empty", operation.label(), role.label() ), ) }) } #[cfg(test)] mod tests { use async_trait::async_trait; use super::*; #[tokio::test] async fn require_existing_ref_returns_head() { let reader = RowsVersionRefReader::new(vec![VersionHead { version_id: "version-a".to_string(), commit_id: "commit-a".to_string(), }]); let lifecycle = VersionLifecycle::new(&reader); let head = lifecycle .require_existing_ref( "version-a", VersionOperation::SwitchVersion, VersionReferenceRole::Target, ) .await .expect("version should resolve"); assert_eq!(head.commit_id, "commit-a"); } #[tokio::test] async fn require_existing_ref_rejects_empty_id_as_invalid_param() { let reader = RowsVersionRefReader::new(Vec::new()); let lifecycle = VersionLifecycle::new(&reader); let error = lifecycle .require_existing_ref( "", VersionOperation::SwitchVersion, VersionReferenceRole::Target, ) .await .expect_err("empty version id should be rejected before lookup"); assert_eq!(error.code, LixError::CODE_INVALID_PARAM); } #[tokio::test] async fn require_existing_ref_reports_missing_version() { let reader = RowsVersionRefReader::new(Vec::new()); let lifecycle = VersionLifecycle::new(&reader); let error = lifecycle .require_existing_ref( "missing", VersionOperation::SwitchVersion, VersionReferenceRole::Target, ) .await .expect_err("missing version should be rejected"); assert_eq!(error.code, LixError::CODE_VERSION_NOT_FOUND); } struct RowsVersionRefReader { heads: Vec, } impl RowsVersionRefReader { fn new(heads: Vec) -> Self { Self { heads } } } #[async_trait] impl VersionRefReader for RowsVersionRefReader { async fn load_head(&self, version_id: &str) -> Result, LixError> { Ok(self .heads .iter() .find(|head| head.version_id == 
version_id) .cloned()) } async fn scan_heads(&self) -> Result, LixError> { Ok(self.heads.clone()) } } } ================================================ FILE: packages/engine/src/version/mod.rs ================================================ mod context; mod lifecycle; mod refs; mod stage_rows; mod types; pub(crate) use context::VersionContext; pub(crate) use lifecycle::{VersionLifecycle, VersionOperation, VersionReferenceRole}; pub(crate) use stage_rows::{ version_descriptor_stage_row, version_descriptor_tombstone_row, version_ref_stage_row, version_ref_tombstone_row, VERSION_DESCRIPTOR_SCHEMA_KEY, VERSION_REF_SCHEMA_KEY, }; pub(crate) use types::{VersionHead, VersionRefReader}; ================================================ FILE: packages/engine/src/version/refs.rs ================================================ use std::sync::Arc; use tokio::sync::Mutex; use crate::entity_identity::EntityIdentity; use crate::storage::{StorageReader, StorageWriteSet}; use crate::untracked_state::{ MaterializedUntrackedStateRow, UntrackedStateContext, UntrackedStateFilter, UntrackedStateRow, UntrackedStateRowRequest, UntrackedStateScanRequest, }; use crate::version::VERSION_REF_SCHEMA_KEY; use crate::version::{VersionHead, VersionRefReader}; use crate::GLOBAL_VERSION_ID; use crate::{LixError, NullableKeyFilter}; /// Typed access to moving version heads stored in untracked state. /// /// Version refs are one of the inputs used by live_state visibility, so this /// context deliberately bypasses live_state and reads the underlying untracked /// rows directly. That keeps the dependency acyclic: /// untracked_state -> version_ref -> live_state. pub(super) struct VersionRefContext { untracked_state: Arc, } impl VersionRefContext { pub(super) fn new(untracked_state: Arc) -> Self { Self { untracked_state } } /// Creates a version-ref reader over a caller-provided KV store. pub(super) fn reader(&self, store: S) -> VersionRefStoreReader where S: StorageReader, { VersionRefStoreReader { untracked_state: Arc::clone(&self.untracked_state), store: Mutex::new(store), } } /// Creates a version-ref writer over a transaction-local storage write set. pub(super) fn writer<'a>(&self, writes: &'a mut StorageWriteSet) -> VersionRefWriter<'a> { VersionRefWriter { untracked_state: Arc::clone(&self.untracked_state), writes, } } } /// Read side for version heads. pub(super) struct VersionRefStoreReader where S: StorageReader, { untracked_state: Arc, store: Mutex, } impl VersionRefStoreReader where S: StorageReader, { pub(crate) async fn load_head( &self, version_id: &str, ) -> Result, LixError> { let mut store = self.store.lock().await; let Some(row) = self .untracked_state .reader(&mut *store as &mut dyn StorageReader) .load_row(&UntrackedStateRowRequest { schema_key: VERSION_REF_SCHEMA_KEY.to_string(), version_id: GLOBAL_VERSION_ID.to_string(), entity_id: EntityIdentity::single(version_id), file_id: NullableKeyFilter::Null, }) .await? 
else { return Ok(None); }; decode_version_head(version_id, &row) } pub(crate) async fn load_head_commit_id( &self, version_id: &str, ) -> Result, LixError> { Ok(self.load_head(version_id).await?.map(|head| head.commit_id)) } pub(crate) async fn scan_heads(&self) -> Result, LixError> { let mut store = self.store.lock().await; let rows = self .untracked_state .reader(&mut *store as &mut dyn StorageReader) .scan_rows(&UntrackedStateScanRequest { filter: UntrackedStateFilter { schema_keys: vec![VERSION_REF_SCHEMA_KEY.to_string()], version_ids: vec![GLOBAL_VERSION_ID.to_string()], ..UntrackedStateFilter::default() }, ..UntrackedStateScanRequest::default() }) .await?; let mut heads = rows .iter() .map(|row| { let version_id = row.entity_id.as_single_string_owned()?; decode_version_head(&version_id, row) }) .collect::, _>>()? .into_iter() .flatten() .collect::>(); heads.sort_by(|left, right| left.version_id.cmp(&right.version_id)); Ok(heads) } } #[async_trait::async_trait] impl VersionRefReader for VersionRefStoreReader where S: StorageReader + Send, { async fn load_head(&self, version_id: &str) -> Result, LixError> { VersionRefStoreReader::load_head(self, version_id).await } async fn load_head_commit_id(&self, version_id: &str) -> Result, LixError> { VersionRefStoreReader::load_head_commit_id(self, version_id).await } async fn scan_heads(&self) -> Result, LixError> { VersionRefStoreReader::scan_heads(self).await } } /// Write side for moving version heads. pub(super) struct VersionRefWriter<'a> { untracked_state: Arc, writes: &'a mut StorageWriteSet, } impl VersionRefWriter<'_> { pub(crate) fn stage_rows(&mut self, rows: &[UntrackedStateRow]) -> Result<(), LixError> { self.untracked_state .writer(self.writes) .stage_rows(rows.iter().map(|row| row.as_ref())) } } fn decode_version_head( requested_version_id: &str, row: &MaterializedUntrackedStateRow, ) -> Result, LixError> { let Some(snapshot_content) = row.snapshot_content.as_deref() else { return Ok(None); }; let snapshot = serde_json::from_str::(snapshot_content).map_err(|error| { LixError::new( "LIX_ERROR_UNKNOWN", format!("engine version-ref snapshot parse failed: {error}"), ) })?; let commit_id = snapshot .get("commit_id") .and_then(serde_json::Value::as_str) .ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", format!("version ref for version '{requested_version_id}' is missing commit_id"), ) })?; Ok(Some(VersionHead { version_id: requested_version_id.to_string(), commit_id: commit_id.to_string(), })) } #[cfg(test)] mod tests { use std::sync::Arc; use crate::backend::testing::UnitTestBackend; use crate::storage::{StorageContext, StorageWriteSet}; use crate::transaction::prepare_version_ref_row; use crate::untracked_state::{UntrackedStateContext, UntrackedStateRowRequest}; use super::*; #[tokio::test] async fn load_head_returns_none_when_missing() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let version_ref = test_version_ref(); let head = version_ref .reader(storage) .load_head("missing-version") .await .expect("missing version ref should load cleanly"); assert_eq!(head, None); } #[tokio::test] async fn advance_head_writes_untracked_global_ref() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let version_ref = VersionRefContext::new(Arc::new(UntrackedStateContext::new())); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); stage_version_head( &version_ref, &mut writes, "version-a", "commit-a", 
"2026-01-01T00:00:00Z", ) .expect("version head should advance"); writes .apply(&mut transaction.as_mut()) .await .expect("version head should apply"); transaction .commit() .await .expect("transaction should commit"); let head = version_ref .reader(storage.clone()) .load_head("version-a") .await .expect("version head should load") .expect("version head should exist"); assert_eq!(head.version_id, "version-a"); assert_eq!(head.commit_id, "commit-a"); let mut reader = UntrackedStateContext::new().reader(storage); let row = reader .load_row(&UntrackedStateRowRequest { schema_key: VERSION_REF_SCHEMA_KEY.to_string(), version_id: GLOBAL_VERSION_ID.to_string(), entity_id: crate::entity_identity::EntityIdentity::single("version-a"), file_id: NullableKeyFilter::Null, }) .await .expect("version-ref row should load") .expect("version-ref row should exist"); assert!(row.global); assert_eq!(row.created_at, "2026-01-01T00:00:00Z"); assert_eq!(row.updated_at, "2026-01-01T00:00:00Z"); } #[tokio::test] async fn scan_heads_returns_sorted_version_heads() { let storage = StorageContext::new(Arc::new(UnitTestBackend::new())); let version_ref = test_version_ref(); let mut transaction = storage .begin_write_transaction() .await .expect("transaction should open"); let mut writes = StorageWriteSet::new(); stage_version_head( &version_ref, &mut writes, "version-b", "commit-b", "2026-01-01T00:00:00Z", ) .expect("version-b should advance"); stage_version_head( &version_ref, &mut writes, "version-a", "commit-a", "2026-01-01T00:00:00Z", ) .expect("version-a should advance"); writes .apply(&mut transaction.as_mut()) .await .expect("version heads should apply"); transaction .commit() .await .expect("transaction should commit"); let heads = version_ref .reader(storage) .scan_heads() .await .expect("heads should scan"); assert_eq!( heads, vec![ VersionHead { version_id: "version-a".to_string(), commit_id: "commit-a".to_string(), }, VersionHead { version_id: "version-b".to_string(), commit_id: "commit-b".to_string(), }, ] ); } fn test_version_ref() -> VersionRefContext { VersionRefContext::new(Arc::new(UntrackedStateContext::new())) } fn stage_version_head( version_ref: &VersionRefContext, writes: &mut StorageWriteSet, version_id: &str, commit_id: &str, timestamp: &str, ) -> Result<(), LixError> { let canonical_row = prepare_version_ref_row(version_id, commit_id, timestamp)?; version_ref.writer(writes).stage_rows(&[canonical_row.row]) } } ================================================ FILE: packages/engine/src/version/stage_rows.rs ================================================ use serde_json::json; use crate::entity_identity::EntityIdentity; use crate::transaction::types::{TransactionJson, TransactionWriteRow}; use crate::GLOBAL_VERSION_ID; pub(crate) const VERSION_DESCRIPTOR_SCHEMA_KEY: &str = "lix_version_descriptor"; pub(crate) const VERSION_REF_SCHEMA_KEY: &str = "lix_version_ref"; pub(crate) fn version_descriptor_stage_row( version_id: &str, name: &str, hidden: bool, ) -> TransactionWriteRow { TransactionWriteRow { entity_id: Some(EntityIdentity::single(version_id)), schema_key: VERSION_DESCRIPTOR_SCHEMA_KEY.to_string(), file_id: None, snapshot: Some(TransactionJson::from_value_unchecked(json!({ "id": version_id, "name": name, "hidden": hidden, }))), metadata: None, origin: None, created_at: None, updated_at: None, global: true, change_id: None, commit_id: None, untracked: false, version_id: GLOBAL_VERSION_ID.to_string(), } } pub(crate) fn version_ref_stage_row(version_id: &str, commit_id: &str) -> 
TransactionWriteRow {
    TransactionWriteRow {
        entity_id: Some(EntityIdentity::single(version_id)),
        schema_key: VERSION_REF_SCHEMA_KEY.to_string(),
        file_id: None,
        snapshot: Some(TransactionJson::from_value_unchecked(json!({
            "id": version_id,
            "commit_id": commit_id,
        }))),
        metadata: None,
        origin: None,
        created_at: None,
        updated_at: None,
        global: true,
        change_id: None,
        commit_id: None,
        untracked: true,
        version_id: GLOBAL_VERSION_ID.to_string(),
    }
}

pub(crate) fn version_descriptor_tombstone_row(version_id: &str) -> TransactionWriteRow {
    let mut row = version_descriptor_stage_row(version_id, "", false);
    row.snapshot = None;
    row
}

pub(crate) fn version_ref_tombstone_row(version_id: &str) -> TransactionWriteRow {
    let mut row = version_ref_stage_row(version_id, "");
    row.snapshot = None;
    row
}

================================================
FILE: packages/engine/src/version/types.rs
================================================
/// Current changelog head for a version.
#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) struct VersionHead {
    pub(crate) version_id: String,
    pub(crate) commit_id: String,
}

/// Typed reader for moving version heads.
#[async_trait::async_trait]
pub(crate) trait VersionRefReader: Send + Sync {
    async fn load_head(&self, version_id: &str) -> Result<Option<VersionHead>, crate::LixError>;

    async fn load_head_commit_id(
        &self,
        version_id: &str,
    ) -> Result<Option<String>, crate::LixError> {
        Ok(self.load_head(version_id).await?.map(|head| head.commit_id))
    }

    async fn scan_heads(&self) -> Result<Vec<VersionHead>, crate::LixError>;
}

================================================
FILE: packages/engine/src/wasm/mod.rs
================================================
use std::sync::Arc;

use async_trait::async_trait;

use crate::LixError;

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct WasmLimits {
    pub max_memory_bytes: u64,
    pub max_fuel: Option<u64>,
    pub timeout_ms: Option<u64>,
}

impl Default for WasmLimits {
    fn default() -> Self {
        Self {
            max_memory_bytes: 64 * 1024 * 1024,
            max_fuel: None,
            timeout_ms: None,
        }
    }
}

#[async_trait(?Send)]
pub trait WasmRuntime: Send + Sync {
    async fn init_component(
        &self,
        bytes: Vec<u8>,
        limits: WasmLimits,
    ) -> Result<Arc<dyn WasmComponentInstance>, LixError>;
}

#[async_trait(?Send)]
pub trait WasmComponentInstance: Send + Sync {
    async fn call(&self, export: &str, input: &[u8]) -> Result<Vec<u8>, LixError>;

    async fn close(&self) -> Result<(), LixError> {
        Ok(())
    }
}

#[derive(Debug, Default, Clone, Copy)]
pub struct NoopWasmRuntime;

#[async_trait(?Send)]
impl WasmRuntime for NoopWasmRuntime {
    async fn init_component(
        &self,
        _bytes: Vec<u8>,
        _limits: WasmLimits,
    ) -> Result<Arc<dyn WasmComponentInstance>, LixError> {
        Err(LixError {
            code: "LIX_ERROR_UNKNOWN".to_string(),
            message: "wasm runtime is required to execute plugins; provide a non-noop runtime"
                .to_string(),
            hint: None,
            details: None,
        })
    }
}

================================================
FILE: packages/engine/tests/branching.rs
================================================
#[macro_use]
#[path = "support/mod.rs"]
mod support;

use lix_engine::Value;
use lix_engine::{
    CreateVersionOptions, Engine, LixError, MergeChangeStats, MergeVersionOptions,
    MergeVersionOutcome, MergeVersionPreviewOptions, SwitchVersionOptions,
};
use serde_json::Value as JsonValue;

simulation_test!(create_version_from_main, |sim| async move {
    let (engine, main, draft) = create_draft_from_main(&sim).await;
    assert_version_descriptor(&main, "draft-version", "Draft").await;
    assert_eq!(
        engine
            .load_version_head_commit_id("draft-version")
            .await
            .expect("draft head should load"),
        Some(sim.initial_commit_id().to_string())
    );
    drop(draft);
    drop(main);
drop(engine); }); simulation_test!(create_version_rejects_existing_id, |sim| async move { let (engine, main, draft) = create_draft_from_main(&sim).await; let error = main .create_version(CreateVersionOptions { id: Some("draft-version".to_string()), name: "Overwritten draft".to_string(), from_commit_id: None, }) .await .expect_err("creating a version with an existing id should fail"); assert_eq!(error.code, "LIX_ERROR_UNIQUE"); assert!( error .to_string() .contains("INSERT would duplicate entity_id"), "error should explain the duplicate version id: {error:?}" ); assert_version_descriptor(&main, "draft-version", "Draft").await; drop(draft); drop(main); drop(engine); }); simulation_test!(create_version_rejects_duplicate_name, |sim| async move { let (engine, main, draft) = create_draft_from_main(&sim).await; let error = main .create_version(CreateVersionOptions { id: Some("duplicate-name-version".to_string()), name: "Draft".to_string(), from_commit_id: None, }) .await .expect_err("creating a version with an existing name should fail"); assert_eq!(error.code, lix_engine::LixError::CODE_UNIQUE); assert!( error.to_string().contains("/name"), "error should explain the duplicate version name: {error:?}" ); drop(draft); drop(main); drop(engine); }); simulation_test!( version_descriptor_delete_via_entity_surface_is_rejected_when_ref_exists, |sim| async move { let (engine, main, _draft) = create_draft_from_main(&sim).await; let error = main .execute( "DELETE FROM lix_version_descriptor WHERE id = 'draft-version'", &[], ) .await .expect_err("descriptor delete through entity surface should fail"); assert_version_pair_delete_restricted(&error); assert_eq!(count_version_descriptors(&main, "draft-version").await, 1); assert_eq!(count_version_refs(&main, "draft-version").await, 1); assert_eq!( engine .load_version_head_commit_id("draft-version") .await .expect("version ref head should still load"), Some(sim.initial_commit_id().to_string()) ); drop(main); drop(engine); } ); simulation_test!( version_descriptor_delete_via_lix_state_is_rejected_when_ref_exists, |sim| async move { let (engine, main, _draft) = create_draft_from_main(&sim).await; let error = main .execute( "DELETE FROM lix_state \ WHERE schema_key = 'lix_version_descriptor' AND entity_id = lix_json('[\"draft-version\"]')", &[], ) .await .expect_err("descriptor delete through lix_state should fail"); assert_version_pair_delete_restricted(&error); assert_eq!(count_version_descriptors(&main, "draft-version").await, 1); assert_eq!(count_version_refs(&main, "draft-version").await, 1); assert_eq!( engine .load_version_head_commit_id("draft-version") .await .expect("version ref head should still load"), Some(sim.initial_commit_id().to_string()) ); drop(main); drop(engine); } ); simulation_test!( version_ref_delete_via_entity_surface_is_rejected_when_descriptor_exists, |sim| async move { let (engine, main, _draft) = create_draft_from_main(&sim).await; let error = main .execute( "DELETE FROM lix_version_ref WHERE id = 'draft-version'", &[], ) .await .expect_err("ref delete through entity surface should fail"); assert_version_pair_delete_restricted(&error); assert_eq!(count_version_descriptors(&main, "draft-version").await, 1); assert_eq!(count_version_refs(&main, "draft-version").await, 1); assert_eq!( engine .load_version_head_commit_id("draft-version") .await .expect("version ref head should still load"), Some(sim.initial_commit_id().to_string()) ); drop(main); drop(engine); } ); simulation_test!( 
version_ref_delete_via_lix_state_is_rejected_when_descriptor_exists, |sim| async move { let (engine, main, _draft) = create_draft_from_main(&sim).await; let error = main .execute( "DELETE FROM lix_state \ WHERE schema_key = 'lix_version_ref' AND entity_id = lix_json('[\"draft-version\"]')", &[], ) .await .expect_err("ref delete through lix_state should fail"); assert_version_pair_delete_restricted(&error); assert_eq!(count_version_descriptors(&main, "draft-version").await, 1); assert_eq!(count_version_refs(&main, "draft-version").await, 1); assert_eq!( engine .load_version_head_commit_id("draft-version") .await .expect("version ref head should still load"), Some(sim.initial_commit_id().to_string()) ); drop(main); drop(engine); } ); simulation_test!( create_version_can_start_from_explicit_commit, |sim| async move { let engine = sim.boot_engine().await; let main = sim.wrap_session( engine .open_session(sim.main_version_id()) .await .expect("main session should open"), &engine, ); main.execute( "INSERT INTO lix_key_value (key, value) VALUES ('main-after-initial', 'main')", &[], ) .await .expect("main write should succeed"); assert_key_value(&main, "main-after-initial", Some("\"main\"")).await; let receipt = main .create_version(CreateVersionOptions { id: Some("from-initial".to_string()), name: "From initial".to_string(), from_commit_id: Some(sim.initial_commit_id().to_string()), }) .await .expect("version should be created from explicit commit"); assert_eq!(receipt.id, "from-initial"); assert_eq!(receipt.name, "From initial"); assert!(!receipt.hidden); assert_eq!(receipt.commit_id, sim.initial_commit_id()); assert_eq!( engine .load_version_head_commit_id("from-initial") .await .expect("version head should load"), Some(sim.initial_commit_id().to_string()) ); let from_initial = main.wrap_session( engine .open_session("from-initial") .await .expect("explicit commit version session should open"), &engine, ); assert_key_value(&from_initial, "main-after-initial", None).await; drop(from_initial); drop(main); drop(engine); } ); simulation_test!(created_version_sees_inherited_state, |sim| async move { let (_engine, _main, draft) = create_draft_after_shared_write(&sim).await; assert_key_value(&draft, "shared-before-branch", Some("\"shared\"")).await; }); simulation_test!( open_workspace_session_starts_on_seeded_main_version, |sim| async move { let engine = sim.boot_engine().await; let workspace = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); assert_eq!( workspace .active_version_id() .await .expect("workspace active version should resolve"), sim.main_version_id() ); } ); simulation_test!( later_main_changes_do_not_appear_in_created_version, |sim| async move { let (_engine, main, draft) = create_draft_from_main(&sim).await; main.execute( "INSERT INTO lix_key_value (key, value) VALUES ('main-after-branch', 'main')", &[], ) .await .expect("main write should succeed"); assert_key_value(&main, "main-after-branch", Some("\"main\"")).await; assert_key_value(&draft, "main-after-branch", None).await; } ); simulation_test!( later_created_version_changes_do_not_appear_in_main, |sim| async move { let (_engine, main, draft) = create_draft_from_main(&sim).await; draft .execute( "INSERT INTO lix_key_value (key, value) VALUES ('draft-after-branch', 'draft')", &[], ) .await .expect("draft write should succeed"); assert_key_value(&draft, "draft-after-branch", Some("\"draft\"")).await; assert_key_value(&main, "draft-after-branch", None).await; } ); 
simulation_test!( switch_version_returns_session_for_target_version, |sim| async move { let (engine, main, draft) = create_draft_from_main(&sim).await; draft .execute( "INSERT INTO lix_key_value (key, value) VALUES ('switch-draft-only', 'draft')", &[], ) .await .expect("draft write should succeed"); let (switched, receipt) = main .switch_version(SwitchVersionOptions { version_id: "draft-version".to_string(), }) .await .expect("switch should succeed"); assert_eq!(receipt.version_id, "draft-version"); assert_key_value(&switched, "switch-draft-only", Some("\"draft\"")).await; assert_key_value(&main, "switch-draft-only", None).await; drop(engine); } ); simulation_test!( pinned_switch_version_is_ephemeral_and_does_not_advance_refs, |sim| async move { let (engine, main, _draft) = create_draft_from_main(&sim).await; let main_head_before = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load"); let draft_head_before = engine .load_version_head_commit_id("draft-version") .await .expect("draft head should load"); let workspace_before = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); assert_eq!( workspace_before .active_version_id() .await .expect("workspace selector should resolve"), sim.main_version_id(), "pinned session setup should not have moved the workspace selector" ); let (_switched, _receipt) = main .switch_version(SwitchVersionOptions { version_id: "draft-version".to_string(), }) .await .expect("switch should succeed"); assert_eq!( engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load"), main_head_before, "switching must not mutate the source session version ref" ); assert_eq!( engine .load_version_head_commit_id("draft-version") .await .expect("draft head should load"), draft_head_before, "switching must not mutate the target version ref" ); let workspace_after = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); assert_eq!( workspace_after .active_version_id() .await .expect("workspace selector should resolve"), sim.main_version_id(), "pinned switching must not mutate the shared workspace selector" ); } ); simulation_test!( workspace_switch_version_updates_shared_workspace_selector, |sim| async move { let (engine, main, draft) = create_draft_from_main(&sim).await; draft .execute( "INSERT INTO lix_key_value (key, value) VALUES ('workspace-draft-only', 'draft')", &[], ) .await .expect("draft write should succeed"); let main_head_before = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load"); let draft_head_before = engine .load_version_head_commit_id("draft-version") .await .expect("draft head should load"); let workspace_a = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); let workspace_b = sim.wrap_session( engine .open_workspace_session() .await .expect("second workspace session should open"), &engine, ); assert_eq!( workspace_a .active_version_id() .await .expect("workspace selector should resolve"), sim.main_version_id() ); let (workspace_switched, receipt) = workspace_a .switch_version(SwitchVersionOptions { version_id: "draft-version".to_string(), }) .await .expect("workspace switch should succeed"); assert_eq!(receipt.version_id, "draft-version"); assert_eq!( workspace_switched .active_version_id() .await .expect("switched workspace selector should resolve"), 
"draft-version" ); assert_eq!( workspace_b .active_version_id() .await .expect("other workspace session should observe selector"), "draft-version", "workspace sessions resolve the shared selector on use" ); assert_key_value(&workspace_b, "workspace-draft-only", Some("\"draft\"")).await; assert_key_value(&main, "workspace-draft-only", None).await; assert_eq!( engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load"), main_head_before, "workspace switching must not mutate the old version ref" ); assert_eq!( engine .load_version_head_commit_id("draft-version") .await .expect("draft head should load"), draft_head_before, "workspace switching must not mutate the new version ref" ); } ); simulation_test!( workspace_switch_version_persists_across_reopened_engine, |sim| async move { let (engine, _main, draft) = create_draft_from_main(&sim).await; draft .execute( "INSERT INTO lix_key_value (key, value) VALUES ('workspace-reopen-draft', 'draft')", &[], ) .await .expect("draft write should succeed"); let workspace = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); workspace .switch_version(SwitchVersionOptions { version_id: "draft-version".to_string(), }) .await .expect("workspace switch should persist"); let reopened_engine = sim .reboot_engine_from_current_snapshot() .await .expect("engine should reopen from current snapshot"); let reopened_workspace = sim.wrap_session( reopened_engine .open_workspace_session() .await .expect("reopened workspace session should open"), &reopened_engine, ); assert_eq!( reopened_workspace .active_version_id() .await .expect("workspace selector should resolve after reopen"), "draft-version", "workspace switch should survive reopening the engine" ); assert_key_value( &reopened_workspace, "workspace-reopen-draft", Some("\"draft\""), ) .await; } ); simulation_test!( switch_version_errors_when_target_ref_is_missing, |sim| async move { let engine = sim.boot_engine().await; let main = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let result = main .switch_version(SwitchVersionOptions { version_id: "missing-version".to_string(), }) .await; let Err(error) = result else { panic!("missing version ref should fail"); }; assert_eq!(error.code, LixError::CODE_VERSION_NOT_FOUND); assert_eq!( error .details .as_ref() .and_then(|details| details.get("version_id")), Some(&JsonValue::String("missing-version".to_string())) ); assert_eq!( error .details .as_ref() .and_then(|details| details.get("operation")), Some(&JsonValue::String("switch_version".to_string())) ); assert_eq!( error .details .as_ref() .and_then(|details| details.get("role")), Some(&JsonValue::String("target".to_string())) ); } ); simulation_test!( merge_version_resolves_existing_source_and_target_heads, |sim| async move { let (engine, main, _draft) = create_draft_from_main(&sim).await; let main_head_before = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load") .expect("main head should exist"); let receipt = main .merge_version(MergeVersionOptions { source_version_id: "draft-version".to_string(), }) .await .expect("merge head resolution should succeed"); assert_eq!(receipt.outcome, MergeVersionOutcome::AlreadyUpToDate); assert_eq!(receipt.change_stats, MergeChangeStats::default()); assert_eq!(receipt.created_merge_commit_id, None); assert_eq!(receipt.target_version_id, sim.main_version_id()); 
assert_eq!(receipt.source_version_id, "draft-version"); assert_eq!( receipt.target_head_before_commit_id, main_head_before, "receipt should expose the target head before the no-op merge" ); assert_eq!( receipt.target_head_after_commit_id, main_head_before, "no-op merge should leave target head unchanged" ); assert_eq!( engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load"), Some(main_head_before) ); } ); simulation_test!( merge_version_fast_forwards_when_target_is_merge_base, |sim| async move { let (engine, main, draft) = create_draft_from_main(&sim).await; draft .execute( "INSERT INTO lix_key_value (key, value) VALUES ('draft-fast-forward', 'draft')", &[], ) .await .expect("draft write should succeed"); let target_head_before = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load") .expect("main head should exist"); let source_head = engine .load_version_head_commit_id("draft-version") .await .expect("draft head should load") .expect("draft head should exist"); let preview = main .merge_version_preview(MergeVersionPreviewOptions { source_version_id: "draft-version".to_string(), }) .await .expect("merge preview should analyze fast-forward"); assert_eq!(preview.outcome, MergeVersionOutcome::FastForward); assert_eq!(preview.target_head_commit_id, target_head_before); assert_eq!(preview.source_head_commit_id, source_head); assert_eq!( preview.change_stats, MergeChangeStats { total: 1, added: 1, modified: 0, removed: 0, } ); assert_eq!(preview.conflicts.len(), 0); assert_eq!( engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load") .as_deref(), Some(target_head_before.as_str()), "preview should not advance the target ref" ); let receipt = main .merge_version(MergeVersionOptions { source_version_id: "draft-version".to_string(), }) .await .expect("merge should fast-forward target"); assert_eq!(receipt.outcome, MergeVersionOutcome::FastForward); assert_eq!( receipt.change_stats, MergeChangeStats { total: 1, added: 1, modified: 0, removed: 0, } ); assert_eq!(receipt.created_merge_commit_id, None); assert_eq!(receipt.base_commit_id, target_head_before); assert_eq!(receipt.target_head_before_commit_id, target_head_before); assert_eq!(receipt.source_head_before_commit_id, source_head); assert_eq!(receipt.target_head_after_commit_id, source_head); assert_eq!( engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load") .as_deref(), Some(source_head.as_str()) ); assert_key_value(&main, "draft-fast-forward", Some("\"draft\"")).await; let global = sim.wrap_session( engine .open_session("global") .await .expect("global session should open"), &engine, ); assert_eq!( commit_parent_edges(&global, &source_head).await, vec![(target_head_before, 0)], "fast-forward should not create a two-parent merge commit" ); } ); simulation_test!( merge_version_advances_target_with_two_parent_commit, |sim| async move { let (engine, main, draft) = create_draft_from_main(&sim).await; main.execute( "INSERT INTO lix_key_value (key, value) VALUES ('main-merge-target', 'main')", &[], ) .await .expect("main write should succeed"); draft .execute( "INSERT INTO lix_key_value (key, value) VALUES ('draft-merge-source', 'draft')", &[], ) .await .expect("draft write should succeed"); let target_head_before = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load") .expect("main head should exist"); let source_head = engine 
.load_version_head_commit_id("draft-version") .await .expect("draft head should load") .expect("draft head should exist"); let receipt = main .merge_version(MergeVersionOptions { source_version_id: "draft-version".to_string(), }) .await .expect("merge should apply source change"); assert_eq!(receipt.outcome, MergeVersionOutcome::MergeCommitted); assert_eq!( receipt.change_stats, MergeChangeStats { total: 1, added: 1, modified: 0, removed: 0, } ); assert_eq!(receipt.target_head_before_commit_id, target_head_before); assert_eq!(receipt.source_head_before_commit_id, source_head); let target_head_after = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load") .expect("main head should exist"); assert_eq!( receipt.target_head_after_commit_id, target_head_after, "receipt should expose the post-merge target head" ); assert_eq!( receipt.created_merge_commit_id.as_deref(), Some(target_head_after.as_str()), "a non-empty merge should report the merge commit it created" ); assert_ne!(target_head_after, target_head_before); assert_eq!( engine .load_version_head_commit_id("draft-version") .await .expect("draft head should load") .as_deref(), Some(source_head.as_str()), "merging into main must not move the source version ref" ); assert_key_value(&main, "draft-merge-source", Some("\"draft\"")).await; assert_key_value(&main, "main-merge-target", Some("\"main\"")).await; let global = sim.wrap_session( engine .open_session("global") .await .expect("global session should open"), &engine, ); assert_eq!( commit_parent_edges(&global, &target_head_after).await, vec![(target_head_before, 0), (source_head, 1)], "merge commit should preserve target as first parent and source as second parent" ); } ); simulation_test!( merge_version_adopts_source_change_without_minting_equivalent_copy, |sim| async move { let (engine, main, draft) = create_draft_from_main(&sim).await; main.execute( "INSERT INTO lix_key_value (key, value) VALUES ('merge-adopt-target', 'target')", &[], ) .await .expect("main write should succeed"); draft .execute( "INSERT INTO lix_key_value (key, value) VALUES ('merge-adopt-change', 'source')", &[], ) .await .expect("draft write should succeed"); let receipt = main .merge_version(MergeVersionOptions { source_version_id: "draft-version".to_string(), }) .await .expect("merge should apply source change"); assert!( receipt.created_merge_commit_id.is_some(), "non-empty merge should create a merge commit" ); let global = sim.wrap_session( engine .open_session("global") .await .expect("global session should open"), &engine, ); let equivalent_change_count = select_single_integer( &global, "SELECT count(*) \ FROM lix_change \ WHERE schema_key = 'lix_key_value' \ AND entity_id = lix_json('[\"merge-adopt-change\"]') \ AND snapshot_content = lix_json('{\"key\":\"merge-adopt-change\",\"value\":\"source\"}')", ) .await; assert_eq!( equivalent_change_count, 1, "merge must not append a second canonical change with identical effect" ); let history = main .execute( "SELECT snapshot_content \ FROM lix_state_history \ WHERE start_commit_id = lix_active_version_commit_id() \ AND entity_id = lix_json('[\"merge-adopt-change\"]') \ ORDER BY depth", &[], ) .await .expect("history query should succeed"); assert_eq!( history.len(), 1, "history should show the adopted canonical change once, not once from the merge commit and once from the source parent" ); } ); simulation_test!( merge_version_adopts_schema_registration_before_schema_rows, |sim| async move { let (engine, main, draft) = 
create_draft_from_main(&sim).await; main.execute( "INSERT INTO lix_key_value (key, value) VALUES ('merge-schema-target-change', 'target')", &[], ) .await .expect("main write should force a merge commit instead of fast-forward"); draft .execute( "INSERT INTO lix_registered_schema (value) \ VALUES (\ lix_json('{\"x-lix-key\":\"merge_task_item\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"},\"title\":{\"type\":\"string\"}},\"required\":[\"id\",\"title\"],\"additionalProperties\":false}')\ )", &[], ) .await .expect("draft schema registration should succeed"); draft .execute( "INSERT INTO merge_task_item (id, title) \ VALUES ('task-1', 'Adopted schema row')", &[], ) .await .expect("draft row using newly registered schema should succeed"); main.merge_version(MergeVersionOptions { source_version_id: "draft-version".to_string(), }) .await .expect("merge should adopt schema registration before rows that use it"); let reopened_main = sim.wrap_session( engine .open_session(sim.main_version_id()) .await .expect("main session should reopen after merge"), &engine, ); let rows = reopened_main .execute( "SELECT id, title FROM merge_task_item WHERE id = 'task-1'", &[], ) .await .expect("merged schema surface should be queryable"); assert_eq!( rows.rows()[0].values(), &[ Value::Text("task-1".to_string()), Value::Text("Adopted schema row".to_string()), ] ); } ); simulation_test!( merge_version_errors_on_divergent_same_entity_change, |sim| async move { let (engine, main, draft) = create_draft_from_main(&sim).await; main.execute( "INSERT INTO lix_key_value (key, value) VALUES ('merge-conflict', 'main')", &[], ) .await .expect("main write should succeed"); draft .execute( "INSERT INTO lix_key_value (key, value) VALUES ('merge-conflict', 'draft')", &[], ) .await .expect("draft write should succeed"); let main_head_before = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load") .expect("main head should exist"); let error = main .merge_version(MergeVersionOptions { source_version_id: "draft-version".to_string(), }) .await .expect_err("divergent same-entity changes should conflict"); assert_merge_conflict_error(&error); assert_eq!( engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load"), Some(main_head_before), "failed merge should not advance the target version ref" ); assert_key_value(&main, "merge-conflict", Some("\"main\"")).await; } ); simulation_test!( merge_version_fast_forwards_source_delete_when_target_unchanged, |sim| async move { let (engine, main, draft) = create_draft_after_shared_write(&sim).await; delete_key_value(&draft, "shared-before-branch").await; let source_head = engine .load_version_head_commit_id("draft-version") .await .expect("draft head should load") .expect("draft head should exist"); let receipt = main .merge_version(MergeVersionOptions { source_version_id: "draft-version".to_string(), }) .await .expect("merge should apply source delete"); assert_eq!(receipt.outcome, MergeVersionOutcome::FastForward); assert_eq!( receipt.change_stats, MergeChangeStats { total: 1, added: 0, modified: 0, removed: 1, } ); assert_eq!(receipt.created_merge_commit_id, None); assert_eq!(receipt.target_head_after_commit_id, source_head); assert_key_value(&main, "shared-before-branch", None).await; } ); simulation_test!( merge_version_records_empty_merge_when_both_sides_delete, |sim| async move { let (engine, main, draft) = create_draft_after_shared_write(&sim).await; 
delete_key_value(&main, "shared-before-branch").await; delete_key_value(&draft, "shared-before-branch").await; let main_head_before = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load") .expect("main head should exist"); let source_head = engine .load_version_head_commit_id("draft-version") .await .expect("draft head should load") .expect("draft head should exist"); let receipt = main .merge_version(MergeVersionOptions { source_version_id: "draft-version".to_string(), }) .await .expect("convergent delete merge should succeed"); assert_eq!(receipt.outcome, MergeVersionOutcome::MergeCommitted); assert_eq!(receipt.change_stats, MergeChangeStats::default()); let merge_commit_id = receipt .created_merge_commit_id .clone() .expect("convergent delete should create an empty merge commit"); assert_eq!(receipt.target_head_after_commit_id, merge_commit_id); assert_eq!(receipt.target_head_before_commit_id, main_head_before); assert_eq!(receipt.source_head_before_commit_id, source_head); assert_empty_merge_commit( &engine, &main, &merge_commit_id, &receipt.target_head_before_commit_id, &receipt.source_head_before_commit_id, ) .await; assert_key_value(&main, "shared-before-branch", None).await; } ); simulation_test!( merge_version_conflicts_when_target_deletes_source_modifies, |sim| async move { let (engine, main, draft) = create_draft_after_shared_write(&sim).await; delete_key_value(&main, "shared-before-branch").await; draft .execute( "UPDATE lix_key_value SET value = 'draft' WHERE key = 'shared-before-branch'", &[], ) .await .expect("draft update should succeed"); let main_head_before = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load") .expect("main head should exist"); let error = main .merge_version(MergeVersionOptions { source_version_id: "draft-version".to_string(), }) .await .expect_err("delete/modify should conflict"); assert_merge_conflict_error(&error); assert_eq!( engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load"), Some(main_head_before), "failed merge should not advance the target version ref" ); assert_key_value(&main, "shared-before-branch", None).await; } ); simulation_test!( merge_version_conflicts_when_target_modifies_source_deletes, |sim| async move { let (engine, main, draft) = create_draft_after_shared_write(&sim).await; main.execute( "UPDATE lix_key_value SET value = 'main' WHERE key = 'shared-before-branch'", &[], ) .await .expect("main update should succeed"); delete_key_value(&draft, "shared-before-branch").await; let main_head_before = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load") .expect("main head should exist"); let error = main .merge_version(MergeVersionOptions { source_version_id: "draft-version".to_string(), }) .await .expect_err("modify/delete should conflict"); assert_merge_conflict_error(&error); assert_eq!( engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load"), Some(main_head_before), "failed merge should not advance the target version ref" ); assert_key_value(&main, "shared-before-branch", Some("\"main\"")).await; } ); simulation_test!( merge_version_records_empty_merge_for_same_payload_convergence, |sim| async move { let (engine, main, draft) = create_draft_after_shared_write(&sim).await; main.execute( "UPDATE lix_key_value SET value = 'same' WHERE key = 'shared-before-branch'", &[], ) .await .expect("main update should 
succeed"); draft .execute( "UPDATE lix_key_value SET value = 'same' WHERE key = 'shared-before-branch'", &[], ) .await .expect("draft update should succeed"); let main_head_before = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load") .expect("main head should exist"); let source_head = engine .load_version_head_commit_id("draft-version") .await .expect("draft head should load") .expect("draft head should exist"); let receipt = main .merge_version(MergeVersionOptions { source_version_id: "draft-version".to_string(), }) .await .expect("convergent update merge should succeed"); assert_eq!(receipt.outcome, MergeVersionOutcome::MergeCommitted); assert_eq!(receipt.change_stats, MergeChangeStats::default()); let merge_commit_id = receipt .created_merge_commit_id .clone() .expect("convergent update should create an empty merge commit"); assert_eq!(receipt.target_head_after_commit_id, merge_commit_id); assert_eq!(receipt.target_head_before_commit_id, main_head_before); assert_eq!(receipt.source_head_before_commit_id, source_head); assert_empty_merge_commit( &engine, &main, &merge_commit_id, &receipt.target_head_before_commit_id, &receipt.source_head_before_commit_id, ) .await; assert_key_value(&main, "shared-before-branch", Some("\"same\"")).await; } ); simulation_test!( merge_version_conflicts_on_independent_add_same_identity_different_payload, |sim| async move { let (engine, main, draft) = create_draft_from_main(&sim).await; main.execute( "INSERT INTO lix_key_value (key, value) VALUES ('merge-independent-add', 'main')", &[], ) .await .expect("main insert should succeed"); draft .execute( "INSERT INTO lix_key_value (key, value) VALUES ('merge-independent-add', 'draft')", &[], ) .await .expect("draft insert should succeed"); let main_head_before = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load") .expect("main head should exist"); let error = main .merge_version(MergeVersionOptions { source_version_id: "draft-version".to_string(), }) .await .expect_err("independent adds with different payloads should conflict"); assert_merge_conflict_error(&error); assert_eq!( engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load"), Some(main_head_before), "failed merge should not advance the target version ref" ); assert_key_value(&main, "merge-independent-add", Some("\"main\"")).await; } ); simulation_test!( merge_version_records_empty_merge_for_same_identity_same_payload_add, |sim| async move { let (engine, main, draft) = create_draft_from_main(&sim).await; main.execute( "INSERT INTO lix_key_value (key, value) VALUES ('merge-independent-same-add', 'same')", &[], ) .await .expect("main insert should succeed"); draft .execute( "INSERT INTO lix_key_value (key, value) VALUES ('merge-independent-same-add', 'same')", &[], ) .await .expect("draft insert should succeed"); let main_head_before = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load") .expect("main head should exist"); let source_head = engine .load_version_head_commit_id("draft-version") .await .expect("draft head should load") .expect("draft head should exist"); let receipt = main .merge_version(MergeVersionOptions { source_version_id: "draft-version".to_string(), }) .await .expect("convergent independent add merge should succeed"); assert_eq!(receipt.outcome, MergeVersionOutcome::MergeCommitted); assert_eq!(receipt.change_stats, MergeChangeStats::default()); let 
merge_commit_id = receipt .created_merge_commit_id .clone() .expect("convergent independent add should create an empty merge commit"); assert_eq!(receipt.target_head_after_commit_id, merge_commit_id); assert_eq!(receipt.target_head_before_commit_id, main_head_before); assert_eq!(receipt.source_head_before_commit_id, source_head); assert_empty_merge_commit( &engine, &main, &merge_commit_id, &receipt.target_head_before_commit_id, &receipt.source_head_before_commit_id, ) .await; assert_key_value(&main, "merge-independent-same-add", Some("\"same\"")).await; } ); simulation_test!( merge_version_errors_when_source_version_ref_is_missing, |sim| async move { let engine = sim.boot_engine().await; let main = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = main .merge_version(MergeVersionOptions { source_version_id: "missing-version".to_string(), }) .await .expect_err("missing source ref should fail"); assert_eq!(error.code, LixError::CODE_VERSION_NOT_FOUND); assert_eq!( error .details .as_ref() .and_then(|details| details.get("version_id")), Some(&JsonValue::String("missing-version".to_string())) ); assert_eq!( error .details .as_ref() .and_then(|details| details.get("operation")), Some(&JsonValue::String("merge_version".to_string())) ); assert_eq!( error .details .as_ref() .and_then(|details| details.get("role")), Some(&JsonValue::String("source".to_string())) ); } ); simulation_test!(merge_version_rejects_self_merge, |sim| async move { let engine = sim.boot_engine().await; let main = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = main .merge_version(MergeVersionOptions { source_version_id: sim.main_version_id().to_string(), }) .await .expect_err("self-merge should fail"); assert_eq!(error.code, LixError::CODE_INVALID_MERGE); assert_eq!( error .details .as_ref() .and_then(|details| details.get("operation")), Some(&JsonValue::String("merge_version".to_string())) ); assert_eq!( error .details .as_ref() .and_then(|details| details.get("target_version_id")), Some(&JsonValue::String(sim.main_version_id().to_string())) ); assert_eq!( error .details .as_ref() .and_then(|details| details.get("source_version_id")), Some(&JsonValue::String(sim.main_version_id().to_string())) ); }); async fn delete_key_value( session: &crate::support::simulation_test::engine::SimSession, key: &str, ) { session .execute( &format!("DELETE FROM lix_key_value WHERE key = '{key}'"), &[], ) .await .expect("key-value delete should succeed"); } async fn create_draft_after_shared_write( sim: &crate::support::simulation_test::engine::Simulation, ) -> ( Engine, crate::support::simulation_test::engine::SimSession, crate::support::simulation_test::engine::SimSession, ) { let engine = sim.boot_engine().await; let main = sim.wrap_session( engine .open_session(sim.main_version_id()) .await .expect("main session should open"), &engine, ); main.execute( "INSERT INTO lix_key_value (key, value) VALUES ('shared-before-branch', 'shared')", &[], ) .await .expect("source write should succeed"); let draft = create_draft(&engine, &main).await; (engine, main, draft) } async fn create_draft_from_main( sim: &crate::support::simulation_test::engine::Simulation, ) -> ( Engine, crate::support::simulation_test::engine::SimSession, crate::support::simulation_test::engine::SimSession, ) { let engine = sim.boot_engine().await; let main = sim.wrap_session( engine .open_session(sim.main_version_id()) .await .expect("main 
session should open"), &engine, ); let draft = create_draft(&engine, &main).await; (engine, main, draft) } async fn create_draft( engine: &Engine, main: &crate::support::simulation_test::engine::SimSession, ) -> crate::support::simulation_test::engine::SimSession { let receipt = main .create_version(CreateVersionOptions { id: Some("draft-version".to_string()), name: "Draft".to_string(), from_commit_id: None, }) .await .expect("version should be created"); assert_eq!(receipt.id, "draft-version"); let version_row = main .execute( "SELECT id, name, hidden, commit_id FROM lix_version WHERE id = 'draft-version'", &[], ) .await .expect("created version should be queryable through lix_version"); assert_eq!(version_row.len(), 1); assert_eq!( version_row.rows()[0].values(), &[ Value::Text(receipt.id.clone()), Value::Text(receipt.name.clone()), Value::Boolean(receipt.hidden), Value::Text(receipt.commit_id.clone()), ], "create_version should return the same public shape as lix_version" ); main.wrap_session( engine .open_session(receipt.id) .await .expect("draft session should open"), engine, ) } async fn assert_key_value( session: &crate::support::simulation_test::engine::SimSession, key: &str, expected: Option<&str>, ) { let result = session .execute( &format!("SELECT value FROM lix_key_value WHERE key = '{key}'"), &[], ) .await .expect("key-value query should succeed"); let rows = result; match expected { Some(value) => { assert_eq!(rows.len(), 1); let expected_json = serde_json::from_str::(value) .expect("expected key-value should be valid JSON"); assert_eq!(rows.rows()[0].values(), &[Value::Json(expected_json)]); } None => assert_eq!(rows.len(), 0), } } async fn assert_version_descriptor( session: &crate::support::simulation_test::engine::SimSession, version_id: &str, expected_name: &str, ) { let result = session .execute( &format!("SELECT id, name FROM lix_version WHERE id = '{version_id}'"), &[], ) .await .expect("version query should succeed"); let rows = result; assert_eq!(rows.len(), 1); assert_eq!( rows.rows()[0].values(), &[ Value::Text(version_id.to_string()), Value::Text(expected_name.to_string()), ] ); } async fn count_version_descriptors( session: &crate::support::simulation_test::engine::SimSession, version_id: &str, ) -> i64 { select_single_integer( session, &format!("SELECT COUNT(*) FROM lix_version_descriptor WHERE id = '{version_id}'"), ) .await } async fn count_version_refs( session: &crate::support::simulation_test::engine::SimSession, version_id: &str, ) -> i64 { select_single_integer( session, &format!( "SELECT COUNT(*) FROM lix_state \ WHERE schema_key = 'lix_version_ref' AND entity_id = lix_json('[\"{version_id}\"]')" ), ) .await } fn assert_version_pair_delete_restricted(error: &lix_engine::LixError) { assert_eq!(error.code, lix_engine::LixError::CODE_READ_ONLY); assert!( error.to_string().contains("lix_version"), "error should explain the version pair restriction: {error:?}" ); assert!( error .hint .as_deref() .is_some_and(|hint| hint.contains("lix_version")), "error should guide callers to the lix_version surface: {error:?}" ); } fn assert_merge_conflict_error(error: &lix_engine::LixError) { assert_eq!(error.code, "LIX_MERGE_CONFLICT"); assert!( error.message.contains("tracked-state conflict"), "unexpected merge error: {error:?}" ); let details = error .details .as_ref() .expect("merge conflict should include details"); let conflicts = details .get("conflicts") .and_then(JsonValue::as_array) .expect("merge conflict details should include conflicts array"); 
assert_eq!(conflicts.len(), 1); let conflict = &conflicts[0]; assert_eq!( conflict.get("kind").and_then(JsonValue::as_str), Some("sameEntityChanged") ); assert_eq!( conflict.get("schemaKey").and_then(JsonValue::as_str), Some("lix_key_value") ); assert!( conflict .get("entityId") .and_then(JsonValue::as_array) .is_some(), "conflict should include entityId: {conflict:?}" ); assert!( conflict.get("target").is_some(), "conflict should include target side: {conflict:?}" ); assert!( conflict.get("source").is_some(), "conflict should include source side: {conflict:?}" ); } async fn select_single_integer( session: &crate::support::simulation_test::engine::SimSession, sql: &str, ) -> i64 { let result = session .execute(sql, &[]) .await .expect("query should succeed"); assert_eq!(result.len(), 1, "expected exactly one row for query: {sql}"); let Value::Integer(value) = result.rows()[0].values()[0] else { panic!("expected integer value for query: {sql}"); }; value } async fn commit_parent_edges( session: &crate::support::simulation_test::engine::SimSession, commit_id: &str, ) -> Vec<(String, i64)> { let result = session .execute( &format!( "SELECT parent_id, parent_order \ FROM lix_commit_edge \ WHERE child_id = '{commit_id}' \ ORDER BY parent_order" ), &[], ) .await .expect("commit edges should read"); result .rows() .iter() .map(|row| { let Value::Text(value) = &row.values()[0] else { panic!("parent_id should be text"); }; let Value::Integer(parent_order) = row.values()[1] else { panic!("parent_order should be integer"); }; (value.clone(), parent_order) }) .collect() } async fn assert_empty_merge_commit( engine: &Engine, session: &crate::support::simulation_test::engine::SimSession, merge_commit_id: &str, target_head_before: &str, source_head: &str, ) { let active_version_id = session .active_version_id() .await .expect("active version should load"); assert_eq!( engine .load_version_head_commit_id(&active_version_id) .await .expect("target version head should load") .as_deref(), Some(merge_commit_id), "empty merge should advance the target version ref" ); let global = session.wrap_session( engine .open_session("global") .await .expect("global session should open"), engine, ); assert_eq!( commit_parent_edges(&global, merge_commit_id) .await .into_iter() .map(|(parent_id, _)| parent_id) .collect::>(), [target_head_before.to_string(), source_head.to_string()] .into_iter() .collect::>(), "empty merge commit should preserve target/source ancestry" ); } ================================================ FILE: packages/engine/tests/code_structure.rs ================================================ #![allow(dead_code)] use std::collections::{BTreeMap, BTreeSet, HashMap, HashSet}; use std::fmt::Write as _; use std::fs; use std::path::{Path, PathBuf}; #[derive(Debug, Clone, PartialEq, Eq)] struct ForbiddenDependencyRule { from_scope: &'static str, reason: &'static str, forbidden_scopes: &'static [&'static str], } const FORBIDDEN_DEPENDENCY_RULES: &[ForbiddenDependencyRule] = &[ ForbiddenDependencyRule { from_scope: "catalog", reason: "catalog is the semantic owner for public named relations and must not depend on lowering, orchestration, or sidecar owners", forbidden_scopes: &[ "backend", "canonical", "api", "execution", "init", "services", "session", "sql", ], }, ForbiddenDependencyRule { from_scope: "backend", reason: "backend is a lower persistence owner; it owns raw prepared statement DTOs but must not grow dependencies on higher workflow or sidecar roots", forbidden_scopes: &["services"], }, 
ForbiddenDependencyRule { from_scope: "services", reason: "services are leaf sidecar capabilities and may depend only on neutral foundations like common, not on engine composition or semantic owner roots", forbidden_scopes: &[ "api", "backend", "canonical", "catalog", "diagnostics", "execution", "init", "live_state", "schema", "session", "sql", ], }, ForbiddenDependencyRule { from_scope: "live_state", reason: "live_state is the generic projection engine and must not reacquire services sidecars or write orchestration owners", forbidden_scopes: &["execution", "services"], }, ForbiddenDependencyRule { from_scope: "sql2", reason: "sql2 is the compiler/runtime provider lane; it must not depend on workflow or higher orchestration roots directly", forbidden_scopes: &["execution", "services", "session"], }, ForbiddenDependencyRule { from_scope: "execution", reason: "execution is the public SQL runner leaf; it may consume sql-owned prepared artifacts but must not depend on higher orchestration owners or transaction internals", forbidden_scopes: &["canonical", "api", "init", "services", "session", "transaction"], }, ForbiddenDependencyRule { from_scope: "session", reason: "session owns orchestration and workflow code, but should not couple itself to the root API shell", forbidden_scopes: &["api"], }, ]; const TARGET_CORE_MODULES: &[&str] = &["backend", "live_state", "session", "sql2", "transaction"]; #[derive(Debug, Clone, PartialEq, Eq)] struct EngineDependencyGraph { module_source: String, modules_analyzed: Vec, edges: Vec, strongly_connected_components: Vec, adjacency_by_module: BTreeMap, } #[derive(Debug, Clone, PartialEq, Eq)] struct DependencyEdge { from: String, to: String, via_files: Vec, } #[derive(Debug, Clone, PartialEq, Eq)] struct StronglyConnectedComponent { modules: Vec, internal_edges: Vec, } #[derive(Debug, Clone, PartialEq, Eq)] struct ModuleAdjacency { incoming: Vec, outgoing: Vec, } #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)] struct SealedOwnerViolation { importer_file: String, imported_path: String, } #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)] struct ImportPathViolation { importer_file: String, imported_path: String, } #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)] struct RawSqlExecutionViolation { file: String, pattern: &'static str, } #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)] struct RawBackendTypeViolation { file: String, type_name: &'static str, } #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)] struct TransactionLifecycleViolation { file: String, pattern: &'static str, } #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)] struct SqlRuntimeOwnershipViolation { file: String, pattern: &'static str, } #[derive(Debug, Clone, PartialEq, Eq)] enum UseToken { DblColon, LBrace, RBrace, Comma, Star, As, Ident(String), } const ALLOWED_SERVICE_FOUNDATION_ROOTS: &[&str] = &["common"]; fn engine_root() -> PathBuf { PathBuf::from(env!("CARGO_MANIFEST_DIR")) } fn src_root() -> PathBuf { engine_root().join("src") } fn lib_path() -> PathBuf { src_root().join("lib.rs") } fn read_engine_source(relative: &str) -> String { fs::read_to_string(src_root().join(relative)).expect("engine source file should be readable") } fn source_between<'a>( relative: &str, source: &'a str, start_needle: &str, end_needle: &str, ) -> &'a str { let start = source .find(start_needle) .unwrap_or_else(|| panic!("{relative} should contain `{start_needle}`")); let end = source[start..] 
.find(end_needle) .map(|end| start + end) .unwrap_or_else(|| { panic!("{relative} should contain `{end_needle}` after `{start_needle}`") }); &source[start..end] } fn assert_source_contains_in_order(relative: &str, source: &str, needles: &[&str]) { let mut previous: Option<(&str, usize)> = None; for needle in needles { let index = source .find(needle) .unwrap_or_else(|| panic!("{relative} should contain `{needle}`")); if let Some((previous_needle, previous_index)) = previous { assert!( previous_index < index, "{relative} should keep `{previous_needle}` before `{needle}`", ); } previous = Some((needle, index)); } } fn assert_source_contains_all(relative: &str, source: &str, needles: &[&str]) { for needle in needles { assert!( source.contains(needle), "{relative} should contain `{needle}`", ); } } fn assert_source_contains_none(relative: &str, source: &str, needles: &[&str]) { for needle in needles { assert!( !source.contains(needle), "{relative} should not contain `{needle}`", ); } } fn analyze_engine_dependency_graph() -> EngineDependencyGraph { let lib_source = fs::read_to_string(lib_path()).expect("src/lib.rs should be readable"); let top_level_modules = parse_top_level_modules(&lib_source); let module_set: HashSet = top_level_modules.iter().cloned().collect(); let mut graph: BTreeMap> = top_level_modules .iter() .cloned() .map(|module| (module, BTreeSet::new())) .collect(); let mut edge_provenance: BTreeMap<(String, String), BTreeSet> = BTreeMap::new(); for module_name in &top_level_modules { for absolute_path in rust_files_for_top_level_module(module_name) { let relative_path = absolute_path .strip_prefix(src_root()) .expect("module source file should be inside src/") .to_string_lossy() .replace('\\', "/"); if is_test_support_relative_path(&relative_path) { continue; } let source = fs::read_to_string(&absolute_path).expect("module source file should be readable"); let current_module_path = module_path_for_file(&relative_path); let dependencies = collect_dependencies_from_source( &strip_test_code(&source), ¤t_module_path, &module_set, ); for dependency in dependencies { if dependency == *module_name { continue; } graph .get_mut(module_name) .expect("all top-level modules should have graph entries") .insert(dependency.clone()); edge_provenance .entry((module_name.clone(), dependency)) .or_default() .insert(relative_path.clone()); } } } let edges: Vec = edge_provenance .into_iter() .map(|((from, to), via_files)| DependencyEdge { from, to, via_files: via_files.into_iter().collect(), }) .collect(); let strongly_connected_components = tarjan(&top_level_modules, &graph) .into_iter() .filter(|component| component.len() > 1) .map(|component| { let members: BTreeSet = component.into_iter().collect(); let mut modules: Vec = members.iter().cloned().collect(); modules.sort(); let internal_edges: Vec = edges .iter() .filter(|edge| members.contains(&edge.from) && members.contains(&edge.to)) .cloned() .collect(); StronglyConnectedComponent { modules, internal_edges, } }) .collect(); let adjacency_by_module = build_adjacency_map(&top_level_modules, &edges); EngineDependencyGraph { module_source: "src/lib.rs".to_string(), modules_analyzed: top_level_modules, edges, strongly_connected_components, adjacency_by_module, } } fn build_adjacency_map( modules: &[String], edges: &[DependencyEdge], ) -> BTreeMap { let mut incoming: BTreeMap> = modules .iter() .cloned() .map(|module| (module, BTreeSet::new())) .collect(); let mut outgoing: BTreeMap> = modules .iter() .cloned() .map(|module| (module, 
BTreeSet::new())) .collect(); for edge in edges { incoming .get_mut(&edge.to) .expect("all destination modules should exist in adjacency map") .insert(edge.from.clone()); outgoing .get_mut(&edge.from) .expect("all source modules should exist in adjacency map") .insert(edge.to.clone()); } modules .iter() .cloned() .map(|module| { let incoming = incoming .remove(&module) .expect("all modules should have incoming adjacency entries") .into_iter() .collect(); let outgoing = outgoing .remove(&module) .expect("all modules should have outgoing adjacency entries") .into_iter() .collect(); (module, ModuleAdjacency { incoming, outgoing }) }) .collect() } fn parse_top_level_modules(lib_source: &str) -> Vec { let mut modules = Vec::new(); let mut pending_attributes = Vec::new(); for line in lib_source.lines() { let trimmed = line.trim(); if trimmed.is_empty() { continue; } if trimmed.starts_with("#[") { pending_attributes.push(trimmed.to_string()); continue; } let mut cursor = trimmed; if let Some(rest) = cursor.strip_prefix("pub(crate) ") { cursor = rest; } else if let Some(rest) = cursor.strip_prefix("pub ") { cursor = rest; } else if cursor.starts_with("pub(") { if let Some(idx) = cursor.find(") ") { cursor = &cursor[idx + 2..]; } } if let Some(rest) = cursor.strip_prefix("mod ") { if let Some(module_name) = rest.strip_suffix(';') { let is_test_only = pending_attributes .iter() .any(|attribute| attribute.contains("cfg(test)")); if !is_test_only { let name = module_name.trim(); if !name.is_empty() { modules.push(name.to_string()); } } } } pending_attributes.clear(); } modules } fn rust_files_for_top_level_module(module_name: &str) -> Vec { let mut files = Vec::new(); let module_file = src_root().join(format!("{module_name}.rs")); let module_directory = src_root().join(module_name); if module_file.exists() { files.push(module_file); } if module_directory.exists() { walk_rust_files(&module_directory, &mut files); } files.sort(); files } fn walk_rust_files(directory: &Path, files: &mut Vec) { for entry in fs::read_dir(directory).expect("directory should be readable") { let entry = entry.expect("directory entry should be readable"); let path = entry.path(); if path.is_dir() { if path.file_name().is_some_and(|name| name == "tests") { continue; } walk_rust_files(&path, files); continue; } if !path.is_file() { continue; } if path.extension().is_some_and(|ext| ext == "rs") && path.file_name().is_none_or(|name| name != "tests.rs") { files.push(path); } } } fn module_path_for_file(relative_path: &str) -> Vec { let normalized: Vec<&str> = relative_path.split('/').collect(); if normalized.len() == 1 { return vec![normalized[0].trim_end_matches(".rs").to_string()]; } if normalized.last() == Some(&"mod.rs") { return normalized[..normalized.len() - 1] .iter() .map(|segment| (*segment).to_string()) .collect(); } let mut parts: Vec = normalized[..normalized.len() - 1] .iter() .map(|segment| (*segment).to_string()) .collect(); let filename = normalized .last() .expect("relative path should contain a file name") .trim_end_matches(".rs"); parts.push(filename.to_string()); parts } fn collect_dependencies_from_source( source: &str, current_module_path: &[String], module_set: &HashSet, ) -> BTreeSet { let without_tests = strip_test_code(source); let sanitized = mask_rust_source(&without_tests); let mut dependencies = BTreeSet::new(); dependencies.extend(collect_use_dependencies( &sanitized, current_module_path, module_set, )); dependencies.extend(collect_explicit_path_dependencies( &sanitized, current_module_path, 
module_set, )); dependencies } fn strip_test_code(source: &str) -> String { let stripped = strip_cfg_test_items(source); let masked = mask_rust_source(&stripped); let mut ranges = Vec::new(); let bytes = masked.as_bytes(); let mut index = 0usize; while index < bytes.len() { if let Some((mod_start, after_mod)) = match_keyword(bytes, index, b"mod") { let after_whitespace = skip_whitespace(bytes, after_mod); if let Some((ident, after_ident)) = parse_identifier(bytes, after_whitespace) { let ident = normalize_identifier(&ident); let after_name = skip_whitespace(bytes, after_ident); if ident == "tests" && bytes.get(after_name) == Some(&b'{') { if let Some(close_brace_index) = find_matching_brace(bytes, after_name) { ranges.push((mod_start, close_brace_index + 1)); index = close_brace_index + 1; continue; } } } } index += 1; } let mut result = stripped; ranges.sort_by(|left, right| right.0.cmp(&left.0)); for (start, end) in ranges { result.replace_range(start..end, ""); } result } fn strip_cfg_test_items(source: &str) -> String { let lines: Vec<&str> = source.lines().collect(); let mut output = String::new(); let mut index = 0usize; while index < lines.len() { let line = lines[index]; let trimmed = line.trim_start(); if trimmed.starts_with("#[") && trimmed.contains("cfg(test)") { index += 1; while index < lines.len() && lines[index].trim_start().starts_with("#[") { index += 1; } skip_annotated_item(&lines, &mut index); continue; } output.push_str(line); output.push('\n'); index += 1; } output } fn skip_annotated_item(lines: &[&str], index: &mut usize) { let mut brace_depth = 0i32; let mut saw_item_body = false; while *index < lines.len() { let line = lines[*index]; brace_depth += brace_delta(line); saw_item_body |= line.contains('{') || line.trim_end().ends_with(';'); *index += 1; if saw_item_body && brace_depth <= 0 { break; } } } fn brace_delta(line: &str) -> i32 { line.chars().fold(0, |count, ch| match ch { '{' => count + 1, '}' => count - 1, _ => count, }) } fn mask_rust_source(source: &str) -> String { let bytes = source.as_bytes(); let mut result = vec![b' '; bytes.len()]; let mut index = 0usize; let mut block_comment_depth = 0usize; while index < bytes.len() { let current = bytes[index]; let next = bytes.get(index + 1).copied().unwrap_or_default(); if block_comment_depth > 0 { if current == b'/' && next == b'*' { block_comment_depth += 1; index += 2; continue; } if current == b'*' && next == b'/' { block_comment_depth -= 1; index += 2; continue; } if current == b'\n' { result[index] = b'\n'; } index += 1; continue; } if current == b'/' && next == b'/' { index += 2; while index < bytes.len() && bytes[index] != b'\n' { index += 1; } continue; } if current == b'/' && next == b'*' { block_comment_depth = 1; index += 2; continue; } if current == b'"' { result[index] = b' '; index += 1; while index < bytes.len() { let ch = bytes[index]; if ch == b'\n' { result[index] = b'\n'; } index += 1; if ch == b'\\' { if index < bytes.len() { if bytes[index] == b'\n' { result[index] = b'\n'; } index += 1; } continue; } if ch == b'"' { break; } } continue; } if current == b'r' { let mut probe = index + 1; while bytes.get(probe) == Some(&b'#') { probe += 1; } if bytes.get(probe) == Some(&b'"') { let hash_count = probe - index - 1; let closing_len = hash_count + 1; index = probe + 1; while index < bytes.len() { if bytes[index] == b'\n' { result[index] = b'\n'; } if bytes[index] == b'"' && bytes .get(index + 1..index + 1 + hash_count) .is_some_and(|suffix| suffix.iter().all(|byte| *byte == b'#')) { index += 
closing_len; break; } index += 1; } continue; } } result[index] = current; index += 1; } String::from_utf8(result).expect("masked Rust source should stay valid UTF-8") } fn collect_use_dependencies( source: &str, current_module_path: &[String], module_set: &HashSet, ) -> BTreeSet { let bytes = source.as_bytes(); let mut dependencies = BTreeSet::new(); let mut index = 0usize; while index < bytes.len() { if let Some((_, after_use)) = match_keyword(bytes, index, b"use") { let mut cursor = after_use; while cursor < bytes.len() && bytes[cursor] != b';' { cursor += 1; } if cursor < bytes.len() { let spec = &source[after_use..cursor]; dependencies.extend(resolve_use_dependencies( spec, current_module_path, module_set, )); index = cursor + 1; continue; } } index += 1; } dependencies } fn resolve_use_dependencies( spec: &str, current_module_path: &[String], module_set: &HashSet, ) -> BTreeSet { let tokens = tokenize_use_spec(spec); let mut dependencies = BTreeSet::new(); let mut index = 0usize; while index < tokens.len() { index = parse_use_tree( &tokens, index, current_module_path, None, module_set, &mut dependencies, ); if matches!(tokens.get(index), Some(UseToken::Comma)) { index += 1; } else { break; } } dependencies } fn tokenize_use_spec(spec: &str) -> Vec { let bytes = spec.as_bytes(); let mut tokens = Vec::new(); let mut index = 0usize; while index < bytes.len() { let current = bytes[index]; let next = bytes.get(index + 1).copied().unwrap_or_default(); if current.is_ascii_whitespace() { index += 1; continue; } if current == b':' && next == b':' { tokens.push(UseToken::DblColon); index += 2; continue; } if current == b'{' { tokens.push(UseToken::LBrace); index += 1; continue; } if current == b'}' { tokens.push(UseToken::RBrace); index += 1; continue; } if current == b',' { tokens.push(UseToken::Comma); index += 1; continue; } if current == b'*' { tokens.push(UseToken::Star); index += 1; continue; } if let Some((ident, next_index)) = parse_identifier(bytes, index) { let normalized = normalize_identifier(&ident); if normalized == "as" { tokens.push(UseToken::As); } else { tokens.push(UseToken::Ident(normalized)); } index = next_index; continue; } index += 1; } tokens } fn parse_use_tree( tokens: &[UseToken], index: usize, current_module_path: &[String], base_context: Option<&[String]>, module_set: &HashSet, dependencies: &mut BTreeSet, ) -> usize { let (path_parts, next_index) = parse_use_path(tokens, index); if path_parts.is_empty() { return skip_until_boundary(tokens, index); } let resolved_path = resolve_use_path(&path_parts, current_module_path, base_context); if let Some(dependency) = resolved_path.first() { if module_set.contains(dependency) { dependencies.insert(dependency.clone()); } } let mut cursor = next_index; if matches!(tokens.get(cursor), Some(UseToken::DblColon)) && matches!(tokens.get(cursor + 1), Some(UseToken::LBrace)) { cursor += 2; while cursor < tokens.len() && !matches!(tokens.get(cursor), Some(UseToken::RBrace)) { cursor = parse_use_tree( tokens, cursor, current_module_path, Some(&resolved_path), module_set, dependencies, ); if matches!(tokens.get(cursor), Some(UseToken::Comma)) { cursor += 1; } } if matches!(tokens.get(cursor), Some(UseToken::RBrace)) { cursor += 1; } return cursor; } if matches!(tokens.get(cursor), Some(UseToken::DblColon)) && matches!(tokens.get(cursor + 1), Some(UseToken::Star)) { return cursor + 2; } if matches!(tokens.get(cursor), Some(UseToken::As)) { return cursor + if matches!(tokens.get(cursor + 1), Some(UseToken::Ident(_))) { 2 } else { 1 }; 
} cursor } fn parse_use_path(tokens: &[UseToken], index: usize) -> (Vec, usize) { let mut path_parts = Vec::new(); let mut cursor = index; while let Some(UseToken::Ident(value)) = tokens.get(cursor) { path_parts.push(value.clone()); if matches!(tokens.get(cursor + 1), Some(UseToken::DblColon)) && matches!(tokens.get(cursor + 2), Some(UseToken::Ident(_))) { cursor += 2; continue; } cursor += 1; break; } (path_parts, cursor) } fn resolve_use_path( path_parts: &[String], current_module_path: &[String], base_context: Option<&[String]>, ) -> Vec { if let Some(base_context) = base_context { if path_parts.first().is_some_and(|part| part == "self") { let mut result = base_context.to_vec(); result.extend(path_parts.iter().skip(1).cloned()); return result; } if path_parts .first() .is_some_and(|part| part == "crate" || part == "super") { return resolve_relative_path(path_parts, current_module_path); } let mut result = base_context.to_vec(); result.extend(path_parts.iter().cloned()); return result; } if path_parts .first() .is_none_or(|part| part != "crate" && part != "self" && part != "super") { return Vec::new(); } resolve_relative_path(path_parts, current_module_path) } fn resolve_relative_path(path_parts: &[String], current_module_path: &[String]) -> Vec { if path_parts.first().is_some_and(|part| part == "crate") { return path_parts.iter().skip(1).cloned().collect(); } if path_parts.first().is_some_and(|part| part == "self") { let mut result = current_module_path.to_vec(); result.extend(path_parts.iter().skip(1).cloned()); return result; } let super_count = path_parts .iter() .take_while(|part| *part == "super") .count(); let mut result: Vec = current_module_path .iter() .take(current_module_path.len().saturating_sub(super_count)) .cloned() .collect(); result.extend(path_parts.iter().skip(super_count).cloned()); result } fn skip_until_boundary(tokens: &[UseToken], index: usize) -> usize { let mut cursor = index; while cursor < tokens.len() && !matches!( tokens.get(cursor), Some(UseToken::Comma) | Some(UseToken::RBrace) ) { cursor += 1; } cursor } fn collect_explicit_path_dependencies( source: &str, current_module_path: &[String], module_set: &HashSet, ) -> BTreeSet { let bytes = source.as_bytes(); let mut dependencies = BTreeSet::new(); let mut index = 0usize; while index < bytes.len() { let Some((prefix, after_prefix)) = parse_explicit_prefix(bytes, index) else { index += 1; continue; }; let after_separator = skip_whitespace(bytes, after_prefix); if bytes.get(after_separator..after_separator + 2) != Some(&b"::"[..]) { index += 1; continue; } let after_double_colon = skip_whitespace(bytes, after_separator + 2); let Some((first_segment, after_first_segment)) = parse_identifier(bytes, after_double_colon) else { index += 1; continue; }; let dependency = resolve_explicit_dependency( &prefix, &normalize_identifier(&first_segment), current_module_path, ); if let Some(dependency) = dependency { if module_set.contains(&dependency) { dependencies.insert(dependency); } } index = after_first_segment; } dependencies } fn parse_explicit_prefix(bytes: &[u8], index: usize) -> Option<(Vec, usize)> { let (ident, mut cursor) = parse_identifier(bytes, index)?; let normalized = normalize_identifier(&ident); if normalized != "crate" && normalized != "self" && normalized != "super" { return None; } let mut prefix = vec![normalized]; loop { let after_whitespace = skip_whitespace(bytes, cursor); if bytes.get(after_whitespace..after_whitespace + 2) != Some(&b"::"[..]) { return Some((prefix, cursor)); } let 
after_separator = skip_whitespace(bytes, after_whitespace + 2); let Some((next_ident, next_cursor)) = parse_identifier(bytes, after_separator) else { return Some((prefix, cursor)); }; let next_ident = normalize_identifier(&next_ident); if next_ident != "super" { return Some((prefix, cursor)); } prefix.push(next_ident); cursor = next_cursor; } } fn resolve_explicit_dependency( prefix: &[String], first_segment: &str, current_module_path: &[String], ) -> Option { match prefix.first()?.as_str() { "crate" => Some(first_segment.to_string()), "self" => current_module_path.first().cloned(), "super" => { let super_count = prefix.iter().filter(|segment| *segment == "super").count(); let mut absolute_path: Vec = current_module_path .iter() .take(current_module_path.len().saturating_sub(super_count)) .cloned() .collect(); absolute_path.push(first_segment.to_string()); absolute_path.first().cloned() } _ => None, } } fn parse_identifier(bytes: &[u8], index: usize) -> Option<(String, usize)> { let current = *bytes.get(index)?; if current == b'r' && bytes.get(index + 1) == Some(&b'#') { let mut cursor = index + 2; if !bytes.get(cursor).is_some_and(|byte| is_ident_start(*byte)) { return None; } cursor += 1; while bytes .get(cursor) .is_some_and(|byte| is_ident_continue(*byte)) { cursor += 1; } return Some(( String::from_utf8(bytes[index..cursor].to_vec()) .expect("raw identifier should stay valid UTF-8"), cursor, )); } if !is_ident_start(current) { return None; } let mut cursor = index + 1; while bytes .get(cursor) .is_some_and(|byte| is_ident_continue(*byte)) { cursor += 1; } Some(( String::from_utf8(bytes[index..cursor].to_vec()) .expect("identifier should stay valid UTF-8"), cursor, )) } fn normalize_identifier(identifier: &str) -> String { identifier .strip_prefix("r#") .unwrap_or(identifier) .to_string() } fn is_ident_start(byte: u8) -> bool { byte.is_ascii_alphabetic() || byte == b'_' } fn is_ident_continue(byte: u8) -> bool { byte.is_ascii_alphanumeric() || byte == b'_' } fn skip_whitespace(bytes: &[u8], mut index: usize) -> usize { while bytes .get(index) .is_some_and(|byte| byte.is_ascii_whitespace()) { index += 1; } index } fn match_keyword(bytes: &[u8], index: usize, keyword: &[u8]) -> Option<(usize, usize)> { let end = index.checked_add(keyword.len())?; if bytes.get(index..end)? 
!= keyword { return None; } let boundary_before = index == 0 || !is_ident_continue(bytes[index - 1]); let boundary_after = bytes.get(end).is_none_or(|byte| !is_ident_continue(*byte)); if boundary_before && boundary_after { Some((index, end)) } else { None } } fn find_matching_brace(bytes: &[u8], open_brace_index: usize) -> Option { let mut depth = 0i32; for (index, byte) in bytes.iter().copied().enumerate().skip(open_brace_index) { match byte { b'{' => depth += 1, b'}' => { depth -= 1; if depth == 0 { return Some(index); } } _ => {} } } None } fn tarjan(nodes: &[String], graph: &BTreeMap>) -> Vec> { fn strong_connect( node: &str, graph: &BTreeMap>, next_index: &mut usize, stack: &mut Vec, on_stack: &mut HashSet, index_by_node: &mut HashMap, low_link_by_node: &mut HashMap, components: &mut Vec>, ) { index_by_node.insert(node.to_string(), *next_index); low_link_by_node.insert(node.to_string(), *next_index); *next_index += 1; stack.push(node.to_string()); on_stack.insert(node.to_string()); for neighbor in graph .get(node) .into_iter() .flat_map(|neighbors| neighbors.iter()) { if !index_by_node.contains_key(neighbor) { strong_connect( neighbor, graph, next_index, stack, on_stack, index_by_node, low_link_by_node, components, ); let new_low_link = low_link_by_node[node].min(low_link_by_node[neighbor]); low_link_by_node.insert(node.to_string(), new_low_link); } else if on_stack.contains(neighbor) { let new_low_link = low_link_by_node[node].min(index_by_node[neighbor]); low_link_by_node.insert(node.to_string(), new_low_link); } } if low_link_by_node[node] != index_by_node[node] { return; } let mut component = Vec::new(); while let Some(member) = stack.pop() { on_stack.remove(&member); component.push(member.clone()); if member == node { break; } } components.push(component); } let mut next_index = 0usize; let mut stack = Vec::new(); let mut on_stack = HashSet::new(); let mut index_by_node = HashMap::new(); let mut low_link_by_node = HashMap::new(); let mut components = Vec::new(); for node in nodes { if !index_by_node.contains_key(node) { strong_connect( node, graph, &mut next_index, &mut stack, &mut on_stack, &mut index_by_node, &mut low_link_by_node, &mut components, ); } } components } fn module_set(graph: &EngineDependencyGraph) -> BTreeSet { graph.modules_analyzed.iter().cloned().collect() } fn forbidden_dependency_lookup() -> BTreeMap<&'static str, &'static ForbiddenDependencyRule> { let mut lookup = BTreeMap::new(); for rule in FORBIDDEN_DEPENDENCY_RULES { let replaced = lookup.insert(rule.from_scope, rule); assert!( replaced.is_none(), "forbidden dependency map must define each source scope only once; duplicate `{}`", rule.from_scope, ); } lookup } fn actual_architecture_violations<'a>( graph: &'a EngineDependencyGraph, forbidden_lookup: &BTreeMap<&'static str, &'static ForbiddenDependencyRule>, ) -> Vec<&'a DependencyEdge> { graph .edges .iter() .filter(|edge| { forbidden_lookup .get(edge.from.as_str()) .is_some_and(|rule| rule.forbidden_scopes.contains(&edge.to.as_str())) }) .collect() } fn target_core_graph(graph: &EngineDependencyGraph) -> BTreeMap> { let target_core_modules: BTreeSet = TARGET_CORE_MODULES .iter() .map(|module| (*module).to_string()) .collect(); let mut filtered: BTreeMap> = target_core_modules .iter() .cloned() .map(|module| (module, BTreeSet::new())) .collect(); for edge in &graph.edges { if !target_core_modules.contains(&edge.from) || !target_core_modules.contains(&edge.to) { continue; } if target_core_transition_allows_edge(edge) { continue; } filtered 
.get_mut(&edge.from) .expect("target core graph should contain every filtered source") .insert(edge.to.clone()); } filtered } fn target_core_transition_allows_edge(edge: &DependencyEdge) -> bool { (edge.from == "transaction" && edge.to == "session") || (edge.from == "sql2" && edge.to == "transaction") } fn render_target_core_graph(graph: &BTreeMap>) -> String { let mut rendered = String::new(); for (module, outgoing) in graph { let neighbors = outgoing.iter().cloned().collect::>().join(", "); let _ = writeln!(&mut rendered, "{module} -> [{neighbors}]"); } rendered } fn owner_root_cycles(graph: &BTreeMap>) -> Vec> { let nodes = graph.keys().cloned().collect::>(); let mut cycles = tarjan(&nodes, graph) .into_iter() .filter(|component| { component.len() > 1 || component.first().is_some_and(|node| { graph .get(node) .is_some_and(|neighbors| neighbors.contains(node)) }) }) .map(|mut component| { component.sort(); component }) .collect::>(); cycles.sort(); cycles } fn render_owner_root_cycles(cycles: &[Vec]) -> String { let mut rendered = String::new(); for cycle in cycles { let _ = writeln!(&mut rendered, " - {}", cycle.join(" -> ")); } rendered } fn render_forbidden_dependency_violations( violations: &[&DependencyEdge], forbidden_lookup: &BTreeMap<&'static str, &'static ForbiddenDependencyRule>, ) -> String { let mut grouped: BTreeMap<&str, Vec<&DependencyEdge>> = BTreeMap::new(); for violation in violations { grouped .entry(violation.from.as_str()) .or_default() .push(*violation); } let mut rendered = String::new(); for (from_scope, edges) in grouped { let rule = forbidden_lookup .get(from_scope) .expect("every forbidden violation should have a matching rule"); let _ = writeln!(&mut rendered, "{from_scope}: {}", rule.reason); for edge in edges { let _ = writeln!(&mut rendered, " - {} -> {}", edge.from, edge.to); for via_file in &edge.via_files { let _ = writeln!(&mut rendered, " via {via_file}"); } } } rendered } fn production_source_files() -> Vec<(String, String)> { let lib_source = fs::read_to_string(lib_path()).expect("src/lib.rs should be readable"); let top_level_modules = parse_top_level_modules(&lib_source); let mut files = Vec::new(); files.push(("lib.rs".to_string(), strip_test_code(&lib_source))); for module_name in top_level_modules { for absolute_path in rust_files_for_top_level_module(&module_name) { let relative_path = absolute_path .strip_prefix(src_root()) .expect("module source file should be inside src/") .to_string_lossy() .replace('\\', "/"); if is_test_support_relative_path(&relative_path) { continue; } let source = fs::read_to_string(&absolute_path).expect("module source file should be readable"); files.push((relative_path, strip_test_code(&source))); } } files.sort_by(|left, right| left.0.cmp(&right.0)); files } fn source_and_test_rust_files() -> Vec<(String, String)> { let mut files = production_source_files(); let mut test_files = Vec::new(); let tests_root = engine_root().join("tests"); walk_rust_files(&tests_root, &mut test_files); for absolute_path in test_files { let relative_path = absolute_path .strip_prefix(engine_root()) .expect("test source file should be inside the engine root") .to_string_lossy() .replace('\\', "/"); let source = fs::read_to_string(&absolute_path).expect("test source file should be readable"); files.push((relative_path, source)); } files.sort_by(|left, right| left.0.cmp(&right.0)); files } fn is_test_support_relative_path(relative_path: &str) -> bool { let parts: Vec<&str> = relative_path.split('/').collect(); parts.iter().any(|part| { 
*part == "tests" || *part == "test" || part .strip_suffix(".rs") .is_some_and(|stem| stem.ends_with("_tests")) || part.ends_with("_tests") }) } fn root_module_entry_relative_path(module_name: &str) -> Option { let module_file = src_root().join(format!("{module_name}.rs")); if module_file.exists() { return Some(format!("{module_name}.rs")); } let module_mod_file = src_root().join(module_name).join("mod.rs"); if module_mod_file.exists() { return Some(format!("{module_name}/mod.rs")); } None } fn parse_declared_modules(source: &str) -> Vec { let mut modules = Vec::new(); let mut pending_attributes = Vec::new(); for line in source.lines() { let trimmed = line.trim(); if trimmed.is_empty() { continue; } if trimmed.starts_with("#[") { pending_attributes.push(trimmed.to_string()); continue; } let mut cursor = trimmed; if let Some(rest) = cursor.strip_prefix("pub(crate) ") { cursor = rest; } else if let Some(rest) = cursor.strip_prefix("pub ") { cursor = rest; } else if cursor.starts_with("pub(") { if let Some(idx) = cursor.find(") ") { cursor = &cursor[idx + 2..]; } } if let Some(rest) = cursor.strip_prefix("mod ") { if let Some(module_name) = rest.strip_suffix(';') { let is_test_only = pending_attributes .iter() .any(|attribute| attribute.contains("cfg(test)")); if !is_test_only { let name = module_name.trim(); if !name.is_empty() { modules.push(name.to_string()); } } } } pending_attributes.clear(); } modules } fn sealed_owner_child_modules() -> BTreeMap> { let lib_source = fs::read_to_string(lib_path()).expect("src/lib.rs should be readable"); let top_level_modules = parse_top_level_modules(&lib_source); let mut child_modules = BTreeMap::new(); for module_name in top_level_modules { let Some(relative_path) = root_module_entry_relative_path(&module_name) else { continue; }; let source = read_engine_source(&relative_path); let declared_modules = parse_declared_modules(&strip_test_code(&source)); child_modules.insert(module_name, declared_modules.into_iter().collect()); } child_modules } fn collect_module_paths_from_source( source: &str, current_module_path: &[String], module_set: &HashSet, ) -> BTreeSet> { let without_tests = strip_test_code(source); let sanitized = mask_rust_source(&without_tests); let mut paths = BTreeSet::new(); paths.extend(collect_use_paths_from_source( &sanitized, current_module_path, module_set, )); paths.extend(collect_explicit_paths_from_source( &sanitized, current_module_path, module_set, )); paths } fn collect_use_paths_from_source( source: &str, current_module_path: &[String], module_set: &HashSet, ) -> BTreeSet> { let bytes = source.as_bytes(); let mut paths = BTreeSet::new(); let mut index = 0usize; while index < bytes.len() { if let Some((_, after_use)) = match_keyword(bytes, index, b"use") { let mut cursor = after_use; while cursor < bytes.len() && bytes[cursor] != b';' { cursor += 1; } if cursor < bytes.len() { let spec = &source[after_use..cursor]; paths.extend(resolve_use_paths(spec, current_module_path, module_set)); index = cursor + 1; continue; } } index += 1; } paths } fn resolve_use_paths( spec: &str, current_module_path: &[String], module_set: &HashSet, ) -> BTreeSet> { let tokens = tokenize_use_spec(spec); let mut paths = BTreeSet::new(); let mut index = 0usize; while index < tokens.len() { index = parse_use_tree_paths( &tokens, index, current_module_path, None, module_set, &mut paths, ); if matches!(tokens.get(index), Some(UseToken::Comma)) { index += 1; } else { break; } } paths } fn parse_use_tree_paths( tokens: &[UseToken], index: usize, 
current_module_path: &[String], base_context: Option<&[String]>, module_set: &HashSet, paths: &mut BTreeSet>, ) -> usize { let (path_parts, next_index) = parse_use_path(tokens, index); if path_parts.is_empty() { return skip_until_boundary(tokens, index); } let resolved_path = resolve_use_path(&path_parts, current_module_path, base_context); if resolved_path .first() .is_some_and(|dependency| module_set.contains(dependency)) { paths.insert(resolved_path.clone()); } let mut cursor = next_index; if matches!(tokens.get(cursor), Some(UseToken::DblColon)) && matches!(tokens.get(cursor + 1), Some(UseToken::LBrace)) { cursor += 2; while cursor < tokens.len() && !matches!(tokens.get(cursor), Some(UseToken::RBrace)) { cursor = parse_use_tree_paths( tokens, cursor, current_module_path, Some(&resolved_path), module_set, paths, ); if matches!(tokens.get(cursor), Some(UseToken::Comma)) { cursor += 1; } } if matches!(tokens.get(cursor), Some(UseToken::RBrace)) { cursor += 1; } return cursor; } if matches!(tokens.get(cursor), Some(UseToken::DblColon)) && matches!(tokens.get(cursor + 1), Some(UseToken::Star)) { return cursor + 2; } if matches!(tokens.get(cursor), Some(UseToken::As)) { return cursor + if matches!(tokens.get(cursor + 1), Some(UseToken::Ident(_))) { 2 } else { 1 }; } cursor } fn collect_explicit_paths_from_source( source: &str, current_module_path: &[String], module_set: &HashSet, ) -> BTreeSet> { let bytes = source.as_bytes(); let mut paths = BTreeSet::new(); let mut index = 0usize; while index < bytes.len() { let Some((prefix, after_prefix)) = parse_explicit_prefix(bytes, index) else { index += 1; continue; }; let after_separator = skip_whitespace(bytes, after_prefix); if bytes.get(after_separator..after_separator + 2) != Some(&b"::"[..]) { index += 1; continue; } let mut cursor = skip_whitespace(bytes, after_separator + 2); let mut segments = Vec::new(); loop { let Some((segment, after_segment)) = parse_identifier(bytes, cursor) else { break; }; segments.push(normalize_identifier(&segment)); let after_whitespace = skip_whitespace(bytes, after_segment); if bytes.get(after_whitespace..after_whitespace + 2) == Some(&b"::"[..]) { cursor = skip_whitespace(bytes, after_whitespace + 2); continue; } cursor = after_segment; break; } if segments.is_empty() { index += 1; continue; } let resolved_path = resolve_explicit_path(&prefix, &segments, current_module_path); if resolved_path .first() .is_some_and(|dependency| module_set.contains(dependency)) { paths.insert(resolved_path); } index = cursor.max(index + 1); } paths } fn resolve_explicit_path( prefix: &[String], segments: &[String], current_module_path: &[String], ) -> Vec { match prefix.first().map(String::as_str) { Some("crate") => segments.to_vec(), Some("self") => { let mut result = current_module_path.to_vec(); result.extend(segments.iter().cloned()); result } Some("super") => { let super_count = prefix.iter().filter(|segment| *segment == "super").count(); let mut result: Vec = current_module_path .iter() .take(current_module_path.len().saturating_sub(super_count)) .cloned() .collect(); result.extend(segments.iter().cloned()); result } _ => Vec::new(), } } fn current_sealed_owner_violations() -> Vec { let lib_source = fs::read_to_string(lib_path()).expect("src/lib.rs should be readable"); let top_level_modules = parse_top_level_modules(&lib_source); let module_set: HashSet = top_level_modules.iter().cloned().collect(); let child_modules = sealed_owner_child_modules(); let mut violations = BTreeSet::new(); for (relative_path, source) in 
production_source_files() { let current_module_path = module_path_for_file(&relative_path); let Some(current_root) = current_module_path.first() else { continue; }; for imported_path in collect_module_paths_from_source(&source, ¤t_module_path, &module_set) { if imported_path.len() < 2 { continue; } let owner_root = imported_path[0].as_str(); if owner_root == current_root { continue; } if sealed_owner_allows_importer(owner_root, &relative_path) { continue; } if sealed_owner_allows_import_path(owner_root, &imported_path) { continue; } if !violates_sealed_owner_boundary(owner_root, &imported_path, &child_modules) { continue; } violations.insert(SealedOwnerViolation { importer_file: relative_path.clone(), imported_path: imported_path.join("::"), }); } } violations.into_iter().collect() } fn violates_sealed_owner_boundary( owner_root: &str, imported_path: &[String], child_modules: &BTreeMap>, ) -> bool { if sealed_owner_root_facade_owners().contains(owner_root) { return true; } child_modules .get(owner_root) .is_some_and(|owner_child_modules| owner_child_modules.contains(&imported_path[1])) } fn sealed_owner_root_facade_owners() -> BTreeSet<&'static str> { ["api"].into_iter().collect() } fn sealed_owner_allows_importer(owner_root: &str, importer_file: &str) -> bool { (matches!(owner_root, "api") && importer_file == "lib.rs") || importer_file == "storage_bench.rs" } fn sealed_owner_allows_import_path(owner_root: &str, imported_path: &[String]) -> bool { owner_root == "transaction" && imported_path .get(1) .is_some_and(|segment| segment == "types") } fn render_grouped_sealed_owner_violations(violations: &[SealedOwnerViolation]) -> String { let mut grouped: BTreeMap<&str, BTreeMap<&str, Vec<&str>>> = BTreeMap::new(); for violation in violations { let owner_root = violation .imported_path .split("::") .next() .expect("imported path should include an owner root"); grouped .entry(owner_root) .or_default() .entry(violation.importer_file.as_str()) .or_default() .push(violation.imported_path.as_str()); } let mut rendered = String::new(); for (owner_root, files) in grouped { let _ = writeln!(&mut rendered, "{owner_root}:"); for (file, imported_paths) in files { let _ = writeln!(&mut rendered, " {file}:"); for imported_path in imported_paths { let _ = writeln!(&mut rendered, " - {imported_path}"); } } } rendered } fn render_grouped_import_path_violations(violations: &[ImportPathViolation]) -> String { let mut grouped: BTreeMap<&str, Vec<&str>> = BTreeMap::new(); for violation in violations { grouped .entry(violation.importer_file.as_str()) .or_default() .push(violation.imported_path.as_str()); } let mut rendered = String::new(); for (file, imported_paths) in grouped { let _ = writeln!(&mut rendered, "{file}:"); for imported_path in imported_paths { let _ = writeln!(&mut rendered, " - {imported_path}"); } } rendered } fn render_grouped_raw_sql_execution_violations(violations: &[RawSqlExecutionViolation]) -> String { let mut grouped: BTreeMap<&str, Vec<&str>> = BTreeMap::new(); for violation in violations { grouped .entry(violation.file.as_str()) .or_default() .push(violation.pattern); } let mut rendered = String::new(); for (file, patterns) in grouped { let _ = writeln!(&mut rendered, "{file}:"); for pattern in patterns { let _ = writeln!(&mut rendered, " - {pattern}"); } } rendered } fn render_grouped_raw_backend_type_violations(violations: &[RawBackendTypeViolation]) -> String { let mut grouped: BTreeMap<&str, Vec<&str>> = BTreeMap::new(); for violation in violations { grouped 
.entry(violation.file.as_str()) .or_default() .push(violation.type_name); } let mut rendered = String::new(); for (file, type_names) in grouped { let _ = writeln!(&mut rendered, "{file}:"); for type_name in type_names { let _ = writeln!(&mut rendered, " - {type_name}"); } } rendered } fn render_grouped_transaction_lifecycle_violations( violations: &[TransactionLifecycleViolation], ) -> String { let mut grouped: BTreeMap<&str, Vec<&str>> = BTreeMap::new(); for violation in violations { grouped .entry(violation.file.as_str()) .or_default() .push(violation.pattern); } let mut rendered = String::new(); for (file, patterns) in grouped { let _ = writeln!(&mut rendered, "{file}:"); for pattern in patterns { let _ = writeln!(&mut rendered, " - {pattern}"); } } rendered } fn render_grouped_sql_runtime_ownership_violations( violations: &[SqlRuntimeOwnershipViolation], ) -> String { let mut grouped: BTreeMap<&str, Vec<&str>> = BTreeMap::new(); for violation in violations { grouped .entry(violation.file.as_str()) .or_default() .push(violation.pattern); } let mut rendered = String::new(); for (file, patterns) in grouped { let _ = writeln!(&mut rendered, "{file}:"); for pattern in patterns { let _ = writeln!(&mut rendered, " - {pattern}"); } } rendered } fn top_level_module_set() -> HashSet { let lib_source = fs::read_to_string(lib_path()).expect("src/lib.rs should be readable"); parse_top_level_modules(&lib_source).into_iter().collect() } fn services_child_modules() -> BTreeSet { let Some(relative_path) = root_module_entry_relative_path("services") else { return BTreeSet::new(); }; let source = read_engine_source(&relative_path); parse_declared_modules(&strip_test_code(&source)) .into_iter() .collect() } fn current_services_direct_child_import_violations() -> Vec { let module_set = top_level_module_set(); let service_children = services_child_modules(); let mut violations = BTreeSet::new(); for (relative_path, source) in production_source_files() { let current_module_path = module_path_for_file(&relative_path); if current_module_path .first() .is_some_and(|root| root == "services") { continue; } for imported_path in collect_module_paths_from_source(&source, ¤t_module_path, &module_set) { if imported_path.first().is_none_or(|root| root != "services") { continue; } let imported_child = imported_path.get(1); let imports_declared_service_child = imported_child.is_some_and(|child| service_children.contains(child)); let stays_within_direct_child_surface = imported_path.len() <= 3; if imports_declared_service_child && stays_within_direct_child_surface { continue; } violations.insert(ImportPathViolation { importer_file: relative_path.clone(), imported_path: imported_path.join("::"), }); } } violations.into_iter().collect() } fn current_services_external_dependency_violations() -> Vec { let module_set = top_level_module_set(); let mut violations = BTreeSet::new(); for (relative_path, source) in production_source_files() { let current_module_path = module_path_for_file(&relative_path); if current_module_path .first() .is_none_or(|root| root != "services") { continue; } for imported_path in collect_module_paths_from_source(&source, ¤t_module_path, &module_set) { let Some(imported_root) = imported_path.first() else { continue; }; if imported_root == "services" { continue; } if ALLOWED_SERVICE_FOUNDATION_ROOTS.contains(&imported_root.as_str()) { continue; } violations.insert(ImportPathViolation { importer_file: relative_path.clone(), imported_path: imported_path.join("::"), }); } } violations.into_iter().collect() 
} fn current_services_sibling_dependency_violations() -> Vec { let module_set = top_level_module_set(); let service_children = services_child_modules(); let mut violations = BTreeSet::new(); for (relative_path, source) in production_source_files() { let current_module_path = module_path_for_file(&relative_path); if current_module_path .first() .is_none_or(|root| root != "services") { continue; } let Some(current_child) = current_module_path.get(1) else { continue; }; for imported_path in collect_module_paths_from_source(&source, ¤t_module_path, &module_set) { if imported_path.first().is_none_or(|root| root != "services") { continue; } let Some(imported_child) = imported_path.get(1) else { continue; }; if !service_children.contains(imported_child) { continue; } if imported_child == current_child { continue; } violations.insert(ImportPathViolation { importer_file: relative_path.clone(), imported_path: imported_path.join("::"), }); } } violations.into_iter().collect() } fn is_engine_owned_persistence_path(relative_path: &str) -> bool { let in_scope_owner_root = relative_path.starts_with("live_state/") || relative_path.starts_with("canonical/") || relative_path.starts_with("binary_cas/") || relative_path.starts_with("session/version_ops/"); let is_allowed_adapter_surface = relative_path.ends_with("/store.rs") || relative_path.ends_with("/store_sql.rs") || relative_path.ends_with("/storage.rs"); in_scope_owner_root && !is_allowed_adapter_surface } fn current_engine_owned_persistence_raw_sql_execution_violations() -> Vec { let mut violations = BTreeSet::new(); for (relative_path, source) in production_source_files() { if !is_engine_owned_persistence_path(&relative_path) { continue; } let masked_source = mask_rust_source(&source); for pattern in [".execute("] { if masked_source.contains(pattern) { violations.insert(RawSqlExecutionViolation { file: relative_path.clone(), pattern, }); } } } violations.into_iter().collect() } fn contains_identifier(source: &str, identifier: &str) -> bool { let bytes = source.as_bytes(); let needle = identifier.as_bytes(); let mut index = 0usize; while index + needle.len() <= bytes.len() { if &bytes[index..index + needle.len()] != needle { index += 1; continue; } let boundary_before = index == 0 || !is_ident_continue(bytes[index - 1]); let boundary_after = index + needle.len() == bytes.len() || !is_ident_continue(bytes[index + needle.len()]); if boundary_before && boundary_after { return true; } index += 1; } false } fn current_engine_owned_persistence_raw_backend_type_violations() -> Vec { let mut violations = BTreeSet::new(); for (relative_path, source) in production_source_files() { if !is_engine_owned_persistence_path(&relative_path) { continue; } let masked_source = mask_rust_source(&source); for type_name in [ "Backend", "BackendReadTransaction", "BackendWriteTransaction", ] { if contains_identifier(&masked_source, type_name) { violations.insert(RawBackendTypeViolation { file: relative_path.clone(), type_name, }); } } } violations.into_iter().collect() } fn is_owner_persistence_root_path(relative_path: &str) -> bool { relative_path.starts_with("live_state/") || relative_path.starts_with("canonical/") || relative_path.starts_with("binary_cas/") } fn is_owner_sql_adapter_path(relative_path: &str) -> bool { relative_path.ends_with("/store_sql.rs") || relative_path.ends_with("/storage.rs") } fn current_owner_persistence_backend_root_dependency_violations() -> Vec { let module_set = top_level_module_set(); let mut violations = BTreeSet::new(); for (relative_path, 
source) in production_source_files() { if !is_owner_persistence_root_path(&relative_path) || is_owner_sql_adapter_path(&relative_path) { continue; } let masked_source = mask_rust_source(&source); if !contains_identifier(&masked_source, "Backend") && !contains_identifier(&masked_source, "BackendReadTransaction") && !contains_identifier(&masked_source, "BackendWriteTransaction") { continue; } let current_module_path = module_path_for_file(&relative_path); for imported_path in collect_module_paths_from_source(&source, ¤t_module_path, &module_set) { if imported_path.first().is_none_or(|root| root != "backend") { continue; } violations.insert(ImportPathViolation { importer_file: relative_path.clone(), imported_path: imported_path.join("::"), }); } } violations.into_iter().collect() } fn current_backend_import_outside_storage_violations() -> Vec { let module_set = top_level_module_set(); let mut violations = BTreeSet::new(); for (relative_path, source) in production_source_files() { if relative_path.starts_with("backend/") || relative_path.starts_with("storage/") { continue; } let current_module_path = module_path_for_file(&relative_path); for imported_path in collect_module_paths_from_source(&source, ¤t_module_path, &module_set) { if imported_path.first().is_none_or(|root| root != "backend") { continue; } violations.insert(ImportPathViolation { importer_file: relative_path.clone(), imported_path: imported_path.join("::"), }); } } violations.into_iter().collect() } fn current_store_sql_import_boundary_violations() -> Vec { let module_set = top_level_module_set(); let mut violations = BTreeSet::new(); for (relative_path, source) in production_source_files() { let current_module_path = module_path_for_file(&relative_path); let current_root = current_module_path.first().map(String::as_str); for imported_path in collect_module_paths_from_source(&source, ¤t_module_path, &module_set) { if imported_path .get(1) .is_none_or(|segment| segment != "store_sql") { continue; } let owner_root = imported_path.first().map(String::as_str); if current_root == owner_root { continue; } violations.insert(ImportPathViolation { importer_file: relative_path.clone(), imported_path: imported_path.join("::"), }); } } violations.into_iter().collect() } fn current_owner_persistence_transaction_lifecycle_violations() -> Vec { let mut violations = BTreeSet::new(); for (relative_path, source) in production_source_files() { if !is_owner_persistence_root_path(&relative_path) || is_owner_sql_adapter_path(&relative_path) { continue; } let masked_source = mask_rust_source(&source); for pattern in [ ".begin_read_transaction(", "begin_write_transaction(", ".commit().await", ".rollback().await", ] { if masked_source.contains(pattern) { violations.insert(TransactionLifecycleViolation { file: relative_path.clone(), pattern, }); } } } violations.into_iter().collect() } fn is_owner_local_storage_path(relative_path: &str) -> bool { relative_path.ends_with("/storage.rs") } fn is_allowed_raw_execute_boundary_path(relative_path: &str) -> bool { is_owner_local_storage_path(relative_path) || relative_path.starts_with("sql/") || relative_path.starts_with("execution/") || relative_path.starts_with("backend/") || relative_path == "transaction/backend.rs" || relative_path == "transaction/buffered_write_transaction.rs" || relative_path == "transaction/live_state_write_transaction.rs" } fn current_raw_execute_outside_owner_storage_or_public_sql_boundary_violations( ) -> Vec { let mut violations = BTreeSet::new(); for (relative_path, source) in 
production_source_files() { if is_allowed_raw_execute_boundary_path(&relative_path) { continue; } let masked_source = mask_rust_source(&source); for pattern in [ "backend.execute(", "transaction.execute(", "executor.execute(", "self.base.execute(", "self.backend.execute(", "self.backend_transaction.execute(", ] { if masked_source.contains(pattern) { violations.insert(RawSqlExecutionViolation { file: relative_path.clone(), pattern, }); } } } violations.into_iter().collect() } fn is_orchestration_runtime_path(relative_path: &str) -> bool { relative_path.starts_with("api/") || relative_path.starts_with("init/") || relative_path.starts_with("session/") || relative_path.starts_with("transaction/") } fn current_scattered_internal_metadata_crud_outside_owner_storage_violations( ) -> Vec { let mut violations = BTreeSet::new(); for (relative_path, source) in production_source_files() { if !is_orchestration_runtime_path(&relative_path) || is_owner_local_storage_path(&relative_path) { continue; } let masked_source = mask_rust_source(&source); for pattern in [ "SELECT value FROM lix_internal_workspace_metadata", "INSERT INTO lix_internal_workspace_metadata", "CREATE TABLE lix_internal_workspace_metadata", "FROM lix_internal_commit_idempotency", "INSERT INTO lix_internal_commit_idempotency", "CREATE TABLE IF NOT EXISTS lix_internal_commit_idempotency", "FROM lix_internal_undo_redo_operation", "INSERT INTO lix_internal_undo_redo_operation", "CREATE TABLE IF NOT EXISTS lix_internal_undo_redo_operation", ] { if masked_source.contains(pattern) { violations.insert(RawSqlExecutionViolation { file: relative_path.clone(), pattern, }); } } } violations.into_iter().collect() } fn current_owner_storage_public_sql_shaped_api_violations() -> Vec { let mut violations = BTreeSet::new(); for (relative_path, source) in production_source_files() { if !is_owner_local_storage_path(&relative_path) { continue; } let masked_source = mask_rust_source(&source); for pattern in [ "pub(crate) async fn execute_query_with_", "pub(crate) async fn execute_ddl_batch_with_", "pub(crate) async fn add_column_if_missing_with_", "pub(crate) async fn begin_write_transaction", "pub(crate) fn executor_from_transaction", ] { if masked_source.contains(pattern) { violations.insert(RawSqlExecutionViolation { file: relative_path.clone(), pattern, }); } } } violations.into_iter().collect() } fn current_shared_persistence_root_files() -> Vec { production_source_files() .into_iter() .filter_map(|(relative_path, _)| { relative_path .starts_with("persistence/") .then_some(relative_path) }) .collect() } fn is_sql2_runtime_owner_path(relative_path: &str) -> bool { relative_path == "sql2/runtime.rs" } fn current_sql2_datafusion_physical_execution_owner_violations() -> Vec { let mut violations = BTreeSet::new(); for (relative_path, source) in production_source_files() { if !relative_path.starts_with("sql2/") || is_sql2_runtime_owner_path(&relative_path) { continue; } let stripped = strip_test_code(&source); let masked_source = mask_rust_source(&stripped); for pattern in [ ".collect().await", ".create_physical_plan().await", ".execute(partition,", "execute_input_stream(", ] { if masked_source.contains(pattern) { violations.insert(SqlRuntimeOwnershipViolation { file: relative_path.clone(), pattern, }); } } } violations.into_iter().collect() } fn current_sql2_data_sink_exec_violations() -> Vec { let mut violations = BTreeSet::new(); for (relative_path, source) in production_source_files() { if !relative_path.starts_with("sql2/") { continue; } let stripped = 
strip_test_code(&source); let masked_source = mask_rust_source(&stripped); for pattern in ["DataSinkExec", "DataSinkExec::new("] { if masked_source.contains(pattern) { violations.insert(SqlRuntimeOwnershipViolation { file: relative_path.clone(), pattern, }); } } } violations.into_iter().collect() } fn current_schema_catalog_dependency_violations() -> Vec { let module_set = top_level_module_set(); let mut violations = BTreeSet::new(); for (relative_path, source) in production_source_files() { if !relative_path.starts_with("schema/") { continue; } let current_module_path = module_path_for_file(&relative_path); for imported_path in collect_module_paths_from_source(&source, ¤t_module_path, &module_set) { if imported_path .first() .is_some_and(|root| root == "schema_catalog") { violations.insert(ImportPathViolation { importer_file: relative_path.clone(), imported_path: imported_path.join("::"), }); } } } violations.into_iter().collect() } fn current_schema_invalid_param_violations() -> Vec { let mut violations = BTreeSet::new(); for (relative_path, source) in production_source_files() { if !relative_path.starts_with("schema/") { continue; } let masked_source = mask_rust_source(&source); for pattern in ["CODE_INVALID_PARAM", "LIX_INVALID_PARAM"] { if masked_source.contains(pattern) { violations.insert(RawSqlExecutionViolation { file: relative_path.clone(), pattern, }); } } } violations.into_iter().collect() } #[test] fn sealed_owner_violations_are_empty() { let violations = current_sealed_owner_violations(); assert!( violations.is_empty(), "sealed-owner violations are present.\n\nCurrent violations:\n{}", render_grouped_sealed_owner_violations(&violations), ); } #[test] fn forbidden_dependency_rules_have_no_current_violations() { let graph = analyze_engine_dependency_graph(); let graph_modules = module_set(&graph); for module in TARGET_CORE_MODULES { assert!( graph_modules.contains(*module), "target core graph should include `{module}`", ); } let forbidden_lookup = forbidden_dependency_lookup(); let violations = actual_architecture_violations(&graph, &forbidden_lookup); assert!( violations.is_empty(), "forbidden owner-root dependencies are present.\n\nTarget core graph:\n{}\nCurrent violations:\n{}", render_target_core_graph(&target_core_graph(&graph)), render_forbidden_dependency_violations(&violations, &forbidden_lookup), ); } #[test] fn target_core_owner_graph_has_no_cycles() { let graph = analyze_engine_dependency_graph(); let core_graph = target_core_graph(&graph); let cycles = owner_root_cycles(&core_graph); assert!( cycles.is_empty(), "target core owner-root graph has cycles.\n\nTarget core graph:\n{}\nCycles:\n{}", render_target_core_graph(&core_graph), render_owner_root_cycles(&cycles), ); } #[test] fn schema_domain_does_not_depend_on_schema_catalog() { let violations = current_schema_catalog_dependency_violations(); assert!( violations.is_empty(), "`schema/*` owns schema-document semantics and must not depend on `schema_catalog/*`; transaction/public boundary adapters should compose the two domains.\n\nCurrent violations:\n{}", render_grouped_import_path_violations(&violations), ); } #[test] fn schema_domain_does_not_emit_public_invalid_param() { let violations = current_schema_invalid_param_violations(); assert!( violations.is_empty(), "`schema/*` must return schema-domain errors only. 
Public `INVALID_PARAM` classification belongs at transaction/API/SQL public boundaries.\n\nCurrent violations:\n{}", render_grouped_raw_sql_execution_violations(&violations), ); } // `services` intentionally does not get a giant root facade. Outside code may // depend on `services::child::*`, but not on deeper implementation paths. #[test] fn services_imports_are_limited_to_direct_child_namespaces() { if !top_level_module_set().contains("services") { return; } let violations = current_services_direct_child_import_violations(); assert!( violations.is_empty(), "outside `services/*`, imports into `services` must target a direct child capability namespace only.\n\nCurrent violations:\n{}", render_grouped_import_path_violations(&violations), ); } // Leaf `services/*` modules are standalone capabilities. They may depend on // neutral foundations like `common`, but not on engine // composition, semantic owners, or other top-level roots. #[test] fn services_has_no_external_root_dependencies() { if !top_level_module_set().contains("services") { return; } let violations = current_services_external_dependency_violations(); assert!( violations.is_empty(), "`services/*` leaf modules may only import neutral foundation roots (`common`) outside `services`.\n\nCurrent violations:\n{}", render_grouped_import_path_violations(&violations), ); } // Direct child `services/*` modules are also standalone relative to each // other. If two services need shared pieces, that code should move to neutral // ground or the capabilities should be merged. #[test] fn services_direct_children_do_not_import_sibling_services() { if !top_level_module_set().contains("services") { return; } let violations = current_services_sibling_dependency_violations(); assert!( violations.is_empty(), "direct child `services/*` modules must not import sibling service namespaces.\n\nCurrent violations:\n{}", render_grouped_import_path_violations(&violations), ); } // Engine-owned persistence modules should execute through owner-local adapters // rather than calling raw backend SQL directly. #[test] fn engine_owned_persistence_modules_do_not_execute_raw_sql_directly() { let violations = current_engine_owned_persistence_raw_sql_execution_violations(); assert!( violations.is_empty(), "engine-owned persistence modules must not execute raw SQL directly outside owner-local adapter files.\n\nCurrent violations:\n{}", render_grouped_raw_sql_execution_violations(&violations), ); } // Engine-owned persistence modules should depend on owner-local store // interfaces rather than raw backend handle types. #[test] fn engine_owned_persistence_modules_do_not_import_raw_backend_types() { let violations = current_engine_owned_persistence_raw_backend_type_violations(); assert!( violations.is_empty(), "engine-owned persistence modules must not depend on raw backend types outside owner-local adapter files.\n\nCurrent violations:\n{}", render_grouped_raw_backend_type_violations(&violations), ); } // Owner persistence code should speak in owner-local store terms, not import // lower `backend/*` helpers directly outside SQL adapter files. 
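// For example (the file path and item name below are illustrative only, not
// taken from the crate): a persistence unit such as
//
//     // live_state/materialize.rs
//     use crate::backend::types::BackendWriteTransaction;
//
// is what the test below reports; the fix is to route the work through an
// owner-local store seam (e.g. a `live_state/store.rs` interface, one of the
// adapter surfaces this suite exempts) instead of holding raw backend handles
// in persistence logic.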
#[test] fn owner_persistence_modules_do_not_depend_on_backend_root_outside_sql_adapters() { let violations = current_owner_persistence_backend_root_dependency_violations(); assert!( violations.is_empty(), "owner persistence modules must not depend on `backend/*` outside owner-local SQL adapter files.\n\nCurrent violations:\n{}", render_grouped_import_path_violations(&violations), ); } #[test] fn backend_imports_are_limited_to_storage_boundary() { let violations = current_backend_import_outside_storage_violations(); assert!( violations.is_empty(), "`backend/*` may only be imported by `storage/*`; other engine modules must depend on storage-facing APIs.\n\nCurrent violations:\n{}", render_grouped_import_path_violations(&violations), ); } // SQL-backed store adapters are owner internals. Other roots may import the // owner-facing store interfaces, but not the `store_sql` implementations. #[test] fn store_sql_modules_are_not_imported_outside_their_owning_root() { let violations = current_store_sql_import_boundary_violations(); assert!( violations.is_empty(), "`store_sql` modules must not be imported outside their owning root.\n\nCurrent violations:\n{}", render_grouped_import_path_violations(&violations), ); } // Owner persistence modules may perform work inside a caller-owned transaction, // but must not decide when transactions begin or end. Transaction lifecycle // policy belongs to session/runtime, while owner-local SQL adapters may still // contain low-level backend transaction calls during the MVP. #[test] fn owner_persistence_modules_do_not_own_transaction_lifecycle() { let violations = current_owner_persistence_transaction_lifecycle_violations(); assert!( violations.is_empty(), "owner persistence modules must not begin, commit, or roll back transactions outside owner-local SQL adapter files.\n\nCurrent violations:\n{}", render_grouped_transaction_lifecycle_violations(&violations), ); } #[test] fn raw_backend_execute_is_only_used_in_owner_storage_or_public_sql_layers() { let violations = current_raw_execute_outside_owner_storage_or_public_sql_boundary_violations(); assert!( violations.is_empty(), "raw backend / transaction SQL execution may only appear in owner-local `storage.rs`, `sql/*`, `execution/*`, or backend glue.\n\nCurrent violations:\n{}", render_grouped_raw_sql_execution_violations(&violations), ); } #[test] fn internal_metadata_crud_is_centralized_in_owner_storage() { let violations = current_scattered_internal_metadata_crud_outside_owner_storage_violations(); assert!( violations.is_empty(), "internal metadata CRUD for workspace selectors, commit idempotency, and undo/redo log should live in owner-local `storage.rs` seams, not scattered through `api/*`, `init/*`, `session/*`, or `transaction/*`.\n\nCurrent violations:\n{}", render_grouped_raw_sql_execution_violations(&violations), ); } #[test] fn owner_storage_modules_do_not_expose_public_sql_shaped_helpers() { let violations = current_owner_storage_public_sql_shaped_api_violations(); assert!( violations.is_empty(), "owner-local `storage.rs` seams should expose operation-shaped APIs rather than public SQL-shaped helpers.\n\nCurrent violations:\n{}", render_grouped_raw_sql_execution_violations(&violations), ); } #[test] fn sql2_physical_execution_is_owned_by_runtime_module() { let violations = current_sql2_datafusion_physical_execution_owner_violations(); assert!( violations.is_empty(), "DataFusion physical execution must be centralized in `sql2/runtime.rs`; read/write SQL paths should not collect DataFrames or execute 
physical plans through side doors.\n\nCurrent violations:\n{}", render_grouped_sql_runtime_ownership_violations(&violations), ); } #[test] fn sql2_write_providers_do_not_delegate_dml_execution_to_datafusion_sinks() { let violations = current_sql2_data_sink_exec_violations(); assert!( violations.is_empty(), "SQL2 write providers must not use DataFusion `DataSinkExec`; DML source batches should be collected through the SQL runtime and staged by transaction-owned write code.\n\nCurrent violations:\n{}", render_grouped_sql_runtime_ownership_violations(&violations), ); } #[test] fn sql2_public_boundary_does_not_reintroduce_stringly_validation() { let mut violations = Vec::new(); for (relative_path, source) in production_source_files() { if !relative_path.starts_with("sql2/") { continue; } let stripped = strip_test_code(&source); let masked_source = mask_rust_source(&stripped); for pattern in [ "PublicPredicateSpec {", "public_input::expect_text_column(\"", "public_input::expect_bool_column(\"", "public_input::expect_json_object_metadata(\"", "public_input::expect_json_text(\"", "public_input::expect_file_path_public(\"", "public_input::expect_directory_path_public(\"", "public_input::expect_entity_identity_public(\"", "public_input::expect_non_blob_public_id(\"", "require_write(\"", "routed_surface(", "operation: &str", "table: &str", ] { if masked_source.contains(pattern) { violations.push(format!("{relative_path}: {pattern}")); } } } assert!( violations.is_empty(), "SQL2 public boundary validation must flow through typed PublicBoundaryContext/PublicSurface helpers, not raw operation/table strings.\n\nCurrent violations:\n{}", violations.join("\n"), ); } #[test] fn sql2_read_session_does_not_register_write_surfaces() { let relative = "sql2/session.rs"; let source = read_engine_source(relative); let read_session = source_between( relative, &source, "pub(crate) async fn build_read_session", "pub(crate) async fn build_write_session", ); assert_source_contains_all( relative, read_session, &[ "register_lix_state_providers", "register_lix_version_provider", "register_lix_change_provider", "register_history_providers", "register_lix_file_history_provider", "register_lix_directory_history_provider", "register_lix_directory_providers", "register_lix_file_providers", "register_entity_providers", ], ); assert_source_contains_none( relative, read_session, &[ "SqlWriteContext::new", "register_lix_state_write_providers", "register_lix_version_write_provider", "register_lix_directory_write_providers", "register_lix_file_write_providers", "register_entity_write_providers", ], ); } #[test] fn sql2_write_session_does_not_register_history_or_committed_read_surfaces() { let relative = "sql2/session.rs"; let source = read_engine_source(relative); let write_session = source_between( relative, &source, "pub(crate) async fn build_write_session", "fn new_sql_session_context", ); assert_source_contains_all( relative, write_session, &[ "SqlWriteContext::new", "register_lix_state_write_providers", "register_lix_version_write_provider", "register_lix_directory_write_providers", "register_lix_file_write_providers", "register_entity_write_providers", ], ); assert_source_contains_none( relative, write_session, &[ "ctx.commit_store_query_source", "ctx.commit_graph", "ctx.live_state()", "ctx.version_ref()", "register_lix_state_providers", "register_lix_version_provider", "register_lix_change_provider", "register_history_providers", "register_lix_file_history_provider", "register_lix_directory_history_provider", 
"register_lix_directory_providers", "register_lix_file_providers", "register_entity_providers", ], ); } #[test] fn sql2_session_context_keeps_wasm_safe_physical_plan_defaults() { let relative = "sql2/session.rs"; let source = read_engine_source(relative); let session_context = source_between(relative, &source, "fn new_sql_session_context", "\n}"); assert_source_contains_all( relative, session_context, &[ ".with_target_partitions(1)", "\"datafusion.optimizer.repartition_aggregations\", false", "\"datafusion.optimizer.repartition_joins\", false", "\"datafusion.optimizer.repartition_sorts\", false", "\"datafusion.optimizer.repartition_windows\", false", "\"datafusion.optimizer.repartition_file_scans\", false", "\"datafusion.optimizer.enable_round_robin_repartition\", false", ], ); } #[test] fn shared_persistence_root_is_empty_or_absent() { let remaining_files = current_shared_persistence_root_files(); assert!( remaining_files.is_empty(), "the shared `persistence/*` root is transitional and should become empty or disappear as owner-local `storage.rs` seams take over.\n\nCurrent files:\n{}", remaining_files .into_iter() .map(|file| format!("- {file}")) .collect::>() .join("\n"), ); } ================================================ FILE: packages/engine/tests/commit_graph.rs ================================================ #[macro_use] #[path = "support/mod.rs"] mod support; use lix_engine::Value; simulation_test!( version_ref_advances_after_tracked_commit, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let initial_head = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("version head should load") .expect("version head should exist"); session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('version-ref-advance', 'one')", &[], ) .await .expect("tracked write should succeed"); let advanced_head = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("version head should load") .expect("version head should exist"); assert_ne!( advanced_head, initial_head, "tracked commit should advance the touched version ref" ); } ); simulation_test!( tracked_write_creates_one_commit_without_advancing_global_ref, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let global_session = sim.wrap_session( engine .open_session("global") .await .expect("global session should open"), &engine, ); let global_head_before = engine .load_version_head_commit_id("global") .await .expect("global head should load") .expect("global head should exist"); let main_head_before = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load") .expect("main head should exist"); session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('one-commit-model', 'ok')", &[], ) .await .expect("tracked write should succeed"); let global_head_after = engine .load_version_head_commit_id("global") .await .expect("global head should load") .expect("global head should exist"); let main_head_after = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("main head should load") .expect("main head should exist"); assert_eq!( global_head_after, global_head_before, "non-global writes must not advance the global version ref" ); assert_ne!( main_head_after, main_head_before, "tracked 
write should advance exactly the touched version ref" ); assert_eq!( commit_ids(&global_session, &main_head_after).await, vec![main_head_after.clone()], "the touched-version commit should still be globally visible through lix_state" ); } ); simulation_test!( second_commit_parents_previous_version_head, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let global_session = sim.wrap_session( engine .open_session("global") .await .expect("global session should open"), &engine, ); session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('commit-parent', 'one')", &[], ) .await .expect("first tracked write should succeed"); let first_head = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("version head should load") .expect("version head should exist"); session .execute( "UPDATE lix_key_value SET value = 'two' WHERE key = 'commit-parent'", &[], ) .await .expect("second tracked write should succeed"); let second_head = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("version head should load") .expect("version head should exist"); assert_ne!(second_head, first_head); assert_eq!( commit_parent_ids(&global_session, &second_head).await, vec![first_head], "second commit should parent to the previous version head" ); } ); async fn commit_parent_ids( session: &crate::support::simulation_test::engine::SimSession, commit_id: &str, ) -> Vec<String> { let result = session .execute( &format!( "SELECT parent_id \ FROM lix_commit_edge \ WHERE child_id = '{commit_id}' \ ORDER BY parent_id" ), &[], ) .await .expect("commit edge rows should read"); result .rows() .iter() .map(|row| match &row.values()[0] { Value::Text(parent_id) => parent_id.clone(), value => panic!("expected parent_id string, got {value:?}"), }) .collect() } async fn commit_ids( session: &crate::support::simulation_test::engine::SimSession, commit_id: &str, ) -> Vec<String> { let result = session .execute( &format!("SELECT id FROM lix_commit WHERE id = '{commit_id}'"), &[], ) .await .expect("commit rows should read"); result .rows() .iter() .map(|row| match &row.values()[0] { Value::Text(commit_id) => commit_id.clone(), value => panic!("expected commit id string, got {value:?}"), }) .collect() } ================================================ FILE: packages/engine/tests/engine.rs ================================================ #[path = "support/mod.rs"] mod support; use lix_engine::ExecuteResult; use lix_engine::{CreateVersionOptions, Engine, MergeVersionOptions, SwitchVersionOptions, Value}; use serde_json::json; simulation_test!(engine_new_rejects_uninitialized_backend, |sim| async move { match Engine::new(sim.uninitialized_backend()).await { Ok(_) => panic!("uninitialized backend should not create an engine"), Err(error) => assert_eq!(error.code, "LIX_ERROR_NOT_INITIALIZED"), } }); simulation_test!( engine_initialize_seeds_repository_bootstrap_state, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_session("global") .await .expect("initialized backend should open global session"), &engine, ); let main_session = sim.wrap_session( engine .open_workspace_session() .await .expect("initialized backend should open main session"), &engine, ); let version_result = session .execute( "SELECT entity_id, snapshot_content \ FROM lix_state \ WHERE schema_key = 'lix_version_descriptor' \ ORDER BY entity_id", &[], ) .await
.expect("version descriptors should be readable"); let version_rows = version_result; assert_eq!(version_rows.len(), 2); let version_values = version_rows .rows() .iter() .map(|row| row.values().to_vec()) .collect::>(); assert!(version_values.contains(&vec![ Value::Json(json!(["global"])), Value::Json(json!({"hidden": true, "id": "global", "name": "global"})), ])); assert!(version_values.contains(&vec![ Value::Json(json!([sim.main_version_id()])), Value::Json(json!({"hidden": false, "id": sim.main_version_id(), "name": "main"})), ])); let lix_id_result = session .execute("SELECT value FROM lix_key_value WHERE key = 'lix_id'", &[]) .await .expect("lix_id key value should be readable"); assert_single_json(lix_id_result, &format!("\"{}\"", sim.lix_id())); let refs_result = session .execute( "SELECT entity_id, snapshot_content, untracked \ FROM lix_state \ WHERE schema_key = 'lix_version_ref' \ ORDER BY entity_id", &[], ) .await .expect("version refs should be readable"); let ref_rows = refs_result; assert_eq!(ref_rows.len(), 2); let ref_values = ref_rows .rows() .iter() .map(|row| row.values().to_vec()) .collect::>(); assert!(ref_values.contains(&vec![ Value::Json(json!(["global"])), Value::Json(json!({"commit_id": sim.initial_commit_id(), "id": "global"})), Value::Boolean(true), ])); assert!(ref_values.contains(&vec![ Value::Json(json!([sim.main_version_id()])), Value::Json(json!({"commit_id": sim.initial_commit_id(), "id": sim.main_version_id()})), Value::Boolean(true), ])); drop(main_session); drop(session); drop(engine); } ); simulation_test!( session_execute_inserts_key_value_then_reads_it_back, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("backend should open a session"), &engine, ); let uuid_result = session .execute("SELECT lix_uuid_v7()", &[]) .await .expect("session should expose lix_uuid_v7 UDF"); let uuid_rows = uuid_result; assert_eq!(uuid_rows.len(), 1); let Value::Text(uuid) = &uuid_rows.rows()[0].values()[0] else { panic!("lix_uuid_v7 should return text"); }; assert!( !uuid.is_empty(), "lix_uuid_v7 should return a non-empty UUID" ); let insert_result = session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('sql2-key', 'sql2-value')", &[], ) .await .expect("session insert should succeed"); assert_eq!(insert_result, ExecuteResult::from_rows_affected(1)); let result = session .execute( "SELECT key, value FROM lix_key_value WHERE key = 'sql2-key'", &[], ) .await .expect("session read should succeed"); let row_set = result; assert_eq!(row_set.len(), 1); assert_eq!( row_set.rows()[0].values(), &[ Value::Text("sql2-key".to_string()), Value::Json(json!("sql2-value")), ] ); } ); simulation_test!( failed_write_validation_does_not_poison_session_transaction, |sim| async move { let engine = sim.boot_engine().await; let session = engine .open_workspace_session() .await .expect("backend should open a session"); register_poison_task_schema(&session).await; let error = session .execute( "INSERT INTO poison_task (id, title) VALUES ('bad-task', 'missing meta')", &[], ) .await .expect_err("schema validation should reject missing required field"); assert_eq!(error.code, "LIX_ERROR_SCHEMA_VALIDATION"); assert_single_integer( session .execute("SELECT 1 AS ok", &[]) .await .expect("read after failed write should succeed"), 1, ); let insert_result = session .execute( "INSERT INTO poison_task (id, title, meta) \ VALUES ('good-task', 'valid', lix_json('{\"priority\":\"high\"}'))", &[], ) .await 
.expect("valid write after failed write should succeed"); assert_eq!(insert_result, ExecuteResult::from_rows_affected(1)); } ); simulation_test!( session_close_is_idempotent_and_rejects_later_operations, |sim| async move { let engine = sim.boot_engine().await; let session = engine .open_workspace_session() .await .expect("backend should open a session"); session.close().await.expect("first close should succeed"); session.close().await.expect("second close should succeed"); assert!(session.is_closed()); assert_closed( session .execute("SELECT value FROM lix_key_value WHERE key = 'lix_id'", &[]) .await .expect_err("execute after close should fail"), ); assert_closed( session .active_version_id() .await .expect_err("active_version_id after close should fail"), ); assert_closed( session .create_version(CreateVersionOptions { id: Some("closed-version".to_string()), name: "Closed".to_string(), from_commit_id: None, }) .await .expect_err("create_version after close should fail"), ); match session .switch_version(SwitchVersionOptions { version_id: sim.main_version_id().to_string(), }) .await { Ok(_) => panic!("switch_version after close should fail"), Err(error) => assert_closed(error), } assert_closed( session .merge_version(MergeVersionOptions { source_version_id: sim.main_version_id().to_string(), }) .await .expect_err("merge_version after close should fail"), ); } ); async fn register_poison_task_schema(session: &lix_engine::SessionContext) { let schema = json!({ "$schema": "https://json-schema.org/draft/2020-12/schema", "x-lix-key": "poison_task", "x-lix-primary-key": ["/id"], "type": "object", "required": ["id", "title", "meta"], "properties": { "id": { "type": "string" }, "title": { "type": "string" }, "meta": { "type": "object" } }, "additionalProperties": false }); session .execute( "INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))", &[Value::Text(schema.to_string())], ) .await .expect("schema registration should succeed"); } simulation_test!( session_close_state_is_shared_with_switched_session, |sim| async move { let engine = sim.boot_engine().await; let session = engine .open_workspace_session() .await .expect("backend should open a session"); let (switched_session, _) = session .switch_version(SwitchVersionOptions { version_id: sim.main_version_id().to_string(), }) .await .expect("switch_version should succeed before close"); session.close().await.expect("close should succeed"); assert_closed( switched_session .active_version_id() .await .expect_err("derived session should observe closed state"), ); } ); simulation_test!( session_execute_persists_deterministic_function_sequence_across_sessions, options = support::simulation_test::engine::SimulationOptions { deterministic: false, }, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("backend should open first session"), &engine, ); let mode_result = session .execute( "INSERT INTO lix_key_value (key, value, lixcol_global, lixcol_untracked) \ VALUES ('lix_deterministic_mode', \ lix_json('{\"enabled\":true}'), true, true)", &[], ) .await .expect("deterministic mode insert should succeed"); assert_eq!(mode_result, ExecuteResult::from_rows_affected(1)); assert_single_text( session .execute("SELECT lix_uuid_v7()", &[]) .await .expect("first deterministic uuid should succeed"), "01920000-0000-7000-8000-000000000000", ); assert_single_text( session .execute("SELECT lix_uuid_v7()", &[]) .await .expect("second deterministic uuid should succeed"), 
"01920000-0000-7000-8000-000000000001", ); let second_session = sim.wrap_session( engine .open_workspace_session() .await .expect("backend should open second session"), &engine, ); assert_single_text( second_session .execute("SELECT lix_uuid_v7()", &[]) .await .expect("third deterministic uuid should succeed"), "01920000-0000-7000-8000-000000000002", ); let write_result = second_session .execute( "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, global, untracked\ ) VALUES (\ lix_json('[\"det-write\"]'), 'lix_key_value', NULL, lix_json('{\"key\":\"det-write\",\"value\":\"ok\"}'), false, false\ )", &[], ) .await .expect("deterministic write should succeed"); assert_eq!(write_result, ExecuteResult::from_rows_affected(1)); assert_single_text( second_session .execute("SELECT lix_uuid_v7()", &[]) .await .expect("uuid after deterministic write should continue"), // The tracked write consumes deterministic values for row // metadata and commit metadata. "01920000-0000-7000-8000-000000000008", ); } ); simulation_test!( session_execute_does_not_persist_deterministic_sequence_after_failed_statement, options = support::simulation_test::engine::SimulationOptions { deterministic: false, }, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("backend should open a session"), &engine, ); let mode_result = session .execute( "INSERT INTO lix_key_value (key, value, lixcol_global, lixcol_untracked) \ VALUES ('lix_deterministic_mode', \ lix_json('{\"enabled\":true}'), true, true)", &[], ) .await .expect("deterministic mode insert should succeed"); assert_eq!(mode_result, ExecuteResult::from_rows_affected(1)); let failed_read = session .execute("SELECT lix_uuid_v7() FROM missing_engine_table", &[]) .await; assert!( failed_read.is_err(), "missing table query should fail before persisting deterministic sequence" ); assert_single_text( session .execute("SELECT lix_uuid_v7()", &[]) .await .expect("first deterministic uuid should still start at zero"), "01920000-0000-7000-8000-000000000000", ); let failed_write = session .execute( "INSERT INTO missing_engine_table VALUES (lix_uuid_v7())", &[], ) .await; assert!( failed_write.is_err(), "failed write should not persist deterministic sequence" ); assert_single_text( session .execute("SELECT lix_uuid_v7()", &[]) .await .expect("second deterministic uuid should continue after last success"), "01920000-0000-7000-8000-000000000001", ); } ); fn assert_single_text(result: ExecuteResult, expected: &str) { let row_set = result; assert_eq!(row_set.len(), 1); assert_eq!( row_set.rows()[0].values(), &[Value::Text(expected.to_string())] ); } fn assert_single_integer(result: ExecuteResult, expected: i64) { let row_set = result; assert_eq!(row_set.len(), 1); assert_eq!(row_set.rows()[0].values(), &[Value::Integer(expected)]); } fn assert_single_json(result: ExecuteResult, expected: &str) { let row_set = result; assert_eq!(row_set.len(), 1); let expected_json = serde_json::from_str::(expected) .expect("expected JSON value should parse"); assert_eq!(row_set.rows()[0].values(), &[Value::Json(expected_json)]); } fn assert_closed(error: lix_engine::LixError) { assert_eq!(error.code, lix_engine::LixError::CODE_CLOSED); assert_eq!(error.message, "Lix handle is closed"); assert_eq!( error.hint.as_deref(), Some("Open a new Lix handle before calling this method.") ); } ================================================ FILE: packages/engine/tests/json_pointer_crud_storage.rs 
================================================ #![cfg(feature = "storage-benches")] use std::fs; use std::path::Path; use lix_engine::{ CreateVersionOptions, Engine, MergeVersionOptions, MergeVersionOutcome, SessionContext, SwitchVersionOptions, }; use rusqlite::{params, Connection}; use serde_json::Value as JsonValue; use tempfile::TempDir; #[path = "../benches/storage/rocksdb_backend.rs"] mod rocksdb_backend; #[path = "../benches/storage/sqlite_backend.rs"] mod sqlite_backend; use rocksdb_backend::RocksDbBenchBackend; use sqlite_backend::SqliteBenchBackend; const JSON_POINTER_SCHEMA_JSON: &str = include_str!("../../plugin-json-v2/schema/json_pointer.json"); const PNPM_LOCK_JSON: &str = include_str!("../benches/fixtures/pnpm-lock.fixture.json"); const ROW_COUNTS: [usize; 2] = [100, 1_000]; const CHUNK_SIZE: usize = 500; const CHANGE_ROW_DENOMINATOR: usize = 10; #[derive(Clone)] struct PointerRow { path: String, value_json: String, } #[tokio::test] #[ignore = "prints JSON pointer CRUD storage-size reference rows"] async fn json_pointer_crud_storage_accounting() { let rows = fixture_rows(); println!("| backend | rows | bytes on disk | bytes/row |"); println!("| ------- | ---: | ------------: | --------: |"); for row_count in ROW_COUNTS { let rows = &rows[..row_count]; print_storage_row("raw SQLite", row_count, raw_sqlite_storage_bytes(rows)); for row in lix_sqlite_storage_rows(rows).await { print_storage_workflow_row("Lix SQLite", row_count, &row); } for row in lix_rocksdb_storage_rows(rows).await { print_storage_workflow_row("Lix RocksDB", row_count, &row); } } } fn print_storage_row(backend: &str, rows: usize, bytes: u64) { println!( "| {backend} | {rows} | {bytes} | {:.1} |", bytes as f64 / rows as f64 ); } struct WorkflowStorageRow { workflow: &'static str, bytes: u64, } fn print_storage_workflow_row(backend: &str, rows: usize, row: &WorkflowStorageRow) { println!( "| {backend} / {} | {rows} | {} | {:.1} |", row.workflow, row.bytes, row.bytes as f64 / rows as f64 ); } fn raw_sqlite_storage_bytes(rows: &[PointerRow]) -> u64 { let dir = TempDir::new().expect("create raw sqlite storage tempdir"); let db_path = dir.path().join("json-pointer-crud.sqlite"); let conn = Connection::open(&db_path).expect("open raw sqlite storage db"); conn.execute_batch( " PRAGMA journal_mode = WAL; PRAGMA synchronous = NORMAL; PRAGMA temp_store = MEMORY; PRAGMA foreign_keys = ON; CREATE TABLE json_pointer ( path TEXT NOT NULL PRIMARY KEY, value TEXT NOT NULL ) WITHOUT ROWID; ", ) .expect("configure raw sqlite storage db"); { let tx = conn .unchecked_transaction() .expect("begin raw sqlite storage transaction"); { let mut statement = tx .prepare_cached("INSERT INTO json_pointer (path, value) VALUES (?1, ?2)") .expect("prepare raw sqlite storage insert"); for row in rows { statement .execute(params![row.path.as_str(), row.value_json.as_str()]) .expect("insert raw sqlite storage row"); } } tx.commit().expect("commit raw sqlite storage transaction"); } conn.execute_batch("PRAGMA wal_checkpoint(FULL)") .expect("checkpoint raw sqlite storage db"); directory_size(dir.path()) } fn changed_row_count(rows: usize) -> usize { (rows / CHANGE_ROW_DENOMINATOR).max(1) } async fn lix_sqlite_storage_rows(rows: &[PointerRow]) -> Vec<WorkflowStorageRow> { let backend = SqliteBenchBackend::tempfile().expect("create sqlite storage backend"); let dir = backend .path() .and_then(Path::parent) .expect("sqlite backend should expose tempfile parent") .to_path_buf(); let engine = initialize_engine(Box::new(backend.clone()), Box::new(backend)).await; let
session = prepare_session(&engine).await; lix_workflow_storage_rows(&session, rows, &dir).await } async fn lix_rocksdb_storage_rows(rows: &[PointerRow]) -> Vec<WorkflowStorageRow> { let backend = RocksDbBenchBackend::new().expect("create rocksdb storage backend"); let dir = backend.path().to_path_buf(); let engine = initialize_engine(Box::new(backend.clone()), Box::new(backend)).await; let session = prepare_session(&engine).await; lix_workflow_storage_rows(&session, rows, &dir).await } async fn lix_workflow_storage_rows( session: &SessionContext, rows: &[PointerRow], dir: &Path, ) -> Vec<WorkflowStorageRow> { let change_rows = changed_row_count(rows.len()); let main_id = session .active_version_id() .await .expect("load active storage main version id"); insert_lix_rows(session, rows).await; let mut storage_rows = vec![WorkflowStorageRow { workflow: "inserted", bytes: directory_size(dir), }]; create_lix_version(session, "bench-draft", "bench draft").await; storage_rows.push(WorkflowStorageRow { workflow: "after create_version", bytes: directory_size(dir), }); let (draft_session, _) = session .switch_version(SwitchVersionOptions { version_id: "bench-draft".to_string(), }) .await .expect("switch to storage draft version"); update_lix_rows_by_pk(&draft_session, &rows[..change_rows], "source").await; let (main_session, _) = draft_session .switch_version(SwitchVersionOptions { version_id: main_id.clone(), }) .await .expect("switch back to storage main version"); let receipt = main_session .merge_version(MergeVersionOptions { source_version_id: "bench-draft".to_string(), }) .await .expect("merge storage fast-forward draft"); assert_eq!(receipt.outcome, MergeVersionOutcome::FastForward); storage_rows.push(WorkflowStorageRow { workflow: "after fast-forward merge", bytes: directory_size(dir), }); create_lix_version(&main_session, "bench-divergent", "bench divergent").await; let (divergent_session, _) = main_session .switch_version(SwitchVersionOptions { version_id: "bench-divergent".to_string(), }) .await .expect("switch to divergent storage draft version"); update_lix_rows_by_pk(&divergent_session, &rows[..change_rows], "source-divergent").await; let (main_session, _) = divergent_session .switch_version(SwitchVersionOptions { version_id: main_id, }) .await .expect("switch back to storage main version after divergent edits"); update_lix_rows_by_pk( &main_session, &rows[change_rows..change_rows * 2], "target-divergent", ) .await; let receipt = main_session .merge_version(MergeVersionOptions { source_version_id: "bench-divergent".to_string(), }) .await .expect("merge storage divergent draft"); assert_eq!(receipt.outcome, MergeVersionOutcome::MergeCommitted); storage_rows.push(WorkflowStorageRow { workflow: "after divergent merge", bytes: directory_size(dir), }); storage_rows } async fn initialize_engine( initializer_backend: Box, engine_backend: Box, ) -> Engine { Engine::initialize(initializer_backend) .await .expect("initialize storage benchmark engine"); Engine::new(engine_backend) .await .expect("open storage benchmark engine") } async fn prepare_session(engine: &Engine) -> SessionContext { let session = engine .open_workspace_session() .await .expect("open json pointer storage workspace"); register_json_pointer_schema(&session).await; session } async fn register_json_pointer_schema(session: &SessionContext) { let sql = format!( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) VALUES (lix_json('{}'), false, false)", sql_string(JSON_POINTER_SCHEMA_JSON) ); let affected = session .execute(&sql, &[]) .await
.expect("register json_pointer storage schema") .rows_affected(); assert_eq!(affected, 1); } async fn insert_lix_rows(session: &SessionContext, rows: &[PointerRow]) { for chunk in rows.chunks(CHUNK_SIZE) { let mut sql = String::from("INSERT INTO json_pointer (path, value) VALUES "); for (index, row) in chunk.iter().enumerate() { if index > 0 { sql.push(','); } sql.push_str(&format!( "('{}', lix_json('{}'))", sql_string(row.path.as_str()), sql_string(row.value_json.as_str()) )); } let affected = session .execute(&sql, &[]) .await .expect("insert json_pointer storage rows") .rows_affected(); assert_eq!(affected as usize, chunk.len()); } } async fn create_lix_version(session: &SessionContext, id: &str, name: &str) { session .create_version(CreateVersionOptions { id: Some(id.to_string()), name: name.to_string(), from_commit_id: None, }) .await .expect("create json_pointer storage version"); } async fn update_lix_rows_by_pk(session: &SessionContext, rows: &[PointerRow], side: &str) { for row in rows { let value = serde_json::json!({ "updated": true, "side": side, "path": row.path, }) .to_string(); let sql = format!( "UPDATE json_pointer SET value = lix_json('{}') WHERE path = '{}'", sql_string(value.as_str()), sql_string(row.path.as_str()) ); let affected = session .execute(&sql, &[]) .await .expect("update json_pointer storage row by path") .rows_affected(); assert_eq!(affected, 1); } } fn fixture_rows() -> Vec { let root: JsonValue = serde_json::from_str(PNPM_LOCK_JSON).expect("pnpm lock JSON fixture"); let mut rows = Vec::new(); flatten_json("", &root, &mut rows); assert!(rows.len() >= 10_000); rows } fn flatten_json(path: &str, value: &JsonValue, rows: &mut Vec) { rows.push(PointerRow { path: path.to_string(), value_json: value.to_string(), }); match value { JsonValue::Array(items) => { for (index, item) in items.iter().enumerate() { flatten_json(&format!("{path}/{}", index), item, rows); } } JsonValue::Object(map) => { for (key, child) in map { flatten_json( &format!("{path}/{}", key.replace('~', "~0").replace('/', "~1")), child, rows, ); } } JsonValue::Null | JsonValue::Bool(_) | JsonValue::Number(_) | JsonValue::String(_) => {} } } fn directory_size(path: &Path) -> u64 { let metadata = fs::metadata(path).expect("read storage path metadata"); if metadata.is_file() { return metadata.len(); } let mut bytes = 0; for entry in fs::read_dir(path).expect("read storage directory") { let entry = entry.expect("read storage directory entry"); bytes += directory_size(&entry.path()); } bytes } fn sql_string(value: &str) -> String { value.replace('\'', "''") } ================================================ FILE: packages/engine/tests/sql/entity_history.rs ================================================ use lix_engine::Value; use serde_json::json; use super::assert_rows_eq; simulation_test!( entity_history_reads_typed_rows_from_commit_graph, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_history_schema\",\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"},\"count\":{\"type\":\"integer\"},\"active\":{\"type\":\"boolean\"},\"meta\":{\"type\":\"object\"}},\"required\":[\"id\",\"count\",\"active\",\"meta\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("registered schema insert should succeed"); 
session .execute( "INSERT INTO engine_history_schema \ (lixcol_entity_id, id, count, active, meta, lixcol_untracked) \ VALUES (lix_json('[\"history-entity\"]'), 'history-entity', 1, true, lix_json('{\"source\":\"insert\"}'), false)", &[], ) .await .expect("entity insert should succeed"); let first_commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("first head should load") .expect("first head should exist"); session .execute( "UPDATE engine_history_schema \ SET count = 2, active = false, meta = lix_json('{\"source\":\"update\"}') \ WHERE lixcol_entity_id = lix_json('[\"history-entity\"]')", &[], ) .await .expect("entity update should succeed"); let second_commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("second head should load") .expect("second head should exist"); assert_ne!(first_commit_id, second_commit_id); let result = session .execute( &format!( "SELECT id, count, active, meta, lixcol_entity_id, lixcol_observed_commit_id, lixcol_start_commit_id, lixcol_depth \ FROM engine_history_schema_history \ WHERE lixcol_start_commit_id = '{second_commit_id}' \ AND lixcol_entity_id = lix_json('[\"history-entity\"]') \ ORDER BY lixcol_depth" ), &[], ) .await .expect("entity history read should succeed"); assert_rows_eq( result, vec![ vec![ Value::Text("history-entity".to_string()), Value::Integer(2), Value::Boolean(false), Value::Json(json!({"source": "update"})), Value::Json(json!(["history-entity"])), Value::Text(second_commit_id.clone()), Value::Text(second_commit_id.clone()), Value::Integer(0), ], vec![ Value::Text("history-entity".to_string()), Value::Integer(1), Value::Boolean(true), Value::Json(json!({"source": "insert"})), Value::Json(json!(["history-entity"])), Value::Text(first_commit_id), Value::Text(second_commit_id), Value::Integer(1), ], ], ); } ); simulation_test!( entity_history_requires_lixcol_start_commit_id, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_history_error_schema\",\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"}},\"required\":[\"id\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("registered schema insert should succeed"); let error = session .execute("SELECT id FROM engine_history_error_schema_history", &[]) .await .expect_err("typed history queries must provide start commit"); assert_eq!( error.code, lix_engine::LixError::CODE_HISTORY_FILTER_REQUIRED ); assert!( error .to_string() .contains("requires a lixcol_start_commit_id filter"), "unexpected error: {error}" ); assert!( error .hint() .is_some_and(|hint| hint.contains("WHERE lixcol_start_commit_id")), "unexpected error: {error}" ); } ); simulation_test!( entity_history_rejects_bare_start_commit_id_filter, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_history_bare_error_schema\",\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"}},\"required\":[\"id\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("registered schema 
insert should succeed"); let error = session .execute( "SELECT id \ FROM engine_history_bare_error_schema_history \ WHERE start_commit_id = lix_active_version_commit_id()", &[], ) .await .expect_err("typed history should only expose lixcol_start_commit_id"); assert_eq!(error.code, lix_engine::LixError::CODE_COLUMN_NOT_FOUND); assert!( error.to_string().contains("start_commit_id"), "unexpected error: {error}" ); } ); ================================================ FILE: packages/engine/tests/sql/errors.rs ================================================ use lix_engine::LixError; use lix_engine::Value; simulation_test!(sql_missing_table_has_lix_error_code, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute("SELECT * FROM missing_table", &[]) .await .expect_err("missing table should fail"); assert_eq!(error.code, LixError::CODE_TABLE_NOT_FOUND); assert!(error.hint().is_some(), "expected discovery hint: {error}"); }); simulation_test!(sql_missing_column_has_lix_error_code, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute("SELECT missing_column FROM lix_file", &[]) .await .expect_err("missing column should fail"); assert_eq!(error.code, LixError::CODE_COLUMN_NOT_FOUND); }); simulation_test!( sql_duplicate_projection_name_is_parse_error, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute("SELECT 1 AS x, 2 AS x", &[]) .await .expect_err("duplicate projection names should fail during planning"); assert_eq!(error.code, LixError::CODE_PARSE_ERROR); } ); simulation_test!(sql_question_mark_placeholder_has_hint, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute("SELECT * FROM lix_file WHERE id = ?", &[]) .await .expect_err("question mark placeholders should fail"); assert_eq!(error.code, LixError::CODE_PARSE_ERROR); assert!( error.hint().is_some_and(|hint| hint.contains("$1")), "expected placeholder hint: {error}" ); }); simulation_test!(sql_json_function_miss_has_lix_udf_hint, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute("SELECT json_extract('{\"a\":1}', '$.a')", &[]) .await .expect_err("non-Lix JSON UDF should fail with a targeted hint"); assert_eq!(error.code, LixError::CODE_UDF_NOT_FOUND); assert!( error .hint() .is_some_and(|hint| hint.contains("lix_json_get")), "expected JSON UDF hint: {error}" ); }); simulation_test!( sql_json_arrow_operator_has_dialect_error, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute("SELECT lix_json('{\"a\":1}') ->> 'a'", &[]) .await .expect_err("Postgres JSON arrow operator should fail with a dialect error"); assert_eq!(error.code, LixError::CODE_DIALECT_UNSUPPORTED); assert!( error .hint() .is_some_and(|hint| 
hint.contains("lix_json_get_text")), "expected JSON dialect hint: {error}" ); } ); simulation_test!( sql_udf_argument_mismatch_is_public_invalid_param, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute("SELECT lix_uuid_v7('unexpected')", &[]) .await .expect_err("wrong UDF arity should fail as public invalid input"); assert_eq!(error.code, LixError::CODE_INVALID_PARAM); } ); simulation_test!( sql_non_utf8_blob_parameter_has_targeted_error, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute("SELECT length($1)", &[Value::Blob(vec![0xff])]) .await .expect_err("non-UTF-8 blob should fail as text"); assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH); assert!( error.message.contains("valid UTF-8 text"), "expected targeted UTF-8 message: {error}" ); assert!( error .hint() .is_some_and(|hint| hint.contains("blob") && !hint.contains("lix_json")), "expected blob-specific hint without JSON detour: {error}" ); } ); simulation_test!( sql_blob_insert_into_json_entity_has_targeted_error, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('blob-value', $1)", &[Value::Blob(vec![1, 2, 3, 255, 0, 128])], ) .await .expect_err("blob entity insert should fail cleanly"); assert_eq!(error.code, LixError::CODE_INVALID_PARAM); assert!( error.message.contains("cannot store blob values directly"), "expected targeted blob-to-JSON message: {error}" ); assert!( !error.message.contains("Binary("), "error should not expose Rust/DataFusion debug formatting: {error}" ); } ); simulation_test!(sql_create_table_returns_error, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute("CREATE TABLE scratch (id TEXT)", &[]) .await .expect_err("CREATE TABLE should return an error, not panic"); assert_eq!(error.code, LixError::CODE_UNSUPPORTED_SQL); }); simulation_test!( sql_recursive_cte_over_commit_views_returns_error, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute( "WITH RECURSIVE commit_walk(id) AS ( \ SELECT id FROM lix_commit \ UNION ALL \ SELECT lix_commit_edge.child_id \ FROM lix_commit_edge \ JOIN commit_walk ON lix_commit_edge.parent_id = commit_walk.id \ ) \ SELECT id FROM commit_walk", &[], ) .await .expect_err("recursive CTE should return an error, not panic"); assert_eq!(error.code, LixError::CODE_UNSUPPORTED_SQL, "{error:?}"); } ); ================================================ FILE: packages/engine/tests/sql/history_conformance.rs ================================================ use lix_engine::Value; use super::select_rows; simulation_test!( history_surfaces_are_introspected_as_views, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO 
lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_history_table_type\",\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"}},\"required\":[\"id\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("registered schema insert should succeed"); let rows = select_rows( &session, "SELECT table_name, table_type \ FROM information_schema.tables \ WHERE table_name IN (\ 'lix_state_history',\ 'lix_file_history',\ 'lix_directory_history',\ 'engine_history_table_type_history'\ ) \ ORDER BY table_name", ) .await; let expected = [ "engine_history_table_type_history", "lix_directory_history", "lix_file_history", "lix_state_history", ] .into_iter() .map(|table| { vec![ Value::Text(table.to_string()), Value::Text("VIEW".to_string()), ] }) .collect::<Vec<_>>(); assert_eq!(rows, expected); } ); simulation_test!( history_view_schemas_expose_tombstone_contract, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_history_contract_schema\",\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"},\"count\":{\"type\":\"integer\"},\"active\":{\"type\":\"boolean\"},\"meta\":{\"type\":\"object\"}},\"required\":[\"id\",\"count\",\"active\",\"meta\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("registered schema insert should succeed"); let rows = select_rows( &session, "SELECT table_name, column_name, is_nullable \ FROM information_schema.columns \ WHERE table_name IN (\ 'lix_file_history',\ 'lix_directory_history',\ 'engine_history_contract_schema_history'\ ) \ AND (\ column_name IN ('path', 'directory_id', 'parent_id', 'name', 'data', 'id', 'count', 'active', 'meta') \ OR column_name = 'lixcol_snapshot_content'\ ) \ ORDER BY table_name, column_name", ) .await; let expected = vec![ ("engine_history_contract_schema_history", "active", "YES"), ("engine_history_contract_schema_history", "count", "YES"), ("engine_history_contract_schema_history", "id", "YES"), ( "engine_history_contract_schema_history", "lixcol_snapshot_content", "YES", ), ("engine_history_contract_schema_history", "meta", "YES"), ("lix_directory_history", "id", "NO"), ("lix_directory_history", "lixcol_snapshot_content", "YES"), ("lix_directory_history", "name", "YES"), ("lix_directory_history", "parent_id", "YES"), ("lix_directory_history", "path", "YES"), ("lix_file_history", "data", "YES"), ("lix_file_history", "directory_id", "YES"), ("lix_file_history", "id", "NO"), ("lix_file_history", "lixcol_snapshot_content", "YES"), ("lix_file_history", "name", "YES"), ("lix_file_history", "path", "YES"), ] .into_iter() .map(|(table, column, nullable)| { vec![ Value::Text(table.to_string()), Value::Text(column.to_string()), Value::Text(nullable.to_string()), ] }) .collect::<Vec<_>>(); assert_eq!(rows, expected); } ); simulation_test!( typed_entity_history_exposes_tombstones_like_lix_state_history, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\
lix_json('{\"x-lix-key\":\"engine_history_conformance\",\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"},\"value\":{\"type\":\"string\"}},\"required\":[\"id\",\"value\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("registered schema insert should succeed"); session .execute( "INSERT INTO engine_history_conformance \ (lixcol_entity_id, id, value, lixcol_untracked) \ VALUES (lix_json('[\"history-conformance-entity\"]'), 'history-conformance-entity', 'one', false)", &[], ) .await .expect("entity insert should succeed"); session .execute( "UPDATE engine_history_conformance \ SET value = 'two' \ WHERE lixcol_entity_id = lix_json('[\"history-conformance-entity\"]')", &[], ) .await .expect("entity update should succeed"); session .execute( "DELETE FROM engine_history_conformance \ WHERE lixcol_entity_id = lix_json('[\"history-conformance-entity\"]')", &[], ) .await .expect("entity delete should succeed"); let typed_rows = select_rows( &session, "SELECT id, value, lixcol_entity_id, lixcol_snapshot_content, lixcol_depth \ FROM engine_history_conformance_history \ WHERE lixcol_start_commit_id = lix_active_version_commit_id() \ AND lixcol_entity_id = lix_json('[\"history-conformance-entity\"]') \ ORDER BY lixcol_depth", ) .await; assert_eq!(typed_rows.len(), 3); assert_eq!( typed_rows[0], vec![ Value::Null, Value::Null, Value::Json(serde_json::json!(["history-conformance-entity"])), Value::Null, Value::Integer(0), ] ); let state_rows = select_rows( &session, "SELECT snapshot_content, depth \ FROM lix_state_history \ WHERE start_commit_id = lix_active_version_commit_id() \ AND schema_key = 'engine_history_conformance' \ AND entity_id = lix_json('[\"history-conformance-entity\"]') \ AND snapshot_content IS NULL", ) .await; assert_eq!(state_rows, vec![vec![Value::Null, Value::Integer(0)]]); } ); simulation_test!( typed_entity_history_backfills_primary_key_columns_on_tombstones, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_key_value (key, value) \ VALUES ('history-pk-backfill', 'one')", &[], ) .await .expect("key value insert should succeed"); session .execute( "DELETE FROM lix_key_value WHERE key = 'history-pk-backfill'", &[], ) .await .expect("key value delete should succeed"); let rows = select_rows( &session, "SELECT key, value, lixcol_entity_id, lixcol_snapshot_content, lixcol_depth \ FROM lix_key_value_history \ WHERE lixcol_start_commit_id = lix_active_version_commit_id() \ AND key = 'history-pk-backfill' \ ORDER BY lixcol_depth", ) .await; assert_eq!( rows, vec![ vec![ Value::Text("history-pk-backfill".to_string()), Value::Null, Value::Json(serde_json::json!(["history-pk-backfill"])), Value::Null, Value::Integer(0), ], vec![ Value::Text("history-pk-backfill".to_string()), lix_engine::Value::Json(serde_json::json!("one")), Value::Json(serde_json::json!(["history-pk-backfill"])), lix_engine::Value::Json(serde_json::json!({ "key": "history-pk-backfill", "value": "one" })), Value::Integer(1), ], ] ); } ); simulation_test!( typed_entity_history_backfills_composite_primary_key_columns_on_tombstones, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ 
lix_json('{\"x-lix-key\":\"engine_history_composite_pk\",\"x-lix-primary-key\":[\"/namespace\",\"/id\"],\"type\":\"object\",\"properties\":{\"namespace\":{\"type\":\"string\"},\"id\":{\"type\":\"string\"},\"value\":{\"type\":\"string\"}},\"required\":[\"namespace\",\"id\",\"value\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("registered schema insert should succeed"); session .execute( "INSERT INTO engine_history_composite_pk \ (namespace, id, value, lixcol_untracked) \ VALUES ('messages', '7', 'one', false)", &[], ) .await .expect("composite entity insert should succeed"); session .execute( "DELETE FROM engine_history_composite_pk \ WHERE namespace = 'messages' AND id = '7'", &[], ) .await .expect("composite entity delete should succeed"); let rows = select_rows( &session, "SELECT namespace, id, value, lixcol_snapshot_content, lixcol_depth \ FROM engine_history_composite_pk_history \ WHERE lixcol_start_commit_id = lix_active_version_commit_id() \ AND namespace = 'messages' \ AND id = '7' \ ORDER BY lixcol_depth", ) .await; assert_eq!( rows, vec![ vec![ Value::Text("messages".to_string()), Value::Text("7".to_string()), Value::Null, Value::Null, Value::Integer(0), ], vec![ Value::Text("messages".to_string()), Value::Text("7".to_string()), Value::Text("one".to_string()), lix_engine::Value::Json(serde_json::json!({ "namespace": "messages", "id": "7", "value": "one" })), Value::Integer(1), ], ] ); } ); simulation_test!( lix_file_history_exposes_descriptor_tombstones_like_lix_state_history, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_file (id, path, data) \ VALUES ('history-conformance-file', '/docs/conformance.txt', X'6F6E65')", &[], ) .await .expect("file insert should succeed"); session .execute( "UPDATE lix_file SET data = X'74776F' WHERE id = 'history-conformance-file'", &[], ) .await .expect("file update should succeed"); session .execute( "DELETE FROM lix_file WHERE id = 'history-conformance-file'", &[], ) .await .expect("file delete should succeed"); let file_rows = select_rows( &session, "SELECT id, path, name, data, lixcol_entity_id, lixcol_file_id, lixcol_snapshot_content, lixcol_depth \ FROM lix_file_history \ WHERE lixcol_start_commit_id = lix_active_version_commit_id() \ AND id = 'history-conformance-file' \ AND lixcol_depth = 0", ) .await; assert_eq!( file_rows, vec![vec![ Value::Text("history-conformance-file".to_string()), Value::Null, Value::Null, Value::Null, Value::Json(serde_json::json!(["history-conformance-file"])), Value::Text("history-conformance-file".to_string()), Value::Null, Value::Integer(0), ]] ); let state_rows = select_rows( &session, "SELECT snapshot_content, depth \ FROM lix_state_history \ WHERE start_commit_id = lix_active_version_commit_id() \ AND schema_key = 'lix_file_descriptor' \ AND entity_id = lix_json('[\"history-conformance-file\"]') \ AND snapshot_content IS NULL", ) .await; assert_eq!(state_rows, vec![vec![Value::Null, Value::Integer(0)]]); } ); simulation_test!( lix_directory_history_exposes_descriptor_tombstones_like_lix_state_history, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_directory (id, path) \ VALUES ('history-conformance-dir', '/conformance/')", &[], ) .await 
.expect("directory insert should succeed"); session .execute( "UPDATE lix_directory SET name = 'conformance-updated' \ WHERE id = 'history-conformance-dir'", &[], ) .await .expect("directory update should succeed"); session .execute( "DELETE FROM lix_directory WHERE id = 'history-conformance-dir'", &[], ) .await .expect("directory delete should succeed"); let directory_rows = select_rows( &session, "SELECT id, path, parent_id, name, lixcol_entity_id, lixcol_snapshot_content, lixcol_depth \ FROM lix_directory_history \ WHERE lixcol_start_commit_id = lix_active_version_commit_id() \ AND id = 'history-conformance-dir' \ AND lixcol_depth = 0", ) .await; assert_eq!( directory_rows, vec![vec![ Value::Text("history-conformance-dir".to_string()), Value::Null, Value::Null, Value::Null, Value::Json(serde_json::json!(["history-conformance-dir"])), Value::Null, Value::Integer(0), ]] ); let state_rows = select_rows( &session, "SELECT snapshot_content, depth \ FROM lix_state_history \ WHERE start_commit_id = lix_active_version_commit_id() \ AND schema_key = 'lix_directory_descriptor' \ AND entity_id = lix_json('[\"history-conformance-dir\"]') \ AND snapshot_content IS NULL", ) .await; assert_eq!(state_rows, vec![vec![Value::Null, Value::Integer(0)]]); } ); ================================================ FILE: packages/engine/tests/sql/lix_change.rs ================================================ use std::collections::BTreeSet; use lix_engine::Value; use serde_json::json; use super::select_rows; simulation_test!(lix_change_queries_tracked_changes, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('change-query', 'one')", &[], ) .await .expect("tracked write should succeed"); let result = session .execute( "SELECT entity_id, schema_key, snapshot_content \ FROM lix_change \ WHERE entity_id = lix_json('[\"change-query\"]')", &[], ) .await .expect("lix_change should read"); let rows = result; assert_eq!(rows.len(), 1); assert_eq!( rows.rows()[0].values(), &[ Value::Json(json!(["change-query"])), Value::Text("lix_key_value".to_string()), Value::Json(json!({"key": "change-query", "value": "one"})), ] ); }); simulation_test!( lix_change_entity_id_is_json_array_for_composite_primary_keys, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_composite_message\",\"x-lix-primary-key\":[\"/key\",\"/locale\"],\"type\":\"object\",\"properties\":{\"key\":{\"type\":\"string\"},\"locale\":{\"type\":\"string\"},\"text\":{\"type\":\"string\"}},\"required\":[\"key\",\"locale\",\"text\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("composite schema insert should succeed"); session .execute( "INSERT INTO engine_composite_message (key, locale, text) \ VALUES ('welcome.title', 'en', 'Welcome')", &[], ) .await .expect("composite entity insert should succeed"); let result = session .execute( "SELECT entity_id, \ lix_json_get_text(entity_id, 0) AS entity_key, \ lix_json_get_text(entity_id, 1) AS entity_locale \ FROM lix_change \ WHERE schema_key = 'engine_composite_message' \ AND entity_id = lix_json('[\"welcome.title\",\"en\"]')", &[], ) .await 
.expect("lix_change should expose composite entity_id as JSON"); assert_eq!(result.len(), 1); assert_eq!( result.rows()[0].values(), &[ Value::Json(json!(["welcome.title", "en"])), Value::Text("welcome.title".to_string()), Value::Text("en".to_string()), ] ); } ); simulation_test!( lix_change_rejects_non_string_primary_key_schemas, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_numeric_message\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"number\"},\"text\":{\"type\":\"string\"}},\"required\":[\"id\",\"text\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect_err("numeric primary-key schema should be rejected"); assert_eq!(error.code, lix_engine::LixError::CODE_SCHEMA_DEFINITION); assert!( error .message .contains("x-lix-primary-key property \"/id\" must have type \"string\""), "error should explain non-string primary-key schema: {error:?}" ); } ); simulation_test!( lix_change_sql_surface_matches_builtin_schema, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); assert_eq!( non_system_column_names(&session, "lix_change").await, builtin_schema_property_names(), ); } ); simulation_test!( lix_change_count_handles_empty_projection, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let rows = select_rows(&session, "SELECT count(*) FROM lix_change").await; assert_single_count(rows); } ); fn assert_single_count(rows: Vec>) { assert_eq!(rows.len(), 1); assert_eq!(rows[0].len(), 1); let Value::Integer(count) = rows[0][0] else { panic!("expected integer count, got {:?}", rows[0][0]); }; assert!(count >= 0); } fn builtin_schema_property_names() -> BTreeSet { let schema = serde_json::from_str::(include_str!( "../../src/schema/builtin/lix_change.json" )) .expect("builtin lix_change schema should parse"); schema .get("properties") .and_then(serde_json::Value::as_object) .expect("builtin lix_change schema should define properties") .keys() .cloned() .collect::>() } async fn non_system_column_names( session: &crate::support::simulation_test::engine::SimSession, table_name: &str, ) -> BTreeSet { let result = session .execute( &format!( "SELECT column_name \ FROM information_schema.columns \ WHERE table_name = '{table_name}'" ), &[], ) .await .expect("information_schema.columns should read"); result .rows() .iter() .map(|row| { let Value::Text(column_name) = &row.values()[0] else { panic!("expected text column name, got {:?}", row.values()[0]); }; column_name.clone() }) .filter(|column_name| !column_name.starts_with("lixcol_")) .collect() } ================================================ FILE: packages/engine/tests/sql/lix_commit.rs ================================================ use std::collections::BTreeSet; use lix_engine::{CreateVersionOptions, Value}; use super::select_rows; simulation_test!( lix_commit_surfaces_expose_commits_and_edges, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), 
&engine, ); let initial_head = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("version head should load") .expect("version head should exist"); session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('commit-surface', 'one')", &[], ) .await .expect("first tracked write should succeed"); let first_head = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("version head should load") .expect("version head should exist"); session .execute( "UPDATE lix_key_value SET value = 'two' WHERE key = 'commit-surface'", &[], ) .await .expect("second tracked write should succeed"); let second_head = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("version head should load") .expect("version head should exist"); let commit_rows = select_rows( &session, &format!( "SELECT id, lixcol_global, lixcol_untracked \ FROM lix_commit WHERE id = '{second_head}'" ), ) .await; assert_eq!( commit_rows, vec![vec![ Value::Text(second_head.clone()), Value::Boolean(true), Value::Boolean(false), ]] ); let edge_rows = select_rows( &session, &format!( "SELECT parent_id, child_id, parent_order, lixcol_global, lixcol_untracked \ FROM lix_commit_edge WHERE child_id = '{second_head}'" ), ) .await; assert_eq!( edge_rows, vec![vec![ Value::Text(first_head.clone()), Value::Text(second_head.clone()), Value::Integer(0), Value::Boolean(true), Value::Boolean(false), ]] ); let by_version_rows = select_rows( &session, &format!( "SELECT id, lixcol_version_id, lixcol_global, lixcol_untracked \ FROM lix_commit_by_version \ WHERE id IN ('{initial_head}', '{first_head}', '{second_head}') \ ORDER BY id, lixcol_version_id" ), ) .await; assert!(by_version_rows.contains(&vec![ Value::Text(initial_head.clone()), Value::Text(sim.main_version_id().to_string()), Value::Boolean(true), Value::Boolean(false), ])); assert!(by_version_rows.contains(&vec![ Value::Text(initial_head), Value::Text("global".to_string()), Value::Boolean(true), Value::Boolean(false), ])); assert!(by_version_rows.contains(&vec![ Value::Text(first_head.clone()), Value::Text(sim.main_version_id().to_string()), Value::Boolean(true), Value::Boolean(false), ])); assert!(by_version_rows.contains(&vec![ Value::Text(first_head.clone()), Value::Text("global".to_string()), Value::Boolean(true), Value::Boolean(false), ])); assert!(by_version_rows.contains(&vec![ Value::Text(second_head.clone()), Value::Text(sim.main_version_id().to_string()), Value::Boolean(true), Value::Boolean(false), ])); assert!(by_version_rows.contains(&vec![ Value::Text(second_head.clone()), Value::Text("global".to_string()), Value::Boolean(true), Value::Boolean(false), ])); let edge_by_version_rows = select_rows( &session, &format!( "SELECT parent_id, child_id, parent_order, lixcol_version_id, lixcol_global, lixcol_untracked \ FROM lix_commit_edge_by_version \ WHERE child_id = '{second_head}' \ ORDER BY lixcol_version_id" ), ) .await; assert_eq!( edge_by_version_rows, vec![ vec![ Value::Text(first_head.clone()), Value::Text(second_head.clone()), Value::Integer(0), Value::Text(sim.main_version_id().to_string()), Value::Boolean(true), Value::Boolean(false), ], vec![ Value::Text(first_head), Value::Text(second_head), Value::Integer(0), Value::Text("global".to_string()), Value::Boolean(true), Value::Boolean(false), ], ] ); } ); simulation_test!( lix_commit_is_plain_global_entity_not_active_reachability_view, |sim| async move { let engine = sim.boot_engine().await; let main = sim.wrap_session( engine .open_workspace_session() .await 
.expect("main session should open"), &engine, ); main.execute( "INSERT INTO lix_key_value (key, value) VALUES ('main-only', 'main')", &[], ) .await .expect("main write should succeed"); main.create_version(CreateVersionOptions { id: Some("commit-branch".to_string()), name: "Commit branch".to_string(), from_commit_id: None, }) .await .expect("branch version should be created"); let branch = sim.wrap_session( engine .open_session("commit-branch") .await .expect("branch session should open"), &engine, ); branch .execute( "INSERT INTO lix_key_value (key, value) VALUES ('branch-only', 'branch')", &[], ) .await .expect("branch write should succeed"); let branch_head = engine .load_version_head_commit_id("commit-branch") .await .expect("branch head should load") .expect("branch head should exist"); let main_commit_rows = select_rows( &main, &format!("SELECT id FROM lix_commit WHERE id = '{branch_head}'"), ) .await; let branch_commit_rows = select_rows( &branch, &format!("SELECT id FROM lix_commit WHERE id = '{branch_head}'"), ) .await; assert_eq!( main_commit_rows, branch_commit_rows, "lix_commit should not depend on the active version" ); assert_eq!( main_commit_rows, vec![vec![Value::Text(branch_head.clone())]] ); let main_edge_rows = select_rows( &main, &format!("SELECT child_id FROM lix_commit_edge WHERE child_id = '{branch_head}'"), ) .await; let branch_edge_rows = select_rows( &branch, &format!("SELECT child_id FROM lix_commit_edge WHERE child_id = '{branch_head}'"), ) .await; assert_eq!( main_edge_rows, branch_edge_rows, "derived commit surfaces should also expose global commit-derived rows" ); assert_eq!(main_edge_rows, vec![vec![Value::Text(branch_head)]]); } ); simulation_test!( lix_commit_derived_by_version_surfaces_match_commit_entity_projection, |sim| async move { let engine = sim.boot_engine().await; let main = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); main.execute( "INSERT INTO lix_key_value (key, value) VALUES ('main-edge-probe', 'main')", &[], ) .await .expect("main write should succeed"); main.create_version(CreateVersionOptions { id: Some("edge-probe-a".to_string()), name: "Edge Probe A".to_string(), from_commit_id: Some(sim.initial_commit_id().to_string()), }) .await .expect("edge-probe-a should be created from the initial commit"); main.create_version(CreateVersionOptions { id: Some("edge-probe-b".to_string()), name: "Edge Probe B".to_string(), from_commit_id: Some(sim.initial_commit_id().to_string()), }) .await .expect("edge-probe-b should be created from the initial commit"); let branch_a = sim.wrap_session( engine .open_session("edge-probe-a") .await .expect("edge-probe-a session should open"), &engine, ); branch_a .execute( "INSERT INTO lix_key_value (key, value) VALUES ('edge-probe-a-only', 'a')", &[], ) .await .expect("edge-probe-a write should succeed"); let branch_b = sim.wrap_session( engine .open_session("edge-probe-b") .await .expect("edge-probe-b session should open"), &engine, ); branch_b .execute( "INSERT INTO lix_key_value (key, value) VALUES ('edge-probe-b-only', 'b')", &[], ) .await .expect("edge-probe-b write should succeed"); let global_edges = commit_edges_by_version(&main, "global").await; for version_id in [sim.main_version_id(), "edge-probe-a", "edge-probe-b"] { let actual_edges = commit_edges_by_version(&main, version_id).await; assert_eq!( actual_edges, global_edges, "lix_commit_edge_by_version should project derived global edges for {version_id}" ); } } ); simulation_test!( 
lix_commit_surfaces_match_canonical_schema_definitions, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); for (schema_key, tables) in [ ("lix_commit", vec!["lix_commit", "lix_commit_by_version"]), ( "lix_commit_edge", vec!["lix_commit_edge", "lix_commit_edge_by_version"], ), ] { let schema_properties = builtin_schema_property_names(schema_key); for table in tables { let surface_columns = non_system_column_names(&session, table).await; assert_eq!( surface_columns, schema_properties, "{table} data columns should match {schema_key} properties" ); } } } ); simulation_test!( lix_commit_surfaces_count_handle_empty_projection, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); for table in [ "lix_commit", "lix_commit_by_version", "lix_commit_edge", "lix_commit_edge_by_version", ] { let rows = select_rows(&session, &format!("SELECT count(*) FROM {table}")).await; assert_single_count(rows, table); } } ); fn assert_single_count(rows: Vec<Vec<Value>>, table: &str) { assert_eq!(rows.len(), 1, "{table} should return one count row"); assert_eq!(rows[0].len(), 1, "{table} should return one count column"); let Value::Integer(count) = rows[0][0] else { panic!( "{table} should return an integer count, got {:?}", rows[0][0] ); }; assert!(count >= 0, "{table} count should be non-negative"); } fn text_value(value: &Value) -> String { let Value::Text(value) = value else { panic!("expected text value, got {value:?}"); }; value.clone() } async fn commit_edges_by_version( session: &crate::support::simulation_test::engine::SimSession, version_id: &str, ) -> BTreeSet<(String, String)> { select_rows( session, &format!( "SELECT parent_id, child_id \ FROM lix_commit_edge_by_version \ WHERE lixcol_version_id = '{version_id}'" ), ) .await .into_iter() .map(|row| (text_value(&row[0]), text_value(&row[1]))) .collect() } fn builtin_schema_property_names(schema_key: &str) -> BTreeSet<String> { let schema = match schema_key { "lix_commit" => include_str!("../../src/schema/builtin/lix_commit.json"), "lix_commit_edge" => include_str!("../../src/schema/builtin/lix_commit_edge.json"), other => panic!("unexpected builtin schema key: {other}"), }; let schema = serde_json::from_str::<serde_json::Value>(schema) .expect("builtin schema fixture should parse"); schema .get("properties") .and_then(serde_json::Value::as_object) .expect("builtin schema should define properties") .keys() .cloned() .collect::<BTreeSet<_>>() } async fn non_system_column_names( session: &crate::support::simulation_test::engine::SimSession, table_name: &str, ) -> BTreeSet<String> { let rows = select_rows( session, &format!( "SELECT column_name \ FROM information_schema.columns \ WHERE table_name = '{table_name}'" ), ) .await; rows.into_iter() .map(|row| text_value(&row[0])) .filter(|column_name| !column_name.starts_with("lixcol_")) .collect() } ================================================ FILE: packages/engine/tests/sql/lix_directory.rs ================================================ use lix_engine::ExecuteResult; use lix_engine::LixError; use lix_engine::Value; use serde_json::json; use super::assert_rows_eq; simulation_test!( lix_directory_path_insert_rejects_overlong_paths_and_segments, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine,
); let long_segment = "a".repeat(256); let segment_error = session .execute( "INSERT INTO lix_directory (id, path) VALUES ('dir-long-segment', $1)", &[Value::Text(format!("/{long_segment}/"))], ) .await .expect_err("overlong directory path segment should be rejected"); assert_eq!(segment_error.code, LixError::CODE_INVALID_PARAM); let long_path = format!("/{}/", ["abcd"; 820].join("/")); let path_error = session .execute( "INSERT INTO lix_directory (id, path) VALUES ('dir-long-path', $1)", &[Value::Text(long_path)], ) .await .expect_err("overlong directory path should be rejected"); assert_eq!(path_error.code, LixError::CODE_INVALID_PARAM); let encoded_segment_at_limit = "%61".repeat(255); session .execute( "INSERT INTO lix_directory (id, path) VALUES ('dir-encoded-limit', $1)", &[Value::Text(format!("/{encoded_segment_at_limit}/"))], ) .await .expect("percent-encoded segment should be measured after canonicalization"); let encoded_segment_over_limit = "%61".repeat(256); let encoded_segment_error = session .execute( "INSERT INTO lix_directory (id, path) VALUES ('dir-encoded-over-limit', $1)", &[Value::Text(format!("/{encoded_segment_over_limit}/"))], ) .await .expect_err("overlong canonical segment should be rejected"); assert_eq!(encoded_segment_error.code, LixError::CODE_INVALID_PARAM); let huge_path = format!("/{}/", "a".repeat(1024 * 1024)); let huge_error = session .execute( "INSERT INTO lix_directory (id, path) VALUES ('dir-huge-path', $1)", &[Value::Text(huge_path)], ) .await .expect_err("huge path input should be rejected without runtime internals"); assert_eq!(huge_error.code, LixError::CODE_INVALID_PARAM); } ); simulation_test!( lix_directory_path_insert_rejects_percent_encoded_forbidden_code_points, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); for (id, path) in [ ("dir-percent-nul", "/docs/%00evil/"), ("dir-percent-bidi", "/docs/%E2%80%AEevil/"), ] { let error = session .execute( &format!("INSERT INTO lix_directory (id, path) VALUES ('{id}', '{path}')"), &[], ) .await .expect_err("percent-encoded forbidden path code point should be rejected"); assert_eq!(error.code, LixError::CODE_INVALID_PARAM); } } ); simulation_test!(lix_directory_insert_reads_nested_paths, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let insert_result = session .execute( "INSERT INTO lix_directory (id, parent_id, name) \ VALUES ('dir-docs', NULL, 'docs')", &[], ) .await .expect("directory insert should succeed"); assert_eq!(insert_result, ExecuteResult::from_rows_affected(1)); let nested_insert_result = session .execute( "INSERT INTO lix_directory (id, path) \ VALUES ('dir-nested', '/docs/nested/')", &[], ) .await .expect("nested directory path insert should succeed"); assert_eq!(nested_insert_result, ExecuteResult::from_rows_affected(1)); let result = session .execute( "SELECT id, path, parent_id, name \ FROM lix_directory \ WHERE id IN ('dir-docs', 'dir-nested') \ ORDER BY path", &[], ) .await .expect("directory read should succeed"); let row_set = result; assert_eq!(row_set.len(), 2); assert_eq!( row_set.rows()[0].values(), &[ Value::Text("dir-docs".to_string()), Value::Text("/docs/".to_string()), Value::Null, Value::Text("docs".to_string()), ] ); assert_eq!( row_set.rows()[1].values(), &[ Value::Text("dir-nested".to_string()), 
Value::Text("/docs/nested/".to_string()), Value::Text("dir-docs".to_string()), Value::Text("nested".to_string()), ] ); }); simulation_test!( lix_directory_insert_applies_defaulted_id, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let insert_result = session .execute( "INSERT INTO lix_directory (parent_id, name) \ VALUES (NULL, 'docs')", &[], ) .await .expect("directory insert should apply defaulted id"); assert_eq!(insert_result, ExecuteResult::from_rows_affected(1)); let result = session .execute( "SELECT id, path, parent_id, name \ FROM lix_directory \ WHERE path = '/docs/'", &[], ) .await .expect("directory read should succeed"); let row_set = result; assert_eq!(row_set.len(), 1); let values = row_set.rows()[0].values(); let [Value::Text(id), Value::Text(path), Value::Null, Value::Text(name)] = values else { panic!("expected generated directory row, got {values:?}"); }; assert!(!id.is_empty(), "defaulted directory id should be non-empty"); assert_eq!(path, "/docs/"); assert_eq!(name, "docs"); } ); simulation_test!( lix_directory_path_insert_applies_defaulted_id, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let insert_result = session .execute("INSERT INTO lix_directory (path) VALUES ('/docs/')", &[]) .await .expect("directory path insert should apply defaulted id"); assert_eq!(insert_result, ExecuteResult::from_rows_affected(1)); let result = session .execute( "SELECT id, path, parent_id, name \ FROM lix_directory \ WHERE path = '/docs/'", &[], ) .await .expect("directory read should succeed"); let row_set = result; assert_eq!(row_set.len(), 1); let values = row_set.rows()[0].values(); let [Value::Text(id), Value::Text(path), Value::Null, Value::Text(name)] = values else { panic!("expected generated directory path row, got {values:?}"); }; assert!(!id.is_empty(), "defaulted directory id should be non-empty"); assert_eq!(path, "/docs/"); assert_eq!(name, "docs"); } ); simulation_test!( lix_directory_path_insert_rejects_duplicate_root_path, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute("INSERT INTO lix_directory (path) VALUES ('/docs/')", &[]) .await .expect("first directory insert should succeed"); let error = session .execute("INSERT INTO lix_directory (path) VALUES ('/docs/')", &[]) .await .expect_err("duplicate directory path insert should be rejected"); assert_eq!(error.code, LixError::CODE_UNIQUE); } ); simulation_test!( lix_directory_insert_duplicate_id_reports_lix_directory, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_directory (id, path) VALUES ('same-dir', '/a/')", &[], ) .await .expect("first directory insert should succeed"); let error = session .execute( "INSERT INTO lix_directory (id, path) VALUES ('same-dir', '/b/')", &[], ) .await .expect_err("duplicate directory id insert should be rejected"); assert_eq!(error.code, LixError::CODE_UNIQUE); assert!( error.message.contains("table 'lix_directory'") && error.message.contains("id 'same-dir'") && !error.message.contains("lix_directory_descriptor"), 
"unexpected error: {error:?}" ); } ); simulation_test!( lix_directory_by_version_insert_duplicate_id_reports_lix_directory_by_version, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let version_id = sim.main_version_id(); session .execute( &format!( "INSERT INTO lix_directory_by_version \ (id, path, lixcol_version_id) \ VALUES ('same-dir', '/a/', '{version_id}')" ), &[], ) .await .expect("first by-version directory insert should succeed"); let error = session .execute( &format!( "INSERT INTO lix_directory_by_version \ (id, path, lixcol_version_id) \ VALUES ('same-dir', '/b/', '{version_id}')" ), &[], ) .await .expect_err("duplicate by-version directory id insert should be rejected"); assert_eq!(error.code, LixError::CODE_UNIQUE); assert!( error.message.contains("table 'lix_directory_by_version'") && error.message.contains("id 'same-dir'") && !error.message.contains("table 'lix_directory':") && !error.message.contains("lix_directory_descriptor"), "unexpected error: {error:?}" ); } ); simulation_test!( lix_directory_path_insert_rejects_existing_file_entry, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute("INSERT INTO lix_file (path) VALUES ('/foo')", &[]) .await .expect("file insert should succeed"); let error = session .execute("INSERT INTO lix_directory (path) VALUES ('/foo/')", &[]) .await .expect_err("directory should conflict with file at same entry name"); assert_eq!(error.code, LixError::CODE_UNIQUE); } ); simulation_test!( lix_directory_descriptor_shape_insert_rejects_existing_file_entry, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_file (id, directory_id, name) \ VALUES ('file-foo', NULL, 'foo')", &[], ) .await .expect("file insert should succeed"); let error = session .execute( "INSERT INTO lix_directory (id, parent_id, name) VALUES ('dir-foo', NULL, 'foo')", &[], ) .await .expect_err("descriptor-shaped directory insert should conflict with file"); assert_eq!(error.code, LixError::CODE_UNIQUE); } ); simulation_test!( lix_directory_update_rejects_existing_file_entry, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_directory (id, parent_id, name) VALUES ('dir-bar', NULL, 'bar')", &[], ) .await .expect("directory insert should succeed"); session .execute("INSERT INTO lix_file (path) VALUES ('/foo')", &[]) .await .expect("file insert should succeed"); let error = session .execute( "UPDATE lix_directory SET name = 'foo' WHERE id = 'dir-bar'", &[], ) .await .expect_err("directory rename should conflict with file"); assert_eq!(error.code, LixError::CODE_UNIQUE); } ); simulation_test!( lix_directory_path_insert_rejects_dot_segments, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); for path in ["/a/../b/", "/a/%2e%2e/b/", "/a/./b/"] { let error = session .execute( "INSERT INTO lix_directory (path) VALUES ($1)", &[Value::Text(path.to_string())], ) .await 
.expect_err("directory path insert should reject dot segments"); assert_eq!(error.code, LixError::CODE_INVALID_PARAM); } let result = session .execute("SELECT path FROM lix_directory WHERE path = '/b/'", &[]) .await .expect("directory read should succeed"); assert_eq!(result.len(), 0); } ); simulation_test!( lix_directory_update_rejects_parent_cycle, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_directory (id, parent_id, name) VALUES \ ('dir-parent', NULL, 'parent'), \ ('dir-child', 'dir-parent', 'child')", &[], ) .await .expect("directory tree insert should succeed"); let self_cycle = session .execute( "UPDATE lix_directory SET parent_id = id WHERE id = 'dir-parent'", &[], ) .await .expect_err("self parent must be rejected"); assert_eq!(self_cycle.code, LixError::CODE_CONSTRAINT_VIOLATION); let descendant_cycle = session .execute( "UPDATE lix_directory SET parent_id = 'dir-child' WHERE id = 'dir-parent'", &[], ) .await .expect_err("parenting a directory under its descendant must be rejected"); assert_eq!(descendant_cycle.code, LixError::CODE_CONSTRAINT_VIOLATION); } ); simulation_test!( lix_directory_descriptor_writes_use_canonical_path_segment_validation, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute("INSERT INTO lix_directory (path) VALUES ('/Café/')", &[]) .await .expect("canonical directory insert should succeed"); let nfc_collision = session .execute( "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, global, untracked\ ) VALUES (lix_json('[\"dir-cafe-decomposed\"]'), 'lix_directory_descriptor', NULL, $1, false, false)", &[Value::Json(json!({ "id": "dir-cafe-decomposed", "parent_id": null, "name": "Cafe\u{301}", }))], ) .await .expect_err("decomposed descriptor name should normalize before uniqueness"); assert_eq!(nfc_collision.code, LixError::CODE_UNIQUE); let zero_width = session .execute( "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, global, untracked\ ) VALUES (lix_json('[\"dir-zero-width\"]'), 'lix_directory_descriptor', NULL, $1, false, false)", &[Value::Json(json!({ "id": "dir-zero-width", "parent_id": null, "name": "zero\u{200D}width", }))], ) .await .expect_err("descriptor name should reject zero-width characters"); assert_eq!(zero_width.code, "LIX_ERROR_PATH_INVALID_SEGMENT_CODE_POINT"); } ); simulation_test!( lix_state_insert_rejects_directory_parent_cycle, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute( "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, global, untracked\ ) VALUES \ (lix_json('[\"dir-a\"]'), 'lix_directory_descriptor', NULL, lix_json('{\"id\":\"dir-a\",\"parent_id\":\"dir-b\",\"name\":\"a\"}'), false, false), \ (lix_json('[\"dir-b\"]'), 'lix_directory_descriptor', NULL, lix_json('{\"id\":\"dir-b\",\"parent_id\":\"dir-a\",\"name\":\"b\"}'), false, false)", &[], ) .await .expect_err("descriptor cycles staged through lix_state must be rejected"); assert_eq!(error.code, LixError::CODE_CONSTRAINT_VIOLATION); } ); simulation_test!( lix_state_insert_rejects_directory_file_namespace_conflict, |sim| async move { let 
engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute("INSERT INTO lix_file (path) VALUES ('/foo')", &[]) .await .expect("file insert should succeed"); let error = session .execute( "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, global, untracked\ ) VALUES \ (lix_json('[\"dir-foo\"]'), 'lix_directory_descriptor', NULL, lix_json('{\"id\":\"dir-foo\",\"parent_id\":null,\"name\":\"foo\"}'), false, false)", &[], ) .await .expect_err("lix_state directory descriptor must not bypass filesystem namespace"); assert_eq!(error.code, LixError::CODE_UNIQUE); assert!( error.message.contains("filesystem namespace conflict"), "expected namespace conflict error: {error}" ); } ); simulation_test!( lix_directory_allows_version_local_entry_matching_global_file_entry, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_file (id, path, lixcol_global) \ VALUES ('global-file-foo', '/foo', true)", &[], ) .await .expect("global file insert should succeed"); session .execute( "INSERT INTO lix_directory (id, path) VALUES ('version-dir-foo', '/foo/')", &[], ) .await .expect("version-local directory should be a distinct storage namespace"); let global_file = session .execute( "SELECT id, path, lixcol_version_id, lixcol_global \ FROM lix_file_by_version \ WHERE id = 'global-file-foo' AND lixcol_version_id = 'global'", &[], ) .await .expect("global file should query"); let version_directory = session .execute( "SELECT id, path \ FROM lix_directory \ WHERE id = 'version-dir-foo'", &[], ) .await .expect("version directory should query"); assert_eq!(global_file.len(), 1); assert_eq!(version_directory.len(), 1); } ); simulation_test!( lix_directory_delete_recursively_deletes_tree, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let file_result = session .execute( "INSERT INTO lix_file (id, path, data) \ VALUES ('file-readme', '/docs/guides/readme.md', X'68656C6C6F')", &[], ) .await .expect("file insert should succeed"); assert_eq!(file_result, ExecuteResult::from_rows_affected(1)); let directory_ids_result = session .execute( "SELECT id \ FROM lix_directory \ WHERE path IN ('/docs/', '/docs/guides/') \ ORDER BY path", &[], ) .await .expect("directory id read before delete should succeed"); let directory_id_rows = directory_ids_result; assert_eq!(directory_id_rows.len(), 2); let directory_ids = directory_id_rows .rows() .iter() .map(|row| { let Value::Text(id) = &row.values()[0] else { panic!("directory id should be text"); }; id.clone() }) .collect::<Vec<_>>(); let delete_result = session .execute("DELETE FROM lix_directory WHERE path = '/docs/'", &[]) .await .expect("recursive directory delete should succeed"); assert_eq!(delete_result, ExecuteResult::from_rows_affected(3)); let directories_result = session .execute( "SELECT id, path \ FROM lix_directory \ WHERE path IN ('/docs/', '/docs/guides/') \ ORDER BY path", &[], ) .await .expect("directory read after delete should succeed"); let directory_rows = directories_result; assert_eq!( directory_rows.len(), 0, "recursive directory delete should delete the root and child directories" ); let file_result = session .execute( "SELECT id, path \ FROM 
lix_file \ WHERE path = '/docs/guides/readme.md'", &[], ) .await .expect("file read after delete should succeed"); let file_rows = file_result; assert_eq!( file_rows.len(), 0, "recursive directory delete should delete nested files" ); let state_result = session .execute( &format!( "SELECT entity_id, schema_key \ FROM lix_state \ WHERE entity_id IN (lix_json('[\"{}\"]'), lix_json('[\"{}\"]'), lix_json('[\"file-readme\"]')) \ ORDER BY schema_key, entity_id", directory_ids[0], directory_ids[1] ), &[], ) .await .expect("state read after delete should succeed"); let state_rows = state_result; assert_eq!( state_rows.len(), 0, "recursive directory delete should make descriptor/blob-ref state rows not visible" ); } ); simulation_test!( lix_directory_by_version_expands_global_rows, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_directory (id, path, lixcol_global, lixcol_untracked) \ VALUES ('dir-global-overlay', '/shared/', true, false)", &[], ) .await .expect("global directory insert should succeed"); let result = session .execute( "SELECT id, path, lixcol_version_id, lixcol_global, lixcol_untracked \ FROM lix_directory_by_version \ WHERE id = 'dir-global-overlay' \ ORDER BY lixcol_version_id", &[], ) .await .expect("directory by-version read should succeed"); assert_rows_eq( result, vec![ vec![ Value::Text("dir-global-overlay".to_string()), Value::Text("/shared/".to_string()), Value::Text(sim.main_version_id().to_string()), Value::Boolean(true), Value::Boolean(false), ], vec![ Value::Text("dir-global-overlay".to_string()), Value::Text("/shared/".to_string()), Value::Text("global".to_string()), Value::Boolean(true), Value::Boolean(false), ], ], ); } ); ================================================ FILE: packages/engine/tests/sql/lix_directory_history.rs ================================================ use lix_engine::Value; use serde_json::json; use super::assert_rows_eq; simulation_test!( lix_directory_history_reads_paths_from_commit_graph, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_directory (id, path) \ VALUES ('history-dir-docs', '/docs/')", &[], ) .await .expect("root directory insert should succeed"); let first_commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("first directory commit head should load") .expect("first directory commit head should exist"); session .execute( "INSERT INTO lix_directory (id, path) \ VALUES ('history-dir-guides', '/docs/guides/')", &[], ) .await .expect("nested directory insert should succeed"); let second_commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("second directory commit head should load") .expect("second directory commit head should exist"); assert_ne!(first_commit_id, second_commit_id); let result = session .execute( &format!( "SELECT id, path, parent_id, name, lixcol_start_commit_id, lixcol_depth \ FROM lix_directory_history \ WHERE lixcol_start_commit_id = '{second_commit_id}' \ AND id IN ('history-dir-docs', 'history-dir-guides') \ ORDER BY lixcol_depth, id" ), &[], ) .await .expect("directory history read should succeed"); assert_rows_eq( result, vec![ vec![ Value::Text("history-dir-guides".to_string()), Value::Text("/docs/guides/".to_string()), 
Value::Text("history-dir-docs".to_string()), Value::Text("guides".to_string()), Value::Text(second_commit_id.clone()), Value::Integer(0), ], vec![ Value::Text("history-dir-docs".to_string()), Value::Text("/docs/".to_string()), Value::Null, Value::Text("docs".to_string()), Value::Text(second_commit_id.clone()), Value::Integer(1), ], ], ); let snapshot_result = session .execute( &format!( "SELECT lixcol_snapshot_content \ FROM lix_directory_history \ WHERE lixcol_start_commit_id = '{second_commit_id}' \ AND id = 'history-dir-guides' \ AND lixcol_depth = 0" ), &[], ) .await .expect("directory history descriptor snapshot should be selectable"); let snapshot = snapshot_result.rows()[0] .get::("lixcol_snapshot_content") .expect("snapshot_content should be present"); let Value::Json(snapshot) = snapshot else { panic!("snapshot_content should be semantic JSON, got {snapshot:?}"); }; assert_eq!(snapshot["parent_id"], json!("history-dir-docs")); assert_eq!(snapshot["name"], json!("guides")); } ); simulation_test!( lix_directory_history_requires_start_commit_id, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute("SELECT id FROM lix_directory_history", &[]) .await .expect_err("directory history queries must provide start commit"); assert!( error .to_string() .contains("requires a lixcol_start_commit_id filter"), "unexpected error: {error}" ); assert!( error .hint() .is_some_and(|hint| hint.contains("WHERE lixcol_start_commit_id")), "unexpected error: {error}" ); } ); simulation_test!( lix_directory_history_records_recursive_delete, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_directory (id, path) \ VALUES ('history-delete-docs', '/docs/')", &[], ) .await .expect("root directory insert should succeed"); session .execute( "INSERT INTO lix_directory (id, path) \ VALUES ('history-delete-guides', '/docs/guides/')", &[], ) .await .expect("nested directory insert should succeed"); session .execute( "DELETE FROM lix_directory WHERE id = 'history-delete-docs'", &[], ) .await .expect("recursive directory delete should succeed"); let delete_commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("delete commit head should load") .expect("delete commit head should exist"); let result = session .execute( &format!( "SELECT id, path, name, lixcol_snapshot_content, lixcol_schema_key, lixcol_start_commit_id, lixcol_depth \ FROM lix_directory_history \ WHERE lixcol_start_commit_id = '{delete_commit_id}' \ AND lixcol_entity_id IN (lix_json('[\"history-delete-docs\"]'), lix_json('[\"history-delete-guides\"]')) \ AND lixcol_depth = 0 \ ORDER BY lixcol_entity_id" ), &[], ) .await .expect("directory delete history read should succeed"); assert_rows_eq( result, vec![ vec![ Value::Text("history-delete-docs".to_string()), Value::Null, Value::Null, Value::Null, Value::Text("lix_directory_descriptor".to_string()), Value::Text(delete_commit_id.clone()), Value::Integer(0), ], vec![ Value::Text("history-delete-guides".to_string()), Value::Null, Value::Null, Value::Null, Value::Text("lix_directory_descriptor".to_string()), Value::Text(delete_commit_id), Value::Integer(0), ], ], ); } ); ================================================ FILE: packages/engine/tests/sql/lix_file.rs 
================================================ use lix_engine::ExecuteResult; use lix_engine::LixError; use lix_engine::Value; use super::assert_rows_eq; simulation_test!( lix_file_read_rejects_public_path_inside_scalar_function, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute( "SELECT id FROM lix_file WHERE lower(path) = '/readme.md'", &[], ) .await .expect_err("public path column should not be hidden inside scalar functions"); assert_eq!(error.code, LixError::CODE_UNSUPPORTED_SQL); assert!(error.message.contains("public column 'path'")); } ); simulation_test!( lix_file_by_version_read_rejects_dynamic_version_id_operand, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute( "SELECT id FROM lix_file_by_version WHERE lixcol_version_id = lower('main')", &[], ) .await .expect_err("public version id predicate should only accept literal/param operands"); assert_eq!(error.code, LixError::CODE_UNSUPPORTED_SQL); assert!(error.message.contains("public column 'lixcol_version_id'")); } ); simulation_test!( lix_file_path_insert_rejects_overlong_paths_and_segments, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let long_segment = "a".repeat(256); let segment_error = session .execute( "INSERT INTO lix_file (id, path) VALUES ('file-long-segment', $1)", &[Value::Text(format!("/{long_segment}"))], ) .await .expect_err("overlong file path segment should be rejected"); assert_eq!(segment_error.code, LixError::CODE_INVALID_PARAM); assert!(segment_error.message.contains("path segment is too long")); let long_path = format!("/{}", ["abcd"; 820].join("/")); let path_error = session .execute( "INSERT INTO lix_file (id, path) VALUES ('file-long-path', $1)", &[Value::Text(long_path)], ) .await .expect_err("overlong file path should be rejected"); assert_eq!(path_error.code, LixError::CODE_INVALID_PARAM); assert!(path_error.message.contains("path is too long")); let encoded_segment_at_limit = "%61".repeat(255); session .execute( "INSERT INTO lix_file (id, path) VALUES ('file-encoded-limit', $1)", &[Value::Text(format!("/{encoded_segment_at_limit}"))], ) .await .expect("percent-encoded segment should be measured after canonicalization"); let encoded_segment_over_limit = "%61".repeat(256); let encoded_segment_error = session .execute( "INSERT INTO lix_file (id, path) VALUES ('file-encoded-over-limit', $1)", &[Value::Text(format!("/{encoded_segment_over_limit}"))], ) .await .expect_err("overlong canonical segment should be rejected"); assert_eq!(encoded_segment_error.code, LixError::CODE_INVALID_PARAM); assert!(encoded_segment_error .message .contains("path segment is too long")); let huge_path = format!("/{}", "a".repeat(1024 * 1024)); let huge_error = session .execute( "INSERT INTO lix_file (id, path) VALUES ('file-huge-path', $1)", &[Value::Text(huge_path)], ) .await .expect_err("huge path input should be rejected without runtime internals"); assert_eq!(huge_error.code, LixError::CODE_INVALID_PARAM); assert!(huge_error.message.contains("path input is too long")); } ); simulation_test!( lix_file_path_insert_rejects_percent_encoded_forbidden_code_points, |sim| async move { let 
engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); for (id, path, expected_reason) in [ ( "file-percent-nul", "/docs/%00evil.txt", "path must not contain a NUL byte", ), ( "file-percent-bidi", "/docs/%E2%80%AEevil.txt", "path segment contains a character that is not allowed", ), ] { let error = session .execute( &format!("INSERT INTO lix_file (id, path) VALUES ('{id}', '{path}')"), &[], ) .await .expect_err("percent-encoded forbidden path code point should be rejected"); assert_eq!(error.code, LixError::CODE_INVALID_PARAM); assert!(error.message.contains(expected_reason), "{error:?}"); } } ); simulation_test!( lix_file_path_insert_preserves_opaque_file_name_segments, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); for (id, path) in [ ("file-foo-dot", "/foo."), ("file-foo-dot-dot", "/foo.."), ("file-foo-dot-dot-dot", "/foo..."), ("file-archive", "/archive.tar.gz"), ("file-dotenv", "/.env"), ("file-percent-dot", "/docs/%2Ehidden"), ] { session .execute( &format!("INSERT INTO lix_file (id, path) VALUES ('{id}', '{path}')"), &[], ) .await .expect("opaque file name insert should succeed"); } let result = session .execute( "SELECT id, path, name \ FROM lix_file \ WHERE id IN (\ 'file-foo-dot',\ 'file-foo-dot-dot',\ 'file-foo-dot-dot-dot',\ 'file-archive',\ 'file-dotenv',\ 'file-percent-dot'\ ) \ ORDER BY id", &[], ) .await .expect("file read should succeed"); assert_rows_eq( result, vec![ vec![ Value::Text("file-archive".to_string()), Value::Text("/archive.tar.gz".to_string()), Value::Text("archive.tar.gz".to_string()), ], vec![ Value::Text("file-dotenv".to_string()), Value::Text("/.env".to_string()), Value::Text(".env".to_string()), ], vec![ Value::Text("file-foo-dot".to_string()), Value::Text("/foo.".to_string()), Value::Text("foo.".to_string()), ], vec![ Value::Text("file-foo-dot-dot".to_string()), Value::Text("/foo..".to_string()), Value::Text("foo..".to_string()), ], vec![ Value::Text("file-foo-dot-dot-dot".to_string()), Value::Text("/foo...".to_string()), Value::Text("foo...".to_string()), ], vec![ Value::Text("file-percent-dot".to_string()), Value::Text("/docs/.hidden".to_string()), Value::Text(".hidden".to_string()), ], ], ); } ); simulation_test!( lix_file_descriptor_shape_insert_uses_name_as_full_basename, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_file (id, directory_id, name) \ VALUES ('file-descriptor-dot', NULL, 'foo.')", &[], ) .await .expect("descriptor-shaped insert should accept full opaque basename"); let result = session .execute( "SELECT id, path, name \ FROM lix_file \ WHERE id = 'file-descriptor-dot'", &[], ) .await .expect("file read should succeed"); assert_rows_eq( result, vec![vec![ Value::Text("file-descriptor-dot".to_string()), Value::Text("/foo.".to_string()), Value::Text("foo.".to_string()), ]], ); } ); simulation_test!( lix_file_extension_column_is_not_writable_identity, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_file (id, directory_id, name, extension) \ VALUES ('file-extension-write', NULL, 
'readme', 'md')", &[], ) .await .expect_err("extension should not be accepted as writable file identity"); } ); simulation_test!( lix_file_namespace_treats_trailing_dot_names_as_distinct, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_file (id, path) VALUES ('file-foo', '/foo')", &[], ) .await .expect("plain file insert should succeed"); session .execute( "INSERT INTO lix_file (id, path) VALUES ('file-foo-dot', '/foo.')", &[], ) .await .expect("trailing-dot file insert should be distinct from plain name"); let result = session .execute( "SELECT id, path, name \ FROM lix_file \ WHERE id IN ('file-foo', 'file-foo-dot') \ ORDER BY id", &[], ) .await .expect("file read should succeed"); assert_rows_eq( result, vec![ vec![ Value::Text("file-foo".to_string()), Value::Text("/foo".to_string()), Value::Text("foo".to_string()), ], vec![ Value::Text("file-foo-dot".to_string()), Value::Text("/foo.".to_string()), Value::Text("foo.".to_string()), ], ], ); } ); simulation_test!( lix_file_insert_reads_path_data_and_parent_dirs, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let file_result = session .execute( "INSERT INTO lix_file (id, path, data) \ VALUES ('file-readme', '/docs/guides/readme.md', X'68656C6C6F')", &[], ) .await .expect("file insert should succeed"); assert_eq!(file_result, ExecuteResult::from_rows_affected(1)); let result = session .execute( "SELECT id, path, data, lixcol_schema_key \ FROM lix_file \ WHERE id = 'file-readme'", &[], ) .await .expect("file read should succeed"); let row_set = result; assert_eq!(row_set.len(), 1); assert_eq!( row_set.rows()[0].values(), &[ Value::Text("file-readme".to_string()), Value::Text("/docs/guides/readme.md".to_string()), Value::Blob(b"hello".to_vec()), Value::Text("lix_file_descriptor".to_string()), ] ); let staged_state_result = session .execute( "SELECT entity_id, schema_key \ FROM lix_state \ WHERE entity_id = lix_json('[\"file-readme\"]') \ ORDER BY schema_key, entity_id", &[], ) .await .expect("filesystem state read should succeed"); let staged_state_rows = staged_state_result; assert_eq!( staged_state_rows.len(), 2, "file path insert should stage one file descriptor and one blob ref for the file" ); let directory_result = session .execute( "SELECT path \ FROM lix_directory \ WHERE path IN ('/docs/', '/docs/guides/') \ ORDER BY path", &[], ) .await .expect("directory read after file insert should succeed"); let directory_rows = directory_result; assert_eq!( directory_rows.len(), 2, "file path insert should stage exactly the two missing parent directories" ); } ); simulation_test!(lix_file_insert_applies_defaulted_id, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_directory (id, parent_id, name) \ VALUES ('dir-docs', NULL, 'docs')", &[], ) .await .expect("directory insert should succeed"); let insert_result = session .execute( "INSERT INTO lix_file (directory_id, name) \ VALUES ('dir-docs', 'readme.md')", &[], ) .await .expect("file insert should apply defaulted id"); assert_eq!(insert_result, ExecuteResult::from_rows_affected(1)); let result = session .execute( "SELECT id, path, 
directory_id, name \ FROM lix_file \ WHERE path = '/docs/readme.md'", &[], ) .await .expect("file read should succeed"); let row_set = result; assert_eq!(row_set.len(), 1); let values = row_set.rows()[0].values(); let [Value::Text(id), Value::Text(path), Value::Text(directory_id), Value::Text(name)] = values else { panic!("expected generated file row, got {values:?}"); }; assert!(!id.is_empty(), "defaulted file id should be non-empty"); assert_eq!(path, "/docs/readme.md"); assert_eq!(directory_id, "dir-docs"); assert_eq!(name, "readme.md"); }); simulation_test!( lix_file_path_insert_applies_defaulted_id, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let insert_result = session .execute( "INSERT INTO lix_file (path) VALUES ('/docs/readme.md')", &[], ) .await .expect("file path insert should apply defaulted id"); assert_eq!(insert_result, ExecuteResult::from_rows_affected(1)); let result = session .execute( "SELECT id, path, name \ FROM lix_file \ WHERE path = '/docs/readme.md'", &[], ) .await .expect("file read should succeed"); let row_set = result; assert_eq!(row_set.len(), 1); let values = row_set.rows()[0].values(); let [Value::Text(id), Value::Text(path), Value::Text(name)] = values else { panic!("expected generated file path row, got {values:?}"); }; assert!(!id.is_empty(), "defaulted file id should be non-empty"); assert_eq!(path, "/docs/readme.md"); assert_eq!(name, "readme.md"); } ); simulation_test!( lix_file_path_data_insert_applies_defaulted_id, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let insert_result = session .execute( "INSERT INTO lix_file (path, data) VALUES ('/docs/readme.md', X'68656C6C6F')", &[], ) .await .expect("file path data insert should apply defaulted id"); assert_eq!(insert_result, ExecuteResult::from_rows_affected(1)); let result = session .execute( "SELECT id, path, data \ FROM lix_file \ WHERE path = '/docs/readme.md'", &[], ) .await .expect("file read should succeed"); let row_set = result; assert_eq!(row_set.len(), 1); let values = row_set.rows()[0].values(); let [Value::Text(id), Value::Text(path), Value::Blob(data)] = values else { panic!("expected generated file data row, got {values:?}"); }; assert!(!id.is_empty(), "defaulted file id should be non-empty"); assert_eq!(path, "/docs/readme.md"); assert_eq!(data, b"hello"); } ); simulation_test!(lix_file_insert_rejects_null_data, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute( "INSERT INTO lix_file (id, path, data) \ VALUES ('null-data-file', '/null.bin', NULL)", &[], ) .await .expect_err("explicit NULL data should be rejected"); assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH); let parameter_error = session .execute( "INSERT INTO lix_file (id, path, data) \ VALUES ('null-param-data-file', '/null-param.bin', $1)", &[Value::Null], ) .await .expect_err("parameterized NULL data should be rejected"); assert_eq!(parameter_error.code, LixError::CODE_TYPE_MISMATCH); let result = session .execute( "SELECT id FROM lix_file \ WHERE id IN ('null-data-file', 'null-param-data-file')", &[], ) .await .expect("file read should succeed"); assert_eq!(result.len(), 0); }); simulation_test!( 
lix_file_insert_rejects_non_binary_data_literals, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); for (id, sql) in [ ( "text-data-file", "INSERT INTO lix_file (id, path, data) \ VALUES ('text-data-file', '/text.bin', 'hello')", ), ( "int-data-file", "INSERT INTO lix_file (id, path, data) \ VALUES ('int-data-file', '/int.bin', 12345)", ), ( "float-data-file", "INSERT INTO lix_file (id, path, data) \ VALUES ('float-data-file', '/float.bin', 1.5)", ), ( "bool-data-file", "INSERT INTO lix_file (id, path, data) \ VALUES ('bool-data-file', '/bool.bin', true)", ), ] { let error = session .execute(sql, &[]) .await .expect_err("non-binary data literal should be rejected"); assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH, "{id}"); } let result = session .execute( "SELECT id FROM lix_file \ WHERE id IN (\ 'text-data-file',\ 'int-data-file',\ 'float-data-file',\ 'bool-data-file'\ )", &[], ) .await .expect("file read should succeed"); assert_eq!(result.len(), 0); } ); simulation_test!( lix_file_insert_rejects_non_binary_data_from_select, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute( "INSERT INTO lix_file (id, path, data) \ SELECT 'select-text-data-file', '/select-text.bin', 'hello'", &[], ) .await .expect_err("non-binary data from SELECT should be rejected"); assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH); let result = session .execute( "SELECT id FROM lix_file WHERE id = 'select-text-data-file'", &[], ) .await .expect("file read should succeed"); assert_eq!(result.len(), 0); } ); simulation_test!( lix_file_insert_rejects_non_binary_data_parameters, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); for (id, value) in [ ("text-param-data-file", Value::Text("hello".to_string())), ("int-param-data-file", Value::Integer(12345)), ] { let error = session .execute( &format!( "INSERT INTO lix_file (id, path, data) \ VALUES ('{id}', '/{id}.bin', $1)" ), &[value], ) .await .expect_err("non-binary data parameter should be rejected"); assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH, "{id}"); } } ); simulation_test!(lix_file_insert_accepts_empty_blob_data, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let insert_result = session .execute( "INSERT INTO lix_file (id, path, data) \ VALUES ('empty-data-file', '/empty.bin', X'')", &[], ) .await .expect("empty blob data should be accepted"); assert_eq!(insert_result, ExecuteResult::from_rows_affected(1)); let result = session .execute( "SELECT data FROM lix_file WHERE id = 'empty-data-file'", &[], ) .await .expect("file read should succeed"); assert_eq!(result.len(), 1); assert_eq!(result.rows()[0].values(), &[Value::Blob(Vec::new())]); }); simulation_test!( lix_file_path_insert_rejects_duplicate_root_path, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_file (path, data) VALUES ('/x.bin', $1)", &[Value::Blob(vec![1])], ) .await .expect("first file path 
insert should succeed"); let error = session .execute( "INSERT INTO lix_file (path, data) VALUES ('/x.bin', $1)", &[Value::Blob(vec![2])], ) .await .expect_err("duplicate file path insert should be rejected"); assert_eq!(error.code, LixError::CODE_UNIQUE); } ); simulation_test!( lix_file_insert_duplicate_id_with_data_reports_lix_file, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_file (id, path, data) \ VALUES ('same-file', '/a.bin', X'01')", &[], ) .await .expect("first file insert should succeed"); let error = session .execute( "INSERT INTO lix_file (id, path, data) \ VALUES ('same-file', '/b.bin', X'02')", &[], ) .await .expect_err("duplicate file id insert should be rejected"); assert_eq!(error.code, LixError::CODE_UNIQUE); assert!( error.message.contains("table 'lix_file'") && error.message.contains("id 'same-file'") && !error.message.contains("lix_binary_blob_ref"), "unexpected error: {error:?}" ); } ); simulation_test!( lix_file_insert_duplicate_id_without_data_reports_lix_file, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_file (id, path) VALUES ('same-file', '/a.bin')", &[], ) .await .expect("first file insert should succeed"); let error = session .execute( "INSERT INTO lix_file (id, path) VALUES ('same-file', '/b.bin')", &[], ) .await .expect_err("duplicate file id insert should be rejected"); assert_eq!(error.code, LixError::CODE_UNIQUE); assert!( error.message.contains("table 'lix_file'") && error.message.contains("id 'same-file'") && !error.message.contains("lix_file_descriptor"), "unexpected error: {error:?}" ); } ); simulation_test!( lix_file_insert_duplicate_id_in_same_batch_reports_lix_file, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute( "INSERT INTO lix_file (id, path, data) VALUES \ ('same-file', '/a.bin', X'01'), \ ('same-file', '/b.bin', X'02')", &[], ) .await .expect_err("same-batch duplicate file id insert should be rejected"); assert_eq!(error.code, LixError::CODE_UNIQUE); assert!( error.message.contains("table 'lix_file'") && error.message.contains("id 'same-file'") && !error.message.contains("lix_binary_blob_ref"), "unexpected error: {error:?}" ); } ); simulation_test!( lix_file_by_version_insert_duplicate_id_reports_lix_file_by_version, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let version_id = sim.main_version_id(); session .execute( &format!( "INSERT INTO lix_file_by_version \ (id, path, data, lixcol_version_id) \ VALUES ('same-file', '/a.bin', X'01', '{version_id}')" ), &[], ) .await .expect("first by-version file insert should succeed"); let error = session .execute( &format!( "INSERT INTO lix_file_by_version \ (id, path, data, lixcol_version_id) \ VALUES ('same-file', '/b.bin', X'02', '{version_id}')" ), &[], ) .await .expect_err("duplicate by-version file id insert should be rejected"); assert_eq!(error.code, LixError::CODE_UNIQUE); assert!( error.message.contains("table 'lix_file_by_version'") && error.message.contains("id 'same-file'") 
&& !error.message.contains("table 'lix_file':") && !error.message.contains("lix_binary_blob_ref"), "unexpected error: {error:?}" ); } ); simulation_test!( lix_file_path_insert_rejects_existing_directory_entry, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute("INSERT INTO lix_directory (path) VALUES ('/foo/')", &[]) .await .expect("directory insert should succeed"); let error = session .execute("INSERT INTO lix_file (path) VALUES ('/foo')", &[]) .await .expect_err("file should conflict with directory at same entry name"); assert_eq!(error.code, LixError::CODE_UNIQUE); assert!( error.message.contains("filesystem namespace conflict"), "expected namespace conflict error: {error}" ); } ); simulation_test!( lix_file_path_insert_allows_extension_distinct_from_directory, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute("INSERT INTO lix_directory (path) VALUES ('/foo/')", &[]) .await .expect("directory insert should succeed"); session .execute("INSERT INTO lix_file (path) VALUES ('/foo.txt')", &[]) .await .expect("file basename foo.txt should not conflict with directory foo"); let file_result = session .execute("SELECT path FROM lix_file WHERE path = '/foo.txt'", &[]) .await .expect("file path should query"); let directory_result = session .execute("SELECT path FROM lix_directory WHERE path = '/foo/'", &[]) .await .expect("directory path should query"); assert_eq!(file_result.len(), 1); assert_eq!(directory_result.len(), 1); } ); simulation_test!( lix_file_path_insert_rejects_file_as_implicit_ancestor, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute("INSERT INTO lix_file (path) VALUES ('/foo')", &[]) .await .expect("file insert should succeed"); let error = session .execute("INSERT INTO lix_file (path) VALUES ('/foo/bar.txt')", &[]) .await .expect_err("implicit ancestor directory should conflict with existing file"); assert_eq!(error.code, LixError::CODE_UNIQUE); } ); simulation_test!( lix_file_descriptor_shape_insert_rejects_existing_directory_entry, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_directory (id, parent_id, name) VALUES ('dir-foo', NULL, 'foo')", &[], ) .await .expect("directory insert should succeed"); let error = session .execute( "INSERT INTO lix_file (id, directory_id, name) \ VALUES ('file-foo', NULL, 'foo')", &[], ) .await .expect_err("descriptor-shaped file insert should conflict with directory"); assert_eq!(error.code, LixError::CODE_UNIQUE); } ); simulation_test!( lix_file_update_rejects_existing_directory_entry, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_file (id, path) VALUES ('file-foo', '/foo')", &[], ) .await .expect("file insert should succeed"); session .execute("INSERT INTO lix_directory (path) VALUES ('/bar/')", &[]) .await .expect("directory insert should succeed"); let error = session .execute( "UPDATE 
lix_file SET path = '/bar' WHERE id = 'file-foo'", &[], ) .await .expect_err("file path update should conflict with directory"); assert_eq!(error.code, LixError::CODE_UNIQUE); } ); simulation_test!( lix_file_insert_rejects_missing_directory_id, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute( "INSERT INTO lix_file (directory_id, name) \ VALUES ('missing-dir', 'readme.md')", &[], ) .await .expect_err("file insert should reject missing directory_id"); assert_eq!(error.code, LixError::CODE_FOREIGN_KEY); } ); simulation_test!( lix_file_update_rejects_missing_directory_id_and_preserves_path, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_directory (id, path) VALUES ('dir-docs', '/docs/')", &[], ) .await .expect("directory insert should succeed"); session .execute( "INSERT INTO lix_file (id, directory_id, name) \ VALUES ('file-readme', 'dir-docs', 'readme.md')", &[], ) .await .expect("file insert should succeed"); let error = session .execute( "UPDATE lix_file SET directory_id = 'missing-dir' WHERE id = 'file-readme'", &[], ) .await .expect_err("file update should reject missing directory_id"); assert_eq!(error.code, LixError::CODE_FOREIGN_KEY); let result = session .execute( "SELECT path, directory_id FROM lix_file WHERE id = 'file-readme'", &[], ) .await .expect("file read should succeed"); assert_eq!( result.rows()[0].values(), &[ Value::Text("/docs/readme.md".to_string()), Value::Text("dir-docs".to_string()) ] ); } ); simulation_test!( lix_file_path_insert_rejects_dot_segments, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); for path in ["/a/../b/c.txt", "/a/%2e%2e/b/c.txt", "/a/./b/c.txt"] { let error = session .execute( "INSERT INTO lix_file (path, data) VALUES ($1, $2)", &[Value::Text(path.to_string()), Value::Blob(Vec::new())], ) .await .expect_err("file path insert should reject dot segments"); assert_eq!(error.code, LixError::CODE_INVALID_PARAM); assert!(error.message.contains("path segment cannot be '.' 
or '..'")); } let result = session .execute("SELECT path FROM lix_file WHERE path = '/b/c.txt'", &[]) .await .expect("file read should succeed"); assert_eq!(result.len(), 0); } ); simulation_test!( lix_file_data_insert_applies_defaulted_id, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_directory (id, parent_id, name) \ VALUES ('dir-docs', NULL, 'docs')", &[], ) .await .expect("directory insert should succeed"); let insert_result = session .execute( "INSERT INTO lix_file (directory_id, name, data) \ VALUES ('dir-docs', 'readme.md', X'68656C6C6F')", &[], ) .await .expect("file data insert should apply defaulted id"); assert_eq!(insert_result, ExecuteResult::from_rows_affected(1)); let result = session .execute( "SELECT id, path, data \ FROM lix_file \ WHERE path = '/docs/readme.md'", &[], ) .await .expect("file read should succeed"); let row_set = result; assert_eq!(row_set.len(), 1); let values = row_set.rows()[0].values(); let [Value::Text(id), Value::Text(path), Value::Blob(data)] = values else { panic!("expected generated file data row, got {values:?}"); }; assert!(!id.is_empty(), "defaulted file id should be non-empty"); assert_eq!(path, "/docs/readme.md"); assert_eq!(data, b"hello"); } ); simulation_test!(lix_file_path_update_preserves_data, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let insert_result = session .execute( "INSERT INTO lix_file (id, path, data) \ VALUES ('file-readme', '/docs/guides/readme.md', X'68656C6C6F')", &[], ) .await .expect("file insert should succeed"); assert_eq!(insert_result, ExecuteResult::from_rows_affected(1)); let update_result = session .execute( "UPDATE lix_file \ SET path = '/docs/readme-renamed.md' \ WHERE id = 'file-readme'", &[], ) .await .expect("file path update should succeed"); assert_eq!(update_result, ExecuteResult::from_rows_affected(1)); let file_result = session .execute( "SELECT id, path, data \ FROM lix_file \ WHERE id = 'file-readme'", &[], ) .await .expect("file read after path update should succeed"); let file_rows = file_result; assert_eq!(file_rows.len(), 1); assert_eq!( file_rows.rows()[0].values(), &[ Value::Text("file-readme".to_string()), Value::Text("/docs/readme-renamed.md".to_string()), Value::Blob(b"hello".to_vec()), ] ); let state_result = session .execute( "SELECT entity_id, schema_key \ FROM lix_state \ WHERE entity_id = lix_json('[\"file-readme\"]') \ ORDER BY schema_key, entity_id", &[], ) .await .expect("filesystem state read after path update should succeed"); let state_rows = state_result; assert_eq!( state_rows.len(), 2, "path update should update one file descriptor and preserve one blob ref" ); let directory_result = session .execute( "SELECT path \ FROM lix_directory \ WHERE path IN ('/docs/', '/docs/guides/') \ ORDER BY path", &[], ) .await .expect("directory read after path update should succeed"); let directory_rows = directory_result; assert_eq!( directory_rows.len(), 2, "path update should not stage an extra directory descriptor" ); }); simulation_test!( lix_file_update_rejects_null_data_and_preserves_existing_data, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( 
"INSERT INTO lix_file (id, path, data) \ VALUES ('update-null-file', '/update-null.bin', X'68656C6C6F')", &[], ) .await .expect("file insert should succeed"); let error = session .execute( "UPDATE lix_file SET data = NULL WHERE id = 'update-null-file'", &[], ) .await .expect_err("explicit NULL data update should be rejected"); assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH); let parameter_error = session .execute( "UPDATE lix_file SET data = $1 WHERE id = 'update-null-file'", &[Value::Null], ) .await .expect_err("parameterized NULL data update should be rejected"); assert_eq!(parameter_error.code, LixError::CODE_TYPE_MISMATCH); let result = session .execute( "SELECT data FROM lix_file WHERE id = 'update-null-file'", &[], ) .await .expect("file read should succeed"); assert_eq!(result.len(), 1); assert_eq!(result.rows()[0].values(), &[Value::Blob(b"hello".to_vec())]); } ); simulation_test!( lix_file_update_rejects_non_binary_data_literals_and_preserves_existing_data, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); for (id, assignment) in [ ("update-text-file", "'hello'"), ("update-int-file", "12345"), ("update-float-file", "1.5"), ("update-bool-file", "true"), ] { session .execute( &format!( "INSERT INTO lix_file (id, path, data) \ VALUES ('{id}', '/{id}.bin', X'68656C6C6F')" ), &[], ) .await .expect("file insert should succeed"); let error = session .execute( &format!("UPDATE lix_file SET data = {assignment} WHERE id = '{id}'"), &[], ) .await .expect_err("non-binary data literal update should be rejected"); assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH, "{id}"); } let result = session .execute( "SELECT id, data FROM lix_file \ WHERE id IN (\ 'update-text-file',\ 'update-int-file',\ 'update-float-file',\ 'update-bool-file'\ ) \ ORDER BY id", &[], ) .await .expect("file read should succeed"); assert_rows_eq( result, vec![ vec![ Value::Text("update-bool-file".to_string()), Value::Blob(b"hello".to_vec()), ], vec![ Value::Text("update-float-file".to_string()), Value::Blob(b"hello".to_vec()), ], vec![ Value::Text("update-int-file".to_string()), Value::Blob(b"hello".to_vec()), ], vec![ Value::Text("update-text-file".to_string()), Value::Blob(b"hello".to_vec()), ], ], ); } ); simulation_test!( lix_file_update_rejects_non_binary_data_parameters_and_preserves_existing_data, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); for (id, value) in [ ("update-text-param-file", Value::Text("hello".to_string())), ("update-int-param-file", Value::Integer(12345)), ] { session .execute( &format!( "INSERT INTO lix_file (id, path, data) \ VALUES ('{id}', '/{id}.bin', X'68656C6C6F')" ), &[], ) .await .expect("file insert should succeed"); let error = session .execute( &format!("UPDATE lix_file SET data = $1 WHERE id = '{id}'"), &[value], ) .await .expect_err("non-binary data parameter update should be rejected"); assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH, "{id}"); } let result = session .execute( "SELECT id, data FROM lix_file \ WHERE id IN ('update-text-param-file', 'update-int-param-file') \ ORDER BY id", &[], ) .await .expect("file read should succeed"); assert_rows_eq( result, vec![ vec![ Value::Text("update-int-param-file".to_string()), Value::Blob(b"hello".to_vec()), ], vec![ Value::Text("update-text-param-file".to_string()), 
Value::Blob(b"hello".to_vec()), ], ], ); } ); simulation_test!(lix_file_update_accepts_empty_blob_data, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_file (id, path, data) \ VALUES ('empty-update-file', '/empty-update.bin', X'68656C6C6F')", &[], ) .await .expect("file insert should succeed"); let update_result = session .execute( "UPDATE lix_file SET data = X'' WHERE id = 'empty-update-file'", &[], ) .await .expect("empty blob data update should be accepted"); assert_eq!(update_result, ExecuteResult::from_rows_affected(1)); let result = session .execute( "SELECT data FROM lix_file WHERE id = 'empty-update-file'", &[], ) .await .expect("file read should succeed"); assert_eq!(result.len(), 1); assert_eq!(result.rows()[0].values(), &[Value::Blob(Vec::new())]); }); simulation_test!(lix_file_by_version_expands_global_rows, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_file (id, path, data, lixcol_global, lixcol_untracked) \ VALUES ('file-global-overlay', '/global.txt', X'67', true, false)", &[], ) .await .expect("global file insert should succeed"); let result = session .execute( "SELECT id, path, lixcol_version_id, lixcol_global, lixcol_untracked \ FROM lix_file_by_version \ WHERE id = 'file-global-overlay' \ ORDER BY lixcol_version_id", &[], ) .await .expect("file by-version read should succeed"); assert_rows_eq( result, vec![ vec![ Value::Text("file-global-overlay".to_string()), Value::Text("/global.txt".to_string()), Value::Text(sim.main_version_id().to_string()), Value::Boolean(true), Value::Boolean(false), ], vec![ Value::Text("file-global-overlay".to_string()), Value::Text("/global.txt".to_string()), Value::Text("global".to_string()), Value::Boolean(true), Value::Boolean(false), ], ], ); }); ================================================ FILE: packages/engine/tests/sql/lix_file_history.rs ================================================ use lix_engine::Value; use serde_json::json; use super::assert_rows_eq; simulation_test!( lix_file_history_reads_path_and_data_from_commit_graph, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_file (id, path, data) \ VALUES ('history-file', '/docs/guides/readme.md', X'68656C6C6F')", &[], ) .await .expect("file insert should succeed"); let first_commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("first file commit head should load") .expect("first file commit head should exist"); session .execute( "UPDATE lix_file \ SET path = '/docs/readme-renamed.md' \ WHERE id = 'history-file'", &[], ) .await .expect("file path update should succeed"); let second_commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("second file commit head should load") .expect("second file commit head should exist"); assert_ne!(first_commit_id, second_commit_id); let result = session .execute( &format!( "SELECT id, path, name, data, lixcol_start_commit_id, lixcol_depth \ FROM lix_file_history \ WHERE lixcol_start_commit_id = '{second_commit_id}' \ AND id = 'history-file' \ AND path LIKE '/docs/%' \ ORDER BY lixcol_depth" ), 
&[], ) .await .expect("file history read should succeed"); assert_rows_eq( result, vec![ vec![ Value::Text("history-file".to_string()), Value::Text("/docs/readme-renamed.md".to_string()), Value::Text("readme-renamed.md".to_string()), Value::Blob(b"hello".to_vec()), Value::Text(second_commit_id.clone()), Value::Integer(0), ], vec![ Value::Text("history-file".to_string()), Value::Text("/docs/guides/readme.md".to_string()), Value::Text("readme.md".to_string()), Value::Blob(b"hello".to_vec()), Value::Text(second_commit_id.clone()), Value::Integer(1), ], ], ); let snapshot_result = session .execute( &format!( "SELECT lixcol_snapshot_content \ FROM lix_file_history \ WHERE lixcol_start_commit_id = '{second_commit_id}' \ AND id = 'history-file' \ AND lixcol_depth = 0" ), &[], ) .await .expect("file history descriptor snapshot should be selectable"); let snapshot = snapshot_result.rows()[0] .get::<Value>("lixcol_snapshot_content") .expect("snapshot_content should be present"); let Value::Json(snapshot) = snapshot else { panic!("snapshot_content should be semantic JSON, got {snapshot:?}"); }; assert_eq!(snapshot["name"], json!("readme-renamed.md")); let result = session .execute( &format!( "SELECT id \ FROM lix_file_history \ WHERE lixcol_start_commit_id = '{first_commit_id}' \ AND path LIKE '/missing/%'" ), &[], ) .await .expect("file history should route start commit and leave path LIKE as residual"); assert_rows_eq(result, Vec::<Vec<Value>>::new()); } ); simulation_test!( lix_file_history_requires_start_commit_id, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute("SELECT id FROM lix_file_history", &[]) .await .expect_err("file history queries must provide start commit"); assert!( error .to_string() .contains("requires a lixcol_start_commit_id filter"), "unexpected error: {error}" ); assert!( error .hint() .is_some_and(|hint| hint.contains("WHERE lixcol_start_commit_id")), "unexpected error: {error}" ); } ); simulation_test!( lix_file_history_exposes_file_descriptor_schema_key, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_file (id, path, data) \ VALUES ('history-file-blob-filter', '/docs/blob-filter.txt', X'626C6F62')", &[], ) .await .expect("file insert should succeed"); session .execute( "UPDATE lix_file SET data = X'626C6F6232' \ WHERE id = 'history-file-blob-filter'", &[], ) .await .expect("file data update should succeed"); let commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("file commit head should load") .expect("file commit head should exist"); let result = session .execute( &format!( "SELECT id, path, data, lixcol_schema_key \ FROM lix_file_history \ WHERE lixcol_start_commit_id = '{commit_id}' \ AND lixcol_schema_key = 'lix_file_descriptor' \ AND id = 'history-file-blob-filter' \ AND lixcol_depth = 0" ), &[], ) .await .expect("file-descriptor-filtered file history read should succeed"); assert_rows_eq( result, vec![vec![ Value::Text("history-file-blob-filter".to_string()), Value::Text("/docs/blob-filter.txt".to_string()), Value::Blob(b"blob2".to_vec()), Value::Text("lix_file_descriptor".to_string()), ]], ); let blob_schema_result = session .execute( &format!( "SELECT id \ FROM lix_file_history \ WHERE lixcol_start_commit_id = '{commit_id}' \ AND 
lixcol_schema_key = 'lix_binary_blob_ref' \ AND id = 'history-file-blob-filter'" ), &[], ) .await .expect("blob-ref-filtered file history read should succeed"); assert_rows_eq(blob_schema_result, Vec::<Vec<Value>>::new()); } ); ================================================ FILE: packages/engine/tests/sql/lix_json.rs ================================================ use lix_engine::{LixError, Value}; use serde_json::json; use super::assert_rows_eq; simulation_test!( lix_json_expression_results_are_semantic_json, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let result = session .execute( "SELECT \ lix_json('{\"name\":\"Ada\",\"tags\":[\"db\"]}') AS document, \ lix_json(NULL) AS json_null, \ lix_json_get('{\"name\":\"Ada\",\"tags\":[\"db\"]}', 'tags') AS tags, \ lix_json_get('{\"name\":\"Ada\"}', 'missing') AS missing", &[], ) .await .expect("select should succeed"); assert_rows_eq( result, vec![vec![ Value::Json(json!({"name": "Ada", "tags": ["db"]})), Value::Json(json!(null)), Value::Json(json!(["db"])), Value::Null, ]], ); } ); simulation_test!(lix_json_get_uses_variadic_path_segments, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let result = session .execute( "SELECT lix_json_get_text('{\"user\":{\"names\":[\"Ada\"]}}', 'user', 'names', 0) AS name", &[], ) .await .expect("select should succeed"); assert_rows_eq(result, vec![vec![Value::Text("Ada".to_string())]]); }); simulation_test!(lix_json_get_rejects_jsonpath_strings, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute( "SELECT lix_json_get_text('{\"path\":\"ok\"}', '$.path')", &[], ) .await .expect_err("JSONPath-looking strings should fail loudly"); assert_eq!(error.code, LixError::CODE_INVALID_PARAM); assert!( error.message.contains("uses variadic path segments"), "expected path segment diagnostic: {error}" ); }); simulation_test!( json_column_predicates_reject_bare_text_literals, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute( "SELECT entity_id FROM lix_state WHERE entity_id = 'state-latest'", &[], ) .await .expect_err("JSON column compared to text should fail loudly"); assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH); assert!( error.hint().is_some_and(|hint| hint.contains("lix_json")), "expected lix_json hint: {error}" ); } ); simulation_test!( json_column_predicates_accept_lix_json_expressions, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "SELECT entity_id FROM lix_state WHERE entity_id = lix_json('[\"state-latest\"]')", &[], ) .await .expect("JSON column compared to lix_json expression should succeed"); } ); simulation_test!( typed_json_property_predicates_reject_bare_text_literals, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_registered_schema 
(value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_json_predicate_schema\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"},\"meta\":{\"type\":\"object\"}},\"required\":[\"id\",\"meta\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("schema insert should succeed"); session .execute( "INSERT INTO engine_json_predicate_schema (id, meta, lixcol_untracked) \ VALUES ('json-predicate-1', lix_json('{\"flag\":true}'), false)", &[], ) .await .expect("typed entity insert should succeed"); let error = session .execute( "SELECT id FROM engine_json_predicate_schema WHERE meta = '{\"flag\":true}'", &[], ) .await .expect_err("typed JSON property compared to text should fail loudly"); assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH); let result = session .execute( "SELECT id FROM engine_json_predicate_schema WHERE meta = lix_json('{\"flag\":true}')", &[], ) .await .expect("typed JSON property compared to lix_json should succeed"); assert_rows_eq( result, vec![vec![Value::Text("json-predicate-1".to_string())]], ); } ); simulation_test!( registered_schema_dml_rejects_bare_lixcol_entity_id_text, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute( "UPDATE lix_registered_schema \ SET value = lix_json('{\"x-lix-key\":\"engine_schema_update_history\",\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"}},\"required\":[\"id\"],\"additionalProperties\":false}') \ WHERE lixcol_entity_id = 'engine_schema_update_history'", &[], ) .await .expect_err("bare text lixcol_entity_id update should fail before matching rows"); assert_eq!(error.code, LixError::CODE_TYPE_MISMATCH); let error = session .execute( "DELETE FROM lix_registered_schema \ WHERE lixcol_entity_id = 'engine_schema_update_history'", &[], ) .await .expect_err("bare text lixcol_entity_id delete should fail before matching rows"); assert_eq!(error.code, LixError::CODE_UNSUPPORTED_SQL); } ); ================================================ FILE: packages/engine/tests/sql/lix_key_value.rs ================================================ use lix_engine::ExecuteResult; use lix_engine::LixError; use lix_engine::Value; simulation_test!(lix_key_value_roundtrips_arbitrary_json, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_key_value (key, value) \ VALUES ('kv-json', lix_json('{\"nested\":{\"flag\":true,\"items\":[1,\"two\",null]}}'))", &[], ) .await .expect("insert should succeed"); let result = session .execute("SELECT value FROM lix_key_value WHERE key = 'kv-json'", &[]) .await .expect("select should succeed"); assert_single_text( result, "{\"nested\":{\"flag\":true,\"items\":[1,\"two\",null]}}", ); }); simulation_test!(lix_key_value_duplicate_insert_rejects, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('kv-duplicate', 'first')", &[], ) .await .expect("initial insert should succeed"); let error = session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('kv-duplicate', 'second')", &[], ) .await .expect_err("plain 
INSERT should reject duplicate primary keys"); assert_eq!(error.code, LixError::CODE_UNIQUE); session .execute( "UPDATE lix_key_value SET value = 'second' WHERE key = 'kv-duplicate'", &[], ) .await .expect("explicit UPDATE should still replace existing state"); let result = session .execute( "SELECT value FROM lix_key_value WHERE key = 'kv-duplicate'", &[], ) .await .expect("select should succeed"); assert_single_text(result, "\"second\""); }); fn assert_single_text(result: ExecuteResult, expected: &str) { let row_set = result; assert_eq!(row_set.len(), 1); let expected_json = serde_json::from_str::<serde_json::Value>(expected) .expect("expected value should be valid JSON"); assert_eq!(row_set.rows()[0].values(), &[Value::Json(expected_json)]); } ================================================ FILE: packages/engine/tests/sql/lix_label_assignment.rs ================================================ use lix_engine::{LixError, Value}; use serde_json::json; use super::select_rows; simulation_test!( lix_label_assignment_generates_id_and_enforces_mapping_uniqueness, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('label-target', 'one')", &[], ) .await .expect("target entity insert should succeed"); session .execute( "INSERT INTO lix_label (id, name) VALUES ('label-a', 'Needs review')", &[], ) .await .expect("label insert should succeed"); session .execute( "INSERT INTO lix_label_assignment \ (target_entity_id, target_schema_key, target_file_id, label_id) \ VALUES (lix_json('[\"label-target\"]'), 'lix_key_value', NULL, 'label-a')", &[], ) .await .expect("label assignment insert should succeed"); let rows = select_rows( &session, "SELECT id, target_entity_id, target_schema_key, target_file_id, label_id \ FROM lix_label_assignment \ WHERE target_entity_id = lix_json('[\"label-target\"]')", ) .await; assert_eq!(rows.len(), 1); let id = match &rows[0][0] { Value::Text(value) => value, other => panic!("expected generated string id, got {other:?}"), }; assert!(!id.is_empty()); assert_eq!( &rows[0][1..], &[ Value::Json(json!(["label-target"])), Value::Text("lix_key_value".to_string()), Value::Null, Value::Text("label-a".to_string()), ] ); let error = session .execute( "INSERT INTO lix_label_assignment \ (target_entity_id, target_schema_key, target_file_id, label_id) \ VALUES (lix_json('[\"label-target\"]'), 'lix_key_value', NULL, 'label-a')", &[], ) .await .expect_err("duplicate label assignment should be rejected"); assert_eq!(error.code, LixError::CODE_UNIQUE); let error = session .execute( "INSERT INTO lix_label (id, name) VALUES ('label-b', 'Needs review')", &[], ) .await .expect_err("duplicate label name should be rejected"); assert_eq!(error.code, LixError::CODE_UNIQUE); } ); simulation_test!( lix_label_assignment_rejects_missing_target_state_row, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_label (id, name) VALUES ('label-a', 'Needs review')", &[], ) .await .expect("label insert should succeed"); let error = session .execute( "INSERT INTO lix_label_assignment \ (target_entity_id, target_schema_key, target_file_id, label_id) \ VALUES (lix_json('[\"missing-target\"]'), 'lix_key_value', NULL, 'label-a')", &[], ) .await .expect_err("label assignment to missing 
live state row should be rejected"); assert_eq!(error.code, LixError::CODE_FOREIGN_KEY); } ); ================================================ FILE: packages/engine/tests/sql/lix_registered_schema.rs ================================================ use lix_engine::CreateVersionOptions; use lix_engine::ExecuteResult; use lix_engine::LixError; use lix_engine::Value; use serde_json::json; use super::assert_rows_eq; simulation_test!( lix_registered_schema_insert_makes_schema_visible_to_lix_state, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let register_schema_result = session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_dummy_schema\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"},\"name\":{\"type\":\"string\"}},\"required\":[\"id\",\"name\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("registered schema insert should succeed"); assert_eq!(register_schema_result, ExecuteResult::from_rows_affected(1)); let registered_schema_row = session .execute( "SELECT lixcol_entity_id, value \ FROM lix_registered_schema", &[], ) .await .expect("registered schema read should succeed"); let registered_schema_rows = registered_schema_row; let registered_schema_entity_id = registered_schema_rows .rows() .iter() .find_map(|row| match row.values() { [Value::Json(entity_id), Value::Json(value)] if value.get("x-lix-key").and_then(serde_json::Value::as_str) == Some("engine_dummy_schema") => { Some(entity_id) } [Value::Json(entity_id), Value::Text(value)] => { let value = serde_json::from_str::<serde_json::Value>(value).ok()?; (value.get("x-lix-key").and_then(serde_json::Value::as_str) == Some("engine_dummy_schema")) .then_some(entity_id) } _ => None, }) .expect("registered schema row should be visible"); assert_eq!(registered_schema_entity_id, &json!(["engine_dummy_schema"])); let insert_state_result = session .execute( "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, global, untracked\ ) VALUES (\ lix_json('[\"dummy-1\"]'), 'engine_dummy_schema', NULL, lix_json('{\"id\":\"dummy-1\",\"name\":\"Dummy\"}'), false, true\ )", &[], ) .await .expect("lix_state insert for registered schema should succeed"); assert_eq!(insert_state_result, ExecuteResult::from_rows_affected(1)); let result = session .execute( "SELECT entity_id, schema_key, snapshot_content \ FROM lix_state \ WHERE schema_key = 'engine_dummy_schema' AND entity_id = lix_json('[\"dummy-1\"]')", &[], ) .await .expect("lix_state read should succeed"); let row_set = result; assert_eq!(row_set.len(), 1); assert_eq!( row_set.rows()[0].values(), &[ Value::Json(json!(["dummy-1"])), Value::Text("engine_dummy_schema".to_string()), Value::Json(json!({"id": "dummy-1", "name": "Dummy"})), ] ); } ); simulation_test!( untracked_registered_schema_does_not_authorize_tracked_state_write, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ 
lix_json('{\"x-lix-key\":\"engine_untracked_only_schema\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"},\"name\":{\"type\":\"string\"}},\"required\":[\"id\",\"name\"],\"additionalProperties\":false}'),\ false,\ true\ )", &[], ) .await .expect("untracked schema registration should succeed"); let error = session .execute( "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, global, untracked\ ) VALUES (\ lix_json('[\"tracked-1\"]'), 'engine_untracked_only_schema', NULL, lix_json('{\"id\":\"tracked-1\",\"name\":\"Tracked\"}'), false, false\ )", &[], ) .await .expect_err("tracked rows must not validate against committed untracked schemas"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); } ); simulation_test!( lix_registered_schema_insert_rejects_system_schema_key, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"lix_change\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"}},\"required\":[\"id\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect_err("system schema keys should not be user-registerable"); assert_eq!(error.code, LixError::CODE_INVALID_PARAM); assert!( error.message.contains("system schema"), "unexpected error: {error:?}" ); } ); simulation_test!(lix_registered_schema_delete_is_rejected, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_delete_schema\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"}},\"required\":[\"id\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("schema should register before delete attempt"); let registered_schema_rows = session .execute( "SELECT lixcol_entity_id, value \ FROM lix_registered_schema", &[], ) .await .expect("registered schema read should succeed"); let delete_schema_entity_id = registered_schema_rows .rows() .iter() .find_map(|row| match row.values() { [Value::Json(entity_id), Value::Json(value)] if value.get("x-lix-key").and_then(serde_json::Value::as_str) == Some("engine_delete_schema") => { Some(entity_id.clone()) } [Value::Json(entity_id), Value::Text(value)] => { let value = serde_json::from_str::<serde_json::Value>(value).ok()?; (value.get("x-lix-key").and_then(serde_json::Value::as_str) == Some("engine_delete_schema")) .then_some(entity_id.clone()) } _ => None, }) .expect("registered schema entity id should be discoverable"); let error = session .execute( "DELETE FROM lix_registered_schema \ WHERE lixcol_entity_id = $1", &[Value::Json(delete_schema_entity_id)], ) .await .expect_err("schema deletion is not supported yet"); assert_eq!(error.code, LixError::CODE_UNSUPPORTED_SQL); assert!( error .message .contains("delete lix_registered_schema is not supported"), "unexpected error: {error:?}" ); }); simulation_test!( tracked_registered_schema_update_allows_compatible_amendment_and_history, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine 
.open_workspace_session() .await .expect("main session should open"), &engine, ); let initial_schema = json!({ "x-lix-key": "engine_schema_update_history", "x-lix-primary-key": ["/id"], "type": "object", "properties": { "id": { "type": "string" }, "title": { "type": "string" } }, "required": ["id", "title"], "additionalProperties": false }); let amended_schema = json!({ "x-lix-key": "engine_schema_update_history", "x-lix-primary-key": ["/id"], "type": "object", "description": "Compatible tracked schema amendment", "properties": { "id": { "type": "string" }, "title": { "type": "string" }, "subtitle": { "type": "string" } }, "required": ["id", "title"], "additionalProperties": false }); session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES ($1, false, false)", &[Value::Json(initial_schema.clone())], ) .await .expect("tracked schema insert should succeed"); let first_commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("first head should load") .expect("first head should exist"); session .execute( "UPDATE lix_registered_schema \ SET value = $1 \ WHERE lixcol_entity_id = lix_json('[\"engine_schema_update_history\"]')", &[Value::Json(amended_schema.clone())], ) .await .expect("compatible tracked schema amendment should succeed"); let second_commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("second head should load") .expect("second head should exist"); assert_ne!(first_commit_id, second_commit_id); let result = session .execute( &format!( "SELECT value, lixcol_entity_id, lixcol_observed_commit_id, lixcol_start_commit_id, lixcol_depth \ FROM lix_registered_schema_history \ WHERE lixcol_start_commit_id = '{second_commit_id}' \ AND lixcol_entity_id = lix_json('[\"engine_schema_update_history\"]') \ ORDER BY lixcol_depth" ), &[], ) .await .expect("tracked registered schema history read should succeed"); assert_rows_eq( result, vec![ vec![ Value::Json(amended_schema), Value::Json(json!(["engine_schema_update_history"])), Value::Text(second_commit_id.clone()), Value::Text(second_commit_id.clone()), Value::Integer(0), ], vec![ Value::Json(initial_schema), Value::Json(json!(["engine_schema_update_history"])), Value::Text(first_commit_id), Value::Text(second_commit_id), Value::Integer(1), ], ], ); } ); simulation_test!( lix_registered_schema_insert_rejects_primary_key_without_json_pointer_slash, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_bad_pointer_schema\",\"x-lix-primary-key\":[\"id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"}},\"required\":[\"id\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect_err("registered schema insert should reject JSON Pointers without leading slash"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); assert!( error.message.contains("must begin with '/'"), "unexpected message: {}", error.message ); assert!( error .message .contains("x-lix-primary-key: \"id\" → \"/id\""), "message should show the offending primary key pointer: {}", error.message ); let hint = error.hint.as_deref().expect("error should include a hint"); assert!( hint.contains("Did you mean [\"/id\"]?"), "hint should suggest the JSON Pointer form: {hint}" 
); } ); simulation_test!( lix_registered_schema_insert_rejects_unprojectable_entity_property, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let error = session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_empty_property_schema\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"},\"kind\":{}},\"required\":[\"id\",\"kind\"],\"additionalProperties\":false}'),\ true,\ false\ )", &[], ) .await .expect_err("registered schema insert should reject properties without a SQL projection type"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); assert!( error.message.contains("property '/kind'"), "message should identify the unprojectable property: {}", error.message ); assert!( error.message.contains("SQL-projectable JSON Schema type"), "message should explain the projection requirement: {}", error.message ); } ); simulation_test!( entity_by_version_insert_rejects_target_version_without_schema, |sim| async move { let engine = sim.boot_engine().await; let main = sim.wrap_session( engine .open_session(sim.main_version_id()) .await .expect("main session should open"), &engine, ); main.create_version(CreateVersionOptions { id: Some("schemaless-target".to_string()), name: "Schemaless Target".to_string(), from_commit_id: None, }) .await .expect("target version should be created before schema registration"); main.execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_poison_schema\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"},\"name\":{\"type\":\"string\"}},\"required\":[\"id\",\"name\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("schema should be visible on active main"); let error = main .execute( "INSERT INTO engine_poison_schema_by_version \ (id, name, lixcol_version_id, lixcol_untracked) \ VALUES ('poison-1', 'Poisoned', 'schemaless-target', true)", &[], ) .await .expect_err("_by_version write must use the target version schema catalog"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); assert!( error.message.contains("engine_poison_schema"), "unexpected error: {error:?}" ); } ); simulation_test!( registered_schema_identity_is_scoped_per_version, |sim| async move { let engine = sim.boot_engine().await; let main = sim.wrap_session( engine .open_session(sim.main_version_id()) .await .expect("main session should open"), &engine, ); main.create_version(CreateVersionOptions { id: Some("divergent-target".to_string()), name: "Divergent Target".to_string(), from_commit_id: None, }) .await .expect("target version should be created before schema divergence"); main.execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_divergent_schema\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"},\"name\":{\"type\":\"string\"}},\"required\":[\"id\",\"name\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("main schema should be registered"); let main_schema = json!({ "x-lix-key": "engine_divergent_schema", "x-lix-primary-key": ["/id"], "type": "object", "properties": { "id": { "type": "string" }, "name": { "type": "string" } }, 
"required": ["id", "name"], "additionalProperties": false }); let target_schema = json!({ "x-lix-key": "engine_divergent_schema", "x-lix-primary-key": ["/id"], "type": "object", "properties": { "id": { "type": "string" }, "title": { "type": "string" } }, "required": ["id", "title"], "additionalProperties": false }); let target = sim.wrap_session( engine .open_session("divergent-target") .await .expect("target session should open"), &engine, ); target .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_divergent_schema\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"},\"title\":{\"type\":\"string\"}},\"required\":[\"id\",\"title\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("same schema key may have independent version-local definitions"); let main_result = main .execute( "SELECT value \ FROM lix_registered_schema \ WHERE lixcol_entity_id = lix_json('[\"engine_divergent_schema\"]')", &[], ) .await .expect("main schema read should succeed"); assert_rows_eq(main_result, vec![vec![Value::Json(main_schema)]]); let target_result = target .execute( "SELECT value \ FROM lix_registered_schema \ WHERE lixcol_entity_id = lix_json('[\"engine_divergent_schema\"]')", &[], ) .await .expect("target schema read should succeed"); assert_rows_eq(target_result, vec![vec![Value::Json(target_schema)]]); } ); simulation_test!( independent_schema_amendments_on_two_versions_are_allowed, |sim| async move { let engine = sim.boot_engine().await; let main = sim.wrap_session( engine .open_session(sim.main_version_id()) .await .expect("main session should open"), &engine, ); let base_schema = json!({ "x-lix-key": "engine_branch_schema_amendment", "x-lix-primary-key": ["/id"], "type": "object", "properties": { "id": { "type": "string" }, "title": { "type": "string" } }, "required": ["id", "title"], "additionalProperties": false }); let main_schema = json!({ "x-lix-key": "engine_branch_schema_amendment", "x-lix-primary-key": ["/id"], "type": "object", "properties": { "id": { "type": "string" }, "title": { "type": "string" }, "main_note": { "type": "string" } }, "required": ["id", "title"], "additionalProperties": false }); let draft_schema = json!({ "x-lix-key": "engine_branch_schema_amendment", "x-lix-primary-key": ["/id"], "type": "object", "properties": { "id": { "type": "string" }, "title": { "type": "string" }, "draft_note": { "type": "string" } }, "required": ["id", "title"], "additionalProperties": false }); main.execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES ($1, false, false)", &[Value::Json(base_schema)], ) .await .expect("base schema should be registered"); main.create_version(CreateVersionOptions { id: Some("schema-amendment-draft".to_string()), name: "Schema Amendment Draft".to_string(), from_commit_id: None, }) .await .expect("draft version should be created from base schema"); let draft = sim.wrap_session( engine .open_session("schema-amendment-draft") .await .expect("draft session should open"), &engine, ); let main_update = main .execute( "UPDATE lix_registered_schema \ SET value = $1 \ WHERE lixcol_entity_id = lix_json('[\"engine_branch_schema_amendment\"]')", &[Value::Json(main_schema.clone())], ) .await .expect("main additive schema amendment should succeed"); assert_eq!(main_update, ExecuteResult::from_rows_affected(1)); let draft_update = draft .execute( "UPDATE lix_registered_schema \ SET 
value = $1 \ WHERE lixcol_entity_id = lix_json('[\"engine_branch_schema_amendment\"]')", &[Value::Json(draft_schema.clone())], ) .await .expect("draft additive schema amendment should succeed"); assert_eq!(draft_update, ExecuteResult::from_rows_affected(1)); let main_result = main .execute( "SELECT value \ FROM lix_registered_schema \ WHERE lixcol_entity_id = lix_json('[\"engine_branch_schema_amendment\"]')", &[], ) .await .expect("main amended schema read should succeed"); assert_rows_eq(main_result, vec![vec![Value::Json(main_schema)]]); let draft_result = draft .execute( "SELECT value \ FROM lix_registered_schema \ WHERE lixcol_entity_id = lix_json('[\"engine_branch_schema_amendment\"]')", &[], ) .await .expect("draft amended schema read should succeed"); assert_rows_eq(draft_result, vec![vec![Value::Json(draft_schema)]]); } ); simulation_test!( entity_by_version_insert_rejects_fk_graph_when_target_version_lacks_schemas, |sim| async move { let engine = sim.boot_engine().await; let main = sim.wrap_session( engine .open_session(sim.main_version_id()) .await .expect("main session should open"), &engine, ); main.create_version(CreateVersionOptions { id: Some("fk-schemaless-target".to_string()), name: "FK Schemaless Target".to_string(), from_commit_id: None, }) .await .expect("target version should be created before FK schemas"); main.execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_fk_parent_schema\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"}},\"required\":[\"id\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("parent schema should register on active main"); main.execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_fk_child_schema\",\"x-lix-primary-key\":[\"/id\"],\"x-lix-foreign-keys\":[{\"properties\":[\"/parent_id\"],\"references\":{\"schemaKey\":\"engine_fk_parent_schema\",\"properties\":[\"/id\"]}}],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"},\"parent_id\":{\"type\":\"string\"}},\"required\":[\"id\",\"parent_id\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("child schema should register on active main"); let parent_result = main .execute( "INSERT INTO engine_fk_parent_schema_by_version \ (id, lixcol_version_id, lixcol_untracked) \ VALUES ('parent-1', 'fk-schemaless-target', true)", &[], ) .await; if let Err(error) = parent_result { assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); assert!( error.message.contains("engine_fk_parent_schema"), "unexpected error: {error:?}" ); return; } let error = main .execute( "INSERT INTO engine_fk_child_schema_by_version \ (id, parent_id, lixcol_version_id, lixcol_untracked) \ VALUES ('child-1', 'parent-1', 'fk-schemaless-target', true)", &[], ) .await .expect_err("FK-valid active graph must not be insertable into a schemaless target"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); assert!( error.message.contains("engine_fk_child_schema") || error.message.contains("engine_fk_parent_schema"), "unexpected error: {error:?}" ); } ); simulation_test!( registered_entity_insert_applies_defaulted_primary_key, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_registered_schema (value, 
lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_default_id_schema\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\",\"x-lix-default\":\"lix_uuid_v7()\"},\"name\":{\"type\":\"string\"}},\"required\":[\"id\",\"name\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("registered schema insert should succeed"); let insert_result = session .execute( "INSERT INTO engine_default_id_schema (name) VALUES ('Generated')", &[], ) .await .expect("entity insert should apply defaulted primary key"); assert_eq!(insert_result, ExecuteResult::from_rows_affected(1)); let result = session .execute( "SELECT lixcol_entity_id, id, name \ FROM engine_default_id_schema \ WHERE name = 'Generated'", &[], ) .await .expect("entity read should succeed"); let row_set = result; assert_eq!(row_set.len(), 1); let values = row_set.rows()[0].values(); let [Value::Json(entity_id), Value::Text(id), Value::Text(name)] = values else { panic!("expected generated id row, got {values:?}"); }; assert_eq!(entity_id, &json!([id])); assert!(!id.is_empty(), "defaulted id should be non-empty"); assert_eq!(name, "Generated"); } ); simulation_test!( registered_entity_insert_preserves_explicit_null_for_defaulted_column, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_nullable_default_schema\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"},\"status\":{\"type\":[\"string\",\"null\"],\"default\":\"computed\"}},\"required\":[\"id\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("registered schema insert should succeed"); session .execute( "INSERT INTO engine_nullable_default_schema (id, status) \ VALUES ('explicit-null', NULL)", &[], ) .await .expect("entity insert should preserve explicit null"); session .execute( "INSERT INTO engine_nullable_default_schema (id) \ VALUES ('omitted')", &[], ) .await .expect("entity insert should apply default for omitted column"); let result = session .execute( "SELECT id, status \ FROM engine_nullable_default_schema \ ORDER BY id", &[], ) .await .expect("entity read should succeed"); assert_rows_eq( result, vec![ vec![Value::Text("explicit-null".to_string()), Value::Null], vec![ Value::Text("omitted".to_string()), Value::Text("computed".to_string()), ], ], ); } ); simulation_test!(entity_by_version_expands_global_rows, |sim| async move { let engine = sim.boot_engine().await; let global_session = sim.wrap_session( engine .open_session("global") .await .expect("global session should open"), &engine, ); let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); global_session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_overlay_schema\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"},\"name\":{\"type\":\"string\"}},\"required\":[\"id\",\"name\"],\"additionalProperties\":false}'),\ true,\ false\ )", &[], ) .await .expect("global registered schema insert should succeed"); session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ 
VALUES (\ lix_json('{\"x-lix-key\":\"engine_overlay_schema\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"},\"name\":{\"type\":\"string\"}},\"required\":[\"id\",\"name\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("registered schema insert should succeed"); session .execute( "INSERT INTO engine_overlay_schema \ (id, name, lixcol_global, lixcol_untracked) \ VALUES ('entity-global-overlay', 'Global Entity', true, false)", &[], ) .await .expect("global entity insert should succeed"); let result = session .execute( "SELECT id, name, lixcol_version_id, lixcol_global, lixcol_untracked \ FROM engine_overlay_schema_by_version \ WHERE lixcol_entity_id = lix_json('[\"entity-global-overlay\"]') \ ORDER BY lixcol_version_id", &[], ) .await .expect("entity by-version read should succeed"); assert_rows_eq( result, vec![ vec![ Value::Text("entity-global-overlay".to_string()), Value::Text("Global Entity".to_string()), Value::Text(sim.main_version_id().to_string()), Value::Boolean(true), Value::Boolean(false), ], vec![ Value::Text("entity-global-overlay".to_string()), Value::Text("Global Entity".to_string()), Value::Text("global".to_string()), Value::Boolean(true), Value::Boolean(false), ], ], ); }); simulation_test!( global_entity_insert_rejects_active_only_schema, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_global_poison_schema\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"},\"name\":{\"type\":\"string\"}},\"required\":[\"id\",\"name\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("main-local schema registration should succeed"); let error = session .execute( "INSERT INTO engine_global_poison_schema \ (id, name, lixcol_global, lixcol_untracked) \ VALUES ('global-poison-1', 'Wrong Scope', true, false)", &[], ) .await .expect_err("global writes must validate through the global schema catalog"); assert_eq!(error.code, LixError::CODE_SCHEMA_DEFINITION); assert!( error.message.contains("engine_global_poison_schema"), "unexpected error: {error:?}" ); } ); simulation_test!( registered_typed_entity_surface_uses_primary_key_columns, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_typed_entity_schema\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"},\"name\":{\"type\":\"string\"},\"count\":{\"type\":\"number\"}},\"required\":[\"id\",\"name\",\"count\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("registered schema insert should succeed"); let insert_result = session .execute( "INSERT INTO engine_typed_entity_schema \ (id, name, count, lixcol_global, lixcol_untracked) \ VALUES ('typed-entity-1', 'Typed Entity', 7, false, false)", &[], ) .await .expect("typed entity insert should succeed"); assert_eq!(insert_result, ExecuteResult::from_rows_affected(1)); let result = session .execute( "SELECT id, name, count, lixcol_entity_id \ FROM 
engine_typed_entity_schema \ WHERE id = 'typed-entity-1'", &[], ) .await .expect("typed entity query by primary-key column should succeed"); assert_rows_eq( result, vec![vec![ Value::Text("typed-entity-1".to_string()), Value::Text("Typed Entity".to_string()), Value::Real(7.0), Value::Json(json!(["typed-entity-1"])), ]], ); } ); simulation_test!( typed_entity_number_update_accepts_integer_param_like_insert, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_number_update_schema\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"},\"score\":{\"type\":\"number\"}},\"required\":[\"id\",\"score\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("registered schema insert should succeed"); session .execute( "INSERT INTO engine_number_update_schema \ (id, score, lixcol_global, lixcol_untracked) \ VALUES ('score-1', 1, false, false)", &[], ) .await .expect("typed entity insert should accept integer literal for number column"); session .execute( "UPDATE engine_number_update_schema \ SET score = $1 \ WHERE id = 'score-1'", &[Value::Integer(52000)], ) .await .expect("typed entity update should accept integer param for number column"); let result = session .execute( "SELECT score \ FROM engine_number_update_schema \ WHERE id = 'score-1'", &[], ) .await .expect("typed entity query should succeed"); assert_rows_eq(result, vec![vec![Value::Real(52000.0)]]); } ); simulation_test!( typed_entity_update_preserves_absent_optional_non_nullable_fields, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ lix_json('{\"x-lix-key\":\"engine_optional_update_schema\",\"x-lix-primary-key\":[\"/id\"],\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"},\"title\":{\"type\":\"string\"},\"rank\":{\"type\":\"integer\"}},\"required\":[\"id\",\"title\"],\"additionalProperties\":false}'),\ false,\ false\ )", &[], ) .await .expect("registered schema insert should succeed"); session .execute( "INSERT INTO engine_optional_update_schema \ (id, title, lixcol_global, lixcol_untracked) \ VALUES ('row-1', 'before', false, false)", &[], ) .await .expect("insert should omit the optional rank field"); session .execute( "UPDATE engine_optional_update_schema \ SET title = 'after' \ WHERE id = 'row-1'", &[], ) .await .expect("update should preserve absent optional fields"); let result = session .execute( "SELECT title, rank, lixcol_snapshot_content \ FROM engine_optional_update_schema \ WHERE id = 'row-1'", &[], ) .await .expect("typed entity query should succeed"); assert_rows_eq( result, vec![vec![ Value::Text("after".to_string()), Value::Null, Value::Json(json!({"id": "row-1", "title": "after"})), ]], ); let error = session .execute( "UPDATE engine_optional_update_schema \ SET rank = NULL \ WHERE id = 'row-1'", &[], ) .await .expect_err("explicit NULL should still be validated as JSON null"); assert_eq!(error.code, LixError::CODE_SCHEMA_VALIDATION); assert!( error .message .contains("/rank null is not of type \"integer\""), "expected rank validation error, got 
{error:?}" ); } ); ================================================ FILE: packages/engine/tests/sql/lix_state.rs ================================================ use lix_engine::ExecuteResult; use lix_engine::Value; use serde_json::json; use super::assert_rows_eq; simulation_test!(lix_state_latest_update_wins, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, global, untracked\ ) VALUES (\ lix_json('[\"state-latest\"]'), 'lix_key_value', NULL, lix_json('{\"key\":\"state-latest\",\"value\":\"old\"}'), false, false\ )", &[], ) .await .expect("lix_state insert should succeed"); session .execute( "UPDATE lix_state \ SET snapshot_content = lix_json('{\"key\":\"state-latest\",\"value\":\"new\"}') \ WHERE entity_id = lix_json('[\"state-latest\"]') AND schema_key = 'lix_key_value'", &[], ) .await .expect("lix_state update should succeed"); let result = session .execute( "SELECT snapshot_content \ FROM lix_state \ WHERE entity_id = lix_json('[\"state-latest\"]') AND schema_key = 'lix_key_value'", &[], ) .await .expect("lix_state read should succeed"); assert_single_text(result, "{\"key\":\"state-latest\",\"value\":\"new\"}"); }); simulation_test!(lix_state_delete_hides_row, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, global, untracked\ ) VALUES (\ lix_json('[\"state-delete\"]'), 'lix_key_value', NULL, lix_json('{\"key\":\"state-delete\",\"value\":\"delete-me\"}'), false, false\ )", &[], ) .await .expect("lix_state insert should succeed"); session .execute( "DELETE FROM lix_state \ WHERE entity_id = lix_json('[\"state-delete\"]') AND schema_key = 'lix_key_value'", &[], ) .await .expect("lix_state delete should succeed"); let result = session .execute( "SELECT entity_id \ FROM lix_state \ WHERE entity_id = lix_json('[\"state-delete\"]') AND schema_key = 'lix_key_value'", &[], ) .await .expect("lix_state read should succeed"); let rows = result; assert_eq!(rows.len(), 0); }); simulation_test!( lix_state_global_rows_are_visible_through_version_overlay, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, global, untracked\ ) VALUES (\ lix_json('[\"state-global-overlay\"]'), 'lix_key_value', NULL, lix_json('{\"key\":\"state-global-overlay\",\"value\":\"global\"}'), true, false\ )", &[], ) .await .expect("global lix_state insert should succeed"); let active_result = session .execute( "SELECT entity_id, global, untracked \ FROM lix_state \ WHERE entity_id = lix_json('[\"state-global-overlay\"]') AND schema_key = 'lix_key_value'", &[], ) .await .expect("active lix_state read should succeed"); assert_rows_eq( active_result, vec![vec![ Value::Json(json!(["state-global-overlay"])), Value::Boolean(true), Value::Boolean(false), ]], ); let by_version_result = session .execute( &format!( "SELECT entity_id, version_id, global, untracked \ FROM lix_state_by_version \ WHERE entity_id = lix_json('[\"state-global-overlay\"]') AND schema_key = 'lix_key_value' \ 
AND version_id IN ('{}', 'global') \ ORDER BY version_id", sim.main_version_id() ), &[], ) .await .expect("by-version lix_state read should succeed"); assert_rows_eq( by_version_result, vec![ vec![ Value::Json(json!(["state-global-overlay"])), Value::Text(sim.main_version_id().to_string()), Value::Boolean(true), Value::Boolean(false), ], vec![ Value::Json(json!(["state-global-overlay"])), Value::Text("global".to_string()), Value::Boolean(true), Value::Boolean(false), ], ], ); } ); simulation_test!( lix_state_version_tombstone_hides_global_row_in_active_and_by_version, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, global, untracked\ ) VALUES (\ lix_json('[\"state-global-tombstone-overlay\"]'), 'lix_key_value', NULL, lix_json('{\"key\":\"state-global-tombstone-overlay\",\"value\":\"global\"}'), true, false\ )", &[], ) .await .expect("global lix_state insert should succeed"); session .execute( "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, global, untracked\ ) VALUES (\ lix_json('[\"state-global-tombstone-overlay\"]'), 'lix_key_value', NULL, NULL, false, false\ )", &[], ) .await .expect("version-local tombstone insert should succeed"); let active_result = session .execute( "SELECT entity_id \ FROM lix_state \ WHERE entity_id = lix_json('[\"state-global-tombstone-overlay\"]') AND schema_key = 'lix_key_value'", &[], ) .await .expect("active lix_state read should succeed"); assert_rows_eq(active_result, Vec::new()); let by_version_result = session .execute( &format!( "SELECT entity_id, version_id, global, untracked \ FROM lix_state_by_version \ WHERE entity_id = lix_json('[\"state-global-tombstone-overlay\"]') AND schema_key = 'lix_key_value' \ AND version_id IN ('{}', 'global') \ ORDER BY version_id", sim.main_version_id() ), &[], ) .await .expect("by-version lix_state read should succeed"); assert_rows_eq( by_version_result, vec![vec![ Value::Json(json!(["state-global-tombstone-overlay"])), Value::Text("global".to_string()), Value::Boolean(true), Value::Boolean(false), ]], ); } ); fn assert_single_text(result: ExecuteResult, expected: &str) { let row_set = result; assert_eq!(row_set.len(), 1); let expected_json = serde_json::from_str::<serde_json::Value>(expected) .expect("expected snapshot_content should be valid JSON"); assert_eq!(row_set.rows()[0].values(), &[Value::Json(expected_json)]); } ================================================ FILE: packages/engine/tests/sql/lix_state_history.rs ================================================ use lix_engine::Value; use serde_json::json; simulation_test!( lix_state_history_requires_start_commit_id, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('history-start-required', 'one')", &[], ) .await .expect("tracked write should succeed"); let error = session .execute("SELECT entity_id FROM lix_state_history", &[]) .await .expect_err("history queries must provide start_commit_id"); assert!(error .to_string() .contains("requires a start_commit_id filter")); assert_eq!( error.code, lix_engine::LixError::CODE_HISTORY_FILTER_REQUIRED ); assert!( error .hint() .is_some_and(|hint| hint.contains("lix_active_version_commit_id()")), 
"expected active-version-head hint: {error}" ); } ); simulation_test!( lix_state_history_accepts_active_version_commit_id_filter, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('history-active-head', 'one')", &[], ) .await .expect("tracked write should succeed"); let rows = select_history_rows( &session, "SELECT entity_id FROM lix_state_history WHERE start_commit_id = lix_active_version_commit_id()", ) .await; assert!( rows.iter() .any(|row| row.first() == Some(&Value::Json(json!(["history-active-head"])))), "expected active-head history row, got {rows:?}" ); } ); simulation_test!( lix_state_history_rejects_prefixed_start_commit_id_filter, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('history-prefixed-start', 'one')", &[], ) .await .expect("tracked write should succeed"); let error = session .execute( "SELECT entity_id \ FROM lix_state_history \ WHERE lixcol_start_commit_id = lix_active_version_commit_id()", &[], ) .await .expect_err("lix_state_history should only expose bare start_commit_id"); assert_eq!(error.code, lix_engine::LixError::CODE_COLUMN_NOT_FOUND); assert!( error.to_string().contains("lixcol_start_commit_id"), "unexpected error: {error}" ); } ); simulation_test!( lix_state_history_reads_from_explicit_historical_commit, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('history-explicit', 'one')", &[], ) .await .expect("initial tracked write should succeed"); let first_commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("first head should load") .expect("first head should exist"); session .execute( "UPDATE lix_key_value SET value = 'two' WHERE key = 'history-explicit'", &[], ) .await .expect("second tracked write should succeed"); let second_commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("second head should load") .expect("second head should exist"); session .execute( "DELETE FROM lix_key_value WHERE key = 'history-explicit'", &[], ) .await .expect("tombstone write should succeed"); let third_commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("third head should load") .expect("third head should exist"); assert_ne!(first_commit_id, second_commit_id); assert_ne!(second_commit_id, third_commit_id); let first_history = select_history_rows( &session, &format!( "SELECT start_commit_id, depth, snapshot_content, change_id, observed_commit_id, commit_created_at \ FROM lix_state_history \ WHERE start_commit_id = '{first_commit_id}' \ AND entity_id = lix_json('[\"history-explicit\"]') \ ORDER BY depth" ), ) .await; assert_eq!( &first_history[0][0..3], &[ Value::Text(first_commit_id.clone()), Value::Integer(0), Value::Json(json!({"key": "history-explicit", "value": "one"})), ], "historical commit should be queryable after later commits" ); let Value::Text(first_change_id) = &first_history[0][3] else { panic!("change_id should be text"); }; let Value::Text(first_row_commit_id) = &first_history[0][4] else 
{ panic!("observed_commit_id should be text"); }; let Value::Text(first_commit_created_at) = &first_history[0][5] else { panic!("commit_created_at should be text"); }; assert!(!first_change_id.is_empty()); assert_eq!(first_row_commit_id, &first_commit_id); assert!( !first_commit_created_at.is_empty(), "commit_created_at should be populated" ); let second_history = select_history_rows( &session, &format!( "SELECT depth, snapshot_content \ FROM lix_state_history \ WHERE start_commit_id = '{second_commit_id}' \ AND entity_id = lix_json('[\"history-explicit\"]') \ ORDER BY depth" ), ) .await; assert_eq!( second_history, vec![ vec![ Value::Integer(0), Value::Json(json!({"key": "history-explicit", "value": "two"})), ], vec![ Value::Integer(1), Value::Json(json!({"key": "history-explicit", "value": "one"})), ], ], "depth 0 is the start commit and parent changes appear at depth > 0" ); let tombstone_history = select_history_rows( &session, &format!( "SELECT depth, snapshot_content \ FROM lix_state_history \ WHERE start_commit_id = '{third_commit_id}' \ AND entity_id = lix_json('[\"history-explicit\"]') \ AND depth = 0 \ AND snapshot_content IS NULL" ), ) .await; assert_eq!( tombstone_history, vec![vec![Value::Integer(0), Value::Null]], "tombstone changes should be visible as NULL snapshot_content" ); } ); simulation_test!( lix_state_history_routes_schema_entity_file_and_depth_filters, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_file (id, path, data) \ VALUES ('history-file-a', '/history/a.txt', X'61')", &[], ) .await .expect("file insert should succeed"); let first_commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("first head should load") .expect("first head should exist"); session .execute( "UPDATE lix_file SET data = X'62' WHERE id = 'history-file-a'", &[], ) .await .expect("file update should succeed"); let second_commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("second head should load") .expect("second head should exist"); let rows = select_history_rows( &session, &format!( "SELECT entity_id, schema_key, file_id, depth \ FROM lix_state_history \ WHERE start_commit_id = '{second_commit_id}' \ AND schema_key = 'lix_binary_blob_ref' \ AND entity_id = lix_json('[\"history-file-a\"]') \ AND file_id = 'history-file-a' \ AND depth >= 0 \ AND depth <= 1 \ ORDER BY depth" ), ) .await; assert_eq!( rows, vec![ vec![ Value::Json(json!(["history-file-a"])), Value::Text("lix_binary_blob_ref".to_string()), Value::Text("history-file-a".to_string()), Value::Integer(0), ], vec![ Value::Json(json!(["history-file-a"])), Value::Text("lix_binary_blob_ref".to_string()), Value::Text("history-file-a".to_string()), Value::Integer(1), ], ], "schema_key, entity_id, file_id, and depth range filters should route through the provider" ); let parent_only_rows = select_history_rows( &session, &format!( "SELECT start_commit_id, depth \ FROM lix_state_history \ WHERE start_commit_id = '{second_commit_id}' \ AND schema_key = 'lix_binary_blob_ref' \ AND entity_id = lix_json('[\"history-file-a\"]') \ AND file_id = 'history-file-a' \ AND depth > 0 \ AND depth < 2" ), ) .await; assert_eq!( parent_only_rows, vec![vec![Value::Text(second_commit_id), Value::Integer(1)]], "strict depth ranges should keep only matching parent rows" ); let historical_start_rows = select_history_rows( &session, 
&format!( "SELECT start_commit_id, depth \ FROM lix_state_history \ WHERE start_commit_id = '{first_commit_id}' \ AND schema_key = 'lix_binary_blob_ref' \ AND entity_id = lix_json('[\"history-file-a\"]') \ AND file_id = 'history-file-a'" ), ) .await; assert_eq!( historical_start_rows, vec![vec![Value::Text(first_commit_id), Value::Integer(0)]], "file_id filtering should also work for historical non-head starts" ); } ); simulation_test!( lix_state_history_shows_tombstone_at_ancestor_depth, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('history-ancestor-tombstone', 'one')", &[], ) .await .expect("initial tracked write should succeed"); session .execute( "DELETE FROM lix_key_value WHERE key = 'history-ancestor-tombstone'", &[], ) .await .expect("delete should succeed"); let delete_commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("delete head should load") .expect("delete head should exist"); session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('history-unrelated-after-delete', 'later')", &[], ) .await .expect("unrelated later write should succeed"); let later_commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("later head should load") .expect("later head should exist"); assert_ne!(delete_commit_id, later_commit_id); let tombstone_rows = select_history_rows( &session, &format!( "SELECT observed_commit_id, depth, snapshot_content \ FROM lix_state_history \ WHERE start_commit_id = '{later_commit_id}' \ AND entity_id = lix_json('[\"history-ancestor-tombstone\"]') \ AND snapshot_content IS NULL \ ORDER BY depth" ), ) .await; assert_eq!( tombstone_rows, vec![vec![ Value::Text(delete_commit_id), Value::Integer(1), Value::Null, ]], "a tombstone from the parent commit should appear at ancestor depth" ); } ); simulation_test!( lix_state_history_supports_multiple_start_commit_filters, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('history-multi-start', 'one')", &[], ) .await .expect("first write should succeed"); let first_commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("first head should load") .expect("first head should exist"); session .execute( "UPDATE lix_key_value SET value = 'two' WHERE key = 'history-multi-start'", &[], ) .await .expect("second write should succeed"); let second_commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("second head should load") .expect("second head should exist"); let in_rows = select_history_rows( &session, &format!( "SELECT start_commit_id, depth, snapshot_content \ FROM lix_state_history \ WHERE start_commit_id IN ('{first_commit_id}', '{second_commit_id}') \ AND entity_id = lix_json('[\"history-multi-start\"]') \ AND depth = 0 \ ORDER BY start_commit_id" ), ) .await; assert_eq!( in_rows, vec![ vec![ Value::Text(first_commit_id.clone()), Value::Integer(0), Value::Json(json!({"key": "history-multi-start", "value": "one"})), ], vec![ Value::Text(second_commit_id.clone()), Value::Integer(0), Value::Json(json!({"key": "history-multi-start", "value": "two"})), ], ], "IN should allow multiple explicit history starts" ); let 
or_rows = select_history_rows( &session, &format!( "SELECT start_commit_id \ FROM lix_state_history \ WHERE (start_commit_id = '{first_commit_id}' \ OR start_commit_id = '{second_commit_id}') \ AND entity_id = lix_json('[\"history-multi-start\"]') \ AND depth = 0 \ ORDER BY start_commit_id" ), ) .await; assert_eq!( or_rows, vec![ vec![Value::Text(first_commit_id)], vec![Value::Text(second_commit_id)], ], "OR should also allow multiple explicit history starts" ); } );
simulation_test!( lix_state_history_intersects_conjunctive_value_filters, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('history-and-a', 'a')", &[], ) .await .expect("first write should succeed"); session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('history-and-b', 'b')", &[], ) .await .expect("second write should succeed"); let head_commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("head should load") .expect("head should exist"); let narrowed_rows = select_history_rows( &session, &format!( "SELECT entity_id \ FROM lix_state_history \ WHERE start_commit_id = '{head_commit_id}' \ AND entity_id IN (lix_json('[\"history-and-a\"]'), lix_json('[\"history-and-b\"]')) \ AND entity_id = lix_json('[\"history-and-a\"]')" ), ) .await; assert_eq!( narrowed_rows, vec![vec![Value::Json(json!(["history-and-a"]))]], "AND filters on the same history column should intersect, not union" ); let contradictory_rows = select_history_rows( &session, &format!( "SELECT entity_id \ FROM lix_state_history \ WHERE start_commit_id = '{head_commit_id}' \ AND entity_id = lix_json('[\"history-and-a\"]') \ AND entity_id = lix_json('[\"history-and-b\"]')" ), ) .await; assert_eq!( contradictory_rows, Vec::<Vec<Value>>::new(), "contradictory AND filters on the same history column should return no rows" ); } );
async fn select_history_rows( session: &crate::support::simulation_test::engine::SimSession, sql: &str, ) -> Vec<Vec<Value>> { let result = session .execute(sql, &[]) .await .expect("history SELECT should succeed"); let row_set = result; row_set .rows() .iter() .map(|row| row.values().to_vec()) .collect() }
================================================ FILE: packages/engine/tests/sql/lix_version.rs ================================================
use lix_engine::ExecuteResult; use lix_engine::LixError; use lix_engine::Value;
simulation_test!(lix_version_lists_descriptors_with_refs, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_session("global") .await .expect("global session should open"), &engine, ); let result = session .execute( "SELECT id, name, hidden, commit_id FROM lix_version ORDER BY id", &[], ) .await .expect("lix_version should read"); let rows = result; assert_eq!(rows.len(), 2); let values = rows .rows() .iter() .map(|row| row.values().to_vec()) .collect::<Vec<_>>(); assert!(values.contains(&vec![ Value::Text("global".to_string()), Value::Text("global".to_string()), Value::Boolean(true), Value::Text(sim.initial_commit_id().to_string()), ])); assert!(values.contains(&vec![ Value::Text(sim.main_version_id().to_string()), Value::Text("main".to_string()), Value::Boolean(false), Value::Text(sim.initial_commit_id().to_string()), ])); });
simulation_test!( lix_version_count_star_handles_empty_projection, |sim| async move { let engine = sim.boot_engine().await; let session =
sim.wrap_session( engine .open_session("global") .await .expect("global session should open"), &engine, ); assert_eq!( count_rows(&session, "SELECT COUNT(*) FROM lix_version").await, 2 ); assert_eq!( count_rows( &session, "SELECT COUNT(*) FROM lix_version WHERE name = 'main'", ) .await, 1 ); } ); simulation_test!( lix_version_insert_creates_descriptor_and_ref, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); let insert_result = session .execute( "INSERT INTO lix_version (id, name) \ VALUES ('sql-version-insert', 'SQL Insert')", &[], ) .await .expect("lix_version insert should create descriptor and ref"); assert_eq!(insert_result, ExecuteResult::from_rows_affected(1)); assert_single_version_row( &session, "sql-version-insert", "SQL Insert", false, sim.initial_commit_id(), ) .await; assert_eq!( count_rows( &session, "SELECT COUNT(*) FROM lix_version_descriptor WHERE id = 'sql-version-insert'", ) .await, 1 ); assert_eq!( count_rows( &session, "SELECT COUNT(*) FROM lix_version_ref WHERE id = 'sql-version-insert'", ) .await, 1 ); } ); simulation_test!( lix_version_insert_accepts_explicit_hidden_and_commit_id, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); let insert_result = session .execute( &format!( "INSERT INTO lix_version (id, name, hidden, commit_id) \ VALUES ('sql-version-explicit', 'Explicit', true, '{}')", sim.initial_commit_id() ), &[], ) .await .expect("lix_version insert should accept hidden and commit_id"); assert_eq!(insert_result, ExecuteResult::from_rows_affected(1)); assert_single_version_row( &session, "sql-version-explicit", "Explicit", true, sim.initial_commit_id(), ) .await; } ); simulation_test!( lix_version_update_splits_descriptor_and_ref_changes, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); session .execute( "INSERT INTO lix_version (id, name) \ VALUES ('sql-version-update', 'Before')", &[], ) .await .expect("version insert should succeed"); session .execute( "INSERT INTO lix_key_value (key, value) \ VALUES ('sql-version-update-head', 'after')", &[], ) .await .expect("tracked write should advance active version head"); let new_head = select_single_text( &session, &format!( "SELECT commit_id FROM lix_version WHERE id = '{}'", sim.main_version_id() ), ) .await; let update_result = session .execute( &format!( "UPDATE lix_version \ SET name = 'After', hidden = true, commit_id = '{new_head}' \ WHERE id = 'sql-version-update'" ), &[], ) .await .expect("lix_version update should split descriptor and ref changes"); assert_eq!(update_result, ExecuteResult::from_rows_affected(1)); assert_single_version_row(&session, "sql-version-update", "After", true, &new_head).await; assert_eq!( select_single_text( &session, "SELECT commit_id FROM lix_version_ref WHERE id = 'sql-version-update'", ) .await, new_head ); } ); simulation_test!( lix_version_delete_removes_descriptor_and_ref_atomically, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); session .execute( "INSERT INTO lix_version (id, name) \ VALUES ('sql-version-delete', 'Delete Me')", &[], ) .await 
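// A lix_version row is assembled from two underlying entities, lix_version_descriptor
// (id, name, hidden) and lix_version_ref (id, commit_id), as the descriptor/ref assertions
// in this file check. A minimal sketch of the write path these tests drive ('feature-x' is
// an illustrative id, not a fixture used here):
//
//     INSERT INTO lix_version (id, name) VALUES ('feature-x', 'Feature X');
//     -- descriptor and ref are created together; DELETE removes both atomically.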
.expect("version insert should succeed"); let delete_result = session .execute( "DELETE FROM lix_version WHERE id = 'sql-version-delete'", &[], ) .await .expect("lix_version delete should remove descriptor and ref atomically"); assert_eq!(delete_result, ExecuteResult::from_rows_affected(1)); assert_eq!( count_rows( &session, "SELECT COUNT(*) FROM lix_version WHERE id = 'sql-version-delete'", ) .await, 0 ); assert_eq!( count_rows( &session, "SELECT COUNT(*) FROM lix_version_descriptor WHERE id = 'sql-version-delete'", ) .await, 0 ); assert_eq!( count_rows( &session, "SELECT COUNT(*) FROM lix_version_ref WHERE id = 'sql-version-delete'", ) .await, 0 ); } ); simulation_test!( lix_version_delete_rejects_active_and_global_versions, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); let active_error = session .execute( &format!( "DELETE FROM lix_version WHERE id = '{}'", sim.main_version_id() ), &[], ) .await .expect_err("delete should reject active version"); assert!( active_error.to_string().contains("active version"), "active delete error should explain the restriction: {active_error:?}" ); let global_error = session .execute("DELETE FROM lix_version WHERE id = 'global'", &[]) .await .expect_err("delete should reject global version"); assert!( global_error.to_string().contains("global version"), "global delete error should explain the restriction: {global_error:?}" ); assert_eq!( count_rows( &session, &format!( "SELECT COUNT(*) FROM lix_version WHERE id = '{}'", sim.main_version_id() ), ) .await, 1 ); assert_eq!( count_rows( &session, "SELECT COUNT(*) FROM lix_version WHERE id = 'global'" ) .await, 1 ); } ); simulation_test!(lix_version_duplicate_insert_rejects, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); session .execute( "INSERT INTO lix_version (id, name) \ VALUES ('sql-version-duplicate', 'First')", &[], ) .await .expect("initial version insert should succeed"); let error = session .execute( "INSERT INTO lix_version (id, name) \ VALUES ('sql-version-duplicate', 'Second')", &[], ) .await .expect_err("duplicate version id should be rejected"); assert_eq!(error.code, LixError::CODE_UNIQUE); assert!( error.message.contains("table 'lix_version'") && error.message.contains("id 'sql-version-duplicate'") && !error.message.contains("lix_version_descriptor") && !error.message.contains("lix_version_ref"), "unexpected error: {error:?}" ); }); simulation_test!( lix_version_duplicate_name_insert_rejects, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); session .execute( "INSERT INTO lix_version (id, name) \ VALUES ('sql-version-name-a', 'Duplicate Name')", &[], ) .await .expect("initial version insert should succeed"); let error = session .execute( "INSERT INTO lix_version (id, name) \ VALUES ('sql-version-name-b', 'Duplicate Name')", &[], ) .await .expect_err("duplicate version name should be rejected"); assert_eq!(error.code, LixError::CODE_UNIQUE); assert!( error.to_string().contains("/name"), "error should explain duplicate version name: {error:?}" ); } ); simulation_test!( lix_version_duplicate_name_update_rejects, |sim| async move { let engine = sim.boot_engine().await; let session = 
sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); session .execute( "INSERT INTO lix_version (id, name) \ VALUES ('sql-version-name-update-a', 'Name A')", &[], ) .await .expect("first version insert should succeed"); session .execute( "INSERT INTO lix_version (id, name) \ VALUES ('sql-version-name-update-b', 'Name B')", &[], ) .await .expect("second version insert should succeed"); let error = session .execute( "UPDATE lix_version \ SET name = 'Name A' \ WHERE id = 'sql-version-name-update-b'", &[], ) .await .expect_err("updating to a duplicate version name should fail"); assert_eq!(error.code, LixError::CODE_UNIQUE); assert!( error.to_string().contains("/name"), "error should explain duplicate version name: {error:?}" ); } ); simulation_test!( lix_version_insert_rejects_invalid_commit_id, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); let error = session .execute( "INSERT INTO lix_version (id, name, commit_id) \ VALUES ('sql-version-invalid-commit', 'Invalid Commit', 'missing-commit')", &[], ) .await .expect_err("version ref commit_id should reference an existing commit"); assert_eq!(error.code, LixError::CODE_VERSION_NOT_FOUND); assert_eq!( count_rows( &session, "SELECT COUNT(*) FROM lix_version WHERE id = 'sql-version-invalid-commit'", ) .await, 0 ); } ); simulation_test!(lix_version_update_rejects_id_change, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); session .execute( "INSERT INTO lix_version (id, name) \ VALUES ('sql-version-id-update', 'Before')", &[], ) .await .expect("version insert should succeed"); let error = session .execute( "UPDATE lix_version \ SET id = 'sql-version-id-update-renamed' \ WHERE id = 'sql-version-id-update'", &[], ) .await .expect_err("version id should be immutable through UPDATE"); assert!( error.to_string().contains("immutable column 'id'"), "id update error should explain the restriction: {error:?}" ); assert_eq!( count_rows( &session, "SELECT COUNT(*) FROM lix_version WHERE id = 'sql-version-id-update'", ) .await, 1 ); assert_eq!( count_rows( &session, "SELECT COUNT(*) FROM lix_version WHERE id = 'sql-version-id-update-renamed'", ) .await, 0 ); }); simulation_test!( lix_version_delete_missing_returns_zero_rows_affected, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); let delete_result = session .execute( "DELETE FROM lix_version WHERE id = 'sql-version-missing-delete'", &[], ) .await .expect("missing version delete should be a no-op"); assert_eq!(delete_result, ExecuteResult::from_rows_affected(0)); } ); async fn assert_single_version_row( session: &crate::support::simulation_test::engine::SimSession, version_id: &str, name: &str, hidden: bool, commit_id: &str, ) { let result = session .execute( &format!( "SELECT id, name, hidden, commit_id \ FROM lix_version \ WHERE id = '{version_id}'" ), &[], ) .await .expect("version row should be selectable"); assert_eq!(result.len(), 1); assert_eq!( result.rows()[0].values(), &[ Value::Text(version_id.to_string()), Value::Text(name.to_string()), Value::Boolean(hidden), Value::Text(commit_id.to_string()), ] ); } async fn select_single_text( session: 
&crate::support::simulation_test::engine::SimSession, sql: &str, ) -> String { let result = session .execute(sql, &[]) .await .expect("query should succeed"); assert_eq!(result.len(), 1, "expected exactly one row for query: {sql}"); match result.rows()[0].values()[0] { Value::Text(ref text) => text.clone(), ref other => panic!("expected text for query {sql}, got {other:?}"), } } async fn count_rows( session: &crate::support::simulation_test::engine::SimSession, sql: &str, ) -> i64 { let result = session .execute(sql, &[]) .await .expect("count should succeed"); assert_eq!(result.len(), 1, "expected exactly one row for query: {sql}"); match result.rows()[0].values()[0] { Value::Integer(count) => count, ref other => panic!("expected integer count for query {sql}, got {other:?}"), } } ================================================ FILE: packages/engine/tests/sql/metadata.rs ================================================ use lix_engine::LixError; use lix_engine::Value; use serde_json::json; simulation_test!( metadata_rejects_invalid_json_on_lix_file_writes, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); assert_invalid_metadata_error( session .execute( "INSERT INTO lix_file (id, path, lixcol_metadata) \ VALUES ('metadata-file-insert', '/metadata-file-insert.txt', '{bad')", &[], ) .await .expect_err("invalid file metadata should be rejected on INSERT"), ); session .execute( "INSERT INTO lix_file (id, path) \ VALUES ('metadata-file-update', '/metadata-file-update.txt')", &[], ) .await .expect("file insert should succeed"); assert_invalid_metadata_error( session .execute( "UPDATE lix_file \ SET lixcol_metadata = '{bad' \ WHERE id = 'metadata-file-update'", &[], ) .await .expect_err("invalid file metadata should be rejected on UPDATE"), ); } ); simulation_test!( metadata_rejects_invalid_json_on_lix_directory_writes, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); assert_invalid_metadata_error( session .execute( "INSERT INTO lix_directory (id, path, lixcol_metadata) \ VALUES ('metadata-dir-insert', '/metadata-dir-insert/', '{bad')", &[], ) .await .expect_err("invalid directory metadata should be rejected on INSERT"), ); session .execute( "INSERT INTO lix_directory (id, path) \ VALUES ('metadata-dir-update', '/metadata-dir-update/')", &[], ) .await .expect("directory insert should succeed"); assert_invalid_metadata_error( session .execute( "UPDATE lix_directory \ SET lixcol_metadata = '{bad' \ WHERE id = 'metadata-dir-update'", &[], ) .await .expect_err("invalid directory metadata should be rejected on UPDATE"), ); } ); simulation_test!( metadata_rejects_invalid_json_on_typed_entity_writes, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); assert_invalid_metadata_error( session .execute( "INSERT INTO lix_key_value (key, value, lixcol_metadata) \ VALUES ('metadata-entity-insert', 'value', '{bad')", &[], ) .await .expect_err("invalid typed entity metadata should be rejected on INSERT"), ); session .execute( "INSERT INTO lix_key_value (key, value) \ VALUES ('metadata-entity-update', 'value')", &[], ) .await .expect("typed entity insert should succeed"); assert_invalid_metadata_error( session .execute( "UPDATE 
lix_key_value \ SET lixcol_metadata = '{bad' \ WHERE key = 'metadata-entity-update'", &[], ) .await .expect_err("invalid typed entity metadata should be rejected on UPDATE"), ); } ); simulation_test!( metadata_rejects_invalid_json_on_lix_state_writes, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); assert_invalid_metadata_error( session .execute( "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content, metadata\ ) VALUES (\ lix_json('[\"metadata-state-insert\"]'), 'lix_key_value', NULL, \ lix_json('{\"key\":\"metadata-state-insert\",\"value\":\"value\"}'), \ '{bad'\ )", &[], ) .await .expect_err("invalid lix_state metadata should be rejected on INSERT"), ); session .execute( "INSERT INTO lix_state (\ entity_id, schema_key, file_id, snapshot_content\ ) VALUES (\ lix_json('[\"metadata-state-update\"]'), 'lix_key_value', NULL, \ lix_json('{\"key\":\"metadata-state-update\",\"value\":\"value\"}')\ )", &[], ) .await .expect("lix_state insert should succeed"); assert_invalid_metadata_error( session .execute( "UPDATE lix_state \ SET metadata = '{bad' \ WHERE entity_id = lix_json('[\"metadata-state-update\"]') \ AND schema_key = 'lix_key_value'", &[], ) .await .expect_err("invalid lix_state metadata should be rejected on UPDATE"), ); } ); simulation_test!( valid_object_metadata_survives_live_change_and_history_reads, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); let expected = json!({ "source": "metadata-regression", "nested": {"ok": true} }); session .execute( "INSERT INTO lix_key_value (key, value, lixcol_metadata) \ VALUES (\ 'metadata-valid-object', \ 'value', \ '{\"source\":\"metadata-regression\",\"nested\":{\"ok\":true}}'\ )", &[], ) .await .expect("valid object metadata should write"); let commit_id = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("head commit should load") .expect("head commit should exist"); assert_metadata_value( session .execute( "SELECT lixcol_metadata \ FROM lix_key_value \ WHERE key = 'metadata-valid-object'", &[], ) .await .expect("typed entity metadata should read"), "lixcol_metadata", &expected, ); assert_metadata_value( session .execute( "SELECT metadata \ FROM lix_state \ WHERE entity_id = lix_json('[\"metadata-valid-object\"]') \ AND schema_key = 'lix_key_value'", &[], ) .await .expect("lix_state metadata should read"), "metadata", &expected, ); assert_metadata_value( session .execute( "SELECT metadata \ FROM lix_change \ WHERE entity_id = lix_json('[\"metadata-valid-object\"]') \ AND schema_key = 'lix_key_value'", &[], ) .await .expect("lix_change metadata should read"), "metadata", &expected, ); assert_metadata_value( session .execute( &format!( "SELECT metadata \ FROM lix_state_history \ WHERE start_commit_id = '{commit_id}' \ AND entity_id = lix_json('[\"metadata-valid-object\"]') \ AND schema_key = 'lix_key_value'" ), &[], ) .await .expect("lix_state_history metadata should read"), "metadata", &expected, ); } ); fn assert_invalid_metadata_error(error: LixError) { assert!( matches!( error.code.as_str(), "LIX_ERROR_INVALID_JSON" | LixError::CODE_SCHEMA_VALIDATION | LixError::CODE_INVALID_PARAM ), "expected invalid metadata public error, got {error:?}" ); assert!( error.message.contains("metadata") && error.message.contains("JSON"), "error should identify 
metadata JSON, got {error:?}" ); } fn assert_metadata_value( result: lix_engine::ExecuteResult, column: &str, expected: &serde_json::Value, ) { assert_eq!(result.len(), 1, "expected one metadata row"); let value = result.rows()[0] .get::(column) .unwrap_or_else(|_| panic!("{column} should be present")); assert_eq!(value, Value::Json(expected.clone())); } ================================================ FILE: packages/engine/tests/sql/read_only.rs ================================================ use lix_engine::{LixError, Value}; simulation_test!( read_only_version_components_reject_direct_entity_writes, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); assert_read_only_error( session .execute( "INSERT INTO lix_version_descriptor (id, name, hidden) \ VALUES ('orphan-descriptor', 'Orphan', false)", &[], ) .await .expect_err("descriptor insert should be read-only"), "lix_version_descriptor", "lix_version", ); assert_read_only_error( session .execute( "UPDATE lix_version_descriptor SET name = 'Renamed' \ WHERE id = 'main'", &[], ) .await .expect_err("descriptor update should be read-only"), "lix_version_descriptor", "lix_version", ); assert_read_only_error( session .execute("DELETE FROM lix_version_ref WHERE id = 'main'", &[]) .await .expect_err("ref delete should be read-only"), "lix_version_ref", "lix_version", ); } ); simulation_test!( read_only_version_components_reject_lix_state_writes, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); assert_read_only_error( session .execute( "INSERT INTO lix_state (entity_id, schema_key, snapshot_content) \ VALUES (lix_json('[\"orphan-descriptor\"]'), 'lix_version_descriptor', \ lix_json('{\"id\":\"orphan-descriptor\",\"name\":\"Orphan\"}'))", &[], ) .await .expect_err("descriptor insert via lix_state should be read-only"), "lix_version_descriptor", "lix_version", ); let descriptor_count = session .execute( "SELECT COUNT(*) FROM lix_version_descriptor WHERE id = 'orphan-descriptor'", &[], ) .await .expect("descriptor count should query"); assert_eq!( descriptor_count.rows()[0].values(), &[Value::Integer(0)], "read-only rejection should prevent orphan descriptor persistence" ); } ); simulation_test!(read_only_file_descriptor_rejects_writes, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); assert_read_only_error( session .execute( "INSERT INTO lix_file_descriptor (id, directory_id, name) \ VALUES ('file-direct', NULL, 'direct.txt')", &[], ) .await .expect_err("file descriptor insert should be read-only"), "lix_file_descriptor", "lix_file", ); }); simulation_test!(read_only_binary_blob_ref_rejects_writes, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); session .execute( "INSERT INTO lix_file (id, path, data) \ VALUES ('file-with-data', '/file.bin', X'4142')", &[], ) .await .expect("file insert should create managed blob ref"); assert_read_only_error( session .execute( "INSERT INTO lix_binary_blob_ref (id, blob_hash, size_bytes) \ VALUES ('file-direct', 'fake-hash', 2)", &[], ) .await .expect_err("blob ref insert should be 
read-only"), "lix_binary_blob_ref", "lix_file data column", ); assert_read_only_error( session .execute( "UPDATE lix_binary_blob_ref \ SET blob_hash = 'other-hash' \ WHERE id = 'file-with-data'", &[], ) .await .expect_err("blob ref update should be read-only"), "lix_binary_blob_ref", "lix_file data column", ); assert_read_only_error( session .execute( "DELETE FROM lix_binary_blob_ref WHERE id = 'file-with-data'", &[], ) .await .expect_err("blob ref delete should be read-only"), "lix_binary_blob_ref", "lix_file data column", ); assert_read_only_error( session .execute( "DELETE FROM lix_state \ WHERE schema_key = 'lix_binary_blob_ref' \ AND entity_id = lix_json('[\"file-with-data\"]')", &[], ) .await .expect_err("blob ref delete via lix_state should be read-only"), "lix_binary_blob_ref", "lix_file data column", ); let data = session .execute("SELECT data FROM lix_file WHERE id = 'file-with-data'", &[]) .await .expect("file data should still be readable"); assert_eq!(data.rows()[0].values(), &[Value::Blob(vec![0x41, 0x42])]); }); simulation_test!( read_only_directory_descriptor_rejects_writes, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); assert_read_only_error( session .execute( "INSERT INTO lix_directory_descriptor (id, parent_id, name) \ VALUES ('dir-direct', NULL, 'direct')", &[], ) .await .expect_err("directory descriptor insert should be read-only"), "lix_directory_descriptor", "lix_directory", ); } ); simulation_test!( read_only_internal_state_rejects_lix_state_writes, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); assert_read_only_error( session .execute( "INSERT INTO lix_state (entity_id, schema_key, snapshot_content, global) \ VALUES (lix_json('[\"fake-change\"]'), 'lix_change', \ lix_json('{\"id\":\"fake-change\",\"entity_id\":\"x\",\"schema_key\":\"lix_key_value\"}'), true)", &[], ) .await .expect_err("lix_change insert via lix_state should be read-only"), "lix_change", "transactions commit", ); } ); simulation_test!(read_only_history_views_reject_dml, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); assert_read_only_error( session .execute( "INSERT INTO lix_file_history (id, path) VALUES ('history-file', '/x.txt')", &[], ) .await .expect_err("history insert should be read-only"), "lix_file_history", "History views are query-only", ); assert_read_only_error( session .execute("UPDATE lix_directory_history SET name = 'renamed'", &[]) .await .expect_err("history update should be read-only"), "lix_directory_history", "History views are query-only", ); assert_read_only_error( session .execute("DELETE FROM lix_state_history", &[]) .await .expect_err("history delete should be read-only"), "lix_state_history", "History views are query-only", ); }); simulation_test!(read_only_typed_history_views_reject_dml, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("workspace session should open"), &engine, ); session .execute( "INSERT INTO lix_registered_schema (value, lixcol_global, lixcol_untracked) \ VALUES (\ 
lix_json('{\"x-lix-key\":\"read_only_history_entity\",\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"string\"}},\"required\":[\"id\"],\"additionalProperties\":false}'),\ false,\ true\ )", &[], ) .await .expect("registered schema insert should succeed"); assert_read_only_error( session .execute( "INSERT INTO read_only_history_entity_history (id) VALUES ('entity-a')", &[], ) .await .expect_err("typed history insert should be read-only"), "read_only_history_entity_history", "History views are query-only", ); }); fn assert_read_only_error(error: LixError, schema_key: &str, hint_fragment: &str) { assert_eq!(error.code, LixError::CODE_READ_ONLY); assert!( error.message.contains(schema_key), "read-only error should name {schema_key}: {error:?}" ); assert!( error .hint .as_deref() .is_some_and(|hint| hint.contains(hint_fragment)), "read-only error should guide callers toward {hint_fragment}: {error:?}" ); } ================================================ FILE: packages/engine/tests/sql/udfs.rs ================================================ simulation_test!( lix_active_version_commit_id_returns_active_head, |sim| async move { let engine = sim.boot_engine().await; let session = sim.wrap_session( engine .open_workspace_session() .await .expect("main session should open"), &engine, ); session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('active-head', 'one')", &[], ) .await .expect("tracked write should succeed"); let expected = engine .load_version_head_commit_id(sim.main_version_id()) .await .expect("head should load") .expect("head should exist"); let result = session .execute("SELECT lix_active_version_commit_id()", &[]) .await .expect("active head UDF should execute"); assert_eq!( result.rows()[0] .get::("lix_active_version_commit_id()") .unwrap(), expected ); } ); ================================================ FILE: packages/engine/tests/sql.rs ================================================ #[macro_use] #[path = "support/mod.rs"] mod support; #[path = "sql/entity_history.rs"] mod entity_history; #[path = "sql/errors.rs"] mod errors; #[path = "sql/history_conformance.rs"] mod history_conformance; #[path = "sql/lix_change.rs"] mod lix_change; #[path = "sql/lix_commit.rs"] mod lix_commit; #[path = "sql/lix_directory.rs"] mod lix_directory; #[path = "sql/lix_directory_history.rs"] mod lix_directory_history; #[path = "sql/lix_file.rs"] mod lix_file; #[path = "sql/lix_file_history.rs"] mod lix_file_history; #[path = "sql/lix_json.rs"] mod lix_json; #[path = "sql/lix_key_value.rs"] mod lix_key_value; #[path = "sql/lix_label_assignment.rs"] mod lix_label_assignment; #[path = "sql/lix_registered_schema.rs"] mod lix_registered_schema; #[path = "sql/lix_state.rs"] mod lix_state; #[path = "sql/lix_state_history.rs"] mod lix_state_history; #[path = "sql/lix_version.rs"] mod lix_version; #[path = "sql/metadata.rs"] mod metadata; #[path = "sql/read_only.rs"] mod read_only; #[path = "sql/udfs.rs"] mod udfs; use lix_engine::ExecuteResult; use lix_engine::Value; async fn select_rows( session: &crate::support::simulation_test::engine::SimSession, sql: &str, ) -> Vec> { let result = session .execute(sql, &[]) .await .expect("SELECT should succeed"); rows_from_result(result) } fn assert_rows_eq(result: ExecuteResult, expected: Vec>) { assert_eq!(rows_from_result(result), expected); } fn rows_from_result(result: ExecuteResult) -> Vec> { let row_set = result; row_set .rows() .iter() .map(|row| row.values().to_vec()) .collect() } ================================================ FILE: 
packages/engine/tests/storage_accounting.rs ================================================ #![cfg(feature = "storage-benches")] use async_trait::async_trait; use lix_engine::storage_bench::{ self, JsonStorePayloadShape, StorageBenchConfig, StorageBenchKeyPattern, StorageBenchSelectivity, StorageBenchUpdateFraction, }; use lix_engine::{ Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction, BytePageBuilder, LixError, }; use std::collections::BTreeMap; use std::sync::{Arc, Mutex}; type Store = BTreeMap<(String, Vec), Vec>; fn byte_page_from_iter(values: impl IntoIterator>) -> lix_engine::BytePage { let values = values.into_iter(); let (lower_bound, _) = values.size_hint(); let mut page = BytePageBuilder::with_capacity(lower_bound, 0); for value in values { page.push(&value); } page.finish() } #[derive(Clone, Default)] struct AccountingBackend { store: Arc>, } #[derive(Debug, Clone, Copy, Default)] struct AccountingSnapshot { entries: usize, key_bytes: usize, value_bytes: usize, tracked_chunk_entries: usize, tracked_chunk_value_bytes: usize, tracked_snapshot_entries: usize, tracked_snapshot_value_bytes: usize, tracked_root_entries: usize, tracked_by_file_root_entries: usize, json_entries: usize, json_value_bytes: usize, json_chunk_entries: usize, json_chunk_value_bytes: usize, changelog_entries: usize, changelog_value_bytes: usize, untracked_entries: usize, untracked_value_bytes: usize, } impl AccountingSnapshot { fn total_bytes(self) -> usize { self.key_bytes + self.value_bytes } fn bytes_per_row(self, rows: usize) -> usize { if rows == 0 { 0 } else { self.total_bytes() / rows } } fn saturating_sub(self, before: Self) -> Self { Self { entries: self.entries.saturating_sub(before.entries), key_bytes: self.key_bytes.saturating_sub(before.key_bytes), value_bytes: self.value_bytes.saturating_sub(before.value_bytes), tracked_chunk_entries: self .tracked_chunk_entries .saturating_sub(before.tracked_chunk_entries), tracked_chunk_value_bytes: self .tracked_chunk_value_bytes .saturating_sub(before.tracked_chunk_value_bytes), tracked_snapshot_entries: self .tracked_snapshot_entries .saturating_sub(before.tracked_snapshot_entries), tracked_snapshot_value_bytes: self .tracked_snapshot_value_bytes .saturating_sub(before.tracked_snapshot_value_bytes), tracked_root_entries: self .tracked_root_entries .saturating_sub(before.tracked_root_entries), tracked_by_file_root_entries: self .tracked_by_file_root_entries .saturating_sub(before.tracked_by_file_root_entries), json_entries: self.json_entries.saturating_sub(before.json_entries), json_value_bytes: self .json_value_bytes .saturating_sub(before.json_value_bytes), json_chunk_entries: self .json_chunk_entries .saturating_sub(before.json_chunk_entries), json_chunk_value_bytes: self .json_chunk_value_bytes .saturating_sub(before.json_chunk_value_bytes), changelog_entries: self .changelog_entries .saturating_sub(before.changelog_entries), changelog_value_bytes: self .changelog_value_bytes .saturating_sub(before.changelog_value_bytes), untracked_entries: self .untracked_entries .saturating_sub(before.untracked_entries), untracked_value_bytes: self .untracked_value_bytes .saturating_sub(before.untracked_value_bytes), } } } #[derive(Debug, Clone, Copy)] enum AccountingWorkload { WriteRoot { label: &'static str, rows: 
usize, payload_bytes: usize, }, UpdateOne { rows: usize, }, AppendOne { rows: usize, }, Update10Pct { rows: usize, }, } #[derive(Debug, Clone, Copy)] enum JsonAccountingWorkload { Raw1k { rows: usize }, Structured16k { rows: usize }, Structured128k { rows: usize }, Array128k { rows: usize }, DedupeSame16k { rows: usize }, BaseUpdateObject1Of1000 { rows: usize }, BaseUpdateArray1Of1000 { rows: usize }, } #[derive(Debug, Clone, Copy)] enum ChangelogAccountingWorkload { AppendSmall { rows: usize }, Append1k { rows: usize }, Append16k { rows: usize }, Tombstones { rows: usize }, Metadata1k { rows: usize }, CompositeEntityIds { rows: usize }, } #[derive(Debug, Clone, Copy)] enum UntrackedAccountingWorkload { WriteRows { label: &'static str, rows: usize, payload_bytes: usize, }, } #[tokio::test] #[ignore = "prints deterministic storage accounting table"] async fn storage_accounting() { let workloads = [ AccountingWorkload::WriteRoot { label: "write_root_payload_small", rows: 10_000, payload_bytes: 0, }, AccountingWorkload::WriteRoot { label: "write_root_payload_1k", rows: 10_000, payload_bytes: 1024, }, AccountingWorkload::WriteRoot { label: "write_root_payload_16k", rows: 1_000, payload_bytes: 16 * 1024, }, AccountingWorkload::WriteRoot { label: "write_root_payload_128k", rows: 100, payload_bytes: 128 * 1024, }, AccountingWorkload::UpdateOne { rows: 100_000 }, AccountingWorkload::AppendOne { rows: 100_000 }, AccountingWorkload::Update10Pct { rows: 10_000 }, ]; println!( "{:<31} {:>7} {:>8} {:>12} {:>12} {:>10} {:>11} {:>11} {:>11} {:>11} {:>9} {:>13}", "workload", "rows", "entries", "value_bytes", "total_bytes", "bytes/row", "chunks", "snapshots", "roots", "file_roots", "json", "json_bytes" ); for workload in workloads { let row = run_workload(workload) .await .expect("storage accounting workload should run"); println!( "{:<31} {:>7} {:>8} {:>12} {:>12} {:>10} {:>11} {:>11} {:>11} {:>11} {:>9} {:>13}", workload_label(workload), row.rows, row.snapshot.entries, row.snapshot.value_bytes, row.snapshot.total_bytes(), row.snapshot.bytes_per_row(row.rows), row.snapshot.tracked_chunk_entries, row.snapshot.tracked_snapshot_entries, row.snapshot.tracked_root_entries, row.snapshot.tracked_by_file_root_entries, row.snapshot.json_entries, row.snapshot.json_value_bytes, ); } } #[tokio::test] #[ignore = "prints deterministic json_store storage accounting table"] async fn json_store_accounting() { let workloads = [ JsonAccountingWorkload::Raw1k { rows: 1_000 }, JsonAccountingWorkload::Structured16k { rows: 200 }, JsonAccountingWorkload::Structured128k { rows: 50 }, JsonAccountingWorkload::Array128k { rows: 50 }, JsonAccountingWorkload::DedupeSame16k { rows: 1_000 }, JsonAccountingWorkload::BaseUpdateObject1Of1000 { rows: 50 }, JsonAccountingWorkload::BaseUpdateArray1Of1000 { rows: 50 }, ]; println!( "{:<37} {:>7} {:>8} {:>12} {:>12} {:>10} {:>11} {:>15}", "workload", "rows", "entries", "value_bytes", "total_bytes", "bytes/row", "json_refs", "json_chunks" ); for workload in workloads { let row = run_json_workload(workload) .await .expect("json_store accounting workload should run"); println!( "{:<37} {:>7} {:>8} {:>12} {:>12} {:>10} {:>11} {:>15}", json_workload_label(workload), row.rows, row.snapshot.entries, row.snapshot.value_bytes, row.snapshot.total_bytes(), row.snapshot.bytes_per_row(row.rows), row.snapshot.json_entries, row.snapshot.json_chunk_entries, ); } } #[tokio::test] #[ignore = "prints deterministic changelog storage accounting table"] async fn changelog_accounting() { let workloads = [ 
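// Each workload below pins a row count and payload size so the printed bytes/row figures
// are deterministic. These accounting tests are #[ignore]d and run on demand; one plausible
// invocation from the engine package (only the `storage-benches` feature gate at the top of
// this file is taken from the source, the rest of the command line is assumed):
//
//     cargo test --features storage-benches -- --ignored --nocapture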
ChangelogAccountingWorkload::AppendSmall { rows: 10_000 }, ChangelogAccountingWorkload::Append1k { rows: 10_000 }, ChangelogAccountingWorkload::Append16k { rows: 1_000 }, ChangelogAccountingWorkload::Tombstones { rows: 10_000 }, ChangelogAccountingWorkload::Metadata1k { rows: 10_000 }, ChangelogAccountingWorkload::CompositeEntityIds { rows: 10_000 }, ]; println!( "{:<31} {:>7} {:>8} {:>12} {:>12} {:>10} {:>11} {:>13}", "workload", "rows", "entries", "value_bytes", "total_bytes", "bytes/row", "changes", "change_bytes" ); for workload in workloads { let row = run_changelog_workload(workload) .await .expect("changelog accounting workload should run"); println!( "{:<31} {:>7} {:>8} {:>12} {:>12} {:>10} {:>11} {:>13}", changelog_workload_label(workload), row.rows, row.snapshot.entries, row.snapshot.value_bytes, row.snapshot.total_bytes(), row.snapshot.bytes_per_row(row.rows), row.snapshot.changelog_entries, row.snapshot.changelog_value_bytes, ); } }
#[tokio::test] #[ignore = "prints deterministic untracked_state storage accounting table"] async fn untracked_state_accounting() { let workloads = [ UntrackedAccountingWorkload::WriteRows { label: "write_rows_payload_small", rows: 10_000, payload_bytes: 0, }, UntrackedAccountingWorkload::WriteRows { label: "write_rows_payload_1k", rows: 10_000, payload_bytes: 1024, }, UntrackedAccountingWorkload::WriteRows { label: "write_rows_payload_16k", rows: 1_000, payload_bytes: 16 * 1024, }, UntrackedAccountingWorkload::WriteRows { label: "write_rows_payload_128k", rows: 100, payload_bytes: 128 * 1024, }, ]; println!( "{:<31} {:>7} {:>8} {:>12} {:>12} {:>10} {:>11} {:>13}", "workload", "rows", "entries", "value_bytes", "total_bytes", "bytes/row", "rows_ns", "row_bytes" ); for workload in workloads { let row = run_untracked_workload(workload) .await .expect("untracked_state accounting workload should run"); println!( "{:<31} {:>7} {:>8} {:>12} {:>12} {:>10} {:>11} {:>13}", untracked_workload_label(workload), row.rows, row.snapshot.entries, row.snapshot.value_bytes, row.snapshot.total_bytes(), row.snapshot.bytes_per_row(row.rows), row.snapshot.untracked_entries, row.snapshot.untracked_value_bytes, ); } }
struct AccountingRow { rows: usize, snapshot: AccountingSnapshot, }
async fn run_workload(workload: AccountingWorkload) -> Result<AccountingRow, LixError> { let accounting_backend = AccountingBackend::default(); let backend: Arc<dyn Backend> = Arc::new(accounting_backend.clone()); let config = config_for(workload); let rows = workload_rows(workload); let snapshot = match workload { AccountingWorkload::WriteRoot { .. } => { let fixture = storage_bench::prepare_tracked_state_write_root(config).await?; storage_bench::tracked_state_write_root_prepared(&backend, &fixture).await?; accounting_backend.accounting()? } AccountingWorkload::UpdateOne { .. } => { let fixture = storage_bench::prepare_tracked_state_update_rows(&backend, config, 1).await?; let before = accounting_backend.accounting()?; storage_bench::tracked_state_update_existing_prepared(&backend, &fixture).await?; accounting_backend.accounting()?.saturating_sub(before) } AccountingWorkload::AppendOne { ..
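// The update-style arms here and below measure a delta rather than an absolute footprint:
// the harness snapshots the accounting counters, replays the prepared mutation, and reports
// after.saturating_sub(before), so pre-seeded fixture rows are not attributed to the
// workload being measured.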
} => { let fixture = storage_bench::prepare_tracked_state_append_child_rows(&backend, config, 1).await?; let before = accounting_backend.accounting()?; storage_bench::tracked_state_update_existing_prepared(&backend, &fixture).await?; accounting_backend.accounting()?.saturating_sub(before) } AccountingWorkload::Update10Pct { rows } => { let fixture = storage_bench::prepare_tracked_state_update_rows( &backend, config, rows.div_ceil(10), ) .await?; let before = accounting_backend.accounting()?; storage_bench::tracked_state_update_existing_prepared(&backend, &fixture).await?; accounting_backend.accounting()?.saturating_sub(before) } }; Ok(AccountingRow { rows, snapshot }) }
async fn run_json_workload(workload: JsonAccountingWorkload) -> Result<AccountingRow, LixError> { let accounting_backend = AccountingBackend::default(); let backend: Arc<dyn Backend> = Arc::new(accounting_backend.clone()); let rows = json_workload_rows(workload); let snapshot = match workload { JsonAccountingWorkload::Raw1k { rows } => { let fixture = storage_bench::prepare_json_store_write(JsonStorePayloadShape::SmallRaw1k, rows) .await?; storage_bench::json_store_write_prepared(&backend, &fixture).await?; accounting_backend.accounting()? } JsonAccountingWorkload::Structured16k { rows } => { let fixture = storage_bench::prepare_json_store_write( JsonStorePayloadShape::MediumStructured16k, rows, ) .await?; storage_bench::json_store_write_prepared(&backend, &fixture).await?; accounting_backend.accounting()? } JsonAccountingWorkload::Structured128k { rows } => { let fixture = storage_bench::prepare_json_store_write( JsonStorePayloadShape::LargeStructured128k, rows, ) .await?; storage_bench::json_store_write_prepared(&backend, &fixture).await?; accounting_backend.accounting()? } JsonAccountingWorkload::Array128k { rows } => { let fixture = storage_bench::prepare_json_store_write( JsonStorePayloadShape::LargeArray128k, rows, ) .await?; storage_bench::json_store_write_prepared(&backend, &fixture).await?; accounting_backend.accounting()? } JsonAccountingWorkload::DedupeSame16k { rows } => { let fixture = storage_bench::prepare_json_store_write_dedupe( JsonStorePayloadShape::MediumStructured16k, rows, ) .await?; storage_bench::json_store_write_prepared(&backend, &fixture).await?; accounting_backend.accounting()? } JsonAccountingWorkload::BaseUpdateObject1Of1000 { rows } => { let fixture = storage_bench::prepare_json_store_base_update_object(&backend, rows).await?; let before = accounting_backend.accounting()?; storage_bench::json_store_write_against_base_object_prepared(&backend, &fixture) .await?; accounting_backend.accounting()?.saturating_sub(before) } JsonAccountingWorkload::BaseUpdateArray1Of1000 { rows } => { let fixture = storage_bench::prepare_json_store_base_update_array(&backend, rows).await?; let before = accounting_backend.accounting()?; storage_bench::json_store_write_against_base_array_prepared(&backend, &fixture).await?; accounting_backend.accounting()?.saturating_sub(before) } }; Ok(AccountingRow { rows, snapshot }) }
async fn run_changelog_workload( workload: ChangelogAccountingWorkload, ) -> Result<AccountingRow, LixError> { let accounting_backend = AccountingBackend::default(); let backend: Arc<dyn Backend> = Arc::new(accounting_backend.clone()); let rows = changelog_workload_rows(workload); let config = changelog_config_for(workload); let fixture = match workload { ChangelogAccountingWorkload::AppendSmall { .. } | ChangelogAccountingWorkload::Append1k { .. } | ChangelogAccountingWorkload::Append16k { .. } => { storage_bench::prepare_changelog_append_changes(config).await?
} ChangelogAccountingWorkload::Tombstones { .. } => { storage_bench::prepare_changelog_append_tombstones(config).await? } ChangelogAccountingWorkload::Metadata1k { .. } => { storage_bench::prepare_changelog_append_metadata(config).await? } ChangelogAccountingWorkload::CompositeEntityIds { .. } => { storage_bench::prepare_changelog_append_composite_entity_ids(config).await? } }; storage_bench::changelog_append_changes_prepared(&backend, &fixture).await?; Ok(AccountingRow { rows, snapshot: accounting_backend.accounting()?, }) }
async fn run_untracked_workload( workload: UntrackedAccountingWorkload, ) -> Result<AccountingRow, LixError> { let accounting_backend = AccountingBackend::default(); let backend: Arc<dyn Backend> = Arc::new(accounting_backend.clone()); let rows = untracked_workload_rows(workload); let fixture = storage_bench::prepare_untracked_state_write_rows(untracked_config_for(workload)).await?; storage_bench::untracked_state_write_rows_prepared(&backend, &fixture).await?; Ok(AccountingRow { rows, snapshot: accounting_backend.accounting()?, }) }
fn config_for(workload: AccountingWorkload) -> StorageBenchConfig { StorageBenchConfig { rows: workload_rows(workload), blob_bytes: 1024, state_payload_bytes: match workload { AccountingWorkload::WriteRoot { payload_bytes, .. } => payload_bytes, AccountingWorkload::UpdateOne { .. } | AccountingWorkload::AppendOne { .. } | AccountingWorkload::Update10Pct { .. } => 256, }, key_pattern: StorageBenchKeyPattern::Sequential, selectivity: StorageBenchSelectivity::Percent100, update_fraction: StorageBenchUpdateFraction::Percent100, } }
fn workload_rows(workload: AccountingWorkload) -> usize { match workload { AccountingWorkload::WriteRoot { rows, .. } | AccountingWorkload::UpdateOne { rows } | AccountingWorkload::AppendOne { rows } | AccountingWorkload::Update10Pct { rows } => rows, } }
fn workload_label(workload: AccountingWorkload) -> String { match workload { AccountingWorkload::WriteRoot { label, rows, .. } => format!("{label}/{}", row_label(rows)), AccountingWorkload::UpdateOne { rows } => format!("update_1_existing/{}", row_label(rows)), AccountingWorkload::AppendOne { rows } => { format!("append_1_new_child_commit/{}", row_label(rows)) } AccountingWorkload::Update10Pct { rows } => { format!("update_10pct_existing/{}", row_label(rows)) } } }
fn json_workload_rows(workload: JsonAccountingWorkload) -> usize { match workload { JsonAccountingWorkload::Raw1k { rows } | JsonAccountingWorkload::Structured16k { rows } | JsonAccountingWorkload::Structured128k { rows } | JsonAccountingWorkload::Array128k { rows } | JsonAccountingWorkload::DedupeSame16k { rows } | JsonAccountingWorkload::BaseUpdateObject1Of1000 { rows } | JsonAccountingWorkload::BaseUpdateArray1Of1000 { rows } => rows, } }
fn changelog_config_for(workload: ChangelogAccountingWorkload) -> StorageBenchConfig { StorageBenchConfig { rows: changelog_workload_rows(workload), blob_bytes: 1024, state_payload_bytes: match workload { ChangelogAccountingWorkload::AppendSmall { .. } | ChangelogAccountingWorkload::Tombstones { .. } | ChangelogAccountingWorkload::CompositeEntityIds { .. } => 0, ChangelogAccountingWorkload::Append1k { .. } | ChangelogAccountingWorkload::Metadata1k { .. } => 1024, ChangelogAccountingWorkload::Append16k { ..
} => 16 * 1024, }, key_pattern: StorageBenchKeyPattern::Sequential, selectivity: StorageBenchSelectivity::Percent100, update_fraction: StorageBenchUpdateFraction::Percent100, } } fn changelog_workload_rows(workload: ChangelogAccountingWorkload) -> usize { match workload { ChangelogAccountingWorkload::AppendSmall { rows } | ChangelogAccountingWorkload::Append1k { rows } | ChangelogAccountingWorkload::Append16k { rows } | ChangelogAccountingWorkload::Tombstones { rows } | ChangelogAccountingWorkload::Metadata1k { rows } | ChangelogAccountingWorkload::CompositeEntityIds { rows } => rows, } } fn changelog_workload_label(workload: ChangelogAccountingWorkload) -> String { match workload { ChangelogAccountingWorkload::AppendSmall { rows } => { format!("append_small/{}", row_label(rows)) } ChangelogAccountingWorkload::Append1k { rows } => { format!("append_1k/{}", row_label(rows)) } ChangelogAccountingWorkload::Append16k { rows } => { format!("append_16k/{}", row_label(rows)) } ChangelogAccountingWorkload::Tombstones { rows } => { format!("tombstones/{}", row_label(rows)) } ChangelogAccountingWorkload::Metadata1k { rows } => { format!("metadata_1k/{}", row_label(rows)) } ChangelogAccountingWorkload::CompositeEntityIds { rows } => { format!("composite_entity_ids/{}", row_label(rows)) } } } fn untracked_config_for(workload: UntrackedAccountingWorkload) -> StorageBenchConfig { StorageBenchConfig { rows: untracked_workload_rows(workload), blob_bytes: 1024, state_payload_bytes: match workload { UntrackedAccountingWorkload::WriteRows { payload_bytes, .. } => payload_bytes, }, key_pattern: StorageBenchKeyPattern::Sequential, selectivity: StorageBenchSelectivity::Percent100, update_fraction: StorageBenchUpdateFraction::Percent100, } } fn untracked_workload_rows(workload: UntrackedAccountingWorkload) -> usize { match workload { UntrackedAccountingWorkload::WriteRows { rows, .. } => rows, } } fn untracked_workload_label(workload: UntrackedAccountingWorkload) -> String { match workload { UntrackedAccountingWorkload::WriteRows { label, rows, .. 
} => { format!("{label}/{}", row_label(rows)) } } } fn json_workload_label(workload: JsonAccountingWorkload) -> String { match workload { JsonAccountingWorkload::Raw1k { rows } => { format!("raw_1k/{}", row_label(rows)) } JsonAccountingWorkload::Structured16k { rows } => { format!("structured_16k/{}", row_label(rows)) } JsonAccountingWorkload::Structured128k { rows } => { format!("structured_128k/{}", row_label(rows)) } JsonAccountingWorkload::Array128k { rows } => { format!("array_128k/{}", row_label(rows)) } JsonAccountingWorkload::DedupeSame16k { rows } => { format!("dedupe_same_16k/{}", row_label(rows)) } JsonAccountingWorkload::BaseUpdateObject1Of1000 { rows } => { format!("base_update_object_1_of_1000/{}", row_label(rows)) } JsonAccountingWorkload::BaseUpdateArray1Of1000 { rows } => { format!("base_update_array_1_of_1000/{}", row_label(rows)) } } } fn row_label(rows: usize) -> String { match rows { 100_000 => "100k".to_string(), 10_000 => "10k".to_string(), 1_000 => "1k".to_string(), rows => rows.to_string(), } } impl AccountingBackend { fn lock_store(&self) -> Result, LixError> { self.store .lock() .map_err(|_| LixError::new("LIX_ERROR_UNKNOWN", "accounting store mutex poisoned")) } fn accounting(&self) -> Result { let store = self.lock_store()?; let mut snapshot = AccountingSnapshot::default(); for ((namespace, key), value) in store.iter() { snapshot.entries += 1; snapshot.key_bytes += key.len(); snapshot.value_bytes += value.len(); match namespace.as_str() { "tracked_state.tree.chunk" => { snapshot.tracked_chunk_entries += 1; snapshot.tracked_chunk_value_bytes += value.len(); } "tracked_state.tree.root" => { snapshot.tracked_root_entries += 1; } "tracked_state.tree.root.by_file" => { snapshot.tracked_by_file_root_entries += 1; } "json_store.json" => { snapshot.json_entries += 1; snapshot.json_value_bytes += value.len(); } "json_store.json_chunk" => { snapshot.json_chunk_entries += 1; snapshot.json_chunk_value_bytes += value.len(); } "changelog.change" => { snapshot.changelog_entries += 1; snapshot.changelog_value_bytes += value.len(); } "untracked_state.row" => { snapshot.untracked_entries += 1; snapshot.untracked_value_bytes += value.len(); } _ => {} } } Ok(snapshot) } } #[async_trait] impl Backend for AccountingBackend { async fn begin_read_transaction( &self, ) -> Result, LixError> { Ok(Box::new(AccountingTransaction { store: Arc::clone(&self.store), finalized: false, })) } async fn begin_write_transaction( &self, ) -> Result, LixError> { Ok(Box::new(AccountingTransaction { store: Arc::clone(&self.store), finalized: false, })) } } struct AccountingTransaction { store: Arc>, finalized: bool, } impl AccountingTransaction { fn lock_store(&self) -> Result, LixError> { self.store .lock() .map_err(|_| LixError::new("LIX_ERROR_UNKNOWN", "accounting store mutex poisoned")) } fn scan_filtered_pairs( &self, request: &BackendKvScanRequest, ) -> Result, Vec)>, LixError> { let store = self.lock_store()?; let scan_limit = request .limit .checked_add(1 + usize::from(request.after.is_some())) .unwrap_or(request.limit); let mut pairs = scan_store(&store, &request.namespace, &request.range, Some(scan_limit)); pairs.retain(|(key, _)| { request .after .as_deref() .is_none_or(|after| key.as_slice() > after) }); Ok(pairs) } } #[async_trait] impl BackendReadTransaction for AccountingTransaction { async fn get_values( &mut self, request: BackendKvGetRequest, ) -> Result { let store = self.lock_store()?; let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let 
namespace = group.namespace.clone(); let mut values = BytePageBuilder::with_capacity(group.keys.len(), 0); let mut present = Vec::with_capacity(group.keys.len()); for key in group.keys { if let Some(value) = store.get(&(namespace.clone(), key)) { values.push(value); present.push(true); } else { values.push([]); present.push(false); } } groups.push(BackendKvValueGroup::new( namespace, values.finish(), present, )); } Ok(BackendKvValueBatch { groups }) } async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result { let store = self.lock_store()?; let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let namespace = group.namespace.clone(); let mut exists = Vec::with_capacity(group.keys.len()); for key in group.keys { exists.push(store.contains_key(&(namespace.clone(), key))); } groups.push(BackendKvExistsGroup { namespace, exists }); } Ok(BackendKvExistsBatch { groups }) } async fn scan_keys( &mut self, request: BackendKvScanRequest, ) -> Result { let pairs = self.scan_filtered_pairs(&request)?; let has_more = pairs.len() > request.limit; let resume_after = has_more .then(|| { pairs .get(request.limit.saturating_sub(1)) .map(|(key, _)| key.clone()) }) .flatten(); Ok(BackendKvKeyPage { keys: byte_page_from_iter(pairs.into_iter().take(request.limit).map(|(key, _)| key)), resume_after, }) } async fn scan_values( &mut self, request: BackendKvScanRequest, ) -> Result { let pairs = self.scan_filtered_pairs(&request)?; let has_more = pairs.len() > request.limit; let resume_after = has_more .then(|| { pairs .get(request.limit.saturating_sub(1)) .map(|(key, _)| key.clone()) }) .flatten(); Ok(BackendKvValuePage { values: byte_page_from_iter( pairs .into_iter() .take(request.limit) .map(|(_, value)| value), ), resume_after, }) } async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> Result { let pairs = self.scan_filtered_pairs(&request)?; let has_more = pairs.len() > request.limit; let resume_after = has_more .then(|| { pairs .get(request.limit.saturating_sub(1)) .map(|(key, _)| key.clone()) }) .flatten(); let mut keys = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0); let mut values = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0); for (key, value) in pairs.into_iter().take(request.limit) { keys.push(&key); values.push(&value); } Ok(BackendKvEntryPage { keys: keys.finish(), values: values.finish(), resume_after, }) } async fn rollback(mut self: Box) -> Result<(), LixError> { self.finalized = true; Ok(()) } } #[async_trait] impl BackendWriteTransaction for AccountingTransaction { async fn write_kv_batch( &mut self, batch: BackendKvWriteBatch, ) -> Result { let mut stats = BackendKvWriteStats::default(); let mut store = self.lock_store()?; for group in batch.groups { let namespace = group.namespace().to_string(); for index in 0..group.put_count() { let key = group.put_key(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put key") })?; let value = group.put_value(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put value") })?; stats.puts += 1; stats.bytes_written += key.len() + value.len(); store.insert((namespace.clone(), key.to_vec()), value.to_vec()); } for index in 0..group.delete_count() { let key = group.delete_key(index).ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "backend write batch missing delete key", ) })?; stats.deletes += 1; stats.bytes_written += key.len(); store.remove(&(namespace.clone(), key.to_vec())); 
} } Ok(stats) } async fn commit(mut self: Box) -> Result<(), LixError> { self.finalized = true; Ok(()) } } fn scan_store( store: &Store, namespace: &str, range: &BackendKvScanRange, limit: Option, ) -> Vec<(Vec, Vec)> { let mut pairs = Vec::new(); for ((row_namespace, key), value) in store.iter() { if row_namespace != namespace { continue; } let matches = match range { BackendKvScanRange::Prefix(prefix) => key.starts_with(prefix), BackendKvScanRange::Range { start, end } => key >= start && key < end, }; if matches { pairs.push((key.clone(), value.clone())); } if limit.is_some_and(|limit| pairs.len() >= limit) { break; } } pairs } ================================================ FILE: packages/engine/tests/support/mod.rs ================================================ pub mod simulation_test; #[macro_export] macro_rules! simulation_test { ($name:ident, |$sim:ident| $body:expr) => { $crate::simulation_test!( $name, options = $crate::support::simulation_test::engine::SimulationOptions::default(), |$sim| $body ); }; ($name:ident, options = $options:expr, |$sim:ident| $body:expr) => { $crate::simulation_test!( @single $name, base, Base, $options, |$sim| $body ); $crate::simulation_test!( @single $name, tracked_state_rebuild, TrackedStateRebuild, $options, |$sim| $body ); }; (@single $name:ident, $simulation:ident, $mode:ident, $options:expr, |$sim:ident| $body:expr) => { paste::paste! { #[test] fn [<$name _ $simulation>]() { let simulation_mode = $crate::support::simulation_test::engine::SimulationMode::$mode; let simulation_name = stringify!($simulation); let timeout_secs = std::env::var("LIX_SIMULATION_TEST_TIMEOUT_SECS") .ok() .and_then(|raw| raw.parse::().ok()) .unwrap_or(120); let case_id = concat!(module_path!(), "::", stringify!($name)); let (result_tx, result_rx) = std::sync::mpsc::sync_channel(1); let thread = std::thread::Builder::new() .name(format!("{}_{}", stringify!($name), simulation_name)) .stack_size(32 * 1024 * 1024) .spawn(move || { let run_result = std::panic::catch_unwind(std::panic::AssertUnwindSafe(|| { let runtime = tokio::runtime::Builder::new_current_thread() .enable_all() .build() .expect("failed to build tokio runtime"); runtime.block_on(async { $crate::support::simulation_test::engine::run_single_simulation_test( simulation_mode, $options, case_id, |$sim| $body, ) .await; }); })); let _ = result_tx.send(run_result); }) .expect(concat!( "failed to spawn ", stringify!($name), " simulation_test thread" )); match result_rx.recv_timeout(std::time::Duration::from_secs(timeout_secs)) { Ok(Ok(())) => { thread.join().expect(concat!( stringify!($name), " simulation_test thread panicked" )); } Ok(Err(payload)) => { let _ = thread.join(); std::panic::resume_unwind(payload); } Err(std::sync::mpsc::RecvTimeoutError::Timeout) => { panic!( "simulation_test timed out after {}s (simulation={}, case={})", timeout_secs, simulation_name, case_id ); } Err(std::sync::mpsc::RecvTimeoutError::Disconnected) => { if let Err(payload) = thread.join() { std::panic::resume_unwind(payload); } panic!( "simulation_test thread exited without reporting result (simulation={}, case={})", simulation_name, case_id ); } } } } }; } ================================================ FILE: packages/engine/tests/support/simulation_test/engine/expect_same.rs ================================================ use std::collections::HashMap; use std::sync::{Arc, Condvar, Mutex, OnceLock}; use std::time::{Duration, Instant}; use super::mode::SimulationMode; #[derive(Clone)] pub(super) struct SimulationAssertions { 
shared: SharedExpectSameRun, } impl SimulationAssertions { pub(super) fn shared(run: SharedExpectSameRun) -> Self { Self { shared: run } } pub(super) fn start_mode(&self, _mode: SimulationMode) { self.shared.start_mode(); } pub(super) fn finish_mode(&self, _mode: SimulationMode) { self.shared.finish_mode(); } } #[derive(Clone)] pub(crate) struct SharedExpectSameRun { case_id: String, mode: SimulationMode, call_index: Arc>, case: Arc, } struct SharedExpectSameCase { state: Mutex, condvar: Condvar, } #[derive(Default)] struct SharedExpectSameState { base_finished: bool, base_failed: bool, expected: Vec<(String, String)>, } pub(crate) struct SharedExpectSameRunGuard { run: SharedExpectSameRun, finished: bool, } impl SharedExpectSameRun { pub(crate) fn new(case_id: &str, mode: SimulationMode) -> Self { static CASES: OnceLock>>> = OnceLock::new(); let cases = CASES.get_or_init(|| Mutex::new(HashMap::new())); let case = { let mut guard = cases .lock() .expect("engine shared expectation registry lock poisoned"); guard .entry(case_id.to_string()) .or_insert_with(|| { Arc::new(SharedExpectSameCase { state: Mutex::new(SharedExpectSameState::default()), condvar: Condvar::new(), }) }) .clone() }; Self { case_id: case_id.to_string(), mode, call_index: Arc::new(Mutex::new(0)), case, } } fn start_mode(&self) {} fn next_index(&self) -> usize { let mut guard = self .call_index .lock() .expect("engine shared expectation call index lock poisoned"); let index = *guard; *guard += 1; index } fn call_count(&self) -> usize { *self .call_index .lock() .expect("engine shared expectation call index lock poisoned") } fn assert_same(&self, label: &str, actual: String) { let index = self.next_index(); match self.mode { SimulationMode::Base => { let mut state = self .case .state .lock() .expect("engine shared expectation lock poisoned"); state.expected.push((label.to_string(), actual)); self.case.condvar.notify_all(); } SimulationMode::TrackedStateRebuild => { let expected = self.wait_for_expected(index, label); assert_eq!( expected.0, label, "simulation_test assertion order changed for case `{}` mode `{}` at call #{}", self.case_id, self.mode.name(), index ); assert_eq!( expected.1, actual, "simulation_test assert_same `{label}` differed for case `{}` mode `{}`", self.case_id, self.mode.name() ); } } } fn wait_for_expected(&self, index: usize, label: &str) -> (String, String) { let deadline = Instant::now() + Duration::from_secs(120); let mut state = self .case .state .lock() .expect("engine shared expectation lock poisoned"); loop { if state.base_failed { panic!( "simulation_test case `{}` base failed before `{}` could compare call #{}", self.case_id, label, index ); } if let Some(expected) = state.expected.get(index) { return expected.clone(); } if state.base_finished { panic!( "simulation_test case `{}` mode `{}` called assert_same one extra time at call #{} ({label})", self.case_id, self.mode.name(), index ); } let remaining = deadline.saturating_duration_since(Instant::now()); if remaining.is_zero() { panic!( "simulation_test timed out waiting for base assert_same call #{} in case `{}`", index, self.case_id ); } let (next_state, timeout) = self .case .condvar .wait_timeout(state, remaining) .expect("engine shared expectation condvar wait poisoned"); state = next_state; if timeout.timed_out() { panic!( "simulation_test timed out waiting for base assert_same call #{} in case `{}`", index, self.case_id ); } } } fn finish_mode(&self) { match self.mode { SimulationMode::Base => 
self.finish_base(std::thread::panicking()), SimulationMode::TrackedStateRebuild => self.finish_compare(), } } fn finish_base(&self, failed: bool) { let mut state = self .case .state .lock() .expect("engine shared expectation lock poisoned"); state.base_finished = true; state.base_failed = failed; self.case.condvar.notify_all(); } fn finish_compare(&self) { let deadline = Instant::now() + Duration::from_secs(120); let mut state = self .case .state .lock() .expect("engine shared expectation lock poisoned"); while !state.base_finished && !state.base_failed { let remaining = deadline.saturating_duration_since(Instant::now()); if remaining.is_zero() { panic!( "simulation_test timed out waiting for base completion in case `{}`", self.case_id ); } let (next_state, timeout) = self .case .condvar .wait_timeout(state, remaining) .expect("engine shared expectation condvar wait poisoned"); state = next_state; if timeout.timed_out() { panic!( "simulation_test timed out waiting for base completion in case `{}`", self.case_id ); } } if state.base_failed { panic!( "simulation_test case `{}` base failed before mode `{}` completed", self.case_id, self.mode.name() ); } assert_eq!( self.call_count(), state.expected.len(), "simulation_test mode `{}` for case `{}` did not execute all assert_same checks", self.mode.name(), self.case_id ); } } impl SharedExpectSameRunGuard { pub(crate) fn new(run: SharedExpectSameRun) -> Self { Self { run, finished: false, } } } impl Drop for SharedExpectSameRunGuard { fn drop(&mut self) { if self.finished || self.run.mode != SimulationMode::Base { return; } self.run.finish_base(std::thread::panicking()); self.finished = true; } } #[cfg(test)] mod tests { use super::*; #[test] fn shared_expect_same_compares_against_base_run() { let case_id = "expect_same_unit_shared"; let base = SharedExpectSameRun::new(case_id, SimulationMode::Base); base.assert_same("value", "1".to_string()); base.finish_mode(); let rebuild = SharedExpectSameRun::new(case_id, SimulationMode::TrackedStateRebuild); rebuild.assert_same("value", "1".to_string()); rebuild.finish_mode(); } } ================================================ FILE: packages/engine/tests/support/simulation_test/engine/kv_backend.rs ================================================ use std::collections::BTreeMap; use std::sync::{Arc, Mutex}; use async_trait::async_trait; use lix_engine::{ Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetGroup, BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteGroup, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction, BytePageBuilder, LixError, }; pub(crate) type KvKey = (String, Vec); pub(crate) type KvMap = BTreeMap>; /// KV-only backend used by simulation tests. 
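///
/// A minimal usage sketch, mirroring the unit tests at the bottom of this
/// file (error handling elided with `expect` for brevity):
///
/// ```ignore
/// let backend = InMemoryKvBackend::new();
///
/// // Stage one put and commit it.
/// let mut tx = backend.begin_write_transaction().await.expect("tx opens");
/// let mut group = BackendKvWriteGroup::new("ns");
/// group.put(b"a", b"one");
/// tx.write_kv_batch(BackendKvWriteBatch { groups: vec![group] })
///     .await
///     .expect("write succeeds");
/// tx.commit().await.expect("commit succeeds");
///
/// // `snapshot()` / `from_snapshot()` clone the committed map; the harness
/// // uses this pair to "reboot" an engine onto the same persisted state.
/// let reopened = InMemoryKvBackend::from_snapshot(backend.snapshot());
/// ```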
#[derive(Clone, Default)] pub(crate) struct InMemoryKvBackend { data: Arc>, } impl InMemoryKvBackend { pub(crate) fn new() -> Self { Self::default() } pub(crate) fn from_snapshot(snapshot: KvMap) -> Self { Self { data: Arc::new(Mutex::new(snapshot)), } } pub(crate) fn snapshot(&self) -> KvMap { self.data .lock() .expect("in-memory backend lock poisoned") .clone() } } #[async_trait] impl Backend for InMemoryKvBackend { async fn begin_read_transaction( &self, ) -> Result, LixError> { Ok(Box::new(InMemoryKvTransaction { data: Arc::clone(&self.data), pending: BTreeMap::new(), closed: false, })) } async fn begin_write_transaction( &self, ) -> Result, LixError> { Ok(Box::new(InMemoryKvTransaction { data: Arc::clone(&self.data), pending: BTreeMap::new(), closed: false, })) } } struct InMemoryKvTransaction { data: Arc>, pending: BTreeMap>>, closed: bool, } #[async_trait] impl BackendReadTransaction for InMemoryKvTransaction { async fn get_values( &mut self, request: BackendKvGetRequest, ) -> Result { let data = self.data.lock().expect("in-memory backend lock poisoned"); let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let namespace = group.namespace.clone(); let mut values = BytePageBuilder::with_capacity(group.keys.len(), 0); let mut present = Vec::with_capacity(group.keys.len()); for key in group.keys { let identity = (namespace.clone(), key.clone()); let value = self .pending .get(&identity) .cloned() .unwrap_or_else(|| data.get(&identity).cloned()); if let Some(value) = value { values.push(value); present.push(true); } else { values.push([]); present.push(false); } } groups.push(BackendKvValueGroup::new( namespace, values.finish(), present, )); } Ok(BackendKvValueBatch { groups }) } async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result { let data = self.data.lock().expect("in-memory backend lock poisoned"); let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let namespace = group.namespace.clone(); let mut exists = Vec::with_capacity(group.keys.len()); for key in group.keys { let identity = (namespace.clone(), key.clone()); let present = self .pending .get(&identity) .map(|value| value.is_some()) .unwrap_or_else(|| data.contains_key(&identity)); exists.push(present); } groups.push(BackendKvExistsGroup { namespace, exists }); } Ok(BackendKvExistsBatch { groups }) } async fn scan_keys( &mut self, request: BackendKvScanRequest, ) -> Result { let entries = self.scan_visible_entries(request)?; Ok(BackendKvKeyPage { keys: entries.keys, resume_after: entries.resume_after, }) } async fn scan_values( &mut self, request: BackendKvScanRequest, ) -> Result { let entries = self.scan_visible_entries(request)?; Ok(BackendKvValuePage { values: entries.values, resume_after: entries.resume_after, }) } async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> Result { self.scan_visible_entries(request) } async fn rollback(mut self: Box) -> Result<(), LixError> { self.pending.clear(); self.closed = true; Ok(()) } } impl InMemoryKvTransaction { fn scan_visible_entries( &self, request: BackendKvScanRequest, ) -> Result { let mut visible = self .data .lock() .expect("in-memory backend lock poisoned") .clone(); for (key, value) in &self.pending { match value { Some(value) => { visible.insert(key.clone(), value.clone()); } None => { visible.remove(key); } } } Ok(scan_map(&visible, &request)) } } #[async_trait] impl BackendWriteTransaction for InMemoryKvTransaction { async fn write_kv_batch( &mut self, batch: 
BackendKvWriteBatch, ) -> Result { let mut stats = BackendKvWriteStats::default(); for group in batch.groups { let namespace = group.namespace().to_string(); for index in 0..group.put_count() { let key = group.put_key(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put key") })?; let value = group.put_value(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put value") })?; stats.puts += 1; stats.bytes_written += key.len() + value.len(); self.pending .insert((namespace.clone(), key.to_vec()), Some(value.to_vec())); } for index in 0..group.delete_count() { let key = group.delete_key(index).ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "backend write batch missing delete key", ) })?; stats.deletes += 1; stats.bytes_written += key.len(); self.pending.insert((namespace.clone(), key.to_vec()), None); } } Ok(stats) } async fn commit(mut self: Box) -> Result<(), LixError> { if self.closed { return Ok(()); } let mut guard = self.data.lock().expect("in-memory backend lock poisoned"); for (key, value) in std::mem::take(&mut self.pending) { match value { Some(value) => { guard.insert(key, value); } None => { guard.remove(&key); } } } self.closed = true; Ok(()) } } fn scan_map(map: &KvMap, request: &BackendKvScanRequest) -> BackendKvEntryPage { let mut pairs = map .iter() .filter_map(|((entry_namespace, key), value)| { if entry_namespace != &request.namespace || !key_in_range(key, &request.range) { return None; } if request .after .as_deref() .is_some_and(|after| key.as_slice() <= after) { return None; } Some((key.clone(), value.clone())) }) .collect::>(); pairs.sort_by(|left, right| left.0.cmp(&right.0)); let has_more = pairs.len() > request.limit; pairs.truncate(request.limit); let resume_after = has_more .then(|| pairs.last().map(|(key, _)| key.clone())) .flatten(); let mut keys = BytePageBuilder::with_capacity(pairs.len(), 0); let mut values = BytePageBuilder::with_capacity(pairs.len(), 0); for (key, value) in pairs { keys.push(key); values.push(value); } BackendKvEntryPage { keys: keys.finish(), values: values.finish(), resume_after, } } fn key_in_range(key: &[u8], range: &BackendKvScanRange) -> bool { match range { BackendKvScanRange::Prefix(prefix) => key.starts_with(prefix), BackendKvScanRange::Range { start, end } => key >= start.as_slice() && key < end.as_slice(), } } #[cfg(test)] mod tests { use super::*; async fn put( tx: &mut Box, namespace: &str, key: &[u8], value: &[u8], ) { tx.write_kv_batch(BackendKvWriteBatch { groups: { let mut group = BackendKvWriteGroup::new(namespace); group.put(key, value); vec![group] }, }) .await .expect("put should succeed"); } async fn delete( tx: &mut Box, namespace: &str, key: &[u8], ) { tx.write_kv_batch(BackendKvWriteBatch { groups: { let mut group = BackendKvWriteGroup::new(namespace); group.delete(key); vec![group] }, }) .await .expect("delete should succeed"); } async fn get( tx: &mut (dyn BackendReadTransaction + Send + Sync), namespace: &str, key: &[u8], ) -> Option> { tx.get_values(BackendKvGetRequest { groups: vec![BackendKvGetGroup { namespace: namespace.to_string(), keys: vec![key.to_vec()], }], }) .await .expect("get should succeed") .groups .remove(0) .value(0) .flatten() .map(<[u8]>::to_vec) } async fn committed_get( backend: &InMemoryKvBackend, namespace: &str, key: &[u8], ) -> Option> { let mut tx = backend .begin_read_transaction() .await .expect("read transaction should open"); let value = get(tx.as_mut(), namespace, key).await; 
tx.rollback().await.expect("rollback should succeed"); value } async fn scan( tx: &mut (dyn BackendReadTransaction + Send + Sync), namespace: &str, range: BackendKvScanRange, limit: Option, ) -> BackendKvEntryPage { tx.scan_entries(BackendKvScanRequest { namespace: namespace.to_string(), range, after: None, limit: limit.unwrap_or(usize::MAX), }) .await .expect("scan should succeed") } #[tokio::test] async fn transaction_put_commit_makes_value_visible() { let backend = InMemoryKvBackend::new(); let mut tx = backend .begin_write_transaction() .await .expect("transaction should open"); put(&mut tx, "ns", b"a", b"one").await; assert_eq!(get(tx.as_mut(), "ns", b"a").await, Some(b"one".to_vec())); tx.commit().await.expect("commit should succeed"); assert_eq!( committed_get(&backend, "ns", b"a").await, Some(b"one".to_vec()) ); } #[tokio::test] async fn rollback_discards_pending_values() { let backend = InMemoryKvBackend::new(); let mut tx = backend .begin_write_transaction() .await .expect("transaction should open"); put(&mut tx, "ns", b"a", b"one").await; tx.rollback().await.expect("rollback should succeed"); assert_eq!(committed_get(&backend, "ns", b"a").await, None); } #[tokio::test] async fn scan_overlays_pending_write_and_delete() { let backend = InMemoryKvBackend::new(); let mut seed = backend .begin_write_transaction() .await .expect("seed transaction should open"); put(&mut seed, "ns", b"a", b"old").await; put(&mut seed, "ns", b"b", b"two").await; seed.commit().await.unwrap(); let mut tx = backend .begin_write_transaction() .await .expect("transaction should open"); put(&mut tx, "ns", b"a", b"new").await; delete(&mut tx, "ns", b"b").await; put(&mut tx, "ns", b"c", b"three").await; let rows = scan( tx.as_mut(), "ns", BackendKvScanRange::Prefix(Vec::new()), None, ) .await; assert_eq!(rows.key(0).expect("key exists"), b"a"); assert_eq!(rows.value(0).expect("value exists"), b"new"); assert_eq!(rows.key(1).expect("key exists"), b"c"); assert_eq!(rows.value(1).expect("value exists"), b"three"); } } ================================================ FILE: packages/engine/tests/support/simulation_test/engine/macro_runtime.rs ================================================ use std::future::Future; use lix_engine::LixError; use lix_engine::{Engine, InitReceipt}; use super::expect_same::{SharedExpectSameRun, SharedExpectSameRunGuard, SimulationAssertions}; use super::kv_backend::{InMemoryKvBackend, KvMap}; use super::mode::{SimulationMode, SimulationOptions}; use super::rebuild_tracked_state::deterministic_timestamp_shuffle_for; use super::simulation::Simulation; /// Runs one matrix entry for `simulation_test!`. /// /// The macro generates one Rust test per mode. `assert_same` coordinates across /// those test functions through shared state keyed by `case_id`. 
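///
/// A hypothetical call site (real cases live in the engine test files). The
/// macro expands it into `kv_roundtrip_base` and
/// `kv_roundtrip_tracked_state_rebuild`, both of which funnel into this
/// function:
///
/// ```ignore
/// simulation_test!(kv_roundtrip, |sim| async move {
///     let engine = sim.boot_engine().await;
///     let session = engine
///         .open_session(sim.main_version_id().to_string())
///         .await
///         .expect("session opens");
///     let session = sim.wrap_session(session, &engine);
///     session
///         .execute("INSERT INTO lix_key_value (key, value) VALUES ('k', 'v')", &[])
///         .await
///         .expect("insert succeeds");
/// });
/// ```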
pub async fn run_single_simulation_test( mode: SimulationMode, options: SimulationOptions, case_id: &str, test_fn: F, ) where F: Fn(Simulation) -> Fut, Fut: Future, { let bootstrap = Bootstrap::create() .await .expect("simulation bootstrap should initialize"); let expect_same = SharedExpectSameRun::new(case_id, mode); let _guard = SharedExpectSameRunGuard::new(expect_same.clone()); let sim = Simulation::from_bootstrap( mode, options, bootstrap.snapshot, bootstrap.receipt, SimulationAssertions::shared(expect_same), ) .await .expect("simulation mode should boot"); test_fn(sim.clone()).await; sim.finish(); } #[derive(Clone)] struct Bootstrap { snapshot: KvMap, receipt: InitReceipt, } impl Bootstrap { async fn create() -> Result { let backend = InMemoryKvBackend::new(); let receipt = Engine::initialize(Box::new(backend.clone())).await?; Ok(Self { snapshot: backend.snapshot(), receipt, }) } } pub(crate) async fn enable_deterministic_mode( engine: &Engine, receipt: &InitReceipt, mode: SimulationMode, ) -> Result<(), LixError> { let timestamp_shuffle = deterministic_timestamp_shuffle_for(mode); let session = engine.open_session(receipt.main_version_id.clone()).await?; session .execute(&deterministic_mode_insert_sql(timestamp_shuffle), &[]) .await?; Ok(()) } fn deterministic_mode_insert_sql(timestamp_shuffle: bool) -> String { format!( "INSERT INTO lix_key_value (key, value, lixcol_global, lixcol_untracked) \ VALUES ('lix_deterministic_mode', \ lix_json('{{\"enabled\":true,\"timestamp_shuffle\":{timestamp_shuffle}}}'), true, true)" ) } #[cfg(test)] mod tests { use super::*; #[test] fn deterministic_mode_sql_carries_timestamp_shuffle_flag() { assert!(deterministic_mode_insert_sql(true).contains("\"timestamp_shuffle\":true")); assert!(deterministic_mode_insert_sql(false).contains("\"timestamp_shuffle\":false")); } } ================================================ FILE: packages/engine/tests/support/simulation_test/engine/mod.rs ================================================ mod expect_same; mod kv_backend; mod macro_runtime; mod mode; mod rebuild_tracked_state; mod simulation; #[allow(unused_imports)] pub use macro_runtime::run_single_simulation_test; #[allow(unused_imports)] pub use mode::{SimulationMode, SimulationOptions}; #[allow(unused_imports)] pub use simulation::{SimSession, Simulation}; ================================================ FILE: packages/engine/tests/support/simulation_test/engine/mode.rs ================================================ /// Runtime mode for the simulation harness. #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub enum SimulationMode { Base, TrackedStateRebuild, } impl SimulationMode { pub fn name(self) -> &'static str { match self { Self::Base => "base", Self::TrackedStateRebuild => "tracked_state_rebuild", } } } /// Options for `simulation_test!`. /// /// Deterministic mode is enabled by default so the base and rebuild runs can be /// compared exactly without per-backend result normalization. 
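///
/// A sketch of opting out through the macro's `options = ...` arm (test name
/// and body are hypothetical):
///
/// ```ignore
/// simulation_test!(
///     nondeterministic_case,
///     options = SimulationOptions { deterministic: false },
///     |sim| async move {
///         let _engine = sim.boot_engine().await;
///     }
/// );
/// ```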
#[derive(Debug, Clone, Copy, PartialEq, Eq)] pub struct SimulationOptions { pub deterministic: bool, } impl Default for SimulationOptions { fn default() -> Self { Self { deterministic: true, } } } #[cfg(test)] mod tests { use super::*; #[test] fn mode_names_are_stable_for_generated_test_names() { assert_eq!(SimulationMode::Base.name(), "base"); assert_eq!( SimulationMode::TrackedStateRebuild.name(), "tracked_state_rebuild" ); } #[test] fn deterministic_mode_is_enabled_by_default() { assert!(SimulationOptions::default().deterministic); } } ================================================ FILE: packages/engine/tests/support/simulation_test/engine/rebuild_tracked_state.rs ================================================ use std::sync::atomic::{AtomicBool, Ordering}; use std::sync::Arc; use lix_engine::Engine; use lix_engine::LixError; use super::mode::SimulationMode; /// Returns whether a simulation mode should shuffle deterministic timestamps. /// /// Rebuild mode intentionally shuffles timestamps so tests do not encode /// assumptions that tracked-state rebuild order and write-time order match. pub(crate) fn deterministic_timestamp_shuffle_for(mode: SimulationMode) -> bool { matches!(mode, SimulationMode::TrackedStateRebuild) } /// Mode-specific read/write hook for tracked-state rebuild simulation. #[derive(Clone)] pub(crate) struct RebuildTrackedStateSimulation { mode: SimulationMode, pending: Arc, } impl RebuildTrackedStateSimulation { pub(crate) fn new(mode: SimulationMode) -> Self { Self { mode, pending: Arc::new(AtomicBool::new(false)), } } pub(crate) fn after_successful_write(&self) { if self.mode == SimulationMode::TrackedStateRebuild { self.pending.store(true, Ordering::SeqCst); } } pub(crate) async fn before_read( &self, engine: &Engine, version_id: &str, ) -> Result<(), LixError> { if self.mode != SimulationMode::TrackedStateRebuild { return Ok(()); } if !self.pending.swap(false, Ordering::SeqCst) { return Ok(()); } engine.rebuild_tracked_state_for_version(version_id).await } #[cfg(test)] fn pending_for_test(&self) -> bool { self.pending.load(Ordering::SeqCst) } } #[cfg(test)] mod tests { use super::*; #[test] fn timestamp_shuffle_is_only_enabled_for_rebuild_mode() { assert!(!deterministic_timestamp_shuffle_for(SimulationMode::Base)); assert!(deterministic_timestamp_shuffle_for( SimulationMode::TrackedStateRebuild )); } #[test] fn successful_write_marks_rebuild_pending_only_in_rebuild_mode() { let base = RebuildTrackedStateSimulation::new(SimulationMode::Base); let rebuild = RebuildTrackedStateSimulation::new(SimulationMode::TrackedStateRebuild); base.after_successful_write(); rebuild.after_successful_write(); assert!(!base.pending_for_test()); assert!(rebuild.pending_for_test()); } } ================================================ FILE: packages/engine/tests/support/simulation_test/engine/simulation.rs ================================================ use lix_engine::{Backend, LixError, Value}; use lix_engine::{ CreateVersionOptions, CreateVersionReceipt, Engine, ExecuteResult, InitReceipt, MergeVersionOptions, MergeVersionPreview, MergeVersionPreviewOptions, MergeVersionReceipt, SessionContext, SwitchVersionOptions, SwitchVersionReceipt, }; use super::expect_same::SimulationAssertions; use super::kv_backend::InMemoryKvBackend; use super::mode::{SimulationMode, SimulationOptions}; use super::rebuild_tracked_state::RebuildTrackedStateSimulation; /// Per-mode handle exposed to tests using `simulation_test!`. 
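///
/// A sketch of a typical test body using this handle (assumes the `sim`
/// binding provided by `simulation_test!`):
///
/// ```ignore
/// let engine = sim.boot_engine().await;
/// let session = engine
///     .open_session(sim.main_version_id().to_string())
///     .await
///     .expect("session opens");
/// // ... write through `sim.wrap_session(session, &engine)` ...
///
/// // Simulate closing and reopening the repository: only state persisted in
/// // the backend snapshot survives the reboot.
/// let rebooted = sim
///     .reboot_engine_from_current_snapshot()
///     .await
///     .expect("reboot succeeds");
/// ```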
#[derive(Clone)] pub struct Simulation { mode: SimulationMode, #[allow(dead_code)] backend: InMemoryKvBackend, engine: Engine, receipt: InitReceipt, rebuild_tracked_state: RebuildTrackedStateSimulation, assertions: SimulationAssertions, } #[allow(dead_code)] impl Simulation { pub(super) async fn from_bootstrap( mode: SimulationMode, options: SimulationOptions, snapshot: super::kv_backend::KvMap, receipt: InitReceipt, assertions: SimulationAssertions, ) -> Result { let backend = InMemoryKvBackend::from_snapshot(snapshot); let engine = Engine::new(Box::new(backend.clone())).await?; if options.deterministic { super::macro_runtime::enable_deterministic_mode(&engine, &receipt, mode).await?; } assertions.start_mode(mode); Ok(Self { mode, backend, engine, receipt, rebuild_tracked_state: RebuildTrackedStateSimulation::new(mode), assertions, }) } /// Returns the normal engine runtime for this simulation run. pub async fn boot_engine(&self) -> Engine { self.engine.clone() } /// Boots a fresh engine from the current backend snapshot. /// /// This is the simulation equivalent of closing the app and reopening the /// same repository. It lets tests distinguish persisted workspace state /// from in-memory session state. pub async fn reboot_engine_from_current_snapshot(&self) -> Result { Engine::new(Box::new(InMemoryKvBackend::from_snapshot( self.backend.snapshot(), ))) .await } /// Wraps a normal engine session with simulation hooks. pub fn wrap_session(&self, session: SessionContext, engine: &Engine) -> SimSession { SimSession { sim: self.clone(), engine: engine.clone(), session, } } /// Returns a fresh, empty backend for lifecycle tests. pub fn uninitialized_backend(&self) -> Box { Box::new(InMemoryKvBackend::new()) } /// Returns the initialized Lix id. pub fn lix_id(&self) -> &str { &self.receipt.lix_id } /// Returns the initial commit id. pub fn initial_commit_id(&self) -> &str { &self.receipt.initial_commit_id } /// Returns the initialized main version id. pub fn main_version_id(&self) -> &str { &self.receipt.main_version_id } pub(crate) fn finish(&self) { self.assertions.finish_mode(self.mode); } } /// Session wrapper that injects simulation behavior around normal execution. 
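///
/// A small behavioral sketch (statements and bindings are hypothetical):
/// successful writes mark tracked state as dirty, and the next read in
/// `TrackedStateRebuild` mode calls `rebuild_tracked_state_for_version` for
/// the active version before executing the query:
///
/// ```ignore
/// let session = sim.wrap_session(raw_session, &engine);
///
/// // Classified as a write: flags a pending rebuild in rebuild mode.
/// session
///     .execute("INSERT INTO lix_key_value (key, value) VALUES ('k', 'v')", &[])
///     .await
///     .expect("insert succeeds");
///
/// // Classified as a read: rebuilds tracked state first in rebuild mode.
/// let rows = session
///     .execute("SELECT value FROM lix_key_value WHERE key = 'k'", &[])
///     .await
///     .expect("select succeeds");
/// ```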
pub struct SimSession { sim: Simulation, engine: Engine, session: SessionContext, } #[allow(dead_code)] impl SimSession { pub fn wrap_session(&self, session: SessionContext, engine: &Engine) -> SimSession { SimSession { sim: self.sim.clone(), engine: engine.clone(), session, } } pub async fn active_version_id(&self) -> Result { self.session.active_version_id().await } pub async fn execute(&self, sql: &str, params: &[Value]) -> Result { match classify_statement(sql) { StatementKind::Read => { let active_version_id = self.session.active_version_id().await?; self.sim .rebuild_tracked_state .before_read(&self.engine, &active_version_id) .await?; self.session.execute(sql, params).await } StatementKind::Write => { let result = self.session.execute(sql, params).await; if result.is_ok() { self.sim.rebuild_tracked_state.after_successful_write(); } result } StatementKind::Utility => self.session.execute(sql, params).await, } } pub async fn create_version( &self, options: CreateVersionOptions, ) -> Result { let result = self.session.create_version(options).await; if result.is_ok() { self.sim.rebuild_tracked_state.after_successful_write(); } result } pub async fn merge_version( &self, options: MergeVersionOptions, ) -> Result { let result = self.session.merge_version(options).await; if result.is_ok() { self.sim.rebuild_tracked_state.after_successful_write(); } result } pub async fn merge_version_preview( &self, options: MergeVersionPreviewOptions, ) -> Result { self.session.merge_version_preview(options).await } pub async fn switch_version( &self, options: SwitchVersionOptions, ) -> Result<(SimSession, SwitchVersionReceipt), LixError> { let (session, receipt) = self.session.switch_version(options).await?; Ok(( SimSession { sim: self.sim.clone(), engine: self.engine.clone(), session, }, receipt, )) } } #[derive(Debug, Clone, Copy, PartialEq, Eq)] enum StatementKind { Read, Write, Utility, } fn classify_statement(sql: &str) -> StatementKind { let keyword = sql .trim_start() .split(|ch: char| ch.is_whitespace() || ch == '(') .next() .unwrap_or("") .to_ascii_uppercase(); match keyword.as_str() { "SELECT" | "WITH" => StatementKind::Read, "INSERT" | "UPDATE" | "DELETE" => StatementKind::Write, _ => StatementKind::Utility, } } #[cfg(test)] mod tests { use super::*; #[test] fn classify_statement_splits_reads_writes_and_utility() { assert_eq!(classify_statement("SELECT 1"), StatementKind::Read); assert_eq!( classify_statement(" WITH x AS (...) 
SELECT 1"), StatementKind::Read ); assert_eq!( classify_statement("INSERT INTO t VALUES (1)"), StatementKind::Write ); assert_eq!( classify_statement("UPDATE t SET a = 1"), StatementKind::Write ); assert_eq!(classify_statement("DELETE FROM t"), StatementKind::Write); assert_eq!( classify_statement("EXPLAIN SELECT 1"), StatementKind::Utility ); } } ================================================ FILE: packages/engine/tests/support/simulation_test/mod.rs ================================================ pub mod engine; ================================================ FILE: packages/engine/tests/tmp_lix_key_value_amplification.rs ================================================ use std::collections::BTreeMap; use std::sync::{Arc, Mutex}; use async_trait::async_trait; use lix_engine::{ Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRequest, BackendKvValueBatch, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction, CreateVersionOptions, Engine, LixError, SessionContext, Value, }; #[allow(dead_code)] #[path = "support/simulation_test/engine/kv_backend.rs"] mod kv_backend; use kv_backend::{InMemoryKvBackend, KvMap}; #[derive(Debug, Clone, Default)] struct AmplificationCounts { begin_read_transactions: usize, begin_write_transactions: usize, commits: usize, rollbacks: usize, write_kv_batch_calls: usize, puts: usize, deletes: usize, write_bytes: usize, get_values_calls: usize, get_values_keys: usize, exists_many_calls: usize, exists_many_keys: usize, scan_keys_calls: usize, scan_keys_rows: usize, scan_values_calls: usize, scan_values_rows: usize, scan_entries_calls: usize, scan_entries_rows: usize, puts_by_namespace: BTreeMap, deletes_by_namespace: BTreeMap, bytes_by_namespace: BTreeMap, } impl AmplificationCounts { fn record_write_batch(&mut self, batch: &BackendKvWriteBatch) { self.write_kv_batch_calls += 1; for group in &batch.groups { let namespace = group.namespace().to_string(); for index in 0..group.put_count() { let Some(key) = group.put_key(index) else { continue; }; let Some(value) = group.put_value(index) else { continue; }; self.puts += 1; self.write_bytes += key.len() + value.len(); *self.puts_by_namespace.entry(namespace.clone()).or_default() += 1; *self .bytes_by_namespace .entry(namespace.clone()) .or_default() += key.len() + value.len(); } for index in 0..group.delete_count() { let Some(key) = group.delete_key(index) else { continue; }; self.deletes += 1; self.write_bytes += key.len(); *self .deletes_by_namespace .entry(namespace.clone()) .or_default() += 1; *self .bytes_by_namespace .entry(namespace.clone()) .or_default() += key.len(); } } } fn read_calls(&self) -> usize { self.get_values_calls + self.exists_many_calls + self.scan_keys_calls + self.scan_values_calls + self.scan_entries_calls } fn read_items(&self) -> usize { self.get_values_keys + self.exists_many_keys + self.scan_keys_rows + self.scan_values_rows + self.scan_entries_rows } fn write_mutations(&self) -> usize { self.puts + self.deletes } fn puts_in(&self, namespace: &str) -> usize { self.puts_by_namespace.get(namespace).copied().unwrap_or(0) } fn deletes_in(&self, namespace: &str) -> usize { self.deletes_by_namespace .get(namespace) .copied() .unwrap_or(0) } fn bytes_in(&self, namespace: &str) -> usize { self.bytes_by_namespace.get(namespace).copied().unwrap_or(0) } } #[derive(Clone, Default)] struct CountingBackend { inner: InMemoryKvBackend, counts: Arc>, } impl CountingBackend { fn reset_counts(&self) 
{ *self.counts.lock().expect("amplification counts lock") = AmplificationCounts::default(); } fn counts(&self) -> AmplificationCounts { self.counts .lock() .expect("amplification counts lock") .clone() } fn snapshot(&self) -> KvMap { self.inner.snapshot() } } #[derive(Debug, Clone, Default)] struct StorageAmplification { before_entries: usize, after_entries: usize, before_key_value_bytes: usize, after_key_value_bytes: usize, before_namespace_key_value_bytes: usize, after_namespace_key_value_bytes: usize, added_entries: usize, updated_entries: usize, removed_entries: usize, added_key_value_bytes: usize, updated_before_key_value_bytes: usize, updated_after_key_value_bytes: usize, removed_key_value_bytes: usize, added_namespace_key_value_bytes: usize, updated_before_namespace_key_value_bytes: usize, updated_after_namespace_key_value_bytes: usize, removed_namespace_key_value_bytes: usize, by_namespace: BTreeMap, } #[derive(Debug, Clone, Default)] struct StorageNamespaceAmplification { added_entries: usize, updated_entries: usize, removed_entries: usize, added_key_value_bytes: usize, updated_before_key_value_bytes: usize, updated_after_key_value_bytes: usize, removed_key_value_bytes: usize, added_namespace_key_value_bytes: usize, updated_before_namespace_key_value_bytes: usize, updated_after_namespace_key_value_bytes: usize, removed_namespace_key_value_bytes: usize, } impl StorageAmplification { fn from_snapshots(before: &KvMap, after: &KvMap) -> Self { let mut result = Self { before_entries: before.len(), after_entries: after.len(), before_key_value_bytes: snapshot_key_value_bytes(before), after_key_value_bytes: snapshot_key_value_bytes(after), before_namespace_key_value_bytes: snapshot_namespace_key_value_bytes(before), after_namespace_key_value_bytes: snapshot_namespace_key_value_bytes(after), ..Self::default() }; for (key, after_value) in after { match before.get(key) { None => { result.added_entries += 1; result.added_key_value_bytes += key_value_bytes(key, after_value); result.added_namespace_key_value_bytes += namespace_key_value_bytes(key, after_value); let namespace = result.by_namespace.entry(key.0.clone()).or_default(); namespace.added_entries += 1; namespace.added_key_value_bytes += key_value_bytes(key, after_value); namespace.added_namespace_key_value_bytes += namespace_key_value_bytes(key, after_value); } Some(before_value) if before_value != after_value => { result.updated_entries += 1; result.updated_before_key_value_bytes += key_value_bytes(key, before_value); result.updated_after_key_value_bytes += key_value_bytes(key, after_value); result.updated_before_namespace_key_value_bytes += namespace_key_value_bytes(key, before_value); result.updated_after_namespace_key_value_bytes += namespace_key_value_bytes(key, after_value); let namespace = result.by_namespace.entry(key.0.clone()).or_default(); namespace.updated_entries += 1; namespace.updated_before_key_value_bytes += key_value_bytes(key, before_value); namespace.updated_after_key_value_bytes += key_value_bytes(key, after_value); namespace.updated_before_namespace_key_value_bytes += namespace_key_value_bytes(key, before_value); namespace.updated_after_namespace_key_value_bytes += namespace_key_value_bytes(key, after_value); } Some(_) => {} } } for (key, before_value) in before { if !after.contains_key(key) { result.removed_entries += 1; result.removed_key_value_bytes += key_value_bytes(key, before_value); result.removed_namespace_key_value_bytes += namespace_key_value_bytes(key, before_value); let namespace = 
result.by_namespace.entry(key.0.clone()).or_default(); namespace.removed_entries += 1; namespace.removed_key_value_bytes += key_value_bytes(key, before_value); namespace.removed_namespace_key_value_bytes += namespace_key_value_bytes(key, before_value); } } result } fn touched_entries(&self) -> usize { self.added_entries + self.updated_entries + self.removed_entries } fn changed_after_key_value_bytes(&self) -> usize { self.added_key_value_bytes + self.updated_after_key_value_bytes } fn changed_after_namespace_key_value_bytes(&self) -> usize { self.added_namespace_key_value_bytes + self.updated_after_namespace_key_value_bytes } fn net_key_value_bytes_delta(&self) -> isize { self.after_key_value_bytes as isize - self.before_key_value_bytes as isize } fn net_namespace_key_value_bytes_delta(&self) -> isize { self.after_namespace_key_value_bytes as isize - self.before_namespace_key_value_bytes as isize } } impl StorageNamespaceAmplification { fn touched_entries(&self) -> usize { self.added_entries + self.updated_entries + self.removed_entries } fn changed_after_key_value_bytes(&self) -> usize { self.added_key_value_bytes + self.updated_after_key_value_bytes } fn changed_after_namespace_key_value_bytes(&self) -> usize { self.added_namespace_key_value_bytes + self.updated_after_namespace_key_value_bytes } fn net_key_value_bytes_delta(&self) -> isize { (self.added_key_value_bytes + self.updated_after_key_value_bytes) as isize - (self.removed_key_value_bytes + self.updated_before_key_value_bytes) as isize } fn net_namespace_key_value_bytes_delta(&self) -> isize { (self.added_namespace_key_value_bytes + self.updated_after_namespace_key_value_bytes) as isize - (self.removed_namespace_key_value_bytes + self.updated_before_namespace_key_value_bytes) as isize } } fn storage_totals_for( storage: &StorageAmplification, namespaces: &[&str], ) -> StorageNamespaceAmplification { let mut totals = StorageNamespaceAmplification::default(); for namespace in namespaces { let Some(item) = storage.by_namespace.get(*namespace) else { continue; }; totals.added_entries += item.added_entries; totals.updated_entries += item.updated_entries; totals.removed_entries += item.removed_entries; totals.added_key_value_bytes += item.added_key_value_bytes; totals.updated_before_key_value_bytes += item.updated_before_key_value_bytes; totals.updated_after_key_value_bytes += item.updated_after_key_value_bytes; totals.removed_key_value_bytes += item.removed_key_value_bytes; totals.added_namespace_key_value_bytes += item.added_namespace_key_value_bytes; totals.updated_before_namespace_key_value_bytes += item.updated_before_namespace_key_value_bytes; totals.updated_after_namespace_key_value_bytes += item.updated_after_namespace_key_value_bytes; totals.removed_namespace_key_value_bytes += item.removed_namespace_key_value_bytes; } totals } fn print_storage_class_row( rows: usize, category: &str, namespaces: &[&str], totals: &StorageNamespaceAmplification, ) { println!( "AMPLIFICATION_CATEGORY rows={rows} category={category} namespaces={} \ added_entries={} updated_entries={} removed_entries={} touched_entries={} \ net_key_value_bytes_delta={} changed_after_key_value_bytes={} \ net_namespace_key_value_bytes_delta={} changed_after_namespace_key_value_bytes={} \ touched_entries_per_row={:.3} net_key_value_bytes_delta_per_row={:.1} \ changed_after_key_value_bytes_per_row={:.1} \ net_namespace_key_value_bytes_delta_per_row={:.1} \ changed_after_namespace_key_value_bytes_per_row={:.1}", namespaces.join(","), totals.added_entries, 
totals.updated_entries, totals.removed_entries, totals.touched_entries(), totals.net_key_value_bytes_delta(), totals.changed_after_key_value_bytes(), totals.net_namespace_key_value_bytes_delta(), totals.changed_after_namespace_key_value_bytes(), totals.touched_entries() as f64 / rows as f64, totals.net_key_value_bytes_delta() as f64 / rows as f64, totals.changed_after_key_value_bytes() as f64 / rows as f64, totals.net_namespace_key_value_bytes_delta() as f64 / rows as f64, totals.changed_after_namespace_key_value_bytes() as f64 / rows as f64, ); } #[derive(Debug, Clone)] struct AmplificationRun { counts: AmplificationCounts, storage: StorageAmplification, } fn snapshot_key_value_bytes(snapshot: &KvMap) -> usize { snapshot .iter() .map(|(key, value)| key_value_bytes(key, value)) .sum() } fn snapshot_namespace_key_value_bytes(snapshot: &KvMap) -> usize { snapshot .iter() .map(|(key, value)| namespace_key_value_bytes(key, value)) .sum() } fn key_value_bytes(key: &(String, Vec), value: &[u8]) -> usize { key.1.len() + value.len() } fn namespace_key_value_bytes(key: &(String, Vec), value: &[u8]) -> usize { key.0.len() + key.1.len() + value.len() } async fn setup_counting_engine() -> (CountingBackend, Engine, String) { let backend = CountingBackend::default(); let receipt = Engine::initialize(Box::new(backend.clone())) .await .expect("engine should initialize"); backend.reset_counts(); let engine = Engine::new(Box::new(backend.clone())) .await .expect("initialized engine should open"); backend.reset_counts(); (backend, engine, receipt.main_version_id) } async fn open_main_session(engine: &Engine, main_version_id: &str) -> SessionContext { engine .open_session(main_version_id.to_string()) .await .expect("main session should open") } async fn create_branch(engine: &Engine, main: &SessionContext, id: &str) -> SessionContext { let receipt = main .create_version(CreateVersionOptions { id: Some(id.to_string()), name: format!("Amplification {id}"), from_commit_id: None, }) .await .expect("branch version should be created"); engine .open_session(receipt.id) .await .expect("branch session should open") } fn start_measurement(backend: &CountingBackend) -> KvMap { backend.reset_counts(); backend.snapshot() } fn finish_measurement(backend: &CountingBackend, before: KvMap) -> AmplificationRun { let after = backend.snapshot(); AmplificationRun { counts: backend.counts(), storage: StorageAmplification::from_snapshots(&before, &after), } } #[async_trait] impl Backend for CountingBackend { async fn begin_read_transaction( &self, ) -> Result, LixError> { self.counts .lock() .expect("amplification counts lock") .begin_read_transactions += 1; Ok(Box::new(CountingReadTransaction { inner: self.inner.begin_read_transaction().await?, counts: Arc::clone(&self.counts), })) } async fn begin_write_transaction( &self, ) -> Result, LixError> { self.counts .lock() .expect("amplification counts lock") .begin_write_transactions += 1; Ok(Box::new(CountingWriteTransaction { inner: self.inner.begin_write_transaction().await?, counts: Arc::clone(&self.counts), })) } } struct CountingReadTransaction { inner: Box, counts: Arc>, } #[async_trait] impl BackendReadTransaction for CountingReadTransaction { async fn get_values( &mut self, request: BackendKvGetRequest, ) -> Result { record_get_values(&self.counts, &request); self.inner.get_values(request).await } async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result { record_exists_many(&self.counts, &request); self.inner.exists_many(request).await } async fn scan_keys( 
&mut self, request: BackendKvScanRequest, ) -> Result { let result = self.inner.scan_keys(request).await?; let mut counts = self.counts.lock().expect("amplification counts lock"); counts.scan_keys_calls += 1; counts.scan_keys_rows += result.keys.len(); Ok(result) } async fn scan_values( &mut self, request: BackendKvScanRequest, ) -> Result { let result = self.inner.scan_values(request).await?; let mut counts = self.counts.lock().expect("amplification counts lock"); counts.scan_values_calls += 1; counts.scan_values_rows += result.values.len(); Ok(result) } async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> Result { let result = self.inner.scan_entries(request).await?; let mut counts = self.counts.lock().expect("amplification counts lock"); counts.scan_entries_calls += 1; counts.scan_entries_rows += result.keys.len(); Ok(result) } async fn rollback(self: Box) -> Result<(), LixError> { self.counts .lock() .expect("amplification counts lock") .rollbacks += 1; self.inner.rollback().await } } struct CountingWriteTransaction { inner: Box, counts: Arc>, } #[async_trait] impl BackendReadTransaction for CountingWriteTransaction { async fn get_values( &mut self, request: BackendKvGetRequest, ) -> Result { record_get_values(&self.counts, &request); self.inner.get_values(request).await } async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result { record_exists_many(&self.counts, &request); self.inner.exists_many(request).await } async fn scan_keys( &mut self, request: BackendKvScanRequest, ) -> Result { let result = self.inner.scan_keys(request).await?; let mut counts = self.counts.lock().expect("amplification counts lock"); counts.scan_keys_calls += 1; counts.scan_keys_rows += result.keys.len(); Ok(result) } async fn scan_values( &mut self, request: BackendKvScanRequest, ) -> Result { let result = self.inner.scan_values(request).await?; let mut counts = self.counts.lock().expect("amplification counts lock"); counts.scan_values_calls += 1; counts.scan_values_rows += result.values.len(); Ok(result) } async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> Result { let result = self.inner.scan_entries(request).await?; let mut counts = self.counts.lock().expect("amplification counts lock"); counts.scan_entries_calls += 1; counts.scan_entries_rows += result.keys.len(); Ok(result) } async fn rollback(self: Box) -> Result<(), LixError> { self.counts .lock() .expect("amplification counts lock") .rollbacks += 1; self.inner.rollback().await } } #[async_trait] impl BackendWriteTransaction for CountingWriteTransaction { async fn write_kv_batch( &mut self, batch: BackendKvWriteBatch, ) -> Result { self.counts .lock() .expect("amplification counts lock") .record_write_batch(&batch); self.inner.write_kv_batch(batch).await } async fn commit(self: Box) -> Result<(), LixError> { self.counts .lock() .expect("amplification counts lock") .commits += 1; self.inner.commit().await } } fn record_get_values(counts: &Mutex, request: &BackendKvGetRequest) { let mut counts = counts.lock().expect("amplification counts lock"); counts.get_values_calls += 1; counts.get_values_keys += request .groups .iter() .map(|group| group.keys.len()) .sum::(); } fn record_exists_many(counts: &Mutex, request: &BackendKvGetRequest) { let mut counts = counts.lock().expect("amplification counts lock"); counts.exists_many_calls += 1; counts.exists_many_keys += request .groups .iter() .map(|group| group.keys.len()) .sum::(); } fn insert_sql(rows: usize, value_bytes: usize) -> String { let values = 
(0..rows) .map(|index| { format!( "('amplification-key-{index:08}', '{}')", "v".repeat(value_bytes) ) }) .collect::>() .join(", "); format!("INSERT INTO lix_key_value (key, value) VALUES {values}") } fn update_key_value_sql(rows: usize) -> String { let keys = (0..rows) .map(|index| format!("'amplification-key-{index:08}'")) .collect::>() .join(", "); format!("UPDATE lix_key_value SET value = 'branch-updated' WHERE key IN ({keys})") } fn insert_lix_file_descriptor_sql(rows: usize) -> String { let values = (0..rows) .map(|index| format!("('amplification-file-{index:08}', NULL, 'file-{index:08}.bin')")) .collect::>() .join(", "); format!("INSERT INTO lix_file (id, directory_id, name) VALUES {values}") } fn update_lix_file_hidden_sql(rows: usize) -> String { let ids = (0..rows) .map(|index| format!("'amplification-file-{index:08}'")) .collect::>() .join(", "); format!("UPDATE lix_file SET hidden = true WHERE id IN ({ids})") } async fn run_insert(rows: usize, value_bytes: usize) -> AmplificationRun { let (backend, engine, main_version_id) = setup_counting_engine().await; let session = open_main_session(&engine, &main_version_id).await; let storage_before = start_measurement(&backend); session .execute(&insert_sql(rows, value_bytes), &[]) .await .expect("lix_key_value insert should succeed"); finish_measurement(&backend, storage_before) } async fn run_lix_file_insert_data(file_bytes: usize) -> AmplificationRun { let (backend, engine, main_version_id) = setup_counting_engine().await; let session = open_main_session(&engine, &main_version_id).await; let storage_before = start_measurement(&backend); let params = [Value::Blob(synthetic_file_bytes(file_bytes))]; session .execute( "INSERT INTO lix_file (id, path, data) \ VALUES ('amplification-video-file', '/video.bin', $1)", ¶ms, ) .await .expect("lix_file data insert should succeed"); finish_measurement(&backend, storage_before) } async fn run_branch_from_head_only() -> AmplificationRun { let (backend, engine, main_version_id) = setup_counting_engine().await; let main = open_main_session(&engine, &main_version_id).await; let before = start_measurement(&backend); let _branch = create_branch(&engine, &main, "amplification-branch-only").await; finish_measurement(&backend, before) } async fn run_key_value_branch_insert() -> AmplificationRun { let (backend, engine, main_version_id) = setup_counting_engine().await; let main = open_main_session(&engine, &main_version_id).await; let branch = create_branch(&engine, &main, "amplification-kv-insert").await; let before = start_measurement(&backend); branch .execute( "INSERT INTO lix_key_value (key, value) \ VALUES ('branch-insert-key', 'branch-value')", &[], ) .await .expect("branch key-value insert should succeed"); finish_measurement(&backend, before) } async fn run_key_value_branch_update(base_rows: usize, update_rows: usize) -> AmplificationRun { let (backend, engine, main_version_id) = setup_counting_engine().await; let main = open_main_session(&engine, &main_version_id).await; main.execute(&insert_sql(base_rows, 8), &[]) .await .expect("base key-values should insert"); let branch = create_branch( &engine, &main, &format!("amplification-kv-update-{update_rows}"), ) .await; let before = start_measurement(&backend); branch .execute(&update_key_value_sql(update_rows), &[]) .await .expect("branch key-value update should succeed"); finish_measurement(&backend, before) } async fn run_lix_file_branch_insert(file_bytes: usize) -> AmplificationRun { let (backend, engine, main_version_id) = 
setup_counting_engine().await; let main = open_main_session(&engine, &main_version_id).await; let branch = create_branch(&engine, &main, "amplification-file-insert").await; let before = start_measurement(&backend); let params = [Value::Blob(synthetic_file_bytes(file_bytes))]; branch .execute( "INSERT INTO lix_file (id, path, data) \ VALUES ('branch-file', '/branch-file.bin', $1)", ¶ms, ) .await .expect("branch lix_file insert should succeed"); finish_measurement(&backend, before) } async fn run_lix_file_branch_update_data(base_rows: usize, file_bytes: usize) -> AmplificationRun { let (backend, engine, main_version_id) = setup_counting_engine().await; let main = open_main_session(&engine, &main_version_id).await; main.execute(&insert_lix_file_descriptor_sql(base_rows), &[]) .await .expect("base lix_file descriptors should insert"); let branch = create_branch(&engine, &main, "amplification-file-update-data").await; let before = start_measurement(&backend); let params = [Value::Blob(synthetic_file_bytes(file_bytes))]; branch .execute( "UPDATE lix_file SET data = $1 \ WHERE id = 'amplification-file-00000000'", ¶ms, ) .await .expect("branch lix_file data update should succeed"); finish_measurement(&backend, before) } async fn run_lix_file_branch_rename(base_rows: usize) -> AmplificationRun { let (backend, engine, main_version_id) = setup_counting_engine().await; let main = open_main_session(&engine, &main_version_id).await; main.execute(&insert_lix_file_descriptor_sql(base_rows), &[]) .await .expect("base lix_file descriptors should insert"); let branch = create_branch(&engine, &main, "amplification-file-rename").await; let before = start_measurement(&backend); branch .execute( "UPDATE lix_file SET path = '/file-00000000-renamed.bin' \ WHERE id = 'amplification-file-00000000'", &[], ) .await .expect("branch lix_file rename should succeed"); finish_measurement(&backend, before) } async fn run_lix_file_branch_update_hidden( base_rows: usize, update_rows: usize, ) -> AmplificationRun { let (backend, engine, main_version_id) = setup_counting_engine().await; let main = open_main_session(&engine, &main_version_id).await; main.execute(&insert_lix_file_descriptor_sql(base_rows), &[]) .await .expect("base lix_file descriptors should insert"); let branch = create_branch(&engine, &main, "amplification-file-update-hidden").await; let before = start_measurement(&backend); branch .execute(&update_lix_file_hidden_sql(update_rows), &[]) .await .expect("branch lix_file hidden update should succeed"); finish_measurement(&backend, before) } fn synthetic_file_bytes(size: usize) -> Vec { let mut bytes = vec![0u8; size]; let mut state = 0x9e37_79b9_7f4a_7c15u64; for (index, byte) in bytes.iter_mut().enumerate() { state ^= state >> 12; state ^= state << 25; state ^= state >> 27; state = state.wrapping_add(index as u64); *byte = (state.wrapping_mul(0x2545_f491_4f6c_dd1d) >> 56) as u8; } bytes } fn stress_file_bytes_from_env() -> usize { std::env::var("LIX_FILE_STRESS_BYTES") .ok() .and_then(|value| parse_size_bytes(&value)) .unwrap_or(100 * 1024 * 1024) } fn parse_size_bytes(value: &str) -> Option { let trimmed = value.trim(); if trimmed.is_empty() { return None; } let lowercase = trimmed.to_ascii_lowercase(); let (number, multiplier) = if let Some(number) = lowercase.strip_suffix("gib") { (number, 1024usize * 1024 * 1024) } else if let Some(number) = lowercase.strip_suffix("gb") { (number, 1000usize * 1000 * 1000) } else if let Some(number) = lowercase.strip_suffix("mib") { (number, 1024usize * 1024) } else if let 
Some(number) = lowercase.strip_suffix("mb") { (number, 1000usize * 1000) } else if let Some(number) = lowercase.strip_suffix("kib") { (number, 1024usize) } else if let Some(number) = lowercase.strip_suffix("kb") { (number, 1000usize) } else { (trimmed, 1usize) }; number.trim().parse::().ok()?.checked_mul(multiplier) } fn print_amplification_row(rows: usize, value_bytes: usize, run: &AmplificationRun) { let counts = &run.counts; print_category_rows(rows, value_bytes, run); println!( "AMPLIFICATION rows={rows} value_bytes={value_bytes} read_calls={} read_items={} \ get_values_calls={} get_values_keys={} exists_many_calls={} exists_many_keys={} \ scan_calls={} scan_rows={} write_batches={} puts={} deletes={} write_mutations={} \ write_bytes={} read_calls_per_row={:.3} read_items_per_row={:.3} \ write_mutations_per_row={:.3} write_bytes_per_row={:.1}", counts.read_calls(), counts.read_items(), counts.get_values_calls, counts.get_values_keys, counts.exists_many_calls, counts.exists_many_keys, counts.scan_keys_calls + counts.scan_values_calls + counts.scan_entries_calls, counts.scan_keys_rows + counts.scan_values_rows + counts.scan_entries_rows, counts.write_kv_batch_calls, counts.puts, counts.deletes, counts.write_mutations(), counts.write_bytes, counts.read_calls() as f64 / rows as f64, counts.read_items() as f64 / rows as f64, counts.write_mutations() as f64 / rows as f64, counts.write_bytes as f64 / rows as f64, ); for namespace in counts .puts_by_namespace .keys() .chain(counts.deletes_by_namespace.keys()) .chain(counts.bytes_by_namespace.keys()) .collect::>() { println!( "AMPLIFICATION_NAMESPACE rows={rows} namespace={} puts={} deletes={} bytes={}", namespace, counts .puts_by_namespace .get(namespace) .copied() .unwrap_or(0), counts .deletes_by_namespace .get(namespace) .copied() .unwrap_or(0), counts .bytes_by_namespace .get(namespace) .copied() .unwrap_or(0), ); } } fn print_category_rows(rows: usize, value_bytes: usize, run: &AmplificationRun) { let counts = &run.counts; let storage = &run.storage; let canonical_changelog_row_namespaces = ["changelog.change", "changelog.change_pack"]; let canonical_commit_pack_namespaces = ["commit_record", "change_record_pack", "change_ref_pack"]; let canonical_commit_store_namespaces = [ "commit_store.commit", "commit_store.change_pack", "commit_store.membership_pack", ]; let canonical_storage_namespaces = [ "changelog.change", "changelog.change_pack", "commit_record", "change_record_pack", "change_ref_pack", "commit_store.commit", "commit_store.change_pack", "commit_store.membership_pack", ]; let index_storage_namespaces = [ "tracked_state.tree.chunk", "tracked_state.tree.root", "tracked_state.tree.root.by_file", "tracked_state.delta_pack", "change_id_index", ]; let payload_storage_namespaces = [ "json_store.json", "json_store.json_chunk", "binary_cas.manifest", "binary_cas.manifest_chunk", "binary_cas.chunk", ]; let sidecar_storage_namespaces = ["untracked_state.row"]; let canonical_storage = storage_totals_for(storage, &canonical_storage_namespaces); let canonical_changelog_row_storage = storage_totals_for(storage, &canonical_changelog_row_namespaces); let canonical_commit_pack_storage = storage_totals_for(storage, &canonical_commit_pack_namespaces); let canonical_commit_store_storage = storage_totals_for(storage, &canonical_commit_store_namespaces); let index_storage = storage_totals_for(storage, &index_storage_namespaces); let payload_storage = storage_totals_for(storage, &payload_storage_namespaces); let sidecar_storage = 
storage_totals_for(storage, &sidecar_storage_namespaces); let index_puts = counts.puts_in("tracked_state.tree.chunk") + counts.puts_in("tracked_state.tree.root") + counts.puts_in("tracked_state.tree.root.by_file") + counts.puts_in("tracked_state.delta_pack") + counts.puts_in("change_id_index"); let index_bytes = counts.bytes_in("tracked_state.tree.chunk") + counts.bytes_in("tracked_state.tree.root") + counts.bytes_in("tracked_state.tree.root.by_file") + counts.bytes_in("tracked_state.delta_pack") + counts.bytes_in("change_id_index"); let payload_puts: usize = payload_storage_namespaces .iter() .map(|namespace| counts.puts_in(namespace)) .sum(); let payload_bytes: usize = payload_storage_namespaces .iter() .map(|namespace| counts.bytes_in(namespace)) .sum(); let logical_value_bytes = rows.saturating_mul(value_bytes); let scan_calls = counts.scan_keys_calls + counts.scan_values_calls + counts.scan_entries_calls; let scan_rows = counts.scan_keys_rows + counts.scan_values_rows + counts.scan_entries_rows; let changelog_encoded_objects = counts.puts_in("changelog.change") + counts.puts_in("commit_record") + counts.puts_in("change_record_pack") + counts.puts_in("change_ref_pack") + counts.puts_in("commit_store.commit") + counts.puts_in("commit_store.change_pack") + counts.puts_in("commit_store.membership_pack"); let tracked_encoded_objects = index_puts; let sidecar_encoded_objects = counts.puts_in("untracked_state.row"); println!( "AMPLIFICATION_CATEGORY rows={rows} category=row logical_rows={rows} \ physical_put_rows={} physical_delete_rows={} physical_row_mutations={} \ row_mutations_per_logical_row={:.3}", counts.puts, counts.deletes, counts.write_mutations(), counts.write_mutations() as f64 / rows as f64, ); println!( "AMPLIFICATION_CATEGORY rows={rows} category=write write_transactions={} commits={} \ write_batches={} puts={} deletes={} write_mutations={} write_bytes={} \ write_mutations_per_row={:.3} write_bytes_per_row={:.1}", counts.begin_write_transactions, counts.commits, counts.write_kv_batch_calls, counts.puts, counts.deletes, counts.write_mutations(), counts.write_bytes, counts.write_mutations() as f64 / rows as f64, counts.write_bytes as f64 / rows as f64, ); println!( "AMPLIFICATION_CATEGORY rows={rows} category=storage before_entries={} after_entries={} \ added_entries={} updated_entries={} removed_entries={} touched_entries={} \ before_key_value_bytes={} after_key_value_bytes={} net_key_value_bytes_delta={} \ changed_after_key_value_bytes={} before_namespace_key_value_bytes={} \ after_namespace_key_value_bytes={} net_namespace_key_value_bytes_delta={} \ changed_after_namespace_key_value_bytes={} touched_entries_per_row={:.3} \ net_key_value_bytes_delta_per_row={:.1} changed_after_key_value_bytes_per_row={:.1} \ net_namespace_key_value_bytes_delta_per_row={:.1} \ changed_after_namespace_key_value_bytes_per_row={:.1}", storage.before_entries, storage.after_entries, storage.added_entries, storage.updated_entries, storage.removed_entries, storage.touched_entries(), storage.before_key_value_bytes, storage.after_key_value_bytes, storage.net_key_value_bytes_delta(), storage.changed_after_key_value_bytes(), storage.before_namespace_key_value_bytes, storage.after_namespace_key_value_bytes, storage.net_namespace_key_value_bytes_delta(), storage.changed_after_namespace_key_value_bytes(), storage.touched_entries() as f64 / rows as f64, storage.net_key_value_bytes_delta() as f64 / rows as f64, storage.changed_after_key_value_bytes() as f64 / rows as f64, 
storage.net_namespace_key_value_bytes_delta() as f64 / rows as f64, storage.changed_after_namespace_key_value_bytes() as f64 / rows as f64, ); print_storage_class_row( rows, "storage_canonical", &canonical_storage_namespaces, &canonical_storage, ); print_storage_class_row( rows, "storage_canonical_changelog_rows", &canonical_changelog_row_namespaces, &canonical_changelog_row_storage, ); print_storage_class_row( rows, "storage_canonical_commit_packs", &canonical_commit_pack_namespaces, &canonical_commit_pack_storage, ); print_storage_class_row( rows, "storage_canonical_commit_store", &canonical_commit_store_namespaces, &canonical_commit_store_storage, ); print_storage_class_row( rows, "storage_index", &index_storage_namespaces, &index_storage, ); print_storage_class_row( rows, "storage_payload", &payload_storage_namespaces, &payload_storage, ); print_storage_class_row( rows, "storage_sidecar", &sidecar_storage_namespaces, &sidecar_storage, ); println!( "AMPLIFICATION_CATEGORY rows={rows} category=read read_transactions={} rollbacks={} \ read_calls={} read_items={} get_values_calls={} get_values_keys={} \ exists_many_calls={} exists_many_keys={} scan_calls={} scan_rows={} \ read_calls_per_row={:.3} read_items_per_row={:.3}", counts.begin_read_transactions, counts.rollbacks, counts.read_calls(), counts.read_items(), counts.get_values_calls, counts.get_values_keys, counts.exists_many_calls, counts.exists_many_keys, scan_calls, scan_rows, counts.read_calls() as f64 / rows as f64, counts.read_items() as f64 / rows as f64, ); println!( "AMPLIFICATION_CATEGORY rows={rows} category=serialization proxy_encoded_put_objects={} \ proxy_changelog_objects={} proxy_json_objects={} proxy_tracked_index_objects={} \ proxy_sidecar_objects={} proxy_encoded_objects_per_row={:.3}", counts.puts, changelog_encoded_objects, payload_puts, tracked_encoded_objects, sidecar_encoded_objects, counts.puts as f64 / rows as f64, ); println!( "AMPLIFICATION_CATEGORY rows={rows} category=index index_puts={} index_deletes={} \ index_mutations={} index_bytes={} tracked_chunk_puts={} tracked_root_puts={} \ tracked_by_file_root_puts={} index_mutations_per_row={:.3} index_bytes_per_row={:.1}", index_puts, 0, index_puts, index_bytes, counts.puts_in("tracked_state.tree.chunk"), counts.puts_in("tracked_state.tree.root"), counts.puts_in("tracked_state.tree.root.by_file"), index_puts as f64 / rows as f64, index_bytes as f64 / rows as f64, ); println!( "AMPLIFICATION_CATEGORY rows={rows} category=payload logical_value_bytes={} \ external_payload_puts={} external_payload_bytes={} external_payload_puts_per_row={:.3} \ external_payload_bytes_per_row={:.1} external_payload_bytes_per_logical_value_byte={:.3}", logical_value_bytes, payload_puts, payload_bytes, payload_puts as f64 / rows as f64, payload_bytes as f64 / rows as f64, payload_bytes as f64 / logical_value_bytes.max(1) as f64, ); println!( "AMPLIFICATION_CATEGORY rows={rows} category=sidecar_overlay untracked_puts={} \ untracked_deletes={} untracked_bytes={} untracked_mutations_per_row={:.3}", counts.puts_in("untracked_state.row"), counts.deletes_in("untracked_state.row"), counts.bytes_in("untracked_state.row"), (counts.puts_in("untracked_state.row") + counts.deletes_in("untracked_state.row")) as f64 / rows as f64, ); for (namespace, namespace_storage) in &storage.by_namespace { println!( "AMPLIFICATION_STORAGE_NAMESPACE rows={rows} namespace={} added_entries={} \ updated_entries={} removed_entries={} touched_entries={} net_key_value_bytes_delta={} \ changed_after_key_value_bytes={} 
net_namespace_key_value_bytes_delta={} \ changed_after_namespace_key_value_bytes={}", namespace, namespace_storage.added_entries, namespace_storage.updated_entries, namespace_storage.removed_entries, namespace_storage.touched_entries(), namespace_storage.net_key_value_bytes_delta(), namespace_storage.changed_after_key_value_bytes(), namespace_storage.net_namespace_key_value_bytes_delta(), namespace_storage.changed_after_namespace_key_value_bytes(), ); } } fn print_amplification_case( name: &str, base_rows: usize, logical_rows: usize, value_bytes: usize, run: &AmplificationRun, ) { println!( "AMPLIFICATION_CASE name={name} base_rows={base_rows} logical_rows={logical_rows} value_bytes={value_bytes}", ); print_amplification_row(logical_rows, value_bytes, run); } #[tokio::test] #[ignore = "prints read/write amplification north-star metrics for lix_key_value inserts"] async fn lix_key_value_insert_amplification_north_star() { let value_bytes = 8; for rows in [1usize, 100, 1_000] { let run = run_insert(rows, value_bytes).await; print_amplification_row(rows, value_bytes, &run); } } #[tokio::test] #[ignore = "stress test for large lix_file.data inserts; defaults to 100MiB"] async fn lix_file_data_stress_amplification() { let file_bytes = stress_file_bytes_from_env(); println!( "AMPLIFICATION_FILE_STRESS logical_files=1 logical_file_bytes={} env=LIX_FILE_STRESS_BYTES", file_bytes ); let run = run_lix_file_insert_data(file_bytes).await; print_amplification_row(1, file_bytes, &run); } #[tokio::test] #[ignore = "prints branching amplification canaries for lix_key_value"] async fn lix_key_value_branching_amplification_canaries() { let branch_only = run_branch_from_head_only().await; print_amplification_case("kv_branch_from_head_only", 0, 1, 0, &branch_only); let branch_insert = run_key_value_branch_insert().await; print_amplification_case("kv_branch_then_insert_1", 0, 1, 12, &branch_insert); let update_one = run_key_value_branch_update(1_000, 1).await; print_amplification_case("kv_branch_from_1000_update_1", 1_000, 1, 14, &update_one); let update_hundred = run_key_value_branch_update(1_000, 100).await; print_amplification_case( "kv_branch_from_1000_update_100", 1_000, 100, 14, &update_hundred, ); } #[tokio::test] #[ignore = "prints branching amplification canaries for lix_file"] async fn lix_file_branching_amplification_canaries() { let file_bytes = 1024; let branch_insert = run_lix_file_branch_insert(file_bytes).await; print_amplification_case( "file_branch_then_insert_1k_data", 0, 1, file_bytes, &branch_insert, ); let update_data = run_lix_file_branch_update_data(1_000, file_bytes).await; print_amplification_case( "file_branch_from_1000_update_data_1", 1_000, 1, file_bytes, &update_data, ); let rename = run_lix_file_branch_rename(1_000).await; print_amplification_case("file_branch_from_1000_rename_1", 1_000, 1, 0, &rename); let update_hidden = run_lix_file_branch_update_hidden(1_000, 100).await; print_amplification_case( "file_branch_from_1000_update_hidden_100", 1_000, 100, 0, &update_hidden, ); } ================================================ FILE: packages/engine/tests/transaction.rs ================================================ use std::collections::BTreeMap; use std::sync::atomic::{AtomicUsize, Ordering}; use std::sync::{Arc, Mutex}; use async_trait::async_trait; use lix_engine::{ Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch, BackendKvValueGroup, 
BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction, BytePageBuilder, Engine, LixError, }; type KvKey = (String, Vec); type KvMap = BTreeMap>; #[tokio::test] async fn read_sql_rolls_back_read_transaction_when_pre_plan_setup_fails() { let backend = RecordingBackend::new(); let _receipt = Engine::initialize(Box::new(backend.clone())) .await .expect("backend should initialize"); let engine = Engine::new(Box::new(backend.clone())) .await .expect("initialized backend should create an engine"); let session = engine .open_workspace_session() .await .expect("workspace session should open"); session .execute( "UPDATE lix_key_value SET value = 'missing-version' \ WHERE key = 'lix_workspace_version_id'", &[], ) .await .expect("test should corrupt workspace selector"); let before = backend.stats(); let error = session .execute("SELECT 1", &[]) .await .expect_err("missing active version should fail read pre-plan"); assert!( error.message.contains("missing-version"), "unexpected error: {error:?}" ); let delta = backend.stats().delta_since(&before); assert_eq!(delta.read_opened, 1, "read SQL should open one read tx"); assert_eq!( delta.read_rolled_back, 1, "read SQL pre-plan errors must roll back the opened read tx" ); } #[tokio::test] async fn write_transaction_open_rolls_back_when_active_version_resolution_fails() { let backend = RecordingBackend::new(); let _receipt = Engine::initialize(Box::new(backend.clone())) .await .expect("backend should initialize"); let engine = Engine::new(Box::new(backend.clone())) .await .expect("initialized backend should create an engine"); let session = engine .open_workspace_session() .await .expect("workspace session should open"); session .execute( "UPDATE lix_key_value SET value = 'missing-version' \ WHERE key = 'lix_workspace_version_id'", &[], ) .await .expect("test should corrupt workspace selector"); let before = backend.stats(); let error = session .execute( "INSERT INTO lix_key_value (key, value) VALUES ('after-corrupt-selector', 'value')", &[], ) .await .expect_err("missing active version should fail write open"); assert_eq!(error.code, "LIX_VERSION_NOT_FOUND"); let delta = backend.stats().delta_since(&before); assert_eq!(delta.write_opened, 1, "write path should open one write tx"); assert_eq!( delta.write_rolled_back, 1, "write open errors must roll back the opened write tx" ); assert_eq!( delta.write_committed, 0, "failed write open must not commit" ); } #[tokio::test] async fn rebuild_tracked_state_rolls_back_read_and_write_transactions_on_failure() { let backend = RecordingBackend::new(); let receipt = Engine::initialize(Box::new(backend.clone())) .await .expect("backend should initialize"); let engine = Engine::new(Box::new(backend.clone())) .await .expect("initialized backend should create an engine"); backend.fail_read_namespace("commit_store.commit"); let before = backend.stats(); let error = engine .rebuild_tracked_state_for_version(&receipt.main_version_id) .await .expect_err("forced commit-store read failure should fail rebuild"); assert!( error.message.contains("forced read failure"), "unexpected error: {error:?}" ); let delta = backend.stats().delta_since(&before); assert_eq!( delta.read_opened, delta.read_rolled_back, "every read tx opened during failed rebuild must be rolled back" ); assert_eq!(delta.write_opened, 1, "rebuild should open one write tx"); assert_eq!( delta.write_rolled_back, 1, "failed rebuild must roll back the opened write tx" ); assert_eq!(delta.write_committed, 0, 
"failed rebuild must not commit"); } #[derive(Clone, Default)] struct RecordingBackend { data: Arc>, stats: Arc, fail_read_namespace: Arc>>, } impl RecordingBackend { fn new() -> Self { Self::default() } fn stats(&self) -> TransactionStatsSnapshot { self.stats.snapshot() } fn fail_read_namespace(&self, namespace: &str) { *self .fail_read_namespace .lock() .expect("fail namespace lock should not poison") = Some(namespace.to_string()); } } #[async_trait] impl Backend for RecordingBackend { async fn begin_read_transaction( &self, ) -> Result, LixError> { self.stats.read_opened.fetch_add(1, Ordering::SeqCst); Ok(Box::new(RecordingTransaction { data: Arc::clone(&self.data), pending: BTreeMap::new(), stats: Arc::clone(&self.stats), fail_read_namespace: Arc::clone(&self.fail_read_namespace), mode: RecordingTransactionMode::Read, })) } async fn begin_write_transaction( &self, ) -> Result, LixError> { self.stats.write_opened.fetch_add(1, Ordering::SeqCst); Ok(Box::new(RecordingTransaction { data: Arc::clone(&self.data), pending: BTreeMap::new(), stats: Arc::clone(&self.stats), fail_read_namespace: Arc::clone(&self.fail_read_namespace), mode: RecordingTransactionMode::Write, })) } } struct RecordingTransaction { data: Arc>, pending: BTreeMap>>, stats: Arc, fail_read_namespace: Arc>>, mode: RecordingTransactionMode, } #[derive(Clone, Copy)] enum RecordingTransactionMode { Read, Write, } #[async_trait] impl BackendReadTransaction for RecordingTransaction { async fn get_values( &mut self, request: BackendKvGetRequest, ) -> Result { self.fail_if_get_namespace_matches(&request)?; let data = self.data.lock().expect("recording backend lock poisoned"); let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let namespace = group.namespace.clone(); let mut values = BytePageBuilder::with_capacity(group.keys.len(), 0); let mut present = Vec::with_capacity(group.keys.len()); for key in group.keys { let identity = (namespace.clone(), key.clone()); let value = self .pending .get(&identity) .cloned() .unwrap_or_else(|| data.get(&identity).cloned()); if let Some(value) = value { values.push(value); present.push(true); } else { values.push([]); present.push(false); } } groups.push(BackendKvValueGroup::new( namespace, values.finish(), present, )); } Ok(BackendKvValueBatch { groups }) } async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result { self.fail_if_get_namespace_matches(&request)?; let data = self.data.lock().expect("recording backend lock poisoned"); let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let namespace = group.namespace.clone(); let mut exists = Vec::with_capacity(group.keys.len()); for key in group.keys { let identity = (namespace.clone(), key.clone()); exists.push( self.pending .get(&identity) .map(|value| value.is_some()) .unwrap_or_else(|| data.contains_key(&identity)), ); } groups.push(BackendKvExistsGroup { namespace, exists }); } Ok(BackendKvExistsBatch { groups }) } async fn scan_keys( &mut self, request: BackendKvScanRequest, ) -> Result { let entries = self.scan_visible_entries(request)?; Ok(BackendKvKeyPage { keys: entries.keys, resume_after: entries.resume_after, }) } async fn scan_values( &mut self, request: BackendKvScanRequest, ) -> Result { self.fail_if_scan_namespace_matches(&request)?; let entries = self.scan_visible_entries(request)?; Ok(BackendKvValuePage { values: entries.values, resume_after: entries.resume_after, }) } async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> 
Result { self.fail_if_scan_namespace_matches(&request)?; self.scan_visible_entries(request) } async fn rollback(self: Box) -> Result<(), LixError> { match self.mode { RecordingTransactionMode::Read => { self.stats.read_rolled_back.fetch_add(1, Ordering::SeqCst); } RecordingTransactionMode::Write => { self.stats.write_rolled_back.fetch_add(1, Ordering::SeqCst); } } Ok(()) } } #[async_trait] impl BackendWriteTransaction for RecordingTransaction { async fn write_kv_batch( &mut self, batch: BackendKvWriteBatch, ) -> Result { let mut stats = BackendKvWriteStats::default(); for group in batch.groups { let namespace = group.namespace().to_string(); for index in 0..group.put_count() { let key = group.put_key(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put key") })?; let value = group.put_value(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put value") })?; stats.puts += 1; stats.bytes_written += key.len() + value.len(); self.pending .insert((namespace.clone(), key.to_vec()), Some(value.to_vec())); } for index in 0..group.delete_count() { let key = group.delete_key(index).ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "backend write batch missing delete key", ) })?; stats.deletes += 1; stats.bytes_written += key.len(); self.pending.insert((namespace.clone(), key.to_vec()), None); } } Ok(stats) } async fn commit(mut self: Box) -> Result<(), LixError> { self.stats.write_committed.fetch_add(1, Ordering::SeqCst); let mut guard = self.data.lock().expect("recording backend lock poisoned"); for (key, value) in std::mem::take(&mut self.pending) { match value { Some(value) => { guard.insert(key, value); } None => { guard.remove(&key); } } } Ok(()) } } impl RecordingTransaction { fn fail_if_get_namespace_matches(&self, request: &BackendKvGetRequest) -> Result<(), LixError> { for group in &request.groups { self.fail_if_namespace_matches(&group.namespace)?; } Ok(()) } fn fail_if_scan_namespace_matches( &self, request: &BackendKvScanRequest, ) -> Result<(), LixError> { self.fail_if_namespace_matches(&request.namespace) } fn fail_if_namespace_matches(&self, namespace: &str) -> Result<(), LixError> { if self .fail_read_namespace .lock() .expect("fail namespace lock should not poison") .as_deref() == Some(namespace) { return Err(LixError::new( "LIX_ERROR_UNKNOWN", format!("forced read failure for namespace {namespace}"), )); } Ok(()) } fn scan_visible_entries( &self, request: BackendKvScanRequest, ) -> Result { let mut visible = self .data .lock() .expect("recording backend lock poisoned") .clone(); for (key, value) in &self.pending { match value { Some(value) => { visible.insert(key.clone(), value.clone()); } None => { visible.remove(key); } } } Ok(scan_map(&visible, &request)) } } fn scan_map(map: &KvMap, request: &BackendKvScanRequest) -> BackendKvEntryPage { let mut pairs = map .iter() .filter_map(|((entry_namespace, key), value)| { if entry_namespace != &request.namespace || !key_in_range(key, &request.range) { return None; } if request .after .as_deref() .is_some_and(|after| key.as_slice() <= after) { return None; } Some((key.clone(), value.clone())) }) .collect::>(); pairs.sort_by(|left, right| left.0.cmp(&right.0)); let has_more = pairs.len() > request.limit; pairs.truncate(request.limit); let resume_after = has_more .then(|| pairs.last().map(|(key, _)| key.clone())) .flatten(); let mut keys = BytePageBuilder::with_capacity(pairs.len(), 0); let mut values = BytePageBuilder::with_capacity(pairs.len(), 0); for (key, 
value) in pairs { keys.push(key); values.push(value); } BackendKvEntryPage { keys: keys.finish(), values: values.finish(), resume_after, } } fn key_in_range(key: &[u8], range: &BackendKvScanRange) -> bool { match range { BackendKvScanRange::Prefix(prefix) => key.starts_with(prefix), BackendKvScanRange::Range { start, end } => key >= start.as_slice() && key < end.as_slice(), } } #[derive(Default)] struct TransactionStats { read_opened: AtomicUsize, read_rolled_back: AtomicUsize, write_opened: AtomicUsize, write_committed: AtomicUsize, write_rolled_back: AtomicUsize, } impl TransactionStats { fn snapshot(&self) -> TransactionStatsSnapshot { TransactionStatsSnapshot { read_opened: self.read_opened.load(Ordering::SeqCst), read_rolled_back: self.read_rolled_back.load(Ordering::SeqCst), write_opened: self.write_opened.load(Ordering::SeqCst), write_committed: self.write_committed.load(Ordering::SeqCst), write_rolled_back: self.write_rolled_back.load(Ordering::SeqCst), } } } #[derive(Clone, Copy)] struct TransactionStatsSnapshot { read_opened: usize, read_rolled_back: usize, write_opened: usize, write_committed: usize, write_rolled_back: usize, } impl TransactionStatsSnapshot { fn delta_since(self, before: &Self) -> Self { Self { read_opened: self.read_opened - before.read_opened, read_rolled_back: self.read_rolled_back - before.read_rolled_back, write_opened: self.write_opened - before.write_opened, write_committed: self.write_committed - before.write_committed, write_rolled_back: self.write_rolled_back - before.write_rolled_back, } } } ================================================ FILE: packages/engine/wit/lix-plugin.wit ================================================ package lix:plugin@0.1.0; interface api { type canonical-json = string; /// Current materialized file payload. Plugins should treat this as an /// implementation detail cache and must not rely on mutation order. record file { id: string, path: string, data: list, } /// Represents the latest visible row for an entity. /// /// `apply-changes` receives an unordered set of these rows for a single file. /// Implementations must be order-independent and produce the same output for /// any ordering of `changes`. /// /// Uniqueness: callers provide at most one row per /// (`schema-key`, `entity-id`) for the same (`file.id`, version). record entity-change { entity-id: string, schema-key: string, /// Deterministically encoded JSON text. snapshot-content: option, } /// Optional active-state row payload passed to detect-changes when requested /// by the plugin manifest. Omitted fields are represented as `none`. /// /// Scope is implicit and engine-defined: same plugin + same file + active rows. record active-state-row { entity-id: string, schema-key: option, /// Deterministically encoded JSON text. snapshot-content: option, file-id: option, plugin-key: option, version-id: option, change-id: option, /// Deterministically encoded JSON text. metadata: option, created-at: option, updated-at: option, } record detect-state-context { active-state: option>, } variant plugin-error { invalid-input(string), internal(string), } /// Computes row-level state transitions between two file payloads. detect-changes: func(before: option, after: file, state-context: option) -> result, plugin-error>; /// Rebuilds file bytes from the unordered latest-state row set. 
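///
/// Hedged illustration (hypothetical row values, not normative): a plugin
/// handed the two latest-state rows
///   { entity-id: "greeting", schema-key: "kv", snapshot-content: some("{\"v\":1}") }
///   { entity-id: "farewell", schema-key: "kv", snapshot-content: some("{\"v\":2}") }
/// must return byte-identical file contents no matter which row comes first
/// in `changes`, since the list is an unordered set (see `entity-change`).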
apply-changes: func(file: file, changes: list) -> result, plugin-error>; } world plugin { export api; } ================================================ FILE: packages/js-kysely/.gitignore ================================================ dist/ node_modules/ ================================================ FILE: packages/js-kysely/package.json ================================================ { "name": "@lix-js/kysely", "type": "module", "private": true, "version": "0.1.0", "description": "Compile-only Kysely query builder and Lix schema types for JS SDK v0.6 consumers", "license": "Apache-2.0", "main": "./src/index.ts", "types": "./src/index.ts", "exports": { ".": { "types": "./src/index.ts", "default": "./src/index.ts" } }, "scripts": { "build": "tsc -p tsconfig.json", "typecheck": "tsc -p tsconfig.json --noEmit", "test:types": "tsc -p tsconfig.type-tests.json --noEmit", "test": "pnpm --filter @lix-js/sdk build && vitest run" }, "dependencies": { "json-schema-to-ts": "^3.1.1", "kysely": "^0.28.7" }, "peerDependencies": { "@lix-js/sdk": "^0.6.0" }, "devDependencies": { "@lix-js/sdk": "workspace:*", "typescript": "^5.5.4", "vitest": "^4.0.18" } } ================================================ FILE: packages/js-kysely/src/create-lix-kysely.ts ================================================ import { Kysely, SqliteAdapter, SqliteIntrospector, SqliteQueryCompiler, type CompiledQuery, type DatabaseConnection, type Driver, type QueryCompiler, type QueryResult, } from "kysely"; import type { LixDatabaseSchema } from "./schema.js"; type LixQueryResult = { rows?: unknown; columns?: unknown; statements?: unknown; }; export type LixExecuteOptions = { writerKey?: string | null; }; type LixExecuteLike = { execute( sql: string, params?: ReadonlyArray, options?: LixExecuteOptions, ): Promise; }; type LixDbLike = { db: unknown; }; type LixLike = LixExecuteLike | LixDbLike; export type CreateLixKyselyOptions = { writerKey?: string | null; }; class LixConnection implements DatabaseConnection { readonly #executeSql: ( sql: string, params?: ReadonlyArray, ) => Promise; constructor( executeSql: ( sql: string, params?: ReadonlyArray, ) => Promise, ) { this.#executeSql = executeSql; } async executeQuery(compiledQuery: CompiledQuery): Promise> { const raw = normalizeLixQueryResult( await this.#executeSql( compiledQuery.sql, compiledQuery.parameters, ), ); const decodedRows = decodeRows(raw.rows); const columnNames = decodeColumnNames(raw.columns) ?? (await this.resolveColumnNames(compiledQuery.query)); const rows = columnNames && decodedRows.every((row) => row.length === columnNames.length) ? decodedRows.map((row) => rowToObject(row, columnNames)) : decodedRows; const kind = compiledQuery.query && typeof compiledQuery.query === "object" ? 
(compiledQuery.query as { kind?: unknown }).kind : undefined; let numAffectedRows: bigint | undefined; let insertId: bigint | undefined; if (kind !== "SelectQueryNode") { numAffectedRows = await this.readIntegerResult("SELECT changes()"); if (kind === "InsertQueryNode") { insertId = await this.readIntegerResult("SELECT last_insert_rowid()"); } } return { rows: rows as R[], numAffectedRows, insertId, }; } async *streamQuery( compiledQuery: CompiledQuery, ): AsyncIterableIterator> { yield await this.executeQuery(compiledQuery); } async readIntegerResult(sql: string): Promise { const raw = normalizeLixQueryResult(await this.#executeSql(sql, undefined)); const rows = decodeRows(raw.rows); if (!rows[0] || rows[0].length === 0) { return undefined; } return extractIntegerValue(rows[0][0]); } async resolveColumnNames(queryNode: unknown): Promise { if (!queryNode || typeof queryNode !== "object") { return undefined; } const query = queryNode as Record; const kind = typeof query.kind === "string" ? query.kind : ""; if (kind === "SelectQueryNode") { const selections = selectSelectionNodes(query); if (selections.length > 0) { return selections.map(selectionNameFromNode); } return undefined; } if ( kind === "InsertQueryNode" || kind === "UpdateQueryNode" || kind === "DeleteQueryNode" ) { const returning = query.returning; if (returning && typeof returning === "object") { const selections = selectSelectionNodes( returning as Record, ); if (selections.length > 0) { return selections.map(selectionNameFromNode); } } } return undefined; } } class LixDriver implements Driver { readonly #lix: LixExecuteLike; readonly #connection: LixConnection; readonly #options?: LixExecuteOptions; #transactionSlotHeld = false; #transactionActive = false; #waiters: Array<() => void> = []; constructor(lix: LixExecuteLike, options?: LixExecuteOptions) { this.#lix = lix; this.#options = options; this.#connection = new LixConnection((sql, params) => this.#executeSql(sql, params), ); } async init(): Promise {} async acquireConnection(): Promise { return this.#connection; } async beginTransaction(): Promise { await this.#acquireTransactionSlot(); try { await this.#executeSql("BEGIN", undefined); this.#transactionActive = true; } catch (error) { this.#releaseTransactionSlot(); throw error; } } async commitTransaction(): Promise { if (!this.#transactionActive) { throw new Error("commitTransaction called without active transaction"); } try { await this.#executeSql("COMMIT", undefined); } finally { this.#transactionActive = false; this.#releaseTransactionSlot(); } } async rollbackTransaction(): Promise { if (!this.#transactionActive) { throw new Error("rollbackTransaction called without active transaction"); } try { await this.#executeSql("ROLLBACK", undefined); } finally { this.#transactionActive = false; this.#releaseTransactionSlot(); } } async savepoint( _connection: DatabaseConnection, _savepointName: string, _compileQuery: QueryCompiler["compileQuery"], ): Promise { throw new Error( "Nested transactions are not supported by createLixKysely() yet", ); } async rollbackToSavepoint( _connection: DatabaseConnection, _savepointName: string, _compileQuery: QueryCompiler["compileQuery"], ): Promise { throw new Error( "Nested transactions are not supported by createLixKysely() yet", ); } async releaseSavepoint( _connection: DatabaseConnection, _savepointName: string, _compileQuery: QueryCompiler["compileQuery"], ): Promise { throw new Error( "Nested transactions are not supported by createLixKysely() yet", ); } async releaseConnection(): 
Promise {} async destroy(): Promise {} async #executeSql( sql: string, params?: ReadonlyArray, ): Promise { return this.#lix.execute(sql, params, this.#options); } async #acquireTransactionSlot(): Promise { while (this.#transactionSlotHeld) { await new Promise((resolve) => this.#waiters.push(resolve)); } this.#transactionSlotHeld = true; } #releaseTransactionSlot(): void { this.#transactionSlotHeld = false; const waiter = this.#waiters.shift(); if (waiter) { waiter(); } } } class LixQueryCompiler extends SqliteQueryCompiler { protected override getLeftIdentifierWrapper(): string { return ""; } protected override getRightIdentifierWrapper(): string { return ""; } } const cache = new WeakMap>>(); export function createLixKysely( lix: LixLike, options: CreateLixKyselyOptions = {}, ): Kysely { const writerKey = normalizeWriterKey(options.writerKey); const cacheKey = writerKeyCacheKey(writerKey); if (isLixDbLike(lix)) { if (writerKey !== undefined) { throw new TypeError( "createLixKysely writerKey option requires lix.execute(sql, params, options)", ); } return lix.db as Kysely; } if (!isLixExecuteLike(lix)) { throw new TypeError( "createLixKysely requires either lix.execute(sql, params) or lix.db", ); } const entry = cache.get(lix as object); const cached = entry?.get(cacheKey); if (cached) { return cached; } const dialect = { createAdapter: () => new SqliteAdapter(), createDriver: () => new LixDriver(lix, { writerKey }), createIntrospector: (db: Kysely) => new SqliteIntrospector(db), createQueryCompiler: () => new LixQueryCompiler(), }; const db = new Kysely({ dialect }); if (entry) { entry.set(cacheKey, db); } else { cache.set(lix as object, new Map([[cacheKey, db]])); } return db; } function isLixExecuteLike(value: unknown): value is LixExecuteLike { if (!value || typeof value !== "object") { return false; } return typeof (value as { execute?: unknown }).execute === "function"; } function normalizeWriterKey(value: unknown): string | null | undefined { if (value === undefined) { return undefined; } if (value === null) { return null; } if (typeof value === "string") { return value; } throw new TypeError("createLixKysely writerKey must be a string or null"); } function writerKeyCacheKey(writerKey: string | null | undefined): string { if (writerKey === undefined) { return "__default__"; } if (writerKey === null) { return "__null__"; } return `writer:${writerKey}`; } function isLixDbLike(value: unknown): value is LixDbLike { if (!value || typeof value !== "object") { return false; } return ( "db" in (value as object) && Boolean((value as { db?: unknown }).db) && typeof (value as { db?: unknown }).db === "object" ); } function decodeRows(rawRows: unknown): unknown[][] { if (!Array.isArray(rawRows)) { return []; } return rawRows.map((row) => { if (!Array.isArray(row)) { return []; } return [...row]; }); } function normalizeLixQueryResult(raw: LixQueryResult): { rows?: unknown; columns?: unknown; } { if (Array.isArray(raw.statements)) { const [statement] = raw.statements; if (statement && typeof statement === "object") { const candidate = statement as { rows?: unknown; columns?: unknown }; return { rows: candidate.rows, columns: candidate.columns, }; } } return raw; } function decodeColumnNames(rawColumns: unknown): string[] | undefined { if (!Array.isArray(rawColumns)) { return undefined; } const names = rawColumns.filter( (value): value is string => typeof value === "string", ); return names.length > 0 ? 
names : undefined; } function extractIntegerValue(value: unknown): bigint | undefined { if (typeof value === "number" && Number.isInteger(value)) { return BigInt(value); } if (typeof value === "bigint") { return value; } if (typeof value === "string" && /^-?\d+$/.test(value)) { return BigInt(value); } return undefined; } function rowToObject( row: unknown[], columns: string[], ): Record { const out: Record = {}; for (let i = 0; i < columns.length; i++) { const column = columns[i]; if (!column) { continue; } out[column] = row[i]; } return out; } function selectSelectionNodes( node: Record, ): Record[] { const selections = node.selections; if (!Array.isArray(selections)) { return []; } return selections.filter( (selection): selection is Record => Boolean(selection) && typeof selection === "object", ); } function selectTableNames(node: Record): string[] { const from = node.from; if (!from || typeof from !== "object") { return []; } const froms = (from as Record).froms; if (!Array.isArray(froms)) { return []; } const names: string[] = []; for (const fromNode of froms) { if (!fromNode || typeof fromNode !== "object") { continue; } const table = (fromNode as Record).table; const name = identifierNameFromTableNode(table); if (name) { names.push(name); } } return names; } function selectionNameFromNode(selectionNode: Record): string { const selection = selectionNode.selection; if (!selection || typeof selection !== "object") { return "column"; } return ( identifierNameFromSelection(selection as Record) ?? "column" ); } function identifierNameFromSelection( node: Record, ): string | undefined { const kind = typeof node.kind === "string" ? node.kind : ""; if (kind === "AliasNode") { const alias = node.alias; const aliasName = identifierName(alias); if (aliasName) return aliasName; } if (kind === "ReferenceNode") { const column = node.column; if (!column || typeof column !== "object") { return undefined; } const nested = (column as Record).column; const name = identifierName(nested); if (name) return name; } if (kind === "ColumnNode") { const name = identifierName(node.column); if (name) return name; } if (kind === "IdentifierNode") { const name = identifierName(node); if (name) return name; } return undefined; } function identifierNameFromTableNode(node: unknown): string | undefined { if (!node || typeof node !== "object") { return undefined; } const tableNode = node as Record; if (tableNode.kind === "SchemableIdentifierNode") { return identifierName(tableNode.identifier); } return undefined; } function identifierName(node: unknown): string | undefined { if (!node || typeof node !== "object") { return undefined; } const name = (node as Record).name; return typeof name === "string" ? name : undefined; } ================================================ FILE: packages/js-kysely/src/eb-entity.ts ================================================ import type { ExpressionBuilder, ExpressionWrapper, SqlBool } from "kysely"; import type { LixDatabaseSchema } from "./schema.js"; type LixEntityId = string[]; type LixEntityCanonical = { schema_key: string; file_id: string | null; entity_id: LixEntityId; }; type LixEntity = { lixcol_schema_key: string; lixcol_file_id: string | null; lixcol_entity_id: LixEntityId; }; const CANONICAL_TABLES = [ "lix_state", "lix_state_by_version", ] as const; export function ebEntity< TB extends keyof LixDatabaseSchema = keyof LixDatabaseSchema, >(entityType?: TB) { const isCanonicalTable = entityType ? 
CANONICAL_TABLES.includes(entityType as any) : undefined; const detectColumnType = ( entity: LixEntity | LixEntityCanonical, ): boolean => { return ( "entity_id" in entity && "schema_key" in entity && "file_id" in entity ); }; const getColumnNames = (entity?: LixEntity | LixEntityCanonical) => { if (entityType !== undefined) { return { entityIdCol: isCanonicalTable ? "entity_id" : "lixcol_entity_id", schemaKeyCol: isCanonicalTable ? "schema_key" : "lixcol_schema_key", fileIdCol: isCanonicalTable ? "file_id" : "lixcol_file_id", }; } if (entity) { const useCanonical = detectColumnType(entity); return { entityIdCol: useCanonical ? "entity_id" : "lixcol_entity_id", schemaKeyCol: useCanonical ? "schema_key" : "lixcol_schema_key", fileIdCol: useCanonical ? "file_id" : "lixcol_file_id", }; } return { entityIdCol: "lixcol_entity_id", schemaKeyCol: "lixcol_schema_key", fileIdCol: "lixcol_file_id", }; }; const getColumnRefs = (entity?: LixEntity | LixEntityCanonical) => { const { entityIdCol, schemaKeyCol, fileIdCol } = getColumnNames(entity); return { entityIdRef: entityType ? `${entityType}.${entityIdCol}` : entityIdCol, schemaKeyRef: entityType ? `${entityType}.${schemaKeyCol}` : schemaKeyCol, fileIdRef: entityType ? `${entityType}.${fileIdCol}` : fileIdCol, }; }; const getTargetValues = (entity: LixEntity | LixEntityCanonical) => { return { targetEntityId: "entity_id" in entity ? entity.entity_id : entity.lixcol_entity_id, targetSchemaKey: "schema_key" in entity ? entity.schema_key : entity.lixcol_schema_key, targetFileId: "file_id" in entity ? entity.file_id : entity.lixcol_file_id, }; }; const equalsExpression = ( eb: ExpressionBuilder, entity: LixEntity | LixEntityCanonical, ): ExpressionWrapper => { const { targetEntityId, targetSchemaKey, targetFileId } = getTargetValues(entity); const { entityIdRef, schemaKeyRef, fileIdRef } = getColumnRefs(entity); return eb.and([ eb(eb.ref(entityIdRef as any), "=", targetEntityId), eb(eb.ref(schemaKeyRef as any), "=", targetSchemaKey), targetFileId === null ? 
eb(eb.ref(fileIdRef as any), "is", null) : eb(eb.ref(fileIdRef as any), "=", targetFileId), ]); }; return { hasLabel( label: { id: string; name?: string } | { name: string; id?: string }, ) { return ( eb: ExpressionBuilder, ): ExpressionWrapper => { const { entityIdRef, schemaKeyRef, fileIdRef } = getColumnRefs(); const labelQuery = eb .selectFrom("lix_label_assignment" as any) .innerJoin( "lix_label" as any, "lix_label.id" as any, "lix_label_assignment.label_id" as any, ) as any; return eb.exists( labelQuery .select("lix_label_assignment.target_entity_id" as any) .whereRef( "lix_label_assignment.target_entity_id" as any, "=", entityIdRef as any, ) .whereRef( "lix_label_assignment.target_schema_key" as any, "=", schemaKeyRef as any, ) .whereRef( "lix_label_assignment.target_file_id" as any, "is", fileIdRef as any, ) .$if("name" in label, (qb: any) => qb.where("lix_label.name", "=", label.name!), ) .$if("id" in label, (qb: any) => qb.where("lix_label.id", "=", label.id!), ), ); }; }, equals(entity: LixEntity | LixEntityCanonical) { return ( eb: ExpressionBuilder, ): ExpressionWrapper => { return equalsExpression(eb, entity); }; }, in(entities: Array) { return ( eb: ExpressionBuilder, ): ExpressionWrapper => { if (entities.length === 0) { return eb.val(false); } return eb.or(entities.map((entity) => equalsExpression(eb, entity))); }; }, }; } ================================================ FILE: packages/js-kysely/src/index.ts ================================================ export { qb } from "./qb.js"; export { ebEntity } from "./eb-entity.js"; export type { LixDatabaseSchema } from "./schema.js"; export type { CreateLixKyselyOptions, LixExecuteOptions, } from "./create-lix-kysely.js"; export { sql } from "kysely"; export { jsonArrayFrom, jsonObjectFrom } from "kysely/helpers/sqlite"; ================================================ FILE: packages/js-kysely/src/qb.test-d.ts ================================================ import type { Insertable, Selectable } from "kysely"; import { ebEntity, qb } from "./index.js"; import type { LixDatabaseSchema } from "./schema.js"; type Equal = (() => T extends A ? 1 : 2) extends () => T extends B ? 1 : 2 ? 
true : false; type Expect = T; type FileRow = Selectable; type _FilePathIsString = Expect>; const fileHiddenBoolean: FileRow["hidden"] = true; const fileHiddenUndefined: FileRow["hidden"] = undefined; // @ts-expect-error wrong hidden type const fileHiddenString: FileRow["hidden"] = "true"; void fileHiddenBoolean; void fileHiddenUndefined; void fileHiddenString; type KeyValueByVersionInsert = Insertable< LixDatabaseSchema["lix_key_value_by_version"] >; type _InsertHasKey = Expect>; const db = qb({ execute: async () => ({ rows: [] }), }); const dbWithWriter = qb( { execute: async () => ({ rows: [] }), }, { writerKey: "writer-a" }, ); dbWithWriter.selectFrom("lix_file").select("id").compile(); db.selectFrom("lix_file").select(["id", "path", "hidden"]).compile(); db.selectFrom("lix_directory").select(["id", "path"]).compile(); db.selectFrom("lix_key_value_by_version") .select(["key", "value", "lixcol_version_id"]) .compile(); db.selectFrom("lix_commit") .where(ebEntity("lix_commit").hasLabel({ name: "checkpoint" })) .select("id") .compile(); db.insertInto("lix_key_value_by_version") .values({ key: "flashtype_active_file_id", value: "file-1", lixcol_version_id: "global", lixcol_untracked: true, }) .compile(); db.updateTable("lix_key_value_by_version") .set({ value: "file-2" }) .where("key", "=", "flashtype_active_file_id") .compile(); db.deleteFrom("lix_key_value_by_version") .where("key", "=", "flashtype_active_file_id") .compile(); const withDb = qb({ db }); withDb.selectFrom("lix_file").select("id"); // @ts-expect-error unknown table db.selectFrom("not_a_table").selectAll().compile(); // @ts-expect-error unknown column db.selectFrom("lix_file").select(["not_a_column"]).compile(); const badInsert: Insertable = { key: "x", value: "y", lixcol_untracked: true, }; void badInsert; ================================================ FILE: packages/js-kysely/src/qb.ts ================================================ import { createLixKysely } from "./create-lix-kysely.js"; import type { CreateLixKyselyOptions } from "./create-lix-kysely.js"; type QbInput = Parameters[0]; type QbOptions = CreateLixKyselyOptions; /** * Kysely entrypoint for Lix. * * Usage: * await qb(lix).selectFrom("lix_file").selectAll().execute() */ export const qb = (lix: QbInput, options?: QbOptions) => createLixKysely(lix, options); ================================================ FILE: packages/js-kysely/src/schema.ts ================================================ import { LixAccountSchema, LixActiveAccountSchema, LixChangeAuthorSchema, LixChangeSchema, LixChangeSetElementSchema, LixChangeSetSchema, LixCommitEdgeSchema, LixCommitSchema, LixDirectoryDescriptorSchema, LixFileDescriptorSchema, LixKeyValueSchema, LixLabelAssignmentSchema, LixLabelSchema, LixRegisteredSchemaSchema, LixVersionDescriptorSchema, } from "@lix-js/sdk"; import type { JsonValue as LixJsonValue } from "@lix-js/sdk"; import type { Generated } from "kysely"; import type { FromSchema, JSONSchema } from "json-schema-to-ts"; type LixPropertySchema = JSONSchema & { "x-lix-default"?: string; }; type LixSchemaDefinition = JSONSchema & { type: "object"; additionalProperties: false; properties?: Record; }; type LixJsonObject = { [key: string]: LixJsonValue }; type LixEntityId = string[]; export type LixGenerated = T & { readonly __lixGenerated?: true; }; type IsLixGenerated = T extends { readonly __lixGenerated?: true } ? true : false; type ExtractFromGenerated = T extends LixGenerated ? U : T; type IsNever = [T] extends [never] ? 
true : false; type IsAny = 0 extends 1 & T ? true : false; type TransformEmptyObject = IsAny extends true ? any : IsNever extends true ? never : T extends object ? keyof T extends never ? LixJsonObject : T : T; type IsEmptyObjectSchema
<P> = P extends { type: "object" } ? P extends { properties: any } ? false : true : false;
type GetNullablePart<P> = P extends { nullable: true } ? null : never;
type PropertyHasDefault<P>
= P extends { "x-lix-default": any } ? true : P extends { default: any } ? true : false; type ApplyLixGenerated = TSchema extends { properties: infer Props; } ? { [K in keyof FromSchema]: K extends keyof Props ? PropertyHasDefault extends true ? LixGenerated[K]>> : IsEmptyObjectSchema extends true ? LixJsonObject | GetNullablePart : TransformEmptyObject[K]> : TransformEmptyObject[K]>; } : never; export type FromLixSchemaDefinition = ApplyLixGenerated; type ToKysely = { [K in keyof T]: IsLixGenerated extends true ? Generated> : T[K]; }; type EntityStateColumns = { lixcol_entity_id: LixGenerated; lixcol_schema_key: LixGenerated; lixcol_file_id: LixGenerated; lixcol_plugin_key: LixGenerated; lixcol_inherited_from_version_id: LixGenerated; lixcol_created_at: LixGenerated; lixcol_updated_at: LixGenerated; lixcol_change_id: LixGenerated; lixcol_untracked: LixGenerated; lixcol_commit_id: LixGenerated; lixcol_writer_key: LixGenerated; }; type EntityStateByVersionColumns = EntityStateColumns & { lixcol_version_id: LixGenerated; lixcol_metadata: LixGenerated; }; type EntityStateHistoryColumns = { lixcol_entity_id: LixGenerated; lixcol_schema_key: LixGenerated; lixcol_file_id: LixGenerated; lixcol_plugin_key: LixGenerated; lixcol_change_id: LixGenerated; lixcol_commit_id: LixGenerated; lixcol_root_commit_id: LixGenerated; lixcol_depth: LixGenerated; lixcol_metadata: LixGenerated; }; type EntityStateView = T & EntityStateColumns; type EntityStateByVersionView = T & EntityStateByVersionColumns; type EntityStateHistoryView = T & EntityStateHistoryColumns; type EntityViews< TSchema extends LixSchemaDefinition, TViewName extends string, TOverride = object, > = { [K in TViewName]: ToKysely< EntityStateView & TOverride> >; } & { [K in `${TViewName}_by_version`]: ToKysely< EntityStateByVersionView & TOverride> >; } & { [K in `${TViewName}_history`]: ToKysely< EntityStateHistoryView & TOverride> >; }; type StateByVersionView = { entity_id: LixEntityId; schema_key: string; file_id: string | null; plugin_key: string; snapshot_content: LixJsonValue; version_id: string; created_at: Generated; updated_at: Generated; inherited_from_version_id: string | null; change_id: Generated; untracked: Generated; commit_id: Generated; writer_key: string | null; metadata: Generated; }; type StateView = Omit; type StateWithTombstonesView = { entity_id: LixEntityId; schema_key: string; file_id: string | null; plugin_key: string; snapshot_content: LixJsonValue | null; version_id: string; created_at: Generated; updated_at: Generated; inherited_from_version_id: string | null; change_id: Generated; untracked: Generated; commit_id: Generated; writer_key: string | null; metadata: Generated; }; type StateHistoryView = { entity_id: LixEntityId; schema_key: string; file_id: string | null; plugin_key: string; snapshot_content: LixJsonValue; metadata: LixJsonValue | null; change_id: string; commit_id: string; root_commit_id: string; depth: number; }; type WorkingChangesView = { entity_id: LixEntityId; schema_key: string; file_id: string | null; before_change_id: string | null; after_change_id: string | null; before_commit_id: string | null; after_commit_id: string | null; status: "added" | "modified" | "removed" | "unchanged"; }; type LixActiveVersion = { version_id: string; }; type LixKeyValue = FromLixSchemaDefinition & { value: LixJsonValue; }; type ChangeView = ToKysely< FromLixSchemaDefinition & { entity_id: LixEntityId; metadata: LixJsonValue | null; snapshot_content: LixJsonValue | null; } >; type DirectoryDescriptorView = ToKysely< 
EntityStateView< FromLixSchemaDefinition & { path: LixGenerated; } > >; type DirectoryDescriptorByVersionView = ToKysely< EntityStateByVersionView< FromLixSchemaDefinition & { path: LixGenerated; } > >; type DirectoryDescriptorHistoryView = ToKysely< EntityStateHistoryView< FromLixSchemaDefinition & { path: LixGenerated; } > >; export type LixDatabaseSchema = { lix_active_account: EntityViews< typeof LixActiveAccountSchema, "lix_active_account" >["lix_active_account"]; lix_active_version: ToKysely; lix_state: StateView; lix_state_by_version: StateByVersionView; lix_state_history: StateHistoryView; lix_working_changes: WorkingChangesView; lix_change: ChangeView; lix_directory: DirectoryDescriptorView; lix_directory_by_version: DirectoryDescriptorByVersionView; lix_directory_history: DirectoryDescriptorHistoryView; } & EntityViews< typeof LixKeyValueSchema, "lix_key_value", { value: LixKeyValue["value"] } > & EntityViews & EntityViews & EntityViews< typeof LixChangeSetElementSchema, "lix_change_set_element", { entity_id: LixEntityId } > & EntityViews & EntityViews< typeof LixFileDescriptorSchema, "lix_file", { data: Uint8Array; path: LixGenerated; directory_id: LixGenerated; name: LixGenerated; extension: LixGenerated; } > & EntityViews & EntityViews< typeof LixLabelAssignmentSchema, "lix_label_assignment", { target_entity_id: LixEntityId } > & EntityViews< typeof LixRegisteredSchemaSchema, "lix_registered_schema", { value: LixJsonValue } > & EntityViews< typeof LixVersionDescriptorSchema, "lix_version", { commit_id: LixGenerated; working_commit_id: LixGenerated } > & EntityViews & EntityViews; ================================================ FILE: packages/js-kysely/tests/eb-entity.test.ts ================================================ import { expect, test } from "vitest"; import { ebEntity, qb } from "../src/index.js"; const db = qb({ execute: async () => ({ rows: [] }), }); test("hasLabel compiles to the label assignment state-address tuple for entity tables", () => { const compiled = db .selectFrom("lix_commit") .where(ebEntity("lix_commit").hasLabel({ name: "checkpoint" })) .select("id") .compile(); expect(compiled.sql).toContain("from lix_label_assignment"); expect(compiled.sql).toContain( "lix_label_assignment.target_entity_id = lix_commit.lixcol_entity_id", ); expect(compiled.sql).toContain( "lix_label_assignment.target_schema_key = lix_commit.lixcol_schema_key", ); expect(compiled.sql).toContain( "lix_label_assignment.target_file_id is lix_commit.lixcol_file_id", ); expect(compiled.sql).toContain("lix_label.name = ?"); expect(compiled.sql).not.toContain("lix_entity_label"); expect(compiled.parameters).toEqual(["checkpoint"]); }); test("hasLabel compiles to the label assignment state-address tuple for canonical state tables", () => { const compiled = db .selectFrom("lix_state") .where(ebEntity("lix_state").hasLabel({ id: "label-a" })) .select("entity_id") .compile(); expect(compiled.sql).toContain( "lix_label_assignment.target_entity_id = lix_state.entity_id", ); expect(compiled.sql).toContain( "lix_label_assignment.target_schema_key = lix_state.schema_key", ); expect(compiled.sql).toContain( "lix_label_assignment.target_file_id is lix_state.file_id", ); expect(compiled.sql).toContain("lix_label.id = ?"); expect(compiled.parameters).toEqual(["label-a"]); }); ================================================ FILE: packages/js-kysely/tests/transaction.test.ts ================================================ import { afterEach, expect, test } from "vitest"; import { openLix, type Lix 
} from "@lix-js/sdk"; import { qb } from "../src/index.js"; const encoder = new TextEncoder(); let lix: Lix | undefined; afterEach(async () => { if (lix) { await lix.close(); lix = undefined; } }); test("qb(lix).transaction works with openLix()", async () => { lix = await openLix(); await qb(lix) .transaction() .execute(async (trx) => { await trx .insertInto("lix_file") .values({ path: "/tx-basic.md", data: encoder.encode("ok"), }) .execute(); }); const row = await qb(lix) .selectFrom("lix_file") .where("path", "=", "/tx-basic.md") .select(["path"]) .executeTakeFirst(); expect(row?.path).toBe("/tx-basic.md"); }); test("qb(lix) serializes concurrent transactions on one Lix instance", async () => { lix = await openLix(); const wait = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms)); const txA = qb(lix) .transaction() .execute(async (trx) => { await trx .insertInto("lix_file") .values({ path: "/tx-concurrent-a.md", data: encoder.encode("A"), }) .execute(); await wait(30); }); const txB = qb(lix) .transaction() .execute(async (trx) => { await trx .insertInto("lix_file") .values({ path: "/tx-concurrent-b.md", data: encoder.encode("B"), }) .execute(); }); await Promise.all([txA, txB]); const rows = await qb(lix) .selectFrom("lix_file") .where("path", "in", ["/tx-concurrent-a.md", "/tx-concurrent-b.md"]) .select(["path"]) .execute(); const paths = rows.map((row) => row.path).sort(); expect(paths).toEqual(["/tx-concurrent-a.md", "/tx-concurrent-b.md"]); }); ================================================ FILE: packages/js-kysely/tsconfig.json ================================================ { "compilerOptions": { "target": "ES2022", "module": "NodeNext", "moduleResolution": "NodeNext", "strict": true, "declaration": true, "outDir": "dist", "skipLibCheck": true }, "include": ["src"] } ================================================ FILE: packages/js-kysely/tsconfig.type-tests.json ================================================ { "extends": "./tsconfig.json", "compilerOptions": { "noEmit": true }, "include": ["src", "tests"] } ================================================ FILE: packages/js-kysely/vitest.config.ts ================================================ import { defineConfig } from "vitest/config"; export default defineConfig({ test: { environment: "node", include: ["tests/**/*.test.ts"], }, }); ================================================ FILE: packages/js-sdk/.gitignore ================================================ /dist /dist-engine-src /engine-src # wasm-bindgen generated engine outputs /src/engine-wasm/wasm /src/engine-wasm/engine-wasm-binary.js /src/engine-wasm/engine-wasm-binary.d.ts # legacy embedded wasm wrappers (generated artifacts) /src/backend/wasm-sqlite.wasm.ts /src/backend/wasm-sqlite.wasm.ts.d.ts # wasm modules and generated declarations *.wasm *.wasm.d.ts # generated from engine builtin schemas /src/generated/ ================================================ FILE: packages/js-sdk/Cargo.toml ================================================ [package] name = "lix_engine_wasm_bindgen" version = "0.1.0" edition = "2021" [lib] path = "wasm-bindgen.rs" crate-type = ["cdylib"] [dependencies] lix_rs_sdk = { path = "../rs-sdk" } wasm-bindgen = "0.2" wasm-bindgen-futures = "0.4" js-sys = "0.3" async-trait = "0.1" getrandom = { version = "0.3", features = ["wasm_js"] } serde = "1" serde_json = "1" serde-wasm-bindgen = "0.6" base64 = "0.22" ================================================ FILE: packages/js-sdk/README.md 
================================================

# @lix-js/sdk

WASM-backed JavaScript SDK for Lix.

## Agent Guidance

If you are an AI coding agent using this package, read [`SKILL.md`](./SKILL.md) before building examples, demos, tests, or applications with `@lix-js/sdk`. The skill documents the current preview API, recommended SQLite backend setup, schema registration flow, entity-table writes, version workflows, merge behavior, and known sharp edges.


================================================
FILE: packages/js-sdk/SKILL.md
================================================
---
name: lix-js-sdk
description: Use this skill when building examples, demos, tests, or applications with @lix-js/sdk: opening a Lix, registering schemas, writing entities through generated SQL tables, creating named versions, merging, and querying change history.
---

# Lix JS SDK Skill

## What Is Lix

Lix is an embeddable version control system for structured application state. It gives apps named versions, merge, and an immutable SQL-queryable change journal without asking the app to build those systems from scratch.

Current `@lix-js/sdk` capabilities:

- Register JSON schemas as tracked entity tables.
- Read and write entities through generated SQL tables.
- Create named versions of state and write/read across versions.
- Merge one version into the active version.
- Query `lix_change` for history, audit, activity feeds, and undo-style features.
- Store files as bytes with `lix_file` and version them like other entities.

Product direction:

- Lix is designed to version files of any kind by parsing them into typed entities on write.
- Parser plugins that turn file contents into app entities are not shipped through the JS SDK yet. Do not promise this behavior in demos. Today, `lix_file` versions bytes, while app entities are modeled directly through registered schemas.

Every row in every registered schema is a tracked entity. Merge granularity is currently per-entity, not per-field: two versions editing different rows merge cleanly; two versions editing the same row conflict, even if the fields are disjoint. Model collaborative domains as many small entities, such as sections, blocks, paragraphs, message keys, or line items.

Use Lix vocabulary in user-facing copy. What Git calls a branch is called a **version** in Lix because that language makes sense to non-developers.

## When To Use This Skill

Use this skill when you need to write or debug consumer code using `@lix-js/sdk`:

- Opening a persistent `.lix` file.
- Registering schemas.
- Writing and reading generated SQL entity tables.
- Reading `execute()` results.
- Creating, switching, previewing, and merging versions.
- Querying history through `lix_change`.
- Building app demos, examples, smoke tests, or product flows around the SDK.

Do not use this skill for raw SQLite access, private engine/wasm internals, SDK publishing, SDK build pipelines, or unreleased file-parser plugin behavior.

## Agent Quick Start

1. Install `@lix-js/sdk` and `better-sqlite3`.
2. Open with `createBetterSqlite3Backend({ path })`; do not open `.lix` with raw SQLite.
3. Register a schema with `x-lix-key`, `x-lix-primary-key`, and `additionalProperties: false`.
4. Write rows through the generated table named by `x-lix-key`.
5. Use `_by_version` plus `lixcol_version_id` for side-by-side version reads/writes.
6. Query `lix_change` for audit/history instead of hand-rolling audit tables.
7. Wrap `mergeVersion()` in `try/catch` whenever conflicts are possible.
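A compressed sketch of steps 5-7, assuming a `lix` handle is already open and that the `acme_note` schema and `n1` row from the Minimal Entity Example below already exist; the table, row, and version names are illustrative:

```ts
// Step 5: write into a named version through the generated _by_version table.
const review = await lix.createVersion({ name: "Review edit" });
await lix.execute(
  `UPDATE acme_note_by_version SET title = $1
   WHERE id = $2 AND lixcol_version_id = $3`,
  ["Sharper title", "n1", review.id],
);

// Step 6: read history from the change journal instead of a hand-rolled audit table.
const history = await lix.execute(
  `SELECT created_at, snapshot_content FROM lix_change
   WHERE schema_key = $1 AND entity_id = $2
   ORDER BY created_at`,
  ["acme_note", "n1"],
);

// Step 7: merges can conflict, so always wrap them.
try {
  await lix.mergeVersion({ sourceVersionId: review.id });
} catch (error) {
  console.error("Merge conflict", error);
}
```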
## Core Rules

- Use the public `@lix-js/sdk` API only.
- Use `createBetterSqlite3Backend()` for persistent apps, demos, and tests.
- Use numbered SQL placeholders: `$1`, `$2`, `$3`; bare `?` is rejected.
- Use `lix_json($1)` when inserting JSON text into JSON-typed columns.
- Use scalar SQL functions `SELECT lix_uuid_v7()` and `SELECT lix_timestamp()` when consumer code needs Lix-generated UUID v7 ids or ISO timestamps. Do not call them as table functions with `SELECT * FROM ...`.
- Use stable, namespaced, lowercase schema keys like `acme_section`, not generic names like `task`.
- Always include `x-lix-primary-key` and `additionalProperties: false` on app schemas.
- Use version names from the user's vocabulary, such as `"Marketing edit"` or `"Q3 pricing draft"`.
- Model concurrent-edit domains as collections of small rows because merge is per-row today.
- Prefer `_by_version` tables for demos, sync, agent inspection, and side-by-side diffs.
- Close handles in scripts and tests with `await lix.close()`.

## Install And Open

```sh
npm i @lix-js/sdk better-sqlite3
```

```ts
import { openLix } from "@lix-js/sdk";
import { createBetterSqlite3Backend } from "@lix-js/sdk/sqlite";

const lix = await openLix({
  backend: createBetterSqlite3Backend({ path: "/path/to/app.lix" }),
});
```

`better-sqlite3` is an optional peer dependency. Install it in projects that import `@lix-js/sdk/sqlite`.

`openLix()` without a backend is in-memory and dies with the process. For anything that should persist, pass a real `.lix` path. Reopening the same path picks up existing state.

For tests and demos, use an isolated temp directory per run:

```ts
import { mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import path from "node:path";
import { openLix } from "@lix-js/sdk";
import { createBetterSqlite3Backend } from "@lix-js/sdk/sqlite";

const dir = mkdtempSync(path.join(tmpdir(), "lix-"));
const lix = await openLix({
  backend: createBetterSqlite3Backend({ path: path.join(dir, "demo.lix") }),
});
```

Use the version of this skill that ships with the installed `@lix-js/sdk` package. If behavior is unclear, inspect the installed package before guessing. The npm package bundles matching engine source under `node_modules/@lix-js/sdk/dist-engine-src/`.

Useful installed-package references:

- `dist-engine-src/src/sql2/entity_provider.rs` - registered schema SQL surfaces.
- `dist-engine-src/src/sql2/change_provider.rs` - `lix_change` projection.
- `dist-engine-src/src/sql2/version_provider.rs` - writable `lix_version` surface.
- `dist-engine-src/src/transaction/validation.rs` - primary-key, unique, foreign-key, and shape validation.
- `dist-engine-src/src/schema/definition.json` - Lix schema-definition meta-schema.
- `dist-engine-src/src/schema/builtin/` - built-in entity table shapes.
- `dist-engine-src/src/sql2/udfs/` - registered SQL functions.

Do not import from `@lix-js/sdk/engine-wasm`, do not call private wasm helpers, and do not open the `.lix` SQLite file directly.

## Minimal Entity Example

This is the smallest useful consumer pattern: open, register a schema, write a row, read it back, and close.
```ts
import { mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import path from "node:path";
import { openLix } from "@lix-js/sdk";
import { createBetterSqlite3Backend } from "@lix-js/sdk/sqlite";

const dir = mkdtempSync(path.join(tmpdir(), "lix-"));
const lix = await openLix({
  backend: createBetterSqlite3Backend({ path: path.join(dir, "demo.lix") }),
});

await lix.execute(
  "INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))",
  [
    JSON.stringify({
      $schema: "https://json-schema.org/draft/2020-12/schema",
      "x-lix-key": "acme_note",
      "x-lix-primary-key": ["/id"],
      type: "object",
      required: ["id", "title", "done"],
      properties: {
        id: { type: "string" },
        title: { type: "string" },
        done: { type: "boolean" },
      },
      additionalProperties: false,
    }),
  ],
);

await lix.execute(
  "INSERT INTO acme_note (id, title, done) VALUES ($1, $2, $3)",
  ["n1", "Draft launch copy", false],
);

const result = await lix.execute(
  "SELECT title, done FROM acme_note WHERE id = $1",
  ["n1"],
);
const row = result.rows[0]!;
console.log(row.value("title").asText(), row.value("done").asBoolean());

await lix.close();
```

## Reading Results

`lix.execute()` returns one shape for every statement:

```ts
type ExecuteResult = {
  columns: string[];
  rows: Row[];
  rowsAffected: number;
  notices: LixNotice[];
};
```

There is no `result.kind`. `SELECT` fills `columns` and `rows`; `INSERT`, `UPDATE`, and `DELETE` usually return `rows: []` and set `rowsAffected`.

Each row is a `Row` object. Use `row.value("column")` or `row.valueAt(index)` to get a `Value`, then call typed accessors:

```ts
const r = await lix.execute("SELECT id, title, done FROM acme_note");
for (const row of r.rows) {
  const id = row.value("id").asText();
  const title = row.value("title").asText();
  const done = row.value("done").asBoolean();
}
```

| Method        | Returns                   | Use for                                |
| ------------- | ------------------------- | -------------------------------------- |
| `asText()`    | `string \| undefined`     | strings; note `asText`, not `asString` |
| `asBoolean()` | `boolean \| undefined`    | booleans                               |
| `asInteger()` | `number \| undefined`     | integer fields                         |
| `asReal()`    | `number \| undefined`     | decimal/real fields                    |
| `asJson()`    | `JsonValue \| undefined`  | objects and arrays                     |
| `asBlob()`    | `Uint8Array \| undefined` | binary data                            |

Accessors return `undefined` when the cell kind does not match. Branch on `value.kind` if a column can hold multiple types. Public kind strings are `"null"`, `"boolean"`, `"integer"`, `"real"`, `"text"`, `"json"`, and `"blob"`.

`Row` also has convenience methods when native JS values are enough: `get(name)`, `tryGet(name)`, `getAt(index)`, `toObject()`, and `toValueMap()`.
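A small sketch of the convenience accessors, reusing the `acme_note` row from the example above; the logged values are illustrative:

```ts
// get() returns a native JS value and throws with the available column names
// when the column does not exist; toObject() converts the whole row at once.
const note = (
  await lix.execute("SELECT id, title, done FROM acme_note WHERE id = $1", [
    "n1",
  ])
).rows[0]!;

console.log(note.get("title")); // "Draft launch copy"
console.log(note.toObject()); // { id: "n1", title: "Draft launch copy", done: false }
```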
## Registering Schemas

Register app schemas by inserting JSON into `lix_registered_schema.value`:

```ts
await lix.execute(
  "INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))",
  [JSON.stringify(schema)],
);
```

Schema basics:

- `x-lix-key` becomes the generated SQL table name.
- Compatible schema amendments are keyed by `x-lix-key`.
- `x-lix-primary-key` tells Lix how to derive entity identity.
- Primary-key entries are JSON Pointers with a leading slash, such as `["/id"]` or `["/owner/email"]`.
- Use `additionalProperties: false` so accidental fields fail fast.

Without `x-lix-primary-key`, table-style INSERTs fail with an error like `requires lixcol_entity_id because the schema has no x-lix-primary-key`.

Uniqueness is not inferred from ordinary JSON Schema fields. If a non-primary-key field must be unique, declare it explicitly:

```ts
const companyDomainSchema = {
  "x-lix-key": "crm_company_domain",
  "x-lix-primary-key": ["/id"],
  "x-lix-unique": [["/domain"]],
  type: "object",
  required: ["id", "domain"],
  properties: {
    id: { type: "string" },
    domain: { type: "string" },
  },
  additionalProperties: false,
};
```

Do not add generic `created_at` or `updated_at` fields by default. Lix already records lifecycle history through `lix_change` and `lixcol_*` metadata. Add timestamp fields only when they are domain data, such as `due_at`, `published_at`, or `occurred_at`.

Discover live schemas before guessing:

```ts
const schemas = await lix.execute(
  "SELECT lixcol_entity_id, value FROM lix_registered_schema ORDER BY lixcol_entity_id",
);
for (const row of schemas.rows) {
  const schema = row.get("value") as { "x-lix-key"?: string };
  console.log(schema["x-lix-key"]);
}
```

## Versions And `_by_version`

Capture the initial active version id instead of hardcoding `"main"`:

```ts
const published = await lix.activeVersionId();
```

Create versions with names from the user's domain:

```ts
const marketing = await lix.createVersion({ name: "Marketing edit" });
const legal = await lix.createVersion({ name: "Legal review" });
```

Every registered schema `X` gets a sibling table `X_by_version` with `lixcol_version_id`. Use it for side-by-side reads and for writes to non-active versions.

```ts
await lix.execute(
  `UPDATE acme_note_by_version SET title = $1
   WHERE id = $2 AND lixcol_version_id = $3`,
  ["Sharper launch copy", "n1", marketing.id],
);

const sideBySide = await lix.execute(
  `SELECT v.name, n.title
   FROM acme_note_by_version n
   JOIN lix_version v ON v.id = n.lixcol_version_id
   WHERE n.id = $1 AND n.lixcol_version_id IN ($2, $3)
   ORDER BY v.name`,
  ["n1", published, marketing.id],
);
```

Rules for `_by_version`:

- Reads filter by `lixcol_version_id`, or omit the filter to scan all versions.
- INSERTs require `lixcol_version_id`.
- UPDATEs and DELETEs must include `lixcol_version_id` in the WHERE clause.
- The non-suffixed table is the active-version view.

`switchVersion()` is for app code with a current working version concept. `mergeVersion()` always merges into the active version, so switch first if you need a different target.

## Merging

`mergeVersion()` merges the source version into the currently active version:

```ts
try {
  const merge = await lix.mergeVersion({ sourceVersionId: marketing.id });
  console.log(merge.outcome, merge.changeStats.total);
} catch (error) {
  console.error("Merge conflict", error);
}
```

Common outcomes:

- `"alreadyUpToDate"` - source has no commits the target lacks.
- `"fastForward"` - target advanced to source without a merge commit.
- `"mergeCommitted"` - a new merge commit was created.

`mergeVersionPreview()` reports the same merge decision without advancing refs, staging changes, or creating commits. Merge conflicts are returned as preview data.

Conflicts throw from `mergeVersion()`. If both versions modified the same entity since their merge base, Lix raises a `LixError`. Conflict detection is row-level today, not field-level. To reproduce a conflict in a demo, fork all contending versions from the same base before merging any of them.
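A dry-run sketch with `mergeVersionPreview()`; the result mirrors `mergeVersion()` (outcome plus change stats) and adds a `conflicts` array, as exercised in the SDK's own tests:

```ts
// Preview the merge decision without advancing refs or creating commits.
const preview = await lix.mergeVersionPreview({
  sourceVersionId: marketing.id,
});

console.log(preview.outcome); // "alreadyUpToDate" | "fastForward" | "mergeCommitted"
console.log(preview.changeStats.total); // changes the merge would apply
if (preview.conflicts.length > 0) {
  // Same-entity edits since the merge base; surface these before calling mergeVersion().
  console.log(preview.conflicts);
}
```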
## Demo Pattern To Imitate

For richer demos, show these four things:

1. Isolation: one SELECT against `_by_version` shows several versions side by side.
2. Clean parallel merges: two reviewers edit different entities and both land.
3. Audit history: `lix_change` is queryable SQL.
4. Conflict handling: two versions edit the same entity and `mergeVersion()` throws.

Shape the domain as a collection of small entities:

- Good: brochure sections, document blocks, paragraph rows, message keys, line items.
- Risky: one huge document row with many editable fields.

Demo recipe:

1. Register a schema such as `acme_section`.
2. Seed several rows in the published version.
3. Create all reviewer versions up front from the same base.
4. Write each reviewer's changes through `acme_section_by_version`.
5. Read side by side by joining `acme_section_by_version` to `lix_version`.
6. Merge non-overlapping row edits successfully.
7. Query `lix_change`.
8. Catch the deliberate same-row conflict.

## Files With `lix_file`

`lix_file` stores files as versioned bytes. Parent directories are created automatically.

```ts
await lix.execute("INSERT INTO lix_file (id, path, data) VALUES ($1, $2, $3)", [
  "file-readme",
  "/docs/readme.md",
  new TextEncoder().encode("# Hello\n"),
]);

const result = await lix.execute(
  "SELECT path, data FROM lix_file WHERE id = $1",
  ["file-readme"],
);
const file = result.rows[0]!;
console.log(
  file.value("path").asText(),
  new TextDecoder().decode(file.value("data").asBlob()!),
);
```

Columns consumers usually need:

| Column     | What it is                                                             |
| ---------- | ---------------------------------------------------------------------- |
| `id`       | Stable identity for the file.                                          |
| `path`     | Absolute path like `/docs/readme.md`.                                  |
| `data`     | File contents as bytes.                                                |
| `hidden`   | UI hint; does not affect storage.                                      |
| `lixcol_*` | Version/change metadata, including `lixcol_version_id` where exposed.  |

`lix_file_by_version` exists for cross-version file reads and writes. Files-as-parsed-entities are product direction, not current JS SDK behavior.

## The Change Journal

`lix_change` is an immutable SQL table of changes across registered schemas and versions. Use it for audit logs, blame, history, activity feeds, and undo-style UI. Important columns include `id`, `entity_id`, `schema_key`, `snapshot_content`, `created_at`, and `lixcol_*` metadata.

```ts
// Audit log for one entity, oldest to newest.
await lix.execute(
  `SELECT created_at, snapshot_content FROM lix_change
   WHERE schema_key = $1 AND entity_id = $2
   ORDER BY created_at`,
  ["acme_note", "n1"],
);

// Latest activity across a schema.
await lix.execute(
  `SELECT created_at, entity_id, snapshot_content FROM lix_change
   WHERE schema_key = $1
   ORDER BY created_at DESC
   LIMIT 20`,
  ["acme_note"],
);
```

`snapshot_content` can be null or absent for tombstones, removals, or rows where content was not materialized. In the JS SDK, read it with `row.value("snapshot_content").asJson()` or `row.get("snapshot_content")`, then handle null. Do not blindly `JSON.parse` it as text.
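A null-safe read of `snapshot_content`, following the accessor rules above; the schema key and entity id are illustrative:

```ts
const changes = await lix.execute(
  `SELECT created_at, snapshot_content FROM lix_change
   WHERE schema_key = $1 AND entity_id = $2
   ORDER BY created_at`,
  ["acme_note", "n1"],
);

for (const change of changes.rows) {
  // asJson() returns undefined when the cell is not JSON (e.g. a SQL NULL tombstone).
  const snapshot = change.value("snapshot_content").asJson() ?? null;
  if (snapshot === null) {
    console.log(change.get("created_at"), "removed");
  } else {
    console.log(change.get("created_at"), snapshot);
  }
}
```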
## Built-In Tables And UDFs

Common tables:

| Table                   | What it gives consumers                                                                                 |
| ----------------------- | ------------------------------------------------------------------------------------------------------- |
| `lix_version`           | Writable version surface: `id`, `name`, `hidden`, `commit_id`.                                          |
| `lix_change`            | Immutable change journal.                                                                               |
| `lix_file`              | Versioned byte storage for files.                                                                       |
| `lix_registered_schema` | Registry of app schemas plus built-ins; also exposes the Lix schema-definition meta-schema at runtime.  |

`lix_version` can be updated for admin flows:

```ts
await lix.execute("UPDATE lix_version SET hidden = true WHERE id = $1", [
  marketing.id,
]);
```

There is no documented `deleteVersion()` helper in this preview. If the product wants reversible cleanup, hide the version. If it wants removal, `DELETE FROM lix_version WHERE id = $1` is the SQL surface; the engine rejects deleting the global version and active version.

Use `lix_json($1)` to parse JSON text parameters when writing JSON-typed columns:

```ts
await lix.execute(
  "INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))",
  [JSON.stringify(schema)],
);
```

Other UDFs, such as `lix_json_get`, `lix_uuid_v7`, `lix_text_encode`, and `lix_empty_blob`, live in `dist-engine-src/src/sql2/udfs/` in the installed package.

## Do And Avoid

| Do                                                               | Avoid                                                       |
| ---------------------------------------------------------------- | ----------------------------------------------------------- |
| Use `createBetterSqlite3Backend({ path })` for persistent state. | Opening `.lix` files with raw SQLite libraries.             |
| Use public imports from `@lix-js/sdk` and `@lix-js/sdk/sqlite`.  | Importing `engine-wasm` or private internals.               |
| Use `$1`, `$2`, `$3` placeholders.                               | Bare `?` placeholders.                                      |
| Use `lix_json($1)` for JSON parameters.                          | Inlining stringified JSON directly into SQL.                |
| Use `_by_version` for cross-version reads/writes.                | Switching versions just to render a side-by-side view.      |
| Name versions in user vocabulary.                                | User-facing words like branch, branch-1, or generic Draft.  |
| Model collaborative data as small rows.                          | One giant row when multiple reviewers edit different parts. |
| Add `x-lix-unique` for non-primary unique fields.                | Assuming JSON Schema property metadata creates uniqueness.  |
| Read `snapshot_content` as JSON/native and handle null.          | Blindly `JSON.parse(row.value(...).asText())`.              |
| Wrap `mergeVersion()` in `try/catch`.                            | Assuming merges cannot conflict.                            |

## Reporting SDK Friction

If you encounter an SDK bug, missing API, confusing error, documentation gap, or large implementation friction while using this skill, pause and ask the user whether they want you to open a GitHub issue via the `gh` CLI installed on their computer. Do not file an issue without confirmation. Before filing, scan existing issues to avoid duplicates.

If the user approves a report, include a minimal reproduction, expected behavior, actual behavior, the installed `@lix-js/sdk` version, runtime details, and relevant error output. Do not include private data, customer content, credentials, tokens, local paths, database contents, or proprietary schemas.
================================================ FILE: packages/js-sdk/package.json ================================================ { "name": "@lix-js/sdk", "type": "module", "version": "0.6.0-preview.2", "main": "./dist/index.js", "types": "./dist/index.d.ts", "files": [ "dist", "dist-engine-src", "SKILL.md" ], "exports": { ".": { "types": "./dist/index.d.ts", "default": "./dist/index.js" }, "./sqlite": { "types": "./dist/sqlite/index.d.ts", "default": "./dist/sqlite/index.js" } }, "description": "WASM-backed JS SDK wrapper for Lix", "scripts": { "build": "node ./scripts/build.js", "sync:builtin-schemas": "node ./scripts/sync-builtin-schemas.js", "sync:engine-src": "node ./scripts/sync-engine-src.js", "prepack": "node ./scripts/sync-engine-src.js", "typecheck": "pnpm run sync:builtin-schemas && tsc -p tsconfig.json --noEmit", "test": "node ./scripts/build.js && vitest run", "test:watch": "node ./scripts/build.js && vitest" }, "peerDependencies": { "better-sqlite3": "^12.9.0" }, "peerDependenciesMeta": { "better-sqlite3": { "optional": true } }, "devDependencies": { "better-sqlite3": "^12.9.0", "typescript": "^5.5.4", "vitest": "^4.0.18" }, "nx": { "targets": { "build": { "inputs": [ "default", "^default", "publicEnv", "nodeVersion", "platform", "{workspaceRoot}/Cargo.toml", "{workspaceRoot}/Cargo.lock", "{workspaceRoot}/packages/engine/**/*", "{workspaceRoot}/packages/rs-sdk/**/*", "{workspaceRoot}/packages/js-sdk/Cargo.toml", "{workspaceRoot}/packages/js-sdk/wasm-bindgen.rs" ], "outputs": [ "{projectRoot}/dist", "{projectRoot}/dist-engine-src", "{projectRoot}/src/engine-wasm/wasm" ] } } } } ================================================ FILE: packages/js-sdk/scripts/build.js ================================================ #!/usr/bin/env node import { spawn } from "node:child_process"; import { dirname, join } from "node:path"; import { fileURLToPath } from "node:url"; import { cp, mkdir, readFile, rename, rm, writeFile } from "node:fs/promises"; const __dirname = dirname(fileURLToPath(import.meta.url)); const repoRoot = join(__dirname, "..", "..", ".."); const jsSdkDir = join(repoRoot, "packages", "js-sdk"); const wasmProfile = process.env.LIX_WASM_PROFILE ?? "release"; const useWasmSizeOptimizations = wasmProfile === "release" && process.env.LIX_WASM_SIZE_OPT !== "0"; const targetDir = join( repoRoot, "target", "wasm32-unknown-unknown", wasmProfile, ); const engineWasmPath = join(targetDir, "lix_engine_wasm_bindgen.wasm"); const engineOutDir = join(jsSdkDir, "src", "engine-wasm", "wasm"); const engineDistOutDir = join(jsSdkDir, "dist", "engine-wasm", "wasm"); const distDir = join(jsSdkDir, "dist"); const wasmBindgenOutName = "lix_engine"; function run(cmd, args, opts = {}) { return new Promise((resolve, reject) => { const child = spawn(cmd, args, { stdio: "inherit", ...opts }); child.on("error", reject); child.on("exit", (code) => { if (code === 0) resolve(); else reject(new Error(`${cmd} exited with code ${code ?? 1}`)); }); }); } async function buildEngineWasm() { const existingRustFlags = process.env.RUSTFLAGS ?? ""; const wasmSizeRustFlags = useWasmSizeOptimizations ? 
" -C opt-level=z -C lto=fat -C embed-bitcode=yes -C codegen-units=1 -C panic=abort" : ""; const wasmRustFlags = `${existingRustFlags} --cfg getrandom_backend="wasm_js"${wasmSizeRustFlags}`.trim(); const cargoArgs = [ "build", "-p", "lix_engine_wasm_bindgen", "--target", "wasm32-unknown-unknown", ]; if (wasmProfile === "release") { cargoArgs.push("--release"); } await run("cargo", cargoArgs, { env: { ...process.env, RUSTFLAGS: wasmRustFlags, }, }); await rm(engineOutDir, { recursive: true, force: true }); await run("wasm-bindgen", [ engineWasmPath, "--target", "web", "--out-dir", engineOutDir, "--out-name", wasmBindgenOutName, ]); await normalizeWasmBindgenOutput(engineOutDir); await stripWasmCustomSections(engineOutDir); await mkdir(engineDistOutDir, { recursive: true }); await cp(engineOutDir, engineDistOutDir, { recursive: true, force: true }); } async function normalizeWasmBindgenOutput(outputDir) { const generatedWasm = join(outputDir, `${wasmBindgenOutName}_bg.wasm`); const generatedWasmTypes = join(outputDir, `${wasmBindgenOutName}_bg.wasm.d.ts`); const normalizedWasm = join(outputDir, `${wasmBindgenOutName}.wasm`); const normalizedWasmTypes = join(outputDir, `${wasmBindgenOutName}.wasm.d.ts`); const fsmod = await import("node:fs"); if (fsmod.existsSync(generatedWasm)) await rename(generatedWasm, normalizedWasm); if (fsmod.existsSync(generatedWasmTypes)) await rename(generatedWasmTypes, normalizedWasmTypes); const jsPath = join(outputDir, `${wasmBindgenOutName}.js`); const js = await readFile(jsPath, "utf8"); await writeFile( jsPath, js.replaceAll(`${wasmBindgenOutName}_bg.wasm`, `${wasmBindgenOutName}.wasm`), ); } async function stripWasmCustomSections(outputDir) { const wasmPath = join(outputDir, `${wasmBindgenOutName}.wasm`); const strippedWasmPath = join(outputDir, `${wasmBindgenOutName}.stripped.wasm`); await run("wasm-tools", ["strip", "--all", wasmPath, "-o", strippedWasmPath]); await rename(strippedWasmPath, wasmPath); } async function syncBuiltinSchemas() { await run("node", ["./scripts/sync-builtin-schemas.js"], { cwd: jsSdkDir }); } async function syncEngineSource() { await run("node", ["./scripts/sync-engine-src.js"], { cwd: jsSdkDir }); } async function buildTypescriptDist() { await run("tsc", ["-p", "tsconfig.json"], { cwd: jsSdkDir }); } async function main() { await rm(distDir, { recursive: true, force: true }); await syncBuiltinSchemas(); await syncEngineSource(); await buildEngineWasm(); await buildTypescriptDist(); } main().catch((error) => { console.error("[build-wasm] Failed to generate wasm payloads:\n", error); process.exit(1); }); ================================================ FILE: packages/js-sdk/scripts/sync-builtin-schemas.js ================================================ #!/usr/bin/env node import { readdir, readFile, writeFile, mkdir } from "node:fs/promises"; import { dirname, join, extname, basename } from "node:path"; import { fileURLToPath } from "node:url"; const __dirname = dirname(fileURLToPath(import.meta.url)); const repoRoot = join(__dirname, "..", "..", ".."); const engineBuiltinDir = join( repoRoot, "packages", "engine", "src", "schema", "builtin", ); const outDir = join(repoRoot, "packages", "js-sdk", "src", "generated"); const outFile = join(outDir, "builtin-schemas.ts"); const toPascal = (value) => value .split("_") .filter(Boolean) .map((part) => part[0].toUpperCase() + part.slice(1)) .join(""); async function main() { const entries = await readdir(engineBuiltinDir, { withFileTypes: true }); const jsonFiles = entries .filter((entry) 
=> entry.isFile() && extname(entry.name) === ".json") .map((entry) => entry.name) .sort(); const exportBlocks = []; for (const file of jsonFiles) { const schemaBase = basename(file, ".json"); const exportName = `${toPascal(schemaBase)}Schema`; const raw = await readFile(join(engineBuiltinDir, file), "utf8"); const parsed = JSON.parse(raw); exportBlocks.push( `export const ${exportName} = ${JSON.stringify(parsed, null, 2)} as const;`, ); } const content = `// AUTO-GENERATED by scripts/sync-builtin-schemas.js\n// Source of truth: packages/engine/src/schema/builtin/*.json\n\n${exportBlocks.join("\n\n")}\n`; await mkdir(outDir, { recursive: true }); await writeFile(outFile, content); } main().catch((error) => { console.error("[sync-builtin-schemas] failed", error); process.exit(1); }); ================================================ FILE: packages/js-sdk/scripts/sync-engine-src.js ================================================ #!/usr/bin/env node import { cp, mkdir, rm, writeFile } from "node:fs/promises"; import { dirname, join } from "node:path"; import { fileURLToPath } from "node:url"; const __dirname = dirname(fileURLToPath(import.meta.url)); const repoRoot = join(__dirname, "..", "..", ".."); const jsSdkDir = join(repoRoot, "packages", "js-sdk"); const engineSrcDir = join(repoRoot, "packages", "engine", "src"); const bundledDir = join(jsSdkDir, "dist-engine-src"); const bundledSrcDir = join(bundledDir, "src"); async function main() { await rm(bundledDir, { recursive: true, force: true }); await mkdir(bundledDir, { recursive: true }); await cp(engineSrcDir, bundledSrcDir, { recursive: true, force: true, }); await writeFile( join(bundledDir, "README.md"), [ "# Bundled Lix Engine Source", "", "This directory is a generated snapshot of the Rust engine source that backs this @lix-js/sdk release.", "", "Source in the Lix monorepo: `packages/engine/src`", "", "Agents should inspect these files when SDK behavior is unclear instead of relying only on SKILL.md prose.", "", "Useful entry points:", "", "- `src/sql2/entity_provider.rs` - registered schema SQL surfaces", "- `src/sql2/change_provider.rs` - `lix_change` projection", "- `src/sql2/version_provider.rs` - writable `lix_version` surface", "- `src/transaction/validation.rs` - primary-key, unique, foreign-key, and shape validation", "- `src/schema/definition.json` - Lix schema-definition meta-schema", "- `src/schema/builtin/` - built-in schema definitions", "", "Regenerate with `pnpm --filter @lix-js/sdk sync:engine-src` from the repo root.", "", ].join("\n"), ); } main().catch((error) => { console.error("[sync-engine-src] Failed to sync engine source:\n", error); process.exit(1); }); ================================================ FILE: packages/js-sdk/src/builtin-schemas.ts ================================================ export * from "./generated/builtin-schemas.js"; ================================================ FILE: packages/js-sdk/src/engine-wasm/index.ts ================================================ export { default } from "./wasm/lix_engine.js"; export * from "./wasm/lix_engine.js"; import type { InitInput } from "./wasm/lix_engine.js"; export type JsonValue = | null | boolean | number | string | JsonValue[] | { [key: string]: JsonValue }; export type ValueKind = | "null" | "boolean" | "integer" | "real" | "text" | "json" | "blob"; export type LixValue = | { kind: "null"; value: null } | { kind: "boolean"; value: boolean } | { kind: "integer"; value: number } | { kind: "real"; value: number } | { kind: "text"; value: string 
} | { kind: "json"; value: JsonValue } | { kind: "blob"; base64: string }; export class Value { kind: ValueKind; value: null | boolean | number | string | JsonValue | undefined; base64: string | undefined; constructor( kind: ValueKind, value: null | boolean | number | string | JsonValue | undefined, base64?: string, ) { this.kind = kind; this.value = value; this.base64 = base64; } static null(): Value { return new Value("null", null); } static integer(value: number): Value { if (!Number.isFinite(value) || !Number.isInteger(value)) { throw new TypeError("Value.integer() requires a finite integer number"); } return new Value("integer", value); } static boolean(value: boolean): Value { return new Value("boolean", value); } static real(value: number): Value { if (!Number.isFinite(value)) { throw new TypeError("Value.real() requires a finite number"); } return new Value("real", value); } static text(value: string): Value { if (!isWellFormedUtf16(value)) { throw new TypeError("Value.text() requires a well-formed UTF-16 string"); } return new Value("text", value); } static json(value: JsonValue): Value { return new Value("json", normalizeJsonValue(value)); } static blob(value: Uint8Array): Value { return new Value("blob", undefined, bytesToBase64(value)); } static from(raw: unknown): Value { if (raw instanceof Value) return raw; if (isLixValue(raw)) { switch (raw.kind) { case "null": return Value.null(); case "boolean": return Value.boolean(raw.value); case "integer": return Value.integer(raw.value); case "real": return Value.real(raw.value); case "text": return Value.text(raw.value); case "json": return Value.json(normalizeJsonValue(raw.value)); case "blob": return new Value("blob", undefined, raw.base64); } } if (raw === null) return Value.null(); if (raw === undefined) { throw new TypeError("undefined is not a valid SQL parameter"); } if (typeof raw === "number") { return Number.isInteger(raw) ? Value.integer(raw) : Value.real(raw); } if (typeof raw === "boolean") return Value.boolean(raw); if (typeof raw === "string") return Value.text(raw); if (raw instanceof Uint8Array) return Value.blob(raw); if (raw instanceof ArrayBuffer) return Value.blob(new Uint8Array(raw)); if (ArrayBuffer.isView(raw)) { throw new TypeError( "typed array SQL parameters must be Uint8Array; other ArrayBuffer views are ambiguous", ); } if (raw instanceof Date) { throw new TypeError( "Date is not a valid SQL parameter; pass date.toISOString() or date.getTime() explicitly", ); } if (raw && typeof raw === "object") { return Value.json(normalizeJsonValue(raw)); } throw new TypeError( "Value.from() requires a LixValue, JSON value, or binary value", ); } asInteger(): number | undefined { return this.kind === "integer" ? (this.value as number) : undefined; } asBoolean(): boolean | undefined { return this.kind === "boolean" ? (this.value as boolean) : undefined; } asReal(): number | undefined { return this.kind === "real" ? (this.value as number) : undefined; } asText(): string | undefined { return this.kind === "text" ? (this.value as string) : undefined; } asJson(): JsonValue | undefined { return this.kind === "json" ? normalizeJsonValue(this.value) : undefined; } asBlob(): Uint8Array | undefined { return this.kind === "blob" && this.base64 !== undefined ? base64ToBytes(this.base64) : undefined; } toJSON(): LixValue { switch (this.kind) { case "null": return { kind: "null", value: null }; case "boolean": return { kind: "boolean", value: this.asBoolean() ?? 
false }; case "integer": return { kind: "integer", value: this.asInteger() ?? 0 }; case "real": return { kind: "real", value: this.asReal() ?? 0 }; case "text": return { kind: "text", value: this.asText() ?? "" }; case "json": return { kind: "json", value: this.asJson() ?? null }; case "blob": return { kind: "blob", base64: this.base64 ?? "" }; } } } export type ExecuteResult = { columns: string[]; rows: LixValue[][]; rowsAffected: number; notices: LixNotice[]; }; export type LixNotice = { code: string; message: string; hint?: string; }; /** * Error thrown by the Lix engine. Extends the standard `Error` with a * machine-readable `code`, optional `hint`, and optional structured `details`. * * Hints follow the Postgres/rustc convention: `message` states what went * wrong in factual terms; `hint` offers a fix when one is known. Consumers * typically render the hint alongside the primary message (e.g. as * `hint: ` in a CLI, secondary text in a UI). */ export interface LixError extends Error { code: string; hint?: string; details?: unknown; } type Assert = T; type _LixErrorHasDetails = Assert< LixError extends { details?: unknown } ? true : false >; type _LixErrorDoesNotHaveData = Assert< "data" extends keyof LixError ? false : true >; type _LixErrorDoesNotHaveDescription = Assert< "description" extends keyof LixError ? false : true >; /** * Type guard: returns `true` when `err` is a Lix-produced error carrying a * structured `code` field (all engine codes start with `LIX_`). */ export function isLixError(err: unknown): err is LixError { return ( err instanceof Error && typeof (err as Partial).code === "string" && (err as LixError).code.startsWith("LIX_") ); } function isLixValue(value: unknown): value is LixValue { if (!value || typeof value !== "object") { return false; } const kind = (value as { kind?: unknown }).kind; if (kind === "null") { return (value as { value?: unknown }).value === null; } if (kind === "boolean") { return typeof (value as { value?: unknown }).value === "boolean"; } if (kind === "integer" || kind === "real") { const raw = (value as { value?: unknown }).value; if (typeof raw !== "number" || !Number.isFinite(raw)) { return false; } if (kind === "integer" && !Number.isInteger(raw)) { return false; } return true; } if (kind === "text") { const raw = (value as { value?: unknown }).value; return typeof raw === "string" && isWellFormedUtf16(raw); } if (kind === "json") { return isJsonValue((value as { value?: unknown }).value); } if (kind === "blob") { return typeof (value as { base64?: unknown }).base64 === "string"; } return false; } function isJsonValue(value: unknown): value is JsonValue { try { normalizeJsonValue(value); return true; } catch { return false; } } function normalizeJsonValue(value: unknown, seen = new WeakSet()): JsonValue { if ( value === null || typeof value === "boolean" ) { return value; } if (typeof value === "string") { if (!isWellFormedUtf16(value)) { throw new TypeError("JSON strings must be well-formed UTF-16"); } return value; } if (typeof value === "number") { if (!Number.isFinite(value)) { throw new TypeError("JSON numbers must be finite"); } return value; } if (Array.isArray(value)) { if (seen.has(value)) { throw new TypeError("JSON values must not contain circular references"); } seen.add(value); const normalized = value.map((item) => normalizeJsonValue(item, seen)); seen.delete(value); return normalized; } if (!value || typeof value !== "object") { throw new TypeError("expected a JSON-compatible value"); } if (value instanceof Date) { throw 
new TypeError("Date is not a JSON value"); } const prototype = Object.getPrototypeOf(value); if (prototype !== Object.prototype && prototype !== null) { throw new TypeError("JSON objects must be plain objects"); } if (seen.has(value)) { throw new TypeError("JSON values must not contain circular references"); } seen.add(value); const normalized: { [key: string]: JsonValue } = {}; for (const [key, entry] of Object.entries(value)) { if (!isWellFormedUtf16(key)) { throw new TypeError("JSON object keys must be well-formed UTF-16"); } normalized[key] = normalizeJsonValue(entry, seen); } seen.delete(value); return normalized; } function isWellFormedUtf16(value: string): boolean { for (let index = 0; index < value.length; index += 1) { const code = value.charCodeAt(index); if (code >= 0xd800 && code <= 0xdbff) { const next = value.charCodeAt(index + 1); if (next < 0xdc00 || next > 0xdfff) { return false; } index += 1; continue; } if (code >= 0xdc00 && code <= 0xdfff) { return false; } } return true; } function bytesToBase64(bytes: Uint8Array): string { const maybeBuffer = ( globalThis as { Buffer?: { from(value: Uint8Array): { toString(encoding: string): string }; }; } ).Buffer; if (maybeBuffer) { return maybeBuffer.from(bytes).toString("base64"); } let binary = ""; const chunkSize = 0x8000; for (let index = 0; index < bytes.length; index += chunkSize) { const chunk = bytes.subarray(index, index + chunkSize); binary += String.fromCharCode(...chunk); } return btoa(binary); } function base64ToBytes(base64: string): Uint8Array { const maybeBuffer = ( globalThis as { Buffer?: { from(value: string, encoding: string): Uint8Array; }; } ).Buffer; if (maybeBuffer) { return new Uint8Array(maybeBuffer.from(base64, "base64")); } const binary = atob(base64); const bytes = new Uint8Array(binary.length); for (let index = 0; index < binary.length; index += 1) { bytes[index] = binary.charCodeAt(index); } return bytes; } const engineWasmUrl = new URL( "./wasm/lix_engine.wasm", import.meta.url, ); function isNodeRuntime(): boolean { const processLike = ( globalThis as { process?: { versions?: { node?: string } } } ).process; return ( !!processLike && typeof processLike.versions === "object" && !!processLike.versions?.node ); } async function tryReadNodeFileFromViteHttpUrl( url: URL, ): Promise { if (url.protocol !== "http:" && url.protocol !== "https:") { return undefined; } // Vitest/Vite in Node often rewrites module URLs to http://localhost with /@fs/. const decodedPathname = decodeURIComponent(url.pathname); let filePath: string | undefined; if (decodedPathname.startsWith("/@fs/")) { filePath = decodedPathname.slice("/@fs".length); } else if ( url.hostname === "localhost" || url.hostname === "127.0.0.1" || url.hostname === "::1" ) { // Some setups expose absolute filesystem paths directly on localhost. filePath = decodedPathname; } if (!filePath) { return undefined; } const fsModuleName = "node:fs/promises"; const { readFile } = await import(fsModuleName); try { return new Uint8Array(await readFile(filePath)); } catch { return undefined; } } /** * Returns a wasm-bindgen-compatible init input that works in both browser and Node. * * - Browser: use a URL so the runtime fetches the `.wasm` asset. * - Node: read bytes from disk because `fetch(file://...)` is not supported. 
*/ export async function resolveEngineWasmModuleOrPath(): Promise { if (!isNodeRuntime()) { return engineWasmUrl; } if (engineWasmUrl.protocol === "file:") { const fsModuleName = "node:fs/promises"; const urlModuleName = "node:url"; const [{ readFile }, { fileURLToPath }] = await Promise.all([ import(fsModuleName), import(urlModuleName), ]); return readFile(fileURLToPath(engineWasmUrl)); } if ( engineWasmUrl.protocol === "http:" || engineWasmUrl.protocol === "https:" ) { const localBytes = await tryReadNodeFileFromViteHttpUrl(engineWasmUrl); if (localBytes) { return localBytes; } const response = await fetch(engineWasmUrl); if (!response.ok) { throw new Error( `failed to fetch wasm module from '${engineWasmUrl.toString()}': ${response.status} ${response.statusText}`, ); } return new Uint8Array(await response.arrayBuffer()); } return engineWasmUrl; } ================================================ FILE: packages/js-sdk/src/engine-wasm/value.test.ts ================================================ import { expect, test } from "vitest"; import { Value } from "./index.js"; test("Value.asBlob returns empty Uint8Array for canonical empty blob", () => { const decoded = Value.from({ kind: "blob", base64: "" }).asBlob(); expect(decoded).toBeInstanceOf(Uint8Array); expect(decoded?.byteLength).toBe(0); }); test("Value.asBlob roundtrips non-empty canonical blob", () => { const decoded = Value.from({ kind: "blob", base64: "AQID" }).asBlob(); expect(decoded).toEqual(new Uint8Array([1, 2, 3])); }); ================================================ FILE: packages/js-sdk/src/index.ts ================================================ export * from "./open-lix.js"; export * from "./builtin-schemas.js"; export { Value, isLixError } from "./engine-wasm/index.js"; export type { LixError, LixValue } from "./engine-wasm/index.js"; export type { JsonValue, LixRuntimeValue } from "./types.js"; ================================================ FILE: packages/js-sdk/src/open-lix.test.ts ================================================ import { execFile } from "node:child_process"; import { promisify } from "node:util"; import { fileURLToPath } from "node:url"; import { expect, test } from "vitest"; import { openLix, Value, type BackendKvEntryPage, type BackendKvExistsBatch, type BackendKvGetRequest, type BackendKvKeyPage, type BackendKvScanRange, type BackendKvScanRequest, type BackendKvValueBatch, type BackendKvValuePage, type BackendKvWriteBatch, type BackendKvWriteStats, type ExecuteResult, type LixBackend, type LixBackendReadTransaction, type LixBackendWriteTransaction, type LixError, type Lix, isLixError, } from "./index.js"; const execFileAsync = promisify(execFile); const jsSdkRoot = fileURLToPath(new URL("..", import.meta.url)); test("openLix exposes the rs-sdk e2e flow", async () => { const lix = await openLix(); const mainVersionId = await lix.activeVersionId(); await registerCrmTaskSchema(lix); await lix.execute( "INSERT INTO crm_task (id, title, done, meta) VALUES ($1, $2, $3, lix_json($4))", [ "task-1", "Draft JS SDK flow", false, JSON.stringify({ priority: "high", tags: ["sdk", "json"] }), ], ); const projected = await lix.execute( "SELECT title, meta FROM crm_task WHERE id = $1", ["task-1"], ); const projectedRow = projected.rows[0]!; expect(projectedRow.get("title")).toBe("Draft JS SDK flow"); expect(projectedRow.value("title")).toBeInstanceOf(Value); expect(projectedRow.get("meta")).toEqual({ priority: "high", tags: ["sdk", "json"], }); expect(projectedRow.value("meta").kind).toBe("json"); 
expect(projectedRow.value("meta").asJson()).toEqual({ priority: "high", tags: ["sdk", "json"], }); expect(projectedRow.toObject()).toEqual({ title: "Draft JS SDK flow", meta: { priority: "high", tags: ["sdk", "json"] }, }); expect(projectedRow.toValueMap().title).toBeInstanceOf(Value); expect(() => projectedRow.get("missing")).toThrow( /Available columns: title, meta/, ); expect(await taskDone(lix, "task-1")).toBe(false); const mainHead = await lix.execute("SELECT lix_active_version_commit_id()"); const mainHeadCommitId = mainHead.rows[0]!.get("lix_active_version_commit_id()"); expect(typeof mainHeadCommitId).toBe("string"); const draft = await lix.createVersion({ id: "draft-version", name: "Draft", }); expect(draft).toMatchObject({ id: "draft-version", name: "Draft", hidden: false, commitId: mainHeadCommitId, }); await lix.switchVersion({ versionId: draft.id }); await lix.execute("UPDATE crm_task SET done = $1 WHERE id = $2", [ true, "task-1", ]); expect(await taskDone(lix, "task-1")).toBe(true); await lix.switchVersion({ versionId: mainVersionId }); expect(await taskDone(lix, "task-1")).toBe(false); const preview = await lix.mergeVersionPreview({ sourceVersionId: draft.id, }); expect(preview.outcome).toBe("fastForward"); expect(preview.targetVersionId).toBe(mainVersionId); expect(preview.sourceVersionId).toBe(draft.id); expect(preview.changeStats).toEqual({ total: 1, added: 0, modified: 1, removed: 0, }); expect(preview.conflicts).toEqual([]); expect(await taskDone(lix, "task-1")).toBe(false); const merge = await lix.mergeVersion({ sourceVersionId: draft.id, }); expect(merge.outcome).toBe("fastForward"); expect(merge.targetVersionId).toBe(mainVersionId); expect(merge.changeStats).toEqual({ total: 1, added: 0, modified: 1, removed: 0, }); expect(merge.createdMergeCommitId).toBeNull(); expect(await taskDone(lix, "task-1")).toBe(true); await lix.close(); await lix.close(); await expect(lix.activeVersionId()).rejects.toMatchObject({ code: "LIX_ERROR_CLOSED", }); await expect(lix.execute("SELECT 1")).rejects.toMatchObject({ code: "LIX_ERROR_CLOSED", }); }); test("openLix accepts an explicit backend", async () => { const backend = createMemoryBackend(); const first = await openLix({ backend }); await registerCrmTaskSchema(first); await first.execute( "INSERT INTO crm_task (id, title, done, meta) VALUES ($1, $2, $3, lix_json($4))", [ "backend-task", "Stored through explicit backend", false, JSON.stringify({ priority: "normal" }), ], ); await first.close(); const second = await openLix({ backend }); expect(await taskDone(second, "backend-task")).toBe(false); await second.close(); }); test("execute supports UNION ALL without trapping wasm", async () => { const lix = await openLix(); const result = await lix.execute("SELECT 1 UNION ALL SELECT 2"); expect(result.rows.map((row) => row.get("Int64(1)"))).toEqual([1, 2]); await lix.close(); }); test("unsupported UNION DISTINCT returns a JS error without trapping wasm", async () => { const { stdout } = await execFileAsync( process.execPath, [ "--input-type=module", "-e", ` import { openLix } from './dist/index.js'; const lix = await openLix(); try { await lix.execute('SELECT 1 UNION SELECT 1'); console.log('unexpected-success'); } catch (error) { console.log(error.code, error.message); } finally { await lix.close().catch(() => {}); } `, ], { cwd: jsSdkRoot }, ); expect(stdout).toContain("LIX_UNSUPPORTED_SQL_RUNTIME_PLAN"); expect(stdout).toContain("CoalescePartitionsExec"); }); test("INSERT SELECT UNION ALL executes without trapping wasm", async () => { 
const { stdout } = await execFileAsync( process.execPath, [ "--input-type=module", "-e", ` import { openLix } from './dist/index.js'; const lix = await openLix(); try { const result = await lix.execute("INSERT INTO lix_directory (path) SELECT '/u1/' UNION ALL SELECT '/u2/'"); console.log(result.rowsAffected); } finally { await lix.close().catch(() => {}); } `, ], { cwd: jsSdkRoot }, ); expect(stdout.trim()).toBe("2"); }); test("createVersion can start from an explicit commit id", async () => { const lix = await openLix(); await registerCrmTaskSchema(lix); const baseHead = await lix.execute("SELECT lix_active_version_commit_id()"); const fromCommitId = baseHead.rows[0]!.get("lix_active_version_commit_id()"); expect(typeof fromCommitId).toBe("string"); await lix.execute( "INSERT INTO crm_task (id, title, done, meta) VALUES ($1, $2, $3, lix_json($4))", [ "after-base", "Written after base", false, JSON.stringify({ priority: "normal" }), ], ); const version = await lix.createVersion({ id: "from-explicit-commit", name: "From explicit commit", fromCommitId: fromCommitId as string, }); expect(version).toMatchObject({ id: "from-explicit-commit", name: "From explicit commit", hidden: false, commitId: fromCommitId, }); await lix.switchVersion({ versionId: version.id }); const projected = await lix.execute( "SELECT id FROM crm_task WHERE id = $1", ["after-base"], ); expect(projected.rows).toHaveLength(0); await lix.close(); }); test("merge conflicts expose structured details", async () => { const lix = await openLix(); const mainVersionId = await lix.activeVersionId(); await registerCrmTaskSchema(lix); await lix.execute( "INSERT INTO crm_task (id, title, done, meta) VALUES ($1, $2, $3, lix_json($4))", [ "conflict-task", "Base", false, JSON.stringify({ priority: "normal" }), ], ); const draft = await lix.createVersion({ id: "conflict-draft", name: "Conflict draft", }); await lix.switchVersion({ versionId: draft.id }); await lix.execute("UPDATE crm_task SET title = $1 WHERE id = $2", [ "Draft", "conflict-task", ]); await lix.switchVersion({ versionId: mainVersionId }); await lix.execute("UPDATE crm_task SET title = $1 WHERE id = $2", [ "Main", "conflict-task", ]); try { await lix.mergeVersion({ sourceVersionId: draft.id }); throw new Error("expected merge conflict"); } catch (error) { expect(isLixError(error)).toBe(true); if (!isLixError(error)) throw error; expect(error.code).toBe("LIX_MERGE_CONFLICT"); expect(error.message).toContain("tracked-state conflict"); expect(error.details).toBeDefined(); expect((error as LixError & { data?: unknown }).data).toBeUndefined(); expect( "description" in (error as LixError & { description?: unknown }), ).toBe(false); const details = error.details as { conflicts?: Array<{ schemaKey?: string; entityId?: string[]; target?: unknown; source?: unknown; }>; }; expect(details.conflicts).toHaveLength(1); expect(details.conflicts?.[0]).toMatchObject({ schemaKey: "crm_task", entityId: ["conflict-task"], }); expect(details.conflicts?.[0]?.target).toBeDefined(); expect(details.conflicts?.[0]?.source).toBeDefined(); } await lix.close(); }); test("lix.close delegates backend close through the engine bridge", async () => { let closeCount = 0; const backend = { ...createMemoryBackend(), close() { closeCount += 1; }, }; const lix = await openLix({ backend }); await lix.close(); await lix.close(); expect(closeCount).toBe(1); }); test("engine errors expose structured hints", async () => { const lix = await openLix(); try { await lix.execute("SELECT entity_id FROM lix_state_history"); 
throw new Error("expected history query to fail"); } catch (error) { expect(isLixError(error)).toBe(true); if (!isLixError(error)) throw error; expect(error.code).toBe("LIX_HISTORY_FILTER_REQUIRED"); expect(error.hint).toContain("lix_active_version_commit_id()"); } await lix.close(); }); test("execute rejects invalid runtime arguments before wasm", async () => { const lix = await openLix(); const unsafeLix = lix as unknown as { execute(sql: unknown, params?: unknown): Promise; }; await expect(unsafeLix.execute(123, [])).rejects.toMatchObject({ name: "LixError", code: "LIX_INVALID_ARGUMENT", message: "lix.execute() expected sql to be a string", details: { operation: "execute", argument: "sql", expected: "string", actual: "number", }, }); await expect(unsafeLix.execute("SELECT 1", 123)).rejects.toMatchObject({ name: "LixError", code: "LIX_INVALID_ARGUMENT", message: "lix.execute() expected params to be an array", details: { operation: "execute", argument: "params", expected: "array", actual: "number", }, }); await lix.close(); }); test("execute rejects lossy JavaScript parameter coercions", async () => { const lix = await openLix(); const circular: Record = {}; circular.self = circular; const invalidCases: Array<{ name: string; value: unknown; message: string | RegExp; actual?: string; }> = [ { name: "Date", value: new Date("2026-01-02T03:04:05.000Z"), message: /Date is not a valid SQL parameter/, actual: "Date", }, { name: "Int32Array", value: new Int32Array([1, 2, 3]), message: /typed array SQL parameters must be Uint8Array/, actual: "Int32Array", }, { name: "lone surrogate", value: "X\uD83DY", message: /well-formed UTF-16/, actual: "string", }, { name: "undefined", value: undefined, message: /undefined is not a valid SQL parameter/, actual: "undefined", }, { name: "BigInt", value: 10n, message: /requires a LixValue, JSON value, or binary value/, actual: "bigint", }, { name: "NaN", value: Number.NaN, message: /finite number/, actual: "number", }, { name: "Infinity", value: Number.POSITIVE_INFINITY, message: /finite number/, actual: "number", }, { name: "circular object", value: circular, message: /circular references/, actual: "object", }, { name: "Symbol", value: Symbol("x"), message: /requires a LixValue, JSON value, or binary value/, actual: "symbol", }, { name: "function", value: () => undefined, message: /requires a LixValue, JSON value, or binary value/, actual: "function", }, ]; for (const testCase of invalidCases) { try { await lix.execute("SELECT $1 AS v", [testCase.value as never]); throw new Error(`expected ${testCase.name} to fail`); } catch (error) { expect(error, testCase.name).toMatchObject({ name: "LixError", code: "LIX_INVALID_PARAM", details: { operation: "execute", parameter_index: 1, argument: "params[0]", actual: testCase.actual, }, }); if (!(error instanceof Error)) throw error; expect(error.message, testCase.name).toMatch(testCase.message); } } await lix.close(); }); test("execute rejects extra SQL parameters", async () => { const lix = await openLix(); try { await lix.execute("SELECT $1 AS v", [1, 2]); throw new Error("expected extra params to fail"); } catch (error) { expect(error).toMatchObject({ code: "LIX_INVALID_PARAM", details: { operation: "execute", expected_param_count: 1, provided_param_count: 2, placeholders: ["$1"], }, }); if (!(error instanceof Error)) throw error; expect(error.message).toBe( "SQL expected 1 parameter(s), but 2 parameter(s) were provided", ); } await lix.close(); }); test("lix_state_history snapshot_content preserves JSON null for 
binary file rows", async () => { const lix = await openLix(); await lix.execute( "INSERT INTO lix_file (id, path, data, hidden) VALUES ($1, $2, $3, false)", [ "history-binary-js-repro", "/history/repro.bin", new Uint8Array([0x80, 0xff, 0x00]), ], ); const result = await lix.execute( "SELECT schema_key, snapshot_content \ FROM lix_state_history \ WHERE start_commit_id = lix_active_version_commit_id()", ); const directoryRow = result.rows.find( (row) => row.get("schema_key") === "lix_directory_descriptor", ); expect(directoryRow?.get("snapshot_content")).toMatchObject({ parent_id: null, }); await lix.close(); }); async function registerCrmTaskSchema(lix: Lix) { const schema = { $schema: "https://json-schema.org/draft/2020-12/schema", "x-lix-key": "crm_task", "x-lix-primary-key": ["/id"], type: "object", required: ["id", "title", "done", "meta"], properties: { id: { type: "string" }, title: { type: "string" }, done: { type: "boolean" }, meta: { type: "object" }, }, additionalProperties: false, } as const; await lix.execute( "INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))", [JSON.stringify(schema)], ); } async function taskDone(lix: Lix, taskId: string): Promise { const result = await lix.execute( "SELECT done FROM crm_task WHERE id = $1", [taskId], ); const rows = expectRows(result); expect(rows.rows).toHaveLength(1); const done = rows.rows[0]?.get("done"); expect(typeof done).toBe("boolean"); return done as boolean; } function expectRows(result: ExecuteResult) { return result; } type StoredKvPair = { namespace: string; key: Uint8Array; value: Uint8Array; }; function createMemoryBackend(): LixBackend { let rows: StoredKvPair[] = []; function createTransaction(): LixBackendWriteTransaction { let transactionRows = rows.map(cloneStoredPair); let closed = false; const ensureOpen = () => { if (closed) { throw new Error("transaction is closed"); } }; return { getValues(request): BackendKvValueBatch { ensureOpen(); return { groups: request.groups.map((group) => ({ namespace: group.namespace, values: group.keys.map((key) => { const row = transactionRows.find( (row) => row.namespace === group.namespace && compareBytes(row.key, key) === 0, ); return row ? 
new Uint8Array(row.value) : null; }), })), }; }, existsMany(request): BackendKvExistsBatch { ensureOpen(); return { groups: request.groups.map((group) => ({ namespace: group.namespace, exists: group.keys.map((key) => transactionRows.some( (row) => row.namespace === group.namespace && compareBytes(row.key, key) === 0, ), ), })), }; }, scanKeys(request): BackendKvKeyPage { ensureOpen(); const { pairs, resumeAfter } = scanPage(transactionRows, request); return { keys: pairs.map((row) => new Uint8Array(row.key)), resumeAfter, }; }, scanValues(request): BackendKvValuePage { ensureOpen(); const { pairs, resumeAfter } = scanPage(transactionRows, request); return { values: pairs.map((row) => new Uint8Array(row.value)), resumeAfter, }; }, scanEntries(request): BackendKvEntryPage { ensureOpen(); const { pairs, resumeAfter } = scanPage(transactionRows, request); return { keys: pairs.map((row) => new Uint8Array(row.key)), values: pairs.map((row) => new Uint8Array(row.value)), resumeAfter, }; }, writeKvBatch(batch): BackendKvWriteStats { ensureOpen(); const stats: BackendKvWriteStats = { puts: 0, deletes: 0, bytesWritten: 0, }; for (const group of batch.groups) { for (const put of group.puts) { stats.puts += 1; stats.bytesWritten += put.key.length + put.value.length; transactionRows = transactionRows.filter( (row) => row.namespace !== group.namespace || compareBytes(row.key, put.key) !== 0, ); transactionRows.push({ namespace: group.namespace, key: new Uint8Array(put.key), value: new Uint8Array(put.value), }); } for (const key of group.deletes) { stats.deletes += 1; stats.bytesWritten += key.length; transactionRows = transactionRows.filter( (row) => row.namespace !== group.namespace || compareBytes(row.key, key) !== 0, ); } } return stats; }, commit() { ensureOpen(); rows = transactionRows.map(cloneStoredPair); closed = true; }, rollback() { ensureOpen(); closed = true; }, }; } return { beginReadTransaction(): LixBackendReadTransaction { return createTransaction(); }, beginWriteTransaction(): LixBackendWriteTransaction { return createTransaction(); }, }; } function cloneStoredPair(row: StoredKvPair): StoredKvPair { return { namespace: row.namespace, key: new Uint8Array(row.key), value: new Uint8Array(row.value), }; } function scanPage( rows: StoredKvPair[], request: BackendKvScanRequest, ): { pairs: StoredKvPair[]; resumeAfter: Uint8Array | null } { const matches = rows .filter( (row) => row.namespace === request.namespace && keyMatchesRange(row.key, request.range) && (!request.after || compareBytes(row.key, request.after) > 0), ) .sort((left, right) => compareBytes(left.key, right.key)); const hasMore = matches.length > request.limit; const pairs = matches.slice(0, request.limit); return { pairs, resumeAfter: hasMore ? (pairs.at(-1)?.key ?? null) : null, }; } function keyMatchesRange(key: Uint8Array, range: BackendKvScanRange): boolean { if (range.kind === "prefix") { if (key.length < range.prefix.length) return false; return range.prefix.every((byte, index) => key[index] === byte); } return ( compareBytes(key, range.start) >= 0 && compareBytes(key, range.end) < 0 ); } function compareBytes(left: Uint8Array, right: Uint8Array): number { const length = Math.min(left.length, right.length); for (let index = 0; index < length; index++) { const delta = left[index]! 
- right[index]!; if (delta !== 0) return delta; } return left.length - right.length; } ================================================ FILE: packages/js-sdk/src/open-lix.ts ================================================ import init, { resolveEngineWasmModuleOrPath, Value, type LixError, } from "./engine-wasm/index.js"; import * as wasmModule from "./engine-wasm/index.js"; export type JsonValue = | null | boolean | number | string | JsonValue[] | { [key: string]: JsonValue }; export type LixRuntimeValue = JsonValue | Uint8Array | ArrayBuffer | Value; export type LixNativeValue = JsonValue | Uint8Array; export type ExecuteResult = { columns: string[]; rows: Row[]; rowsAffected: number; notices: LixNotice[]; }; export type LixNotice = { code: string; message: string; hint?: string; }; export class Row { readonly columns: string[]; private readonly valuesByIndex: Value[]; constructor(columns: string[], values: Value[]) { this.columns = columns; this.valuesByIndex = values; } get(columnName: string): LixNativeValue { return valueToNative(this.value(columnName)); } tryGet(columnName: string): LixNativeValue | undefined { const value = this.tryValue(columnName); return value === undefined ? undefined : valueToNative(value); } value(columnName: string): Value { const index = this.columns.indexOf(columnName); if (index === -1) { throw createLixError( "LIX_COLUMN_NOT_FOUND", `Column "${columnName}" does not exist. Available columns: ${this.availableColumns()}`, ); } const value = this.valuesByIndex[index]; if (value === undefined) { throw createLixError( "LIX_COLUMN_NOT_FOUND", `Column "${columnName}" is outside row width ${this.valuesByIndex.length}.`, ); } return value; } tryValue(columnName: string): Value | undefined { const index = this.columns.indexOf(columnName); return index === -1 ? undefined : this.valuesByIndex[index]; } getAt(index: number): LixNativeValue { return valueToNative(this.valueAt(index)); } valueAt(index: number): Value { const value = this.valuesByIndex[index]; if (value === undefined) { throw createLixError( "LIX_COLUMN_NOT_FOUND", `Column index ${index} is outside row width ${this.valuesByIndex.length}.`, ); } return value; } values(): Value[] { return [...this.valuesByIndex]; } toObject(): Record { return Object.fromEntries( this.columns.map((column, index) => [ column, valueToNative(this.valueAt(index)), ]), ); } toValueMap(): Record { return Object.fromEntries( this.columns.map((column, index) => [column, this.valueAt(index)]), ); } private availableColumns(): string { return this.columns.length === 0 ? "" : this.columns.join(", "); } } function valueToNative(value: Value): LixNativeValue { switch (value.kind) { case "null": return null; case "boolean": case "integer": case "real": case "text": case "json": return value.value as JsonValue; case "blob": return value.asBlob() ?? 
new Uint8Array(); } } export type BackendKvScanRange = | { kind: "prefix"; prefix: Uint8Array } | { kind: "range"; start: Uint8Array; end: Uint8Array }; export type BackendKvGetRequest = { groups: BackendKvGetGroup[]; }; export type BackendKvGetGroup = { namespace: string; keys: Uint8Array[]; }; export type BackendKvValueBatch = { groups: BackendKvValueGroup[]; }; export type BackendKvValueGroup = { namespace: string; values: Array; }; export type BackendKvExistsBatch = { groups: BackendKvExistsGroup[]; }; export type BackendKvExistsGroup = { namespace: string; exists: boolean[]; }; export type BackendKvScanRequest = { namespace: string; range: BackendKvScanRange; after?: Uint8Array | null; limit: number; }; export type BackendKvKeyPage = { keys: Uint8Array[]; resumeAfter?: Uint8Array | null; }; export type BackendKvValuePage = { values: Uint8Array[]; resumeAfter?: Uint8Array | null; }; export type BackendKvEntryPage = { keys: Uint8Array[]; values: Uint8Array[]; resumeAfter?: Uint8Array | null; }; export type BackendKvPut = { key: Uint8Array; value: Uint8Array; }; export type BackendKvWriteBatch = { groups: BackendKvWriteGroup[]; }; export type BackendKvWriteGroup = { namespace: string; puts: BackendKvPut[]; deletes: Uint8Array[]; }; export type BackendKvWriteStats = { puts: number; deletes: number; bytesWritten: number; }; export type LixBackendReadTransaction = { getValues(request: BackendKvGetRequest): BackendKvValueBatch; existsMany(request: BackendKvGetRequest): BackendKvExistsBatch; scanKeys(request: BackendKvScanRequest): BackendKvKeyPage; scanValues(request: BackendKvScanRequest): BackendKvValuePage; scanEntries(request: BackendKvScanRequest): BackendKvEntryPage; rollback(): void; }; export type LixBackendWriteTransaction = LixBackendReadTransaction & { writeKvBatch(batch: BackendKvWriteBatch): BackendKvWriteStats; commit(): void; }; export type LixBackend = { beginReadTransaction(): LixBackendReadTransaction; beginWriteTransaction(): LixBackendWriteTransaction; close?(): void; }; export type OpenLixOptions = { backend?: LixBackend; }; export type CreateVersionOptions = { id?: string; name: string; fromCommitId?: string; }; export type CreateVersionResult = { id: string; name: string; hidden: boolean; commitId: string; }; export type SwitchVersionOptions = { versionId: string; }; export type SwitchVersionResult = { versionId: string; }; export type MergeVersionOptions = { sourceVersionId: string; }; export type MergeVersionOutcome = | "alreadyUpToDate" | "fastForward" | "mergeCommitted"; export type MergeVersionResult = { /** * How the merge was applied. `fastForward` advances the target ref without * creating a merge commit, but can still make source changes visible. 
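 *
 * Consumer sketch (illustrative only; `draft` stands in for any source
 * version):
 *
 *   const result = await lix.mergeVersion({ sourceVersionId: draft.id });
 *   if (result.outcome === "fastForward") {
 *     // the target ref advanced without a merge commit, so
 *     // result.createdMergeCommitId stays null
 *   }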
*/ outcome: MergeVersionOutcome; targetVersionId: string; sourceVersionId: string; baseCommitId: string; targetHeadBeforeCommitId: string; sourceHeadBeforeCommitId: string; targetHeadAfterCommitId: string; createdMergeCommitId: string | null; changeStats: MergeChangeStats; }; export type MergeVersionPreviewResult = { outcome: MergeVersionOutcome; targetVersionId: string; sourceVersionId: string; baseCommitId: string; targetHeadCommitId: string; sourceHeadCommitId: string; changeStats: MergeChangeStats; conflicts: MergeConflict[]; }; export type MergeChangeStats = { total: number; added: number; modified: number; removed: number; }; export type MergeConflict = { kind: "sameEntityChanged"; schemaKey: string; entityId: string[]; fileId: string | null; target: MergeConflictSide; source: MergeConflictSide; }; export type MergeConflictSide = { kind: "added" | "modified" | "removed"; beforeChangeId: string | null; afterChangeId: string | null; }; export type Lix = { /** * Executes one DataFusion SQL statement against this Lix session. * * This is not SQLite SQL. Use the DataFusion SQL dialect; positional * placeholders are `$1`, `$2`, and so on. SQLite-specific catalog tables and * transaction statements such as `sqlite_master`, `BEGIN`, and `COMMIT` are * not available. Use `information_schema` for catalog inspection. */ execute( sql: string, params?: ReadonlyArray<LixRuntimeValue>, ): Promise<ExecuteResult>; activeVersionId(): Promise<string>; createVersion(options: CreateVersionOptions): Promise<CreateVersionResult>; switchVersion(options: SwitchVersionOptions): Promise<SwitchVersionResult>; mergeVersionPreview( options: MergeVersionOptions, ): Promise<MergeVersionPreviewResult>; mergeVersion(options: MergeVersionOptions): Promise<MergeVersionResult>; close(): Promise<void>; }; let wasmReady: Promise<void> | null = null; type WasmExecuteResult = { columns: string[]; rows: unknown[][]; rowsAffected: number; notices?: LixNotice[]; }; type WasmLix = { /** * Executes one DataFusion SQL statement. See `Lix.execute` for the public * SQL contract. */ execute(sql: string, params: unknown[]): Promise<WasmExecuteResult>; activeVersionId(): Promise<string>; createVersion(options: CreateVersionOptions): Promise<CreateVersionResult>; switchVersion(options: SwitchVersionOptions): Promise<SwitchVersionResult>; mergeVersionPreview( options: MergeVersionOptions, ): Promise<MergeVersionPreviewResult>; mergeVersion(options: MergeVersionOptions): Promise<MergeVersionResult>; close(): Promise<void>; }; async function ensureWasmReady(): Promise<void> { if (!wasmReady) { wasmReady = resolveEngineWasmModuleOrPath() .then((module_or_path) => init({ module_or_path })) .then(() => undefined); } await wasmReady; } export async function openLix( options: OpenLixOptions = {}, ): Promise<Lix> { await ensureWasmReady(); try { const wasmLix = (await (wasmModule as unknown as { openLix(options: OpenLixOptions): Promise<unknown>; }).openLix(options)) as WasmLix; return createLixHandle(wasmLix); } catch (error) { try { options.backend?.close?.(); } catch { // Preserve the original open failure.
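// Caller-facing sketch of this failure path (illustrative; the
// better-sqlite3 backend and the path are assumptions, not part of this
// module):
//
//   const backend = createBetterSqlite3Backend({ path: "/tmp/example.lix" });
//   try {
//     const lix = await openLix({ backend });
//   } catch (error) {
//     // openLix already attempted backend.close(); `error` is the original
//     // open failure, normalized to a LixError.
//   }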
} throw normalizeThrownError(error); } } function createLixHandle(wasmLix: WasmLix): Lix { let operationQueue: Promise<void> = Promise.resolve(); const acquireOperationSlot = async (): Promise<() => void> => { const previous = operationQueue; let releaseCurrent: (() => void) | undefined; const current = new Promise<void>((resolve) => { releaseCurrent = resolve; }); operationQueue = previous.then(() => current); await previous; return () => releaseCurrent?.(); }; const runQueued = async <T>(operation: () => Promise<T>): Promise<T> => { const release = await acquireOperationSlot(); try { return await operation(); } catch (error) { throw normalizeThrownError(error); } finally { release(); } }; return { async execute( sql: string, params: ReadonlyArray<LixRuntimeValue> = [], ): Promise<ExecuteResult> { validateExecuteArguments(sql, params); const values = params.map((param, index) => valueFromExecuteParam(param, index), ); const result = await runQueued(() => wasmLix.execute(sql, values), ); return normalizeExecuteResult(result); }, async activeVersionId(): Promise<string> { return await runQueued(() => wasmLix.activeVersionId()); }, async createVersion( options: CreateVersionOptions, ): Promise<CreateVersionResult> { return await runQueued(() => wasmLix.createVersion(options)); }, async switchVersion( options: SwitchVersionOptions, ): Promise<SwitchVersionResult> { return await runQueued(() => wasmLix.switchVersion(options)); }, async mergeVersionPreview( options: MergeVersionOptions, ): Promise<MergeVersionPreviewResult> { return await runQueued(() => wasmLix.mergeVersionPreview(options)); }, async mergeVersion(options: MergeVersionOptions): Promise<MergeVersionResult> { return await runQueued(() => wasmLix.mergeVersion(options)); }, async close(): Promise<void> { await runQueued(() => wasmLix.close()); }, }; } function validateExecuteArguments( sql: unknown, params: unknown, ): asserts sql is string { if (typeof sql !== "string") { throw invalidArgumentError("execute", "sql", "string", sql); } if (!Array.isArray(params)) { throw invalidArgumentError("execute", "params", "array", params); } } function invalidArgumentError( operation: string, argument: string, expected: string, actualValue: unknown, ): LixError { return createLixError( "LIX_INVALID_ARGUMENT", `lix.${operation}() expected ${argument} to be ${expectedArticle(expected)} ${expected}`, { details: { operation, argument, expected, actual: runtimeTypeName(actualValue), }, }, ); } function valueFromExecuteParam(param: LixRuntimeValue, index: number): Value { try { return Value.from(param); } catch (error) { throw invalidParamError(index, param, error); } } function invalidParamError( index: number, actualValue: unknown, cause: unknown, ): LixError { const message = cause instanceof Error && cause.message ? cause.message : "parameter is not a valid Lix SQL value"; return createLixError( "LIX_INVALID_PARAM", `lix.execute() invalid parameter $${index + 1}: ${message}`, { details: { operation: "execute", parameter_index: index + 1, argument: `params[${index}]`, actual: runtimeTypeName(actualValue), }, cause, }, ); } function expectedArticle(expected: string): "a" | "an" { return /^[aeiou]/i.test(expected) ?
"an" : "a"; } function runtimeTypeName(value: unknown): string { if (value === null) return "null"; if (Array.isArray(value)) return "array"; if (value instanceof Date) return "Date"; if (value instanceof ArrayBuffer) return "ArrayBuffer"; if (ArrayBuffer.isView(value)) return value.constructor.name; return typeof value; } function normalizeExecuteResult(result: WasmExecuteResult): ExecuteResult { const columns = [...result.columns]; return { columns, rows: result.rows.map( (row) => new Row(columns, row.map((value) => Value.from(value))), ), rowsAffected: result.rowsAffected, notices: result.notices ?? [], }; } function createLixError( code: string, message: string, options: { hint?: string; details?: unknown; cause?: unknown } = {}, ): LixError { const error = new Error(message) as LixError; error.name = "LixError"; error.code = code; if (options.hint !== undefined) { error.hint = options.hint; } if (options.details !== undefined) { error.details = options.details; } if (options.cause !== undefined) { (error as Error & { cause?: unknown }).cause = options.cause; } return error; } function normalizeThrownError(error: unknown): LixError { if (isLixErrorLike(error)) { const hint = typeof error.hint === "string" ? error.hint : extractHintFromMessage(error.message); const details = "details" in error ? error.details : undefined; if (error instanceof Error) { if (hint !== undefined && error.hint === undefined) { error.hint = hint; } if (details !== undefined && error.details === undefined) { error.details = details; } return error; } const message = typeof error.message === "string" ? error.message : error.code; return createLixError(error.code, message, { hint, details }); } if (error instanceof WebAssembly.RuntimeError) { return createLixError("LIX_WASM_RUNTIME_ERROR", error.message, { hint: "The Lix engine encountered a WebAssembly runtime trap. 
Please report this as an engine bug with the SQL statement or API call that triggered it.", cause: error, }); } if (error instanceof Error) { return createLixError("LIX_ERROR_UNKNOWN", error.message, { cause: error }); } return createLixError("LIX_ERROR_UNKNOWN", String(error)); } function extractHintFromMessage(message: unknown): string | undefined { if (typeof message !== "string") return undefined; const match = message.match(/(?:^|\n)hint:\s*(.+)$/s); return match?.[1]?.trim(); } function isLixErrorLike(error: unknown): error is { code: string; message?: string; hint?: string; details?: unknown; } { return ( typeof error === "object" && error !== null && typeof (error as { code?: unknown }).code === "string" && (error as { code: string }).code.startsWith("LIX_") ); } ================================================ FILE: packages/js-sdk/src/sqlite/better-sqlite3.d.ts ================================================ declare module "better-sqlite3" { export type DatabaseOptions = { readonly?: boolean; fileMustExist?: boolean; timeout?: number; verbose?: (message?: unknown, ...additional: unknown[]) => void; }; export type Statement = { get(...params: unknown[]): unknown; all(...params: unknown[]): unknown[]; run(...params: unknown[]): unknown; }; export type Database = { readonly inTransaction: boolean; exec(sql: string): Database; prepare(sql: string): Statement; pragma(source: string, options?: unknown): unknown; close(): void; }; type DatabaseConstructor = { new (filename: string, options?: DatabaseOptions): Database; (filename: string, options?: DatabaseOptions): Database; }; const Database: DatabaseConstructor; export default Database; } ================================================ FILE: packages/js-sdk/src/sqlite/index.test.ts ================================================ import { expect, test } from "vitest"; import { openLix, Value, type ExecuteResult, type Lix } from "../index.js"; const hasBetterSqlite3 = await import("better-sqlite3").then( () => true, () => false, ); test.runIf(hasBetterSqlite3)( "createBetterSqlite3Backend can back a Lix session", async () => { const { createBetterSqlite3Backend } = await import("./index.js"); const backend = createBetterSqlite3Backend({ path: ":memory:" }); const lix = await openLix({ backend }); await registerCrmTaskSchema(lix); await lix.execute( "INSERT INTO crm_task (id, title, done) VALUES ($1, $2, $3)", ["sqlite-task", "Ship better-sqlite3 backend", false], ); expect(await taskTitle(lix, "sqlite-task")).toBe( "Ship better-sqlite3 backend", ); await lix.close(); }, ); test.runIf(hasBetterSqlite3)( "committed writes survive close and reopen", async () => { const { createBetterSqlite3Backend } = await import("./index.js"); const file = tempLixPath(); const first = await openLix({ backend: createBetterSqlite3Backend({ path: file }), }); await registerCrmTaskSchema(first); await first.execute( "INSERT INTO crm_task (id, title, done) VALUES ($1, $2, $3)", ["persistent-task", "Persist before close", false], ); await first.close(); const second = await openLix({ backend: createBetterSqlite3Backend({ path: file }), }); expect(await taskTitle(second, "persistent-task")).toBe( "Persist before close", ); await second.close(); }, ); test.runIf(hasBetterSqlite3)( "createBetterSqlite3Backend rejects a second handle for the same file", async () => { const { createBetterSqlite3Backend } = await import("./index.js"); const file = tempLixPath(); const firstBackend = createBetterSqlite3Backend({ path: file }); const first = await openLix({ 
backend: firstBackend }); expect(() => createBetterSqlite3Backend({ path: file })).toThrow( /already has an open handle/, ); await first.close(); const second = await openLix({ backend: createBetterSqlite3Backend({ path: file }), }); await second.close(); }, ); async function registerCrmTaskSchema(lix: Lix) { const schema = { $schema: "https://json-schema.org/draft/2020-12/schema", "x-lix-key": "crm_task", "x-lix-primary-key": ["/id"], type: "object", required: ["id", "title", "done"], properties: { id: { type: "string" }, title: { type: "string" }, done: { type: "boolean" }, }, additionalProperties: false, } as const; await lix.execute( "INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))", [JSON.stringify(schema)], ); } async function taskTitle(lix: Lix, taskId: string): Promise { const result = await lix.execute( "SELECT title FROM crm_task WHERE id = $1", [taskId], ); const rows = expectRows(result); expect(rows.rows).toHaveLength(1); const title = rows.rows[0]?.get("title"); expect(typeof title).toBe("string"); return title as string; } function tempLixPath(): string { return `/tmp/lix-sqlite-test-${Date.now()}-${Math.random() .toString(16) .slice(2)}.lix`; } function expectRows(result: ExecuteResult) { return result; } ================================================ FILE: packages/js-sdk/src/sqlite/index.ts ================================================ import DatabaseConstructor, { type Database } from "better-sqlite3"; import type { BackendKvEntryPage, BackendKvExistsBatch, BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats, LixBackend, LixBackendReadTransaction, LixBackendWriteTransaction, } from "../open-lix.js"; export type BetterSqlite3BackendOptions = { path: string; databaseOptions?: BetterSqlite3DatabaseOptions; }; export type BetterSqlite3DatabaseOptions = { readonly?: boolean; fileMustExist?: boolean; timeout?: number; verbose?: (message?: unknown, ...additional: unknown[]) => void; }; const openFileHandles = new Set(); export function createBetterSqlite3Backend( options: BetterSqlite3BackendOptions, ): LixBackend { if (!options.path) { throw new Error("createBetterSqlite3Backend() requires a non-empty path"); } const registryKey = registryKeyForPath(options.path); if (registryKey && openFileHandles.has(registryKey)) { throw doubleOpenError(options.path); } let activeRegistryKey: string | null = registryKey; let db: Database | undefined; if (activeRegistryKey) { openFileHandles.add(activeRegistryKey); } try { db = new DatabaseConstructor(options.path, options.databaseOptions); initializeDatabase(db); return new BetterSqlite3Backend(db, activeRegistryKey); } catch (error) { if (activeRegistryKey) { openFileHandles.delete(activeRegistryKey); } if (db) { try { db.close(); } catch { // Ignore close errors while preserving the original open failure. 
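// The registry cleanup above keeps the double-open guard symmetric with
// close(): after a failed open the same path can be opened again. Usage
// sketch (mirrors the double-open test in ./index.test.ts; the path is
// illustrative):
//
//   const backend = createBetterSqlite3Backend({ path: "/tmp/example.lix" });
//   // a second createBetterSqlite3Backend({ path: "/tmp/example.lix" })
//   // throws "already has an open handle" until the first handle is closed.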
} } throw error; } } function initializeDatabase(db: Database): void { db.exec(` CREATE TABLE IF NOT EXISTS lix_kv ( namespace TEXT NOT NULL, key BLOB NOT NULL, value BLOB NOT NULL, PRIMARY KEY (namespace, key) ) WITHOUT ROWID `); } class BetterSqlite3Backend implements LixBackend { readonly #db: Database; readonly #registryKey: string | null; #closed = false; constructor(db: Database, registryKey: string | null) { this.#db = db; this.#registryKey = registryKey; } beginReadTransaction(): LixBackendReadTransaction { this.#ensureOpen(); if (this.#db.inTransaction) { throw new Error("cannot open nested Lix backend transaction"); } this.#db.exec("BEGIN DEFERRED"); return new BetterSqlite3Transaction(this.#db); } beginWriteTransaction(): LixBackendWriteTransaction { this.#ensureOpen(); if (this.#db.inTransaction) { throw new Error("cannot open nested Lix backend transaction"); } this.#db.exec("BEGIN IMMEDIATE"); return new BetterSqlite3Transaction(this.#db); } close(): void { if (this.#closed) return; try { this.#db.close(); } finally { this.#closed = true; if (this.#registryKey) { openFileHandles.delete(this.#registryKey); } } } #ensureOpen(): void { if (this.#closed) { throw new Error("better-sqlite3 Lix backend is closed"); } } } class BetterSqlite3Transaction implements LixBackendWriteTransaction { readonly #db: Database; #closed = false; constructor(db: Database) { this.#db = db; } getValues(request: BackendKvGetRequest): BackendKvValueBatch { this.#ensureOpen(); return getValues(this.#db, request); } existsMany(request: BackendKvGetRequest): BackendKvExistsBatch { this.#ensureOpen(); return existsMany(this.#db, request); } scanKeys(request: BackendKvScanRequest): BackendKvKeyPage { this.#ensureOpen(); const { pairs, resumeAfter } = scanPage(this.#db, request); return { keys: pairs.map(({ key }) => key), resumeAfter, }; } scanValues(request: BackendKvScanRequest): BackendKvValuePage { this.#ensureOpen(); const { pairs, resumeAfter } = scanPage(this.#db, request); return { values: pairs.map(({ value }) => value), resumeAfter, }; } scanEntries(request: BackendKvScanRequest): BackendKvEntryPage { this.#ensureOpen(); const { pairs, resumeAfter } = scanPage(this.#db, request); return { keys: pairs.map(({ key }) => key), values: pairs.map(({ value }) => value), resumeAfter, }; } writeKvBatch(batch: BackendKvWriteBatch): BackendKvWriteStats { this.#ensureOpen(); const stats: BackendKvWriteStats = { puts: 0, deletes: 0, bytesWritten: 0, }; for (const group of batch.groups) { for (const put of group.puts) { stats.puts += 1; stats.bytesWritten += put.key.length + put.value.length; kvPut(this.#db, group.namespace, put.key, put.value); } for (const key of group.deletes) { stats.deletes += 1; stats.bytesWritten += key.length; kvDelete(this.#db, group.namespace, key); } } return stats; } commit(): void { this.#ensureOpen(); this.#db.exec("COMMIT"); this.#closed = true; } rollback(): void { this.#ensureOpen(); this.#db.exec("ROLLBACK"); this.#closed = true; } #ensureOpen(): void { if (this.#closed) { throw new Error("Lix backend transaction is closed"); } } } type KvPair = { key: Uint8Array; value: Uint8Array; }; function getValues( db: Database, request: BackendKvGetRequest, ): BackendKvValueBatch { return { groups: request.groups.map((group) => ({ namespace: group.namespace, values: group.keys.map((key) => kvGet(db, group.namespace, key)), })), }; } function existsMany( db: Database, request: BackendKvGetRequest, ): BackendKvExistsBatch { return { groups: request.groups.map((group) => ({ namespace: 
group.namespace, exists: group.keys.map( (key) => kvGet(db, group.namespace, key) !== null, ), })), }; } function scanPage( db: Database, request: BackendKvScanRequest, ): { pairs: KvPair[]; resumeAfter: Uint8Array | null } { const scanLimit = request.limit + 1 + (request.after ? 1 : 0); const pairs = kvScan( db, request.namespace, request.range, scanLimit, ).filter( (pair) => !request.after || compareBytes(pair.key, request.after) > 0, ); const hasMore = pairs.length > request.limit; const pagePairs = pairs.slice(0, request.limit); return { pairs: pagePairs, resumeAfter: hasMore ? (pagePairs.at(-1)?.key ?? null) : null, }; } function kvGet( db: Database, namespace: string, key: Uint8Array, ): Uint8Array | null { const row = db .prepare("SELECT value FROM lix_kv WHERE namespace = ? AND key = ?") .get(namespace, sqliteBytes(key)); if (!isObject(row) || !("value" in row)) { return null; } return bytesFromUnknown(row.value, "lix_kv.value"); } function kvPut( db: Database, namespace: string, key: Uint8Array, value: Uint8Array, ): void { db.prepare( `INSERT INTO lix_kv (namespace, key, value) VALUES (?, ?, ?) ON CONFLICT(namespace, key) DO UPDATE SET value = excluded.value`, ).run(namespace, sqliteBytes(key), sqliteBytes(value)); } function kvDelete(db: Database, namespace: string, key: Uint8Array): void { db.prepare("DELETE FROM lix_kv WHERE namespace = ? AND key = ?").run( namespace, sqliteBytes(key), ); } function kvScan( db: Database, namespace: string, range: BackendKvScanRange, limit?: number | null, ): KvPair[] { const { sql, params } = scanQuery(namespace, range, limit); return db.prepare(sql).all(...params).map((row) => { if (!isObject(row) || !("key" in row) || !("value" in row)) { throw new Error("invalid lix_kv scan row"); } return { key: bytesFromUnknown(row.key, "lix_kv.key"), value: bytesFromUnknown(row.value, "lix_kv.value"), }; }); } function scanQuery( namespace: string, range: BackendKvScanRange, limit?: number | null, ): { sql: string; params: unknown[] } { const params: unknown[] = [namespace]; const clauses = ["namespace = ?"]; if (range.kind === "prefix") { clauses.push("key >= ?"); params.push(sqliteBytes(range.prefix)); const end = prefixUpperBound(range.prefix); if (end) { clauses.push("key < ?"); params.push(sqliteBytes(end)); } } else { clauses.push("key >= ?", "key < ?"); params.push(sqliteBytes(range.start), sqliteBytes(range.end)); } let sql = `SELECT key, value FROM lix_kv WHERE ${clauses.join( " AND ", )} ORDER BY key`; if (limit != null) { sql += " LIMIT ?"; params.push(limit); } return { sql, params }; } function compareBytes(left: Uint8Array, right: Uint8Array): number { const length = Math.min(left.length, right.length); for (let index = 0; index < length; index++) { const delta = left[index]! - right[index]!; if (delta !== 0) return delta; } return left.length - right.length; } function prefixUpperBound(prefix: Uint8Array): Uint8Array | null { const end = new Uint8Array(prefix); for (let index = end.length - 1; index >= 0; index--) { if (end[index] !== 0xff) { end[index]! += 1; return end.slice(0, index + 1); } } return null; } function bytesFromUnknown(value: unknown, context: string): Uint8Array { if (value instanceof Uint8Array) { return new Uint8Array(value); } throw new Error(`${context} must be bytes`); } function sqliteBytes(bytes: Uint8Array): Uint8Array { const buffer = ( globalThis as typeof globalThis & { Buffer?: { from(bytes: Uint8Array): Uint8Array }; } ).Buffer; return buffer ? 
buffer.from(bytes) : bytes; } function registryKeyForPath(filename: string): string | null { if (filename === ":memory:") { return null; } if (filename.startsWith("/")) { return normalizeAbsolutePath(filename); } const cwd = ( globalThis as typeof globalThis & { process?: { cwd?: () => string }; } ).process?.cwd?.() ?? "/"; return normalizeAbsolutePath(`${cwd}/${filename}`); } function normalizeAbsolutePath(filename: string): string { const segments: string[] = []; for (const segment of filename.split("/")) { if (!segment || segment === ".") { continue; } if (segment === "..") { segments.pop(); continue; } segments.push(segment); } return `/${segments.join("/")}`; } function doubleOpenError(filename: string): Error { return new Error( `createBetterSqlite3Backend() already has an open handle for ${filename}; close the existing Lix handle before opening this file again`, ); } function isObject(value: unknown): value is Record { return typeof value === "object" && value !== null; } ================================================ FILE: packages/js-sdk/src/types.ts ================================================ export type { JsonValue, LixRuntimeValue } from "./open-lix.js"; ================================================ FILE: packages/js-sdk/tsconfig.json ================================================ { "compilerOptions": { "target": "ES2022", "module": "NodeNext", "moduleResolution": "NodeNext", "allowJs": true, "strict": true, "declaration": true, "outDir": "dist", "skipLibCheck": true }, "include": ["src"], "exclude": [ "src/**/*.test.ts", "src/engine-wasm/wasm/lix_engine_wasm_bindgen_bg.wasm.d.ts" ] } ================================================ FILE: packages/js-sdk/vitest.config.ts ================================================ import { defineConfig } from "vitest/config"; export default defineConfig({ test: { environment: "node", include: ["src/**/*.test.ts"], exclude: ["dist/**"], }, }); ================================================ FILE: packages/js-sdk/wasm-bindgen.rs ================================================ #[cfg(target_arch = "wasm32")] mod wasm { use async_trait::async_trait; use js_sys::{Array, Object, Reflect}; use lix_rs_sdk::{ open_lix as open_lix_rs, Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction, BytePageBuilder, CreateVersionOptions, ExecuteResult, Lix as RsLix, LixError, MergeVersionOptions, MergeVersionPreviewOptions, OpenLixOptions, SwitchVersionOptions, Value, }; use serde::Serialize; use serde_json::json; use wasm_bindgen::prelude::*; use wasm_bindgen::JsCast; #[wasm_bindgen(typescript_custom_section)] const LIX_TYPES: &str = r#" export type JsonValue = | null | boolean | number | string | JsonValue[] | { [key: string]: JsonValue }; export type LixValue = | { kind: "null"; value: null } | { kind: "boolean"; value: boolean } | { kind: "integer"; value: number } | { kind: "real"; value: number } | { kind: "text"; value: string } | { kind: "json"; value: JsonValue } | { kind: "blob"; base64: string }; export type ExecuteResult = { columns: string[]; rows: LixValue[][]; rowsAffected: number; notices: LixNotice[]; }; export type LixNotice = { code: string; message: string; hint?: string; }; export type BackendKvScanRange = | { kind: "prefix"; prefix: Uint8Array } | { kind: "range"; start: 
Uint8Array; end: Uint8Array }; export type BackendKvGetRequest = { groups: BackendKvGetGroup[]; }; export type BackendKvGetGroup = { namespace: string; keys: Uint8Array[]; }; export type BackendKvValueBatch = { groups: BackendKvValueGroup[]; }; export type BackendKvValueGroup = { namespace: string; values: Array; }; export type BackendKvExistsBatch = { groups: BackendKvExistsGroup[]; }; export type BackendKvExistsGroup = { namespace: string; exists: boolean[]; }; export type BackendKvScanRequest = { namespace: string; range: BackendKvScanRange; after?: Uint8Array | null; limit: number; }; export type BackendKvKeyPage = { keys: Uint8Array[]; resumeAfter?: Uint8Array | null; }; export type BackendKvValuePage = { values: Uint8Array[]; resumeAfter?: Uint8Array | null; }; export type BackendKvEntryPage = { keys: Uint8Array[]; values: Uint8Array[]; resumeAfter?: Uint8Array | null; }; export type BackendKvPut = { key: Uint8Array; value: Uint8Array; }; export type BackendKvWriteBatch = { groups: BackendKvWriteGroup[]; }; export type BackendKvWriteGroup = { namespace: string; puts: BackendKvPut[]; deletes: Uint8Array[]; }; export type BackendKvWriteStats = { puts: number; deletes: number; bytesWritten: number; }; export type BackendReadTransaction = { getValues(request: BackendKvGetRequest): BackendKvValueBatch; existsMany(request: BackendKvGetRequest): BackendKvExistsBatch; scanKeys(request: BackendKvScanRequest): BackendKvKeyPage; scanValues(request: BackendKvScanRequest): BackendKvValuePage; scanEntries(request: BackendKvScanRequest): BackendKvEntryPage; rollback(): void; }; export type BackendWriteTransaction = BackendReadTransaction & { writeKvBatch(batch: BackendKvWriteBatch): BackendKvWriteStats; commit(): void; }; export type Backend = { beginReadTransaction(): BackendReadTransaction; beginWriteTransaction(): BackendWriteTransaction; close?(): void; }; export type OpenLixOptions = { backend?: Backend; }; export type CreateVersionOptions = { id?: string; name: string; fromCommitId?: string; }; export type CreateVersionResult = { id: string; name: string; hidden: boolean; commitId: string; }; export type SwitchVersionOptions = { versionId: string; }; export type SwitchVersionResult = { versionId: string; }; export type MergeVersionOptions = { sourceVersionId: string; }; export type MergeVersionOutcome = | "alreadyUpToDate" | "fastForward" | "mergeCommitted"; export type MergeVersionResult = { outcome: MergeVersionOutcome; targetVersionId: string; sourceVersionId: string; baseCommitId: string; targetHeadBeforeCommitId: string; sourceHeadBeforeCommitId: string; targetHeadAfterCommitId: string; createdMergeCommitId: string | null; changeStats: MergeChangeStats; }; export type MergeVersionPreviewResult = { outcome: MergeVersionOutcome; targetVersionId: string; sourceVersionId: string; baseCommitId: string; targetHeadCommitId: string; sourceHeadCommitId: string; changeStats: MergeChangeStats; conflicts: MergeConflict[]; }; export type MergeChangeStats = { total: number; added: number; modified: number; removed: number; }; export type MergeConflict = { kind: "sameEntityChanged"; schemaKey: string; entityId: string[]; fileId: string | null; target: MergeConflictSide; source: MergeConflictSide; }; export type MergeConflictSide = { kind: "added" | "modified" | "removed"; beforeChangeId: string | null; afterChangeId: string | null; }; "#; #[wasm_bindgen] pub struct Lix { inner: RsLix, } #[wasm_bindgen] impl Lix { /// Executes one DataFusion SQL statement against this Lix session. 
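///
/// JavaScript-side sketch (illustrative; the query and parameter are made
/// up for this example):
///
/// ```js
/// const result = await lix.execute("SELECT 1 + $1 AS total", [41]);
/// console.log(result.columns, result.rows, result.rowsAffected);
/// ```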
/// /// The SQL dialect is DataFusion SQL, not SQLite SQL. Positional /// placeholders use `$1`, `$2`, and so on. SQLite-specific catalog /// tables and transaction statements such as `sqlite_master`, `BEGIN`, /// and `COMMIT` are not part of this contract; use /// `information_schema` for catalog inspection. #[wasm_bindgen(js_name = execute)] pub async fn execute(&self, sql: JsValue, params: JsValue) -> Result { let sql = sql .as_string() .ok_or_else(|| invalid_argument_error("execute", "sql", "string", &sql)) .map_err(js_error)?; if !Array::is_array(¶ms) { return Err(js_error(invalid_argument_error( "execute", "params", "array", ¶ms, ))); } let params = Array::from(¶ms); let values = params .iter() .map(value_from_js) .collect::, _>>() .map_err(js_error)?; let result = self.inner.execute(&sql, &values).await.map_err(js_error)?; execute_result_to_js(result).map_err(js_error) } #[wasm_bindgen(js_name = activeVersionId)] pub async fn active_version_id(&self) -> Result { self.inner.active_version_id().await.map_err(js_error) } #[wasm_bindgen(js_name = createVersion)] pub async fn create_version(&self, args: JsValue) -> Result { let options = parse_create_version_options(args).map_err(js_error)?; let result = self.inner.create_version(options).await.map_err(js_error)?; let object = Object::new(); set_string(&object, "id", &result.id).map_err(js_error)?; set_string(&object, "name", &result.name).map_err(js_error)?; Reflect::set( &object, &JsValue::from_str("hidden"), &JsValue::from_bool(result.hidden), ) .map_err(|_| js_error(js_sdk_error("could not set hidden")))?; set_string(&object, "commitId", &result.commit_id).map_err(js_error)?; Ok(object.into()) } #[wasm_bindgen(js_name = switchVersion)] pub async fn switch_version(&self, args: JsValue) -> Result { let options = parse_switch_version_options(args).map_err(js_error)?; let result = self.inner.switch_version(options).await.map_err(js_error)?; let object = Object::new(); set_string(&object, "versionId", &result.version_id).map_err(js_error)?; Ok(object.into()) } #[wasm_bindgen(js_name = mergeVersionPreview)] pub async fn merge_version_preview(&self, args: JsValue) -> Result { let options = parse_merge_version_preview_options(args).map_err(js_error)?; let result = self .inner .merge_version_preview(options) .await .map_err(js_error)?; merge_version_preview_to_js(result).map_err(js_error) } #[wasm_bindgen(js_name = mergeVersion)] pub async fn merge_version(&self, args: JsValue) -> Result { let options = parse_merge_version_options(args).map_err(js_error)?; let result = self.inner.merge_version(options).await.map_err(js_error)?; let object = Object::new(); let outcome = match result.outcome { lix_rs_sdk::MergeVersionOutcome::AlreadyUpToDate => "alreadyUpToDate", lix_rs_sdk::MergeVersionOutcome::FastForward => "fastForward", lix_rs_sdk::MergeVersionOutcome::MergeCommitted => "mergeCommitted", }; set_string(&object, "outcome", outcome).map_err(js_error)?; set_string(&object, "targetVersionId", &result.target_version_id).map_err(js_error)?; set_string(&object, "sourceVersionId", &result.source_version_id).map_err(js_error)?; set_string(&object, "baseCommitId", &result.base_commit_id).map_err(js_error)?; set_string( &object, "targetHeadBeforeCommitId", &result.target_head_before_commit_id, ) .map_err(js_error)?; set_string( &object, "sourceHeadBeforeCommitId", &result.source_head_before_commit_id, ) .map_err(js_error)?; set_string( &object, "targetHeadAfterCommitId", &result.target_head_after_commit_id, ) .map_err(js_error)?; set_optional_string( 
&object, "createdMergeCommitId", result.created_merge_commit_id.as_deref(), ) .map_err(js_error)?; Reflect::set( &object, &JsValue::from_str("changeStats"), &merge_change_stats_to_js(&result.change_stats).map_err(js_error)?, ) .map_err(|_| js_error(js_sdk_error("could not set changeStats")))?; Ok(object.into()) } #[wasm_bindgen(js_name = close)] pub async fn close(&self) -> Result<(), JsValue> { self.inner.close().await.map_err(js_error) } } #[wasm_bindgen(js_name = openLix)] pub async fn open_lix(args: Option) -> Result { let options = parse_open_lix_options(args).map_err(js_error)?; let inner = open_lix_rs(options).await.map_err(js_error)?; Ok(Lix { inner }) } fn parse_open_lix_options(args: Option) -> Result { let Some(value) = args else { return Ok(OpenLixOptions::default()); }; if value.is_undefined() || value.is_null() { return Ok(OpenLixOptions::default()); } if !value.is_object() { return Err(LixError::new( "LIX_ERROR_JS_SDK", "openLix() options must be an object", )); } let backend = Reflect::get(&value, &JsValue::from_str("backend")) .map_err(|_| js_sdk_error("openLix() could not read backend"))?; if backend.is_undefined() || backend.is_null() { return Ok(OpenLixOptions::default()); } if !backend.is_object() { return Err(LixError::new( "LIX_ERROR_JS_SDK", "openLix() backend must be an object", )); } Ok(OpenLixOptions { backend: Some(Box::new(JsBackend::new(backend))), }) } struct JsBackend { inner: JsValue, } impl JsBackend { fn new(inner: JsValue) -> Self { Self { inner } } } unsafe impl Send for JsBackend {} unsafe impl Sync for JsBackend {} #[async_trait] impl Backend for JsBackend { async fn begin_read_transaction( &self, ) -> Result, LixError> { let transaction = call_method0(&self.inner, "beginReadTransaction")?; if transaction.is_null() || transaction.is_undefined() || !transaction.is_object() { return Err(js_sdk_error( "backend.beginReadTransaction() must return a transaction object", )); } Ok(Box::new(JsBackendTransaction { inner: transaction })) } async fn begin_write_transaction( &self, ) -> Result, LixError> { let transaction = call_method0(&self.inner, "beginWriteTransaction")?; if transaction.is_null() || transaction.is_undefined() || !transaction.is_object() { return Err(js_sdk_error( "backend.beginWriteTransaction() must return a transaction object", )); } Ok(Box::new(JsBackendTransaction { inner: transaction })) } async fn close(&self) -> Result<(), LixError> { let method = Reflect::get(&self.inner, &JsValue::from_str("close")) .map_err(|_| js_sdk_error("backend.close could not be read"))?; if method.is_undefined() || method.is_null() { return Ok(()); } call_function0(&method, &self.inner)?; Ok(()) } } struct JsBackendTransaction { inner: JsValue, } unsafe impl Send for JsBackendTransaction {} unsafe impl Sync for JsBackendTransaction {} #[async_trait] impl BackendReadTransaction for JsBackendTransaction { async fn get_values( &mut self, request: BackendKvGetRequest, ) -> Result { js_value_to_value_batch( call_method1(&self.inner, "getValues", &kv_get_request_to_js(&request)?)?, "transaction.getValues", ) } async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result { js_value_to_exists_batch( call_method1(&self.inner, "existsMany", &kv_get_request_to_js(&request)?)?, "transaction.existsMany", ) } async fn scan_keys( &mut self, request: BackendKvScanRequest, ) -> Result { js_value_to_key_page( call_method1(&self.inner, "scanKeys", &kv_scan_request_to_js(&request)?)?, "transaction.scanKeys", ) } async fn scan_values( &mut self, request: 
BackendKvScanRequest, ) -> Result { js_value_to_value_page( call_method1(&self.inner, "scanValues", &kv_scan_request_to_js(&request)?)?, "transaction.scanValues", ) } async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> Result { js_value_to_entry_page( call_method1( &self.inner, "scanEntries", &kv_scan_request_to_js(&request)?, )?, "transaction.scanEntries", ) } async fn rollback(self: Box) -> Result<(), LixError> { call_method0(&self.inner, "rollback")?; Ok(()) } } #[async_trait] impl BackendWriteTransaction for JsBackendTransaction { async fn write_kv_batch( &mut self, batch: BackendKvWriteBatch, ) -> Result { js_value_to_write_stats( call_method1(&self.inner, "writeKvBatch", &kv_write_batch_to_js(&batch)?)?, "transaction.writeKvBatch", ) } async fn commit(self: Box) -> Result<(), LixError> { call_method0(&self.inner, "commit")?; Ok(()) } } fn call_method0(receiver: &JsValue, method_name: &str) -> Result { let method = Reflect::get(receiver, &JsValue::from_str(method_name)) .map_err(|_| js_sdk_error(format!("{method_name} could not be read")))?; call_function0(&method, receiver) } fn call_method1( receiver: &JsValue, method_name: &str, arg1: &JsValue, ) -> Result { let method = Reflect::get(receiver, &JsValue::from_str(method_name)) .map_err(|_| js_sdk_error(format!("{method_name} could not be read")))?; call_function1(&method, receiver, arg1) } fn call_function0(function: &JsValue, receiver: &JsValue) -> Result { let function = function .dyn_ref::() .ok_or_else(|| js_sdk_error("backend method must be a function"))?; reject_promise(function.call0(receiver).map_err(js_to_lix_error)?) } fn call_function1( function: &JsValue, receiver: &JsValue, arg1: &JsValue, ) -> Result { let function = function .dyn_ref::() .ok_or_else(|| js_sdk_error("backend method must be a function"))?; reject_promise(function.call1(receiver, arg1).map_err(js_to_lix_error)?) 
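// reject_promise (below) turns any Promise returned by a JS backend method
// into an error: the KV backend contract is synchronous. A backend whose
// transaction looks like this (JS sketch, illustrative) is therefore
// rejected when the engine calls getValues:
//
//   beginReadTransaction() {
//     return { async getValues(request) { /* ... */ } };
//   }
//
// and surfaces as "JavaScript Backend methods must return synchronously".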
} fn reject_promise(value: JsValue) -> Result { if value.is_instance_of::() { return Err(js_sdk_error( "JavaScript Backend methods must return synchronously", )); } Ok(value) } fn bytes_to_js(bytes: &[u8]) -> JsValue { js_sys::Uint8Array::from(bytes).into() } fn js_value_to_bytes(value: JsValue, context: &str) -> Result, LixError> { if !value.is_instance_of::() { return Err(js_sdk_error(format!("{context} must return Uint8Array"))); } Ok(js_sys::Uint8Array::from(value).to_vec()) } fn usize_to_js(value: usize) -> JsValue { JsValue::from_f64(value as f64) } fn kv_get_request_to_js(request: &BackendKvGetRequest) -> Result { let object = Object::new(); let groups = Array::new(); for group in &request.groups { let group_object = Object::new(); set_string(&group_object, "namespace", &group.namespace)?; let keys = Array::new(); for key in &group.keys { keys.push(&bytes_to_js(key)); } Reflect::set(&group_object, &JsValue::from_str("keys"), &keys) .map_err(|_| js_sdk_error("could not set get request keys"))?; groups.push(&group_object); } Reflect::set(&object, &JsValue::from_str("groups"), &groups) .map_err(|_| js_sdk_error("could not set get request groups"))?; Ok(object.into()) } fn kv_scan_range_to_js(range: &BackendKvScanRange) -> Result { let object = Object::new(); match range { BackendKvScanRange::Prefix(prefix) => { set_string(&object, "kind", "prefix")?; Reflect::set(&object, &JsValue::from_str("prefix"), &bytes_to_js(prefix)) .map_err(|_| js_sdk_error("could not set range.prefix"))?; } BackendKvScanRange::Range { start, end } => { set_string(&object, "kind", "range")?; Reflect::set(&object, &JsValue::from_str("start"), &bytes_to_js(start)) .map_err(|_| js_sdk_error("could not set range.start"))?; Reflect::set(&object, &JsValue::from_str("end"), &bytes_to_js(end)) .map_err(|_| js_sdk_error("could not set range.end"))?; } } Ok(object.into()) } fn kv_scan_request_to_js(request: &BackendKvScanRequest) -> Result { let object = Object::new(); set_string(&object, "namespace", &request.namespace)?; Reflect::set( &object, &JsValue::from_str("range"), &kv_scan_range_to_js(&request.range)?, ) .map_err(|_| js_sdk_error("could not set scan request range"))?; let after = request .after .as_deref() .map(bytes_to_js) .unwrap_or(JsValue::NULL); Reflect::set(&object, &JsValue::from_str("after"), &after) .map_err(|_| js_sdk_error("could not set scan request after"))?; Reflect::set( &object, &JsValue::from_str("limit"), &usize_to_js(request.limit), ) .map_err(|_| js_sdk_error("could not set scan request limit"))?; Ok(object.into()) } fn kv_write_batch_to_js(batch: &BackendKvWriteBatch) -> Result { let object = Object::new(); let groups = Array::new(); for group in &batch.groups { let group_object = Object::new(); set_string(&group_object, "namespace", group.namespace())?; let puts = Array::new(); for index in 0..group.put_count() { let key = group.put_key(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put key") })?; let value = group.put_value(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put value") })?; let put = Object::new(); Reflect::set(&put, &JsValue::from_str("key"), &bytes_to_js(key)) .map_err(|_| js_sdk_error("could not set write put key"))?; Reflect::set(&put, &JsValue::from_str("value"), &bytes_to_js(value)) .map_err(|_| js_sdk_error("could not set write put value"))?; puts.push(&put); } Reflect::set(&group_object, &JsValue::from_str("puts"), &puts) .map_err(|_| js_sdk_error("could not set write puts"))?; let deletes 
= Array::new(); for index in 0..group.delete_count() { let key = group.delete_key(index).ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "backend write batch missing delete key", ) })?; deletes.push(&bytes_to_js(key)); } Reflect::set(&group_object, &JsValue::from_str("deletes"), &deletes) .map_err(|_| js_sdk_error("could not set write deletes"))?; groups.push(&group_object); } Reflect::set(&object, &JsValue::from_str("groups"), &groups) .map_err(|_| js_sdk_error("could not set write groups"))?; Ok(object.into()) } fn js_value_to_value_batch( value: JsValue, context: &str, ) -> Result { let object = expect_backend_object(value, context)?; let groups = required_array(&object, "groups", context)?; let groups = groups .iter() .enumerate() .map(|(index, group)| { let group_context = format!("{context}.groups[{index}]"); let group = expect_backend_object(group, &group_context)?; let namespace = required_string(&group, "namespace", &group_context)?; let values = required_array(&group, "values", &group_context)?; let mut bytes = BytePageBuilder::with_capacity(values.length() as usize, 0); let mut present = Vec::with_capacity(values.length() as usize); for value in values.iter() { if value.is_null() || value.is_undefined() { bytes.push([]); present.push(false); } else { bytes.push(js_value_to_bytes( value, &format!("{group_context}.values"), )?); present.push(true); } } Ok(BackendKvValueGroup::new(namespace, bytes.finish(), present)) }) .collect::, LixError>>()?; Ok(BackendKvValueBatch { groups }) } fn js_value_to_exists_batch( value: JsValue, context: &str, ) -> Result { let object = expect_backend_object(value, context)?; let groups = required_array(&object, "groups", context)?; let groups = groups .iter() .enumerate() .map(|(index, group)| { let group_context = format!("{context}.groups[{index}]"); let group = expect_backend_object(group, &group_context)?; let namespace = required_string(&group, "namespace", &group_context)?; let exists = required_array(&group, "exists", &group_context)? 
.iter() .map(|value| { value.as_bool().ok_or_else(|| { js_sdk_error(format!("{group_context}.exists must contain booleans")) }) }) .collect::, LixError>>()?; Ok(BackendKvExistsGroup { namespace, exists }) }) .collect::, LixError>>()?; Ok(BackendKvExistsBatch { groups }) } fn js_value_to_key_page(value: JsValue, context: &str) -> Result { let object = expect_backend_object(value, context)?; Ok(BackendKvKeyPage { keys: byte_array_property(&object, "keys", context)?.finish(), resume_after: optional_bytes_property(&object, "resumeAfter", context)?, }) } fn js_value_to_value_page( value: JsValue, context: &str, ) -> Result { let object = expect_backend_object(value, context)?; Ok(BackendKvValuePage { values: byte_array_property(&object, "values", context)?.finish(), resume_after: optional_bytes_property(&object, "resumeAfter", context)?, }) } fn js_value_to_entry_page( value: JsValue, context: &str, ) -> Result { let object = expect_backend_object(value, context)?; Ok(BackendKvEntryPage { keys: byte_array_property(&object, "keys", context)?.finish(), values: byte_array_property(&object, "values", context)?.finish(), resume_after: optional_bytes_property(&object, "resumeAfter", context)?, }) } fn js_value_to_write_stats( value: JsValue, context: &str, ) -> Result { let object = expect_backend_object(value, context)?; Ok(BackendKvWriteStats { puts: required_usize(&object, "puts", context)?, deletes: required_usize(&object, "deletes", context)?, bytes_written: required_usize(&object, "bytesWritten", context)?, }) } fn expect_backend_object(value: JsValue, context: &str) -> Result { if value.is_null() || value.is_undefined() || !value.is_object() { return Err(js_sdk_error(format!("{context} must return an object"))); } Ok(Object::from(value)) } fn required_array(object: &Object, key: &str, context: &str) -> Result { let value = Reflect::get(object, &JsValue::from_str(key)) .map_err(|_| js_sdk_error(format!("{context}.{key} could not be read")))?; if !Array::is_array(&value) { return Err(js_sdk_error(format!("{context}.{key} must be an array"))); } Ok(Array::from(&value)) } fn byte_array_property( object: &Object, key: &str, context: &str, ) -> Result { let array = required_array(object, key, context)?; let mut page = BytePageBuilder::with_capacity(array.length() as usize, 0); for value in array.iter() { page.push(js_value_to_bytes(value, &format!("{context}.{key}"))?); } Ok(page) } fn optional_bytes_property( object: &Object, key: &str, context: &str, ) -> Result>, LixError> { let value = Reflect::get(object, &JsValue::from_str(key)) .map_err(|_| js_sdk_error(format!("{context}.{key} could not be read")))?; if value.is_undefined() || value.is_null() { return Ok(None); } Ok(Some(js_value_to_bytes(value, &format!("{context}.{key}"))?)) } fn required_usize(object: &Object, key: &str, context: &str) -> Result { let value = Reflect::get(object, &JsValue::from_str(key)) .map_err(|_| js_sdk_error(format!("{context}.{key} could not be read")))?; let number = value .as_f64() .ok_or_else(|| js_sdk_error(format!("{context}.{key} must be a number")))?; if !number.is_finite() || number < 0.0 || number.fract() != 0.0 { return Err(js_sdk_error(format!( "{context}.{key} must be a non-negative integer" ))); } Ok(number as usize) } fn js_to_lix_error(value: JsValue) -> LixError { if let Some(message) = value.as_string() { return js_sdk_error(message); } let code = Reflect::get(&value, &JsValue::from_str("code")) .ok() .and_then(|code| code.as_string()); let message = Reflect::get(&value, 
&JsValue::from_str("message")) .ok() .and_then(|message| message.as_string()) .unwrap_or_else(|| "JavaScript backend error".to_string()); let hint = Reflect::get(&value, &JsValue::from_str("hint")) .ok() .and_then(|hint| hint.as_string()); let details = Reflect::get(&value, &JsValue::from_str("details")) .ok() .and_then(|details| { if details.is_undefined() || details.is_null() { None } else { serde_wasm_bindgen::from_value(details).ok() } }); let mut error = LixError::new( code.unwrap_or_else(|| "LIX_ERROR_JS_SDK".to_string()), message, ); if let Some(hint) = hint { error = error.with_hint(hint); } if let Some(details) = details { error = error.with_details(details); } error } fn parse_create_version_options(value: JsValue) -> Result { let object = expect_object(value, "createVersion")?; let id = optional_string(&object, "id", "createVersion")?; let name = required_string(&object, "name", "createVersion")?; let from_commit_id = optional_string(&object, "fromCommitId", "createVersion")?; Ok(CreateVersionOptions { id, name, from_commit_id, }) } fn parse_switch_version_options(value: JsValue) -> Result { let object = expect_object(value, "switchVersion")?; let version_id = required_string(&object, "versionId", "switchVersion")?; Ok(SwitchVersionOptions { version_id }) } fn parse_merge_version_options(value: JsValue) -> Result { let object = expect_object(value, "mergeVersion")?; let source_version_id = required_string(&object, "sourceVersionId", "mergeVersion")?; Ok(MergeVersionOptions { source_version_id }) } fn parse_merge_version_preview_options( value: JsValue, ) -> Result { let object = expect_object(value, "mergeVersionPreview")?; let source_version_id = required_string(&object, "sourceVersionId", "mergeVersionPreview")?; Ok(MergeVersionPreviewOptions { source_version_id }) } fn expect_object(value: JsValue, method: &str) -> Result { if value.is_null() || value.is_undefined() || !value.is_object() { return Err(LixError::new( "LIX_ERROR_JS_SDK", format!("{method}() options must be an object"), )); } Ok(Object::from(value)) } fn invalid_argument_error( operation: &str, argument: &str, expected: &str, actual_value: &JsValue, ) -> LixError { LixError::new( "LIX_INVALID_ARGUMENT", format!( "lix.{operation}() expected {argument} to be {} {expected}", expected_article(expected) ), ) .with_details(json!({ "operation": operation, "argument": argument, "expected": expected, "actual": js_type_name(actual_value), })) } fn expected_article(expected: &str) -> &'static str { match expected.chars().next().map(|c| c.to_ascii_lowercase()) { Some('a' | 'e' | 'i' | 'o' | 'u') => "an", _ => "a", } } fn js_type_name(value: &JsValue) -> &'static str { if value.is_null() { "null" } else if Array::is_array(value) { "array" } else if value.is_undefined() { "undefined" } else if value.is_string() { "string" } else if value.as_bool().is_some() { "boolean" } else if value.as_f64().is_some() { "number" } else if value.is_function() { "function" } else if value.is_object() { "object" } else { "unknown" } } fn required_string(object: &Object, key: &str, method: &str) -> Result { let value = Reflect::get(object, &JsValue::from_str(key)).map_err(|_| { LixError::new( "LIX_ERROR_JS_SDK", format!("{method}() could not read {key}"), ) })?; if let Some(value) = value.as_string() { if !value.is_empty() { return Ok(value); } } Err(LixError::new( "LIX_ERROR_JS_SDK", format!("{method}() requires non-empty string {key}"), )) } fn optional_string( object: &Object, key: &str, method: &str, ) -> Result, LixError> { let value = 
Reflect::get(object, &JsValue::from_str(key)).map_err(|_| { LixError::new( "LIX_ERROR_JS_SDK", format!("{method}() could not read {key}"), ) })?; if value.is_undefined() || value.is_null() { return Ok(None); } if let Some(value) = value.as_string() { if !value.is_empty() { return Ok(Some(value)); } } Err(LixError::new( "LIX_ERROR_JS_SDK", format!("{method}() requires {key} to be a non-empty string when provided"), )) } fn value_from_js(value: JsValue) -> Result { if value.is_null() || value.is_undefined() || !value.is_object() { return Err(invalid_param( "parameter must be an explicit Lix value object", &value, )); } let object = Object::from(value.clone()); let kind = Reflect::get(&object, &JsValue::from_str("kind")) .ok() .and_then(|value| value.as_string()); match kind.as_deref() { Some("null") => Ok(Value::Null), Some("boolean") => Ok(Value::Boolean( Reflect::get(&object, &JsValue::from_str("value")) .ok() .and_then(|value| value.as_bool()) .ok_or_else(|| invalid_param("boolean value must be boolean", &value))?, )), Some("integer") => { let value = Reflect::get(&object, &JsValue::from_str("value")) .ok() .and_then(|value| value.as_f64()) .ok_or_else(|| invalid_param("integer value must be number", &value))?; if !value.is_finite() || value.fract() != 0.0 { return Err(invalid_param_message( "integer value must be a finite integer", )); } Ok(Value::Integer(value as i64)) } Some("real") => { let value = Reflect::get(&object, &JsValue::from_str("value")) .ok() .and_then(|value| value.as_f64()) .ok_or_else(|| invalid_param("real value must be number", &value))?; if !value.is_finite() { return Err(invalid_param_message("real value must be a finite number")); } Ok(Value::Real(value)) } Some("text") => Ok(Value::Text( Reflect::get(&object, &JsValue::from_str("value")) .ok() .and_then(|value| value.as_string()) .ok_or_else(|| invalid_param("text value must be string", &value))?, )), Some("json") => { let value = Reflect::get(&object, &JsValue::from_str("value")) .map_err(|_| invalid_param("json value is missing", &value))?; let json = serde_wasm_bindgen::from_value(value).map_err(|error| { LixError::new( LixError::CODE_INVALID_PARAM, format!("json value must be JSON-serializable: {error}"), ) })?; Ok(Value::Json(json)) } Some("blob") => { let base64 = Reflect::get(&object, &JsValue::from_str("base64")) .ok() .and_then(|value| value.as_string()) .ok_or_else(|| invalid_param("blob base64 must be string", &value))?; let bytes = base64::Engine::decode(&base64::engine::general_purpose::STANDARD, base64) .map_err(|error| { LixError::new( LixError::CODE_INVALID_PARAM, format!("blob base64 must be valid base64: {error}"), ) })?; Ok(Value::Blob(bytes)) } _ => Err(invalid_param( "parameter must be an explicit Lix value object", &value, )), } } fn execute_result_to_js(result: ExecuteResult) -> Result { let object = Object::new(); let columns = Array::new(); for column in result.columns() { columns.push(&JsValue::from_str(column)); } Reflect::set(&object, &JsValue::from_str("columns"), &columns) .map_err(|_| js_sdk_error("could not set columns"))?; let values = Array::new(); for row in result.rows() { let row_values = Array::new(); for value in row.values() { row_values.push(&value_to_js(value)?); } values.push(&row_values); } Reflect::set(&object, &JsValue::from_str("rows"), &values) .map_err(|_| js_sdk_error("could not set rows"))?; set_number(&object, "rowsAffected", result.rows_affected() as f64)?; let notices = Array::new(); for notice in result.notices() { let notice_object = Object::new(); 
set_string(&notice_object, "code", &notice.code)?; set_string(&notice_object, "message", &notice.message)?; if let Some(hint) = &notice.hint { set_string(&notice_object, "hint", hint)?; } notices.push(&notice_object); } Reflect::set(&object, &JsValue::from_str("notices"), &notices) .map_err(|_| js_sdk_error("could not set notices"))?; Ok(object.into()) } fn merge_version_preview_to_js( result: lix_rs_sdk::MergeVersionPreview, ) -> Result<JsValue, LixError> { let object = Object::new(); let outcome = match result.outcome { lix_rs_sdk::MergeVersionOutcome::AlreadyUpToDate => "alreadyUpToDate", lix_rs_sdk::MergeVersionOutcome::FastForward => "fastForward", lix_rs_sdk::MergeVersionOutcome::MergeCommitted => "mergeCommitted", }; set_string(&object, "outcome", outcome)?; set_string(&object, "targetVersionId", &result.target_version_id)?; set_string(&object, "sourceVersionId", &result.source_version_id)?; set_string(&object, "baseCommitId", &result.base_commit_id)?; set_string(&object, "targetHeadCommitId", &result.target_head_commit_id)?; set_string(&object, "sourceHeadCommitId", &result.source_head_commit_id)?; Reflect::set( &object, &JsValue::from_str("changeStats"), &merge_change_stats_to_js(&result.change_stats)?, ) .map_err(|_| js_sdk_error("could not set changeStats"))?; let conflicts = Array::new(); for conflict in result.conflicts { conflicts.push(&merge_conflict_to_js(&conflict)?); } Reflect::set(&object, &JsValue::from_str("conflicts"), &conflicts) .map_err(|_| js_sdk_error("could not set conflicts"))?; Ok(object.into()) } fn merge_change_stats_to_js(stats: &lix_rs_sdk::MergeChangeStats) -> Result<JsValue, LixError> { let object = Object::new(); set_number(&object, "total", stats.total as f64)?; set_number(&object, "added", stats.added as f64)?; set_number(&object, "modified", stats.modified as f64)?; set_number(&object, "removed", stats.removed as f64)?; Ok(object.into()) } fn merge_conflict_to_js(conflict: &lix_rs_sdk::MergeConflict) -> Result<JsValue, LixError> { let object = Object::new(); let kind = match conflict.kind { lix_rs_sdk::MergeConflictKind::SameEntityChanged => "sameEntityChanged", }; set_string(&object, "kind", kind)?; set_string(&object, "schemaKey", &conflict.schema_key)?; set_json(&object, "entityId", &conflict.entity_id)?; set_optional_string(&object, "fileId", conflict.file_id.as_deref())?; Reflect::set( &object, &JsValue::from_str("target"), &merge_conflict_side_to_js(&conflict.target)?, ) .map_err(|_| js_sdk_error("could not set target conflict side"))?; Reflect::set( &object, &JsValue::from_str("source"), &merge_conflict_side_to_js(&conflict.source)?, ) .map_err(|_| js_sdk_error("could not set source conflict side"))?; Ok(object.into()) } fn merge_conflict_side_to_js( side: &lix_rs_sdk::MergeConflictSide, ) -> Result<JsValue, LixError> { let object = Object::new(); let kind = match side.kind { lix_rs_sdk::MergeConflictChangeKind::Added => "added", lix_rs_sdk::MergeConflictChangeKind::Modified => "modified", lix_rs_sdk::MergeConflictChangeKind::Removed => "removed", }; set_string(&object, "kind", kind)?; set_optional_string(&object, "beforeChangeId", side.before_change_id.as_deref())?; set_optional_string(&object, "afterChangeId", side.after_change_id.as_deref())?; Ok(object.into()) } fn value_to_js(value: &Value) -> Result<JsValue, LixError> { let object = Object::new(); match value { Value::Null => { set_string(&object, "kind", "null")?; Reflect::set(&object, &JsValue::from_str("value"), &JsValue::NULL) .map_err(|_| js_sdk_error("could not set null value"))?; } Value::Boolean(value) => { set_string(&object, "kind", "boolean")?; Reflect::set( &object,
&JsValue::from_str("value"), &JsValue::from_bool(*value), ) .map_err(|_| js_sdk_error("could not set boolean value"))?; } Value::Integer(value) => { set_string(&object, "kind", "integer")?; set_number(&object, "value", *value as f64)?; } Value::Real(value) => { set_string(&object, "kind", "real")?; set_number(&object, "value", *value)?; } Value::Text(value) => { set_string(&object, "kind", "text")?; set_string(&object, "value", value)?; } Value::Json(value) => { set_string(&object, "kind", "json")?; let serializer = serde_wasm_bindgen::Serializer::json_compatible(); let value = value.serialize(&serializer).map_err(|error| { LixError::new( "LIX_ERROR_JS_SDK", format!("could not serialize JSON value: {error}"), ) })?; Reflect::set(&object, &JsValue::from_str("value"), &value) .map_err(|_| js_sdk_error("could not set json value"))?; } Value::Blob(value) => { set_string(&object, "kind", "blob")?; set_string( &object, "base64", &base64::Engine::encode(&base64::engine::general_purpose::STANDARD, value), )?; } } Ok(object.into()) } fn set_string(object: &Object, key: &str, value: &str) -> Result<(), LixError> { Reflect::set(object, &JsValue::from_str(key), &JsValue::from_str(value)) .map(|_| ()) .map_err(|_| js_sdk_error(format!("could not set {key}"))) } fn set_optional_string( object: &Object, key: &str, value: Option<&str>, ) -> Result<(), LixError> { let value = value.map(JsValue::from_str).unwrap_or(JsValue::NULL); Reflect::set(object, &JsValue::from_str(key), &value) .map(|_| ()) .map_err(|_| js_sdk_error(format!("could not set {key}"))) } fn set_number(object: &Object, key: &str, value: f64) -> Result<(), LixError> { Reflect::set(object, &JsValue::from_str(key), &JsValue::from_f64(value)) .map(|_| ()) .map_err(|_| js_sdk_error(format!("could not set {key}"))) } fn set_json(object: &Object, key: &str, value: &serde_json::Value) -> Result<(), LixError> { let serializer = serde_wasm_bindgen::Serializer::json_compatible(); let value = value.serialize(&serializer).map_err(|error| { LixError::new( "LIX_ERROR_JS_SDK", format!("could not serialize JSON value for {key}: {error}"), ) })?; Reflect::set(object, &JsValue::from_str(key), &value) .map(|_| ()) .map_err(|_| js_sdk_error(format!("could not set {key}"))) } fn invalid_param(message: impl Into, value: &JsValue) -> LixError { LixError::new(LixError::CODE_INVALID_PARAM, message.into()).with_details(json!({ "operation": "execute", "actual": js_type_name(value), })) } fn invalid_param_message(message: impl Into) -> LixError { LixError::new(LixError::CODE_INVALID_PARAM, message.into()).with_details(json!({ "operation": "execute", })) } fn js_sdk_error(message: impl Into) -> LixError { LixError::new("LIX_ERROR_JS_SDK", message.into()) } fn js_error(error: LixError) -> JsValue { let js_error = js_sys::Error::new(&error.message); let object: &Object = js_error.as_ref(); let _ = Reflect::set( object, &JsValue::from_str("code"), &JsValue::from_str(&error.code), ); if let Some(hint) = error.hint { let _ = Reflect::set( object, &JsValue::from_str("hint"), &JsValue::from_str(&hint), ); } if let Some(details) = error.details { let serializer = serde_wasm_bindgen::Serializer::json_compatible(); if let Ok(value) = details.serialize(&serializer) { let _ = Reflect::set(object, &JsValue::from_str("details"), &value); } } js_error.into() } } ================================================ FILE: packages/plugin-json-v2/.gitignore ================================================ /target /Cargo.lock ================================================ FILE: 
packages/plugin-json-v2/Cargo.toml ================================================ [package] name = "plugin_json_v2" version = "0.1.0" edition = "2021" publish = false [lib] crate-type = ["cdylib", "rlib"] [dependencies] serde = { version = "1", features = ["derive"] } serde_json = "1" wit-bindgen = "0.40" [dev-dependencies] criterion = "0.5" [[bench]] name = "detect_changes" harness = false [[bench]] name = "apply_changes" harness = false [[bench]] name = "roundtrip" harness = false ================================================ FILE: packages/plugin-json-v2/README.md ================================================ # plugin-json-v2 Rust/WASM component JSON plugin for the Lix engine. - Uses `packages/engine/wit/lix-plugin.wit` as the API contract. - Implements JSON pointer based `detect-changes` and `apply-changes`. - Intended to be installed through `Engine::install_plugin(manifest_json, wasm_bytes)`. - `apply-changes` treats input as an unordered latest-state projection and reconstructs JSON deterministically from upsert rows. ================================================ FILE: packages/plugin-json-v2/benches/apply_changes.rs ================================================ mod common; use criterion::{criterion_group, criterion_main, BatchSize, Criterion}; use plugin_json_v2::apply_changes; fn bench_apply_changes(c: &mut Criterion) { let mut group = c.benchmark_group("apply_changes"); group.sample_size(30); for (name, (before, after)) in [ ("small", common::dataset_small()), ("medium", common::dataset_medium()), ("large", common::dataset_large()), ] { let projection = common::projection_for_transition(&before, &after); let seed = common::file_from_bytes("f1", "/x.json", br#"{"stale":"cache"}"#); group.bench_function(name, |b| { b.iter_batched( || (seed.clone(), projection.clone()), |(seed_file, rows)| { apply_changes(seed_file, rows).expect("apply_changes benchmark should succeed") }, BatchSize::SmallInput, ); }); } group.finish(); } criterion_group!(benches, bench_apply_changes); criterion_main!(benches); ================================================ FILE: packages/plugin-json-v2/benches/common/mod.rs ================================================ #![allow(dead_code)] use plugin_json_v2::{detect_changes, PluginEntityChange, PluginFile, SCHEMA_KEY}; use serde_json::{Map, Value}; use std::collections::BTreeMap; fn make_document(scale: usize, mutate: bool) -> Value { let mut root = Map::new(); for i in 0..scale { if mutate && i % 11 == 0 { continue; } let mut entry = Map::new(); let value = if mutate && i % 3 == 0 { (i as i64) * 2 } else { i as i64 }; entry.insert("value".to_string(), Value::Number(value.into())); entry.insert("enabled".to_string(), Value::Bool(i % 2 == 0)); let mut tags = Vec::new(); tags.push(Value::String(format!("tag-{i}"))); tags.push(Value::Number((i as i64 + 1).into())); if mutate && i % 5 == 0 { tags.push(Value::String("new".to_string())); } entry.insert("tags".to_string(), Value::Array(tags)); root.insert(format!("item-{i}"), Value::Object(entry)); } if mutate { let extra = scale / 10 + 1; for i in 0..extra { let mut entry = Map::new(); entry.insert( "value".to_string(), Value::Number((10_000 + i as i64).into()), ); entry.insert("enabled".to_string(), Value::Bool(true)); entry.insert( "tags".to_string(), Value::Array(vec![Value::String("added".to_string())]), ); root.insert(format!("added-{i}"), Value::Object(entry)); } } root.insert( "meta".to_string(), serde_json::json!({ "version": if mutate { 2 } else { 1 }, "name": if mutate { "after" } else { 
"before" }, }), ); Value::Object(root) } pub fn dataset_small() -> (Vec, Vec) { let before = make_document(20, false); let after = make_document(20, true); ( serde_json::to_vec(&before).expect("before JSON should serialize"), serde_json::to_vec(&after).expect("after JSON should serialize"), ) } pub fn dataset_medium() -> (Vec, Vec) { let before = make_document(200, false); let after = make_document(200, true); ( serde_json::to_vec(&before).expect("before JSON should serialize"), serde_json::to_vec(&after).expect("after JSON should serialize"), ) } pub fn dataset_large() -> (Vec, Vec) { let before = make_document(1000, false); let after = make_document(1000, true); ( serde_json::to_vec(&before).expect("before JSON should serialize"), serde_json::to_vec(&after).expect("after JSON should serialize"), ) } pub fn file_from_bytes(id: &str, path: &str, data: &[u8]) -> PluginFile { PluginFile { id: id.to_string(), path: path.to_string(), data: data.to_vec(), } } pub fn merge_latest_state_rows( changesets: Vec>, ) -> Vec { let mut latest = BTreeMap::new(); for changes in changesets { for change in changes { if change.schema_key != SCHEMA_KEY { continue; } latest.insert( (change.schema_key.clone(), change.entity_id.clone()), change, ); } } latest.into_values().collect() } pub fn projection_for_transition(before: &[u8], after: &[u8]) -> Vec { let before_file = file_from_bytes("f1", "/x.json", before); let after_file = file_from_bytes("f1", "/x.json", after); let baseline = detect_changes(None, before_file.clone()).expect("baseline detect_changes should work"); let delta = detect_changes(Some(before_file), after_file).expect("delta detect_changes should work"); merge_latest_state_rows(vec![baseline, delta]) } ================================================ FILE: packages/plugin-json-v2/benches/detect_changes.rs ================================================ mod common; use criterion::{criterion_group, criterion_main, BatchSize, Criterion}; use plugin_json_v2::detect_changes; fn bench_detect_changes(c: &mut Criterion) { let mut group = c.benchmark_group("detect_changes"); group.sample_size(30); for (name, (before, after)) in [ ("small", common::dataset_small()), ("medium", common::dataset_medium()), ("large", common::dataset_large()), ] { group.bench_function(name, |b| { b.iter_batched( || { ( common::file_from_bytes("f1", "/x.json", &before), common::file_from_bytes("f1", "/x.json", &after), ) }, |(before_file, after_file)| { detect_changes(Some(before_file), after_file) .expect("detect_changes benchmark should succeed") }, BatchSize::SmallInput, ); }); } group.finish(); } criterion_group!(benches, bench_detect_changes); criterion_main!(benches); ================================================ FILE: packages/plugin-json-v2/benches/roundtrip.rs ================================================ mod common; use criterion::{criterion_group, criterion_main, BatchSize, Criterion}; use plugin_json_v2::{apply_changes, detect_changes}; fn bench_roundtrip_projection(c: &mut Criterion) { let mut group = c.benchmark_group("roundtrip_projection"); group.sample_size(20); for (name, (before, after)) in [ ("small", common::dataset_small()), ("medium", common::dataset_medium()), ("large", common::dataset_large()), ] { group.bench_function(name, |b| { b.iter_batched( || { ( common::file_from_bytes("f1", "/x.json", &before), common::file_from_bytes("f1", "/x.json", &after), ) }, |(before_file, after_file)| { let baseline = detect_changes(None, before_file.clone()) .expect("baseline detect_changes should succeed"); 
let delta = detect_changes(Some(before_file), after_file) .expect("delta detect_changes should succeed"); let projection = common::merge_latest_state_rows(vec![baseline, delta]); let seed = common::file_from_bytes("f1", "/x.json", br#"{"stale":"cache"}"#); apply_changes(seed, projection).expect("apply_changes should succeed") }, BatchSize::SmallInput, ); }); } group.finish(); } criterion_group!(benches, bench_roundtrip_projection); criterion_main!(benches); ================================================ FILE: packages/plugin-json-v2/schema/json_pointer.json ================================================ { "x-lix-key": "json_pointer", "x-lix-primary-key": [ "/path" ], "type": "object", "properties": { "path": { "type": "string", "description": "RFC 6901 JSON Pointer path (empty string for root)." }, "value": { "anyOf": [ { "type": "object" }, { "type": "array" }, { "type": "string" }, { "type": "number" }, { "type": "boolean" }, { "type": "null" } ] } }, "required": [ "path", "value" ], "additionalProperties": false } ================================================ FILE: packages/plugin-json-v2/src/lib.rs ================================================ use crate::exports::lix::plugin::api::{EntityChange, File, Guest, PluginError}; use serde_json::{Map, Value}; use std::collections::{BTreeMap, BTreeSet, HashMap}; use std::sync::OnceLock; wit_bindgen::generate!({ path: "../engine/wit", world: "plugin", }); pub const SCHEMA_KEY: &str = "json_pointer"; const MAX_ARRAY_INDEX: usize = 100_000; const JSON_POINTER_SCHEMA_JSON: &str = include_str!("../schema/json_pointer.json"); static JSON_POINTER_SCHEMA: OnceLock = OnceLock::new(); pub use crate::exports::lix::plugin::api::{ EntityChange as PluginEntityChange, File as PluginFile, PluginError as PluginApiError, }; struct JsonPlugin; #[derive(Debug, serde::Serialize)] struct SnapshotContentRef<'a> { path: &'a str, value: &'a Value, } #[derive(Debug, serde::Deserialize)] #[serde(deny_unknown_fields)] struct SnapshotContentWithPath { path: String, value: Value, } #[derive(Debug, Clone)] struct ProjectionUpsert { pointer: String, tokens: Vec, terminal_token: Option, value: Value, } #[derive(Debug, Clone)] struct ProjectionTombstone { pointer: String, tokens: Vec, } #[derive(Debug, Clone, Copy, PartialEq, Eq)] enum ProjectionNodeKind { Object, Array, Scalar, } impl ProjectionNodeKind { fn from_value(value: &Value) -> Self { if value.is_object() { Self::Object } else if value.is_array() { Self::Array } else { Self::Scalar } } } #[derive(Debug, Clone)] enum TypedPathToken { ObjectKey(String), ArrayIndex(usize), } #[derive(Debug)] struct ProjectionTreeNode { value: Option, terminal_token: Option, object_children: Vec<(String, usize)>, array_children: Vec<(usize, usize)>, } impl Guest for JsonPlugin { fn detect_changes( before: Option, after: File, _state_context: Option, ) -> Result, PluginError> { let before_json = before .as_ref() .map(|file| parse_json_bytes(&file.data)) .transpose()?; let after_json = parse_json_bytes(&after.data)?; let mut changes = Vec::new(); diff_json( before_json.as_ref(), Some(&after_json), &mut Vec::new(), &mut changes, )?; Ok(changes) } fn apply_changes(_file: File, changes: Vec) -> Result, PluginError> { let mut seen_entity_ids = BTreeSet::new(); let mut upserts = Vec::new(); let mut tombstones = Vec::new(); for change in changes { if change.schema_key != SCHEMA_KEY { continue; } let pointer = change.entity_id; if !seen_entity_ids.insert(pointer.clone()) { return Err(PluginError::InvalidInput(format!( "duplicate 
entity_id '{pointer}' for schema_key '{SCHEMA_KEY}'" ))); } let tokens = pointer_tokens(&pointer)?; match change.snapshot_content { Some(snapshot_content) => { let value = parse_snapshot_value(&snapshot_content, &pointer)?; upserts.push(ProjectionUpsert { pointer, tokens, terminal_token: None, value, }); } None => { tombstones.push(ProjectionTombstone { pointer, tokens }); } } } let has_root_tombstone = tombstones.iter().any(|entry| entry.tokens.is_empty()); if has_root_tombstone && (upserts.iter().any(|entry| !entry.tokens.is_empty()) || tombstones.iter().any(|entry| !entry.tokens.is_empty())) { return Err(PluginError::InvalidInput( "root tombstone cannot coexist with non-root projection rows".to_string(), )); } let has_root_upsert = upserts.iter().any(|entry| entry.pointer.is_empty()); let has_non_root_rows = upserts.iter().any(|entry| !entry.pointer.is_empty()) || tombstones.iter().any(|entry| !entry.tokens.is_empty()); if has_non_root_rows && !has_root_upsert { return Err(PluginError::InvalidInput( "non-root projection rows require a root row with entity_id ''".to_string(), )); } let upsert_pointers = upserts .iter() .map(|entry| entry.pointer.clone()) .collect::>(); let tombstone_pointers = tombstones .iter() .map(|entry| entry.pointer.clone()) .collect::>(); let upsert_node_kinds = upserts .iter() .map(|entry| { ( entry.pointer.clone(), ProjectionNodeKind::from_value(&entry.value), ) }) .collect::>(); let mut array_child_indices: BTreeMap> = BTreeMap::new(); let mut canonical_upsert_pointers = BTreeSet::new(); for upsert in &mut upserts { let mut ancestor = String::new(); let mut canonical_pointer = String::new(); let raw_tokens = std::mem::take(&mut upsert.tokens); let mut terminal_token = None; for token in raw_tokens { if tombstone_pointers.contains(&ancestor) { return Err(PluginError::InvalidInput(format!( "entity_id '{}' conflicts with tombstoned ancestor '{ancestor}'", upsert.pointer ))); } if !upsert_pointers.contains(&ancestor) { return Err(PluginError::InvalidInput(format!( "missing ancestor container row '{ancestor}' for entity_id '{}'", upsert.pointer ))); } let ancestor_kind = *upsert_node_kinds .get(&ancestor) .expect("ancestor pointer existence checked above"); let validated = validate_child_token_for_ancestor( ancestor_kind, &token, &ancestor, &upsert.pointer, )?; let canonical_token = validated.canonical_token; let parent_ancestor = ancestor.clone(); push_pointer_segment(&mut ancestor, &token); push_pointer_segment(&mut canonical_pointer, &canonical_token); if let Some(index) = validated.array_index { array_child_indices .entry(parent_ancestor) .or_default() .insert(index); terminal_token = Some(TypedPathToken::ArrayIndex(index)); } else { terminal_token = Some(TypedPathToken::ObjectKey(token)); } } upsert.terminal_token = terminal_token; if !canonical_upsert_pointers.insert(canonical_pointer.clone()) { return Err(PluginError::InvalidInput(format!( "logical duplicate pointer '{canonical_pointer}' in projection rows" ))); } } let mut canonical_tombstone_pointers = BTreeSet::new(); for tombstone in &tombstones { let mut ancestor = String::new(); let mut canonical_pointer = String::new(); for token in &tombstone.tokens { if array_child_indices.contains_key(&ancestor) { if token == "-" || (!token.is_empty() && token.chars().all(|ch| ch.is_ascii_digit())) { let index = parse_projection_array_index(token, &ancestor, &tombstone.pointer)?; push_pointer_segment(&mut canonical_pointer, &index.to_string()); } else { push_pointer_segment(&mut canonical_pointer, token); } } else { 
push_pointer_segment(&mut canonical_pointer, token); } push_pointer_segment(&mut ancestor, token); } if canonical_upsert_pointers.contains(&canonical_pointer) { return Err(PluginError::InvalidInput(format!( "tombstone '{}' conflicts with live projection row '{}'", tombstone.pointer, canonical_pointer ))); } if !canonical_tombstone_pointers.insert(canonical_pointer.clone()) { return Err(PluginError::InvalidInput(format!( "logical duplicate tombstone pointer '{canonical_pointer}' in projection rows" ))); } } validate_sparse_array_children(&array_child_indices)?; let document = build_document_from_projection(upserts, has_root_tombstone)?; serde_json::to_vec(&document).map_err(|error| { PluginError::Internal(format!("failed to serialize reconstructed JSON: {error}")) }) } } fn parse_json_bytes(data: &[u8]) -> Result { if data.is_empty() { return Ok(Value::Object(Map::new())); } serde_json::from_slice::(data).map_err(|error| { PluginError::InvalidInput(format!("file.data must be valid JSON UTF-8 bytes: {error}")) }) } fn parse_snapshot_value(raw: &str, pointer: &str) -> Result { if let Ok(parsed) = serde_json::from_str::(raw) { if parsed.path != pointer { return Err(PluginError::InvalidInput(format!( "snapshot path '{}' does not match entity_id '{}'", parsed.path, pointer ))); } return Ok(parsed.value); } parse_snapshot_value_slow(raw, pointer) } fn parse_snapshot_value_slow(raw: &str, pointer: &str) -> Result { let parsed = serde_json::from_str::(raw).map_err(|error| { PluginError::InvalidInput(format!( "invalid snapshot_content for pointer '{pointer}': {error}" )) })?; let Value::Object(mut object) = parsed else { return Err(PluginError::InvalidInput(format!( "snapshot_content for pointer '{pointer}' must be an object with 'value'" ))); }; let raw_path = object.remove("path"); let raw_value = object.remove("value"); if !object.is_empty() { return Err(PluginError::InvalidInput(format!( "snapshot_content for pointer '{pointer}' contains unsupported properties" ))); } match (raw_path, raw_value) { (Some(path), Some(value)) => { let Some(path_string) = path.as_str() else { return Err(PluginError::InvalidInput(format!( "snapshot path for entity_id '{pointer}' must be a string" ))); }; if path_string != pointer { return Err(PluginError::InvalidInput(format!( "snapshot path '{path_string}' does not match entity_id '{pointer}'" ))); } Ok(value) } (None, Some(_)) => Err(PluginError::InvalidInput(format!( "snapshot_content for pointer '{pointer}' must contain 'path'" ))), (_, None) => Err(PluginError::InvalidInput(format!( "snapshot_content for pointer '{pointer}' must contain 'value'" ))), } } fn diff_json( before: Option<&Value>, after: Option<&Value>, path: &mut Vec, changes: &mut Vec, ) -> Result<(), PluginError> { if before.is_none() && after.is_none() { return Ok(()); } if after.is_none() { collect_deletions( before.expect("after is none implies before exists"), path, changes, true, ); return Ok(()); } if before.is_none() { collect_leaves(after.expect("checked above"), path, changes)?; return Ok(()); } let before_value = before.expect("checked above"); let after_value = after.expect("checked above"); if before_value == after_value { return Ok(()); } let before_is_container = is_container(before_value); let after_is_container = is_container(after_value); if before_is_container && after_is_container { if let (Some(before_items), Some(after_items)) = (before_value.as_array(), after_value.as_array()) { let shared = before_items.len().min(after_items.len()); for index in 0..shared { 
path.push(index.to_string()); diff_json( before_items.get(index), after_items.get(index), path, changes, )?; path.pop(); } if before_items.len() > after_items.len() { for index in (after_items.len()..before_items.len()).rev() { path.push(index.to_string()); diff_json(before_items.get(index), None, path, changes)?; path.pop(); } } else { for index in before_items.len()..after_items.len() { path.push(index.to_string()); diff_json(None, after_items.get(index), path, changes)?; path.pop(); } } return Ok(()); } if let (Some(before_object), Some(after_object)) = (before_value.as_object(), after_value.as_object()) { let mut keys = before_object.keys().cloned().collect::>(); for key in after_object.keys() { if !before_object.contains_key(key) { keys.push(key.clone()); } } for key in keys { path.push(key.clone()); diff_json( before_object.get(&key), after_object.get(&key), path, changes, )?; path.pop(); } return Ok(()); } } if before_is_container || after_is_container { collect_deletions(before_value, path, changes, false); collect_leaves(after_value, path, changes)?; return Ok(()); } if before_value != after_value { push_upsert(changes, pointer_from_segments(path), after_value.clone())?; } Ok(()) } fn collect_deletions( value: &Value, path: &mut Vec, changes: &mut Vec, include_current: bool, ) { match value { Value::Array(items) => { if include_current { push_deletion(changes, pointer_from_segments(path)); } for index in (0..items.len()).rev() { path.push(index.to_string()); collect_deletions(&items[index], path, changes, true); path.pop(); } } Value::Object(object) => { if include_current { push_deletion(changes, pointer_from_segments(path)); } for (key, item) in object { path.push(key.clone()); collect_deletions(item, path, changes, true); path.pop(); } } _ => { if include_current { push_deletion(changes, pointer_from_segments(path)); } } } } fn collect_leaves( value: &Value, path: &mut Vec, changes: &mut Vec, ) -> Result<(), PluginError> { match value { Value::Array(items) => { push_upsert( changes, pointer_from_segments(path), Value::Array(Vec::new()), )?; for (index, item) in items.iter().enumerate() { path.push(index.to_string()); collect_leaves(item, path, changes)?; path.pop(); } Ok(()) } Value::Object(object) => { push_upsert( changes, pointer_from_segments(path), Value::Object(Map::new()), )?; for (key, item) in object { path.push(key.clone()); collect_leaves(item, path, changes)?; path.pop(); } Ok(()) } _ => push_upsert(changes, pointer_from_segments(path), value.clone()), } } fn push_deletion(changes: &mut Vec, pointer: String) { changes.push(EntityChange { entity_id: pointer, schema_key: SCHEMA_KEY.to_string(), snapshot_content: None, }); } fn push_upsert( changes: &mut Vec, pointer: String, value: Value, ) -> Result<(), PluginError> { let snapshot_content = serde_json::to_string(&SnapshotContentRef { path: &pointer, value: &value, }) .map_err(|error| { PluginError::Internal(format!( "failed to serialize snapshot content for '{pointer}': {error}" )) })?; changes.push(EntityChange { entity_id: pointer, schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content), }); Ok(()) } fn is_container(value: &Value) -> bool { value.is_array() || value.is_object() } fn pointer_from_segments(segments: &[String]) -> String { if segments.is_empty() { return String::new(); } let mut pointer = String::new(); for segment in segments { push_pointer_segment(&mut pointer, segment); } pointer } fn push_pointer_segment(pointer: &mut String, token: &str) { pointer.push('/'); for ch in 
token.chars() { match ch { '~' => pointer.push_str("~0"), '/' => pointer.push_str("~1"), _ => pointer.push(ch), } } } fn unescape_pointer_token(token: &str) -> Result { let mut output = String::with_capacity(token.len()); let mut chars = token.chars(); while let Some(ch) = chars.next() { if ch != '~' { output.push(ch); continue; } match chars.next() { Some('0') => output.push('~'), Some('1') => output.push('/'), Some(other) => { return Err(PluginError::InvalidInput(format!( "invalid JSON pointer escape '~{other}' in token '{token}'" ))); } None => { return Err(PluginError::InvalidInput(format!( "invalid JSON pointer escape '~' in token '{token}'" ))); } } } Ok(output) } fn pointer_tokens(pointer: &str) -> Result, PluginError> { if pointer.is_empty() { return Ok(Vec::new()); } if !pointer.starts_with('/') { return Err(PluginError::InvalidInput(format!( "entity_id '{pointer}' must be a JSON pointer" ))); } pointer .split('/') .skip(1) .map(unescape_pointer_token) .collect() } struct ValidatedChildToken { canonical_token: String, array_index: Option, } fn validate_child_token_for_ancestor( ancestor_kind: ProjectionNodeKind, child_token: &str, ancestor_pointer: &str, entity_id: &str, ) -> Result { match ancestor_kind { ProjectionNodeKind::Object => Ok(ValidatedChildToken { canonical_token: child_token.to_string(), array_index: None, }), ProjectionNodeKind::Array => { let index = parse_projection_array_index(child_token, ancestor_pointer, entity_id)?; Ok(ValidatedChildToken { canonical_token: index.to_string(), array_index: Some(index), }) } ProjectionNodeKind::Scalar => Err(PluginError::InvalidInput(format!( "ancestor '{ancestor_pointer}' for entity_id '{entity_id}' is not a container" ))), } } fn validate_sparse_array_children( indices_by_ancestor: &BTreeMap>, ) -> Result<(), PluginError> { for (ancestor, indices) in indices_by_ancestor { let Some(max_index) = indices.iter().next_back() else { continue; }; for expected in 0..=*max_index { if !indices.contains(&expected) { return Err(PluginError::InvalidInput(format!( "sparse array projection under '{ancestor}': missing index {expected}" ))); } } } Ok(()) } fn parse_projection_array_index( token: &str, ancestor_pointer: &str, entity_id: &str, ) -> Result { if token == "-" { return Err(PluginError::InvalidInput(format!( "entity_id '{entity_id}' uses non-canonical '-' array token under '{ancestor_pointer}'" ))); } if token.is_empty() || !token.chars().all(|ch| ch.is_ascii_digit()) { return Err(PluginError::InvalidInput(format!( "invalid array index token '{token}' under '{ancestor_pointer}'" ))); } if token.len() > 1 && token.starts_with('0') { return Err(PluginError::InvalidInput(format!( "entity_id '{entity_id}' uses non-canonical array index token '{token}' under '{ancestor_pointer}'" ))); } let index = token.parse::().map_err(|error| { PluginError::InvalidInput(format!( "invalid array index token '{token}' under '{ancestor_pointer}': {error}" )) })?; if index > MAX_ARRAY_INDEX { return Err(PluginError::InvalidInput(format!( "array index {index} exceeds max supported index {MAX_ARRAY_INDEX}" ))); } Ok(index) } fn build_document_from_projection( upserts: Vec, has_root_tombstone: bool, ) -> Result { if upserts.is_empty() { return Ok(if has_root_tombstone { Value::Null } else { Value::Object(Map::new()) }); } let mut index_by_pointer = HashMap::with_capacity(upserts.len()); let mut pointers = Vec::with_capacity(upserts.len()); let mut nodes = Vec::with_capacity(upserts.len()); for (index, upsert) in upserts.into_iter().enumerate() { 
index_by_pointer.insert(upsert.pointer.clone(), index); pointers.push(upsert.pointer); nodes.push(ProjectionTreeNode { value: Some(upsert.value), terminal_token: upsert.terminal_token, object_children: Vec::new(), array_children: Vec::new(), }); } let root_index = index_by_pointer.get("").copied().ok_or_else(|| { PluginError::InvalidInput( "non-root projection rows require a root row with entity_id ''".to_string(), ) })?; for index in 0..pointers.len() { let pointer = &pointers[index]; if pointer.is_empty() { continue; } let parent_pointer = parent_pointer(pointer); let parent_index = index_by_pointer .get(parent_pointer) .copied() .ok_or_else(|| { PluginError::InvalidInput(format!( "missing ancestor container row '{parent_pointer}' for entity_id '{pointer}'" )) })?; let terminal_token = nodes[index].terminal_token.take().ok_or_else(|| { PluginError::Internal(format!( "missing terminal token for non-root projection row '{pointer}'" )) })?; match terminal_token { TypedPathToken::ObjectKey(key) => { nodes[parent_index].object_children.push((key, index)); } TypedPathToken::ArrayIndex(array_index) => { nodes[parent_index] .array_children .push((array_index, index)); } } } materialize_projection_node(&mut nodes, root_index) } fn parent_pointer(pointer: &str) -> &str { pointer .rsplit_once('/') .map(|(parent, _)| parent) .unwrap_or("") } fn materialize_projection_node( nodes: &mut [ProjectionTreeNode], index: usize, ) -> Result { let (mut value, object_children, array_children) = { let node = nodes.get_mut(index).ok_or_else(|| { PluginError::Internal(format!("projection node index {index} out of bounds")) })?; ( node.value.take().ok_or_else(|| { PluginError::Internal(format!("projection node {index} was materialized twice")) })?, std::mem::take(&mut node.object_children), std::mem::take(&mut node.array_children), ) }; match &mut value { Value::Object(object) => { if !array_children.is_empty() { return Err(PluginError::InvalidInput( "object projection node cannot have array-index children".to_string(), )); } for (key, child_index) in object_children { let child_value = materialize_projection_node(nodes, child_index)?; object.insert(key, child_value); } } Value::Array(items) => { if !object_children.is_empty() { return Err(PluginError::InvalidInput( "array projection node cannot have object-key children".to_string(), )); } for (array_index, child_index) in array_children { while items.len() <= array_index { items.push(Value::Null); } items[array_index] = materialize_projection_node(nodes, child_index)?; } } _ => { if !object_children.is_empty() || !array_children.is_empty() { return Err(PluginError::InvalidInput( "scalar projection node cannot have children".to_string(), )); } } } Ok(value) } pub fn detect_changes(before: Option, after: File) -> Result, PluginError> { ::detect_changes(before, after, None) } pub fn detect_changes_with_state_context( before: Option, after: File, state_context: Option, ) -> Result, PluginError> { ::detect_changes(before, after, state_context) } pub fn apply_changes(file: File, changes: Vec) -> Result, PluginError> { ::apply_changes(file, changes) } pub fn schema_json() -> &'static str { JSON_POINTER_SCHEMA_JSON } pub fn schema_definition() -> &'static Value { JSON_POINTER_SCHEMA.get_or_init(|| { serde_json::from_str(JSON_POINTER_SCHEMA_JSON).expect("json pointer schema must be valid") }) } export!(JsonPlugin); ================================================ FILE: packages/plugin-json-v2/tests/apply_changes.rs ================================================ mod 
common; use common::{file_from_json, snapshot_content}; use plugin_json_v2::{apply_changes, PluginApiError, PluginEntityChange, SCHEMA_KEY}; use serde_json::Value; fn with_root_object(mut changes: Vec) -> Vec { if changes.iter().any(|change| change.entity_id.is_empty()) { return changes; } let mut with_root = vec![PluginEntityChange { entity_id: "".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("", Value::Object(serde_json::Map::new()))), }]; with_root.append(&mut changes); with_root } #[test] fn applies_insert_update_delete() { let file = file_from_json("f1", "/x.json", r#"{"stale":"cache"}"#); let changes = vec![ PluginEntityChange { entity_id: "/Name".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content( "/Name", Value::String("Samuel".to_string()), )), }, PluginEntityChange { entity_id: "/Age".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/Age", Value::Number(20.into()))), }, PluginEntityChange { entity_id: "/City".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: None, }, ]; let output = apply_changes(file, with_root_object(changes)).expect("apply_changes should succeed"); let parsed: Value = serde_json::from_slice(&output).expect("output should be valid JSON"); assert_eq!(parsed, serde_json::json!({"Name":"Samuel","Age":20})); } #[test] fn applies_array_changes_with_indexes() { let file = file_from_json("f1", "/x.json", r#"{"stale":"cache"}"#); let changes = vec![ PluginEntityChange { entity_id: "/list".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/list", Value::Array(Vec::new()))), }, PluginEntityChange { entity_id: "/list/0".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/list/0", Value::String("a".to_string()))), }, PluginEntityChange { entity_id: "/list/1".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/list/1", Value::String("x".to_string()))), }, PluginEntityChange { entity_id: "/list/2".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/list/2", Value::String("c".to_string()))), }, PluginEntityChange { entity_id: "/list/3".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/list/3", Value::String("d".to_string()))), }, ]; let output = apply_changes(file, with_root_object(changes)).expect("apply_changes should succeed"); let parsed: Value = serde_json::from_slice(&output).expect("output should be valid JSON"); assert_eq!(parsed, serde_json::json!({"list":["a","x","c","d"]})); } #[test] fn rejects_snapshot_missing_path() { let file = file_from_json("f1", "/x.json", r#"{"foo":1}"#); let changes = vec![PluginEntityChange { entity_id: "/foo".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(r#"{"value":2}"#.to_string()), }]; let error = apply_changes(file, with_root_object(changes)).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("must contain 'path'")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn infers_array_parent_for_numeric_pointer_segment() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![ PluginEntityChange { entity_id: "/team".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/team", 
Value::Array(Vec::new()))), }, PluginEntityChange { entity_id: "/team/0".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content( "/team/0", Value::Object(serde_json::Map::new()), )), }, PluginEntityChange { entity_id: "/team/0/name".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content( "/team/0/name", Value::String("Ada".to_string()), )), }, ]; let output = apply_changes(file, with_root_object(changes)).expect("apply_changes should succeed"); let parsed: Value = serde_json::from_slice(&output).expect("output should parse"); assert_eq!(parsed, serde_json::json!({"team":[{"name":"Ada"}]})); } #[test] fn removing_root_sets_null() { let file = file_from_json("f1", "/x.json", r#"{"foo":1}"#); let changes = vec![PluginEntityChange { entity_id: "".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: None, }]; let output = apply_changes(file, with_root_object(changes)).expect("apply_changes should succeed"); let parsed: Value = serde_json::from_slice(&output).expect("output should parse"); assert_eq!(parsed, Value::Null); } #[test] fn rejects_duplicate_entity_ids_in_projection_set() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![ PluginEntityChange { entity_id: "/foo".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/foo", Value::Number(1.into()))), }, PluginEntityChange { entity_id: "/foo".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/foo", Value::Number(2.into()))), }, ]; let error = apply_changes(file, with_root_object(changes)).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("duplicate entity_id")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn rejects_mismatched_snapshot_path() { let file = file_from_json("f1", "/x.json", r#"{"foo":1}"#); let changes = vec![PluginEntityChange { entity_id: "/foo".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(r#"{"path":"/bar","value":2}"#.to_string()), }]; let error = apply_changes(file, with_root_object(changes)).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("snapshot path '/bar'")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn rejects_invalid_json_pointer_escape() { let file = file_from_json("f1", "/x.json", r#"{"foo":1}"#); let changes = vec![PluginEntityChange { entity_id: "/foo/~2bar".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/foo/~2bar", Value::Number(2.into()))), }]; let error = apply_changes(file, with_root_object(changes)).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("invalid JSON pointer escape")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn rejects_invalid_dash_placement() { let file = file_from_json("f1", "/x.json", r#"{"list":[{"x":"a"}]}"#); let changes = vec![ PluginEntityChange { entity_id: "/list".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/list", Value::Array(Vec::new()))), }, PluginEntityChange { entity_id: "/list/-/x".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: 
Some(snapshot_content( "/list/-/x", Value::String("b".to_string()), )), }, ]; let error = apply_changes(file, with_root_object(changes)).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("non-canonical '-' array token")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn allows_proto_like_keys_when_projection_rows_are_consistent() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![ PluginEntityChange { entity_id: "/__proto__".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content( "/__proto__", Value::Object(serde_json::Map::new()), )), }, PluginEntityChange { entity_id: "/__proto__/x".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content( "/__proto__/x", Value::String("pwn".to_string()), )), }, ]; let output = apply_changes(file, with_root_object(changes)).expect("apply_changes should succeed"); let parsed: Value = serde_json::from_slice(&output).expect("output should parse"); assert_eq!(parsed, serde_json::json!({"__proto__":{"x":"pwn"}})); } #[test] fn rejects_descendant_upsert_under_tombstoned_ancestor() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![ PluginEntityChange { entity_id: "/a".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: None, }, PluginEntityChange { entity_id: "/a/b".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/a/b", Value::Number(1.into()))), }, ]; let error = apply_changes(file, with_root_object(changes)).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("conflicts with tombstoned ancestor")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn rejects_root_tombstone_with_non_root_rows() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![ PluginEntityChange { entity_id: "".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: None, }, PluginEntityChange { entity_id: "/a".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/a", Value::Number(1.into()))), }, ]; let error = apply_changes(file, with_root_object(changes)).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("root tombstone cannot coexist")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn rejects_snapshot_path_non_string() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![PluginEntityChange { entity_id: "/safe".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(r#"{"path":123,"value":1}"#.to_string()), }]; let error = apply_changes(file, with_root_object(changes)).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("must be a string")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn rejects_snapshot_with_additional_properties_or_missing_value() { let file = file_from_json("f1", "/x.json", r#"{}"#); let with_extra = vec![PluginEntityChange { entity_id: "/safe".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: 
Some(r#"{"path":"/safe","value":1,"extra":true}"#.to_string()), }]; let error = apply_changes(file.clone(), with_root_object(with_extra)) .expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("unsupported properties")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } let missing_value = vec![PluginEntityChange { entity_id: "/safe".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(r#"{"path":"/safe"}"#.to_string()), }]; let error = apply_changes(file, with_root_object(missing_value)) .expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("must contain 'value'")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn rejects_numeric_child_without_parent_container_row() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![PluginEntityChange { entity_id: "/foo/0".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/foo/0", Value::String("x".to_string()))), }]; let error = apply_changes(file, with_root_object(changes)).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("missing ancestor container row")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn rejects_huge_array_index_growth() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![ PluginEntityChange { entity_id: "/arr".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr", Value::Array(Vec::new()))), }, PluginEntityChange { entity_id: "/arr/100001".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content( "/arr/100001", Value::String("x".to_string()), )), }, ]; let error = apply_changes(file, with_root_object(changes)).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("exceeds max supported index")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn rejects_leading_zero_array_indices_under_array_ancestor() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![ PluginEntityChange { entity_id: "/arr".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr", Value::Array(Vec::new()))), }, PluginEntityChange { entity_id: "/arr/01".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr/01", Value::String("A".to_string()))), }, ]; let error = apply_changes(file, with_root_object(changes)).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("non-canonical array index token")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn accepts_canonical_zero_array_index() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![ PluginEntityChange { entity_id: "/arr".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr", Value::Array(Vec::new()))), }, PluginEntityChange { entity_id: "/arr/0".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: 
Some(snapshot_content("/arr/0", Value::String("A".to_string()))), }, ]; let output = apply_changes(file, with_root_object(changes)).expect("apply_changes should succeed"); let parsed: Value = serde_json::from_slice(&output).expect("output should parse"); assert_eq!(parsed, serde_json::json!({"arr":["A"]})); } #[test] fn rejects_sparse_array_projection_rows() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![ PluginEntityChange { entity_id: "/arr".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr", Value::Array(Vec::new()))), }, PluginEntityChange { entity_id: "/arr/5".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr/5", Value::String("x".to_string()))), }, ]; let error = apply_changes(file, with_root_object(changes)).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("sparse array projection")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn rejects_aliasing_array_indices_via_non_canonical_form() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![ PluginEntityChange { entity_id: "/arr".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr", Value::Array(Vec::new()))), }, PluginEntityChange { entity_id: "/arr/1".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr/1", Value::String("A".to_string()))), }, PluginEntityChange { entity_id: "/arr/01".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr/01", Value::String("B".to_string()))), }, ]; let error = apply_changes(file, with_root_object(changes)).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("non-canonical array index token")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn rejects_tombstone_with_leading_zero_token_under_live_array_context() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![ PluginEntityChange { entity_id: "/arr".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr", Value::Array(Vec::new()))), }, PluginEntityChange { entity_id: "/arr/0".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr/0", Value::String("A".to_string()))), }, PluginEntityChange { entity_id: "/arr/01".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: None, }, ]; let error = apply_changes(file, with_root_object(changes)).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("non-canonical array index token")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn rejects_tombstone_with_dash_token_under_live_array_context() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![ PluginEntityChange { entity_id: "/arr".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr", Value::Array(Vec::new()))), }, PluginEntityChange { entity_id: "/arr/0".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr/0", Value::String("A".to_string()))), }, PluginEntityChange { 
entity_id: "/arr/-".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: None, }, ]; let error = apply_changes(file, with_root_object(changes)).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("non-canonical '-' array token")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn allows_tombstone_with_leading_zero_token_with_only_live_array_container() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![ PluginEntityChange { entity_id: "/arr".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr", Value::Array(Vec::new()))), }, PluginEntityChange { entity_id: "/arr/00".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: None, }, ]; let output = apply_changes(file, with_root_object(changes)).expect("apply_changes should succeed"); let parsed: Value = serde_json::from_slice(&output).expect("output should parse"); assert_eq!(parsed, serde_json::json!({"arr":[]})); } #[test] fn allows_tombstone_with_dash_token_with_only_live_array_container() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![ PluginEntityChange { entity_id: "/arr".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr", Value::Array(Vec::new()))), }, PluginEntityChange { entity_id: "/arr/-".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: None, }, ]; let output = apply_changes(file, with_root_object(changes)).expect("apply_changes should succeed"); let parsed: Value = serde_json::from_slice(&output).expect("output should parse"); assert_eq!(parsed, serde_json::json!({"arr":[]})); } #[test] fn rejects_live_array_row_with_non_canonical_tombstone_alias() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![ PluginEntityChange { entity_id: "/arr".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr", Value::Array(Vec::new()))), }, PluginEntityChange { entity_id: "/arr/0".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr/0", Value::Null)), }, PluginEntityChange { entity_id: "/arr/1".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr/1", Value::String("B".to_string()))), }, PluginEntityChange { entity_id: "/arr/01".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: None, }, ]; let error = apply_changes(file, with_root_object(changes)).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("non-canonical array index token")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn allows_tombstone_non_numeric_token_under_live_array_context() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![ PluginEntityChange { entity_id: "/arr".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr", Value::Array(Vec::new()))), }, PluginEntityChange { entity_id: "/arr/0".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr/0", Value::Null)), }, PluginEntityChange { entity_id: "/arr/foo".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: None, }, ]; let output = apply_changes(file, with_root_object(changes)).expect("apply_changes 
should succeed"); let parsed: Value = serde_json::from_slice(&output).expect("output should parse"); assert_eq!(parsed, serde_json::json!({"arr":[null]})); } #[test] fn rejects_root_scalar_with_non_root_descendants() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![ PluginEntityChange { entity_id: "".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("", Value::Number(7.into()))), }, PluginEntityChange { entity_id: "/a".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/a", Value::Number(1.into()))), }, ]; let error = apply_changes(file, with_root_object(changes)).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("is not a container")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn rejects_scalar_ancestor_with_descendant_row() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![ PluginEntityChange { entity_id: "/a".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/a", Value::Number(1.into()))), }, PluginEntityChange { entity_id: "/a/b".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/a/b", Value::Number(2.into()))), }, ]; let error = apply_changes(file, with_root_object(changes)).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("is not a container")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn rejects_final_dash_token_in_projection_rows() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![ PluginEntityChange { entity_id: "/arr".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr", Value::Array(Vec::new()))), }, PluginEntityChange { entity_id: "/arr/-".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/arr/-", Value::String("x".to_string()))), }, ]; let error = apply_changes(file, with_root_object(changes)).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("non-canonical '-' array token")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn rejects_non_root_rows_when_root_row_is_missing() { let file = file_from_json("f1", "/x.json", r#"{}"#); let changes = vec![PluginEntityChange { entity_id: "/0".to_string(), schema_key: SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content("/0", Value::String("x".to_string()))), }]; let error = apply_changes(file, changes).expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("non-root projection rows require a root row")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } ================================================ FILE: packages/plugin-json-v2/tests/common/mod.rs ================================================ #![allow(dead_code)] use plugin_json_v2::{PluginEntityChange, PluginFile}; use serde::Deserialize; use serde_json::Value; #[derive(Debug, Deserialize)] struct SnapshotContent { path: String, value: Value, } pub fn file_from_json(id: &str, path: &str, json: &str) -> PluginFile { 
PluginFile { id: id.to_string(), path: path.to_string(), data: json.as_bytes().to_vec(), } } pub fn parse_snapshot_value_from_change(change: &PluginEntityChange) -> Value { let Some(snapshot_content) = change.snapshot_content.as_ref() else { panic!("change should have snapshot_content"); }; let parsed: SnapshotContent = serde_json::from_str(snapshot_content).expect("snapshot content should parse"); assert_eq!(parsed.path, change.entity_id); parsed.value } pub fn snapshot_content(path: &str, value: Value) -> String { serde_json::json!({ "path": path, "value": value, }) .to_string() } ================================================ FILE: packages/plugin-json-v2/tests/detect_changes.rs ================================================ mod common; use common::{file_from_json, parse_snapshot_value_from_change}; use plugin_json_v2::{detect_changes, SCHEMA_KEY}; use serde_json::Value; #[test] fn returns_empty_when_documents_are_equal() { let before = file_from_json("f1", "/x.json", r#"{"Name":"Anna","Age":20}"#); let after = file_from_json("f1", "/x.json", r#"{"Name":"Anna","Age":20}"#); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); assert!(changes.is_empty()); } #[test] fn detects_root_insert() { let before = file_from_json("f1", "/x.json", r#"{"Name":"Anna","Age":20}"#); let after = file_from_json( "f1", "/x.json", r#"{"Name":"Anna","Age":20,"City":"New York"}"#, ); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); assert_eq!(changes.len(), 1); assert_eq!(changes[0].entity_id, "/City"); assert_eq!(changes[0].schema_key, SCHEMA_KEY); assert_eq!( parse_snapshot_value_from_change(&changes[0]), Value::String("New York".to_string()) ); } #[test] fn detects_nested_array_updates_and_deletions() { let before = file_from_json("f1", "/x.json", r#"{"list":["a","b","c"]}"#); let after = file_from_json("f1", "/x.json", r#"{"list":["a","x"]}"#); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); assert_eq!(changes.len(), 2); assert_eq!(changes[0].entity_id, "/list/1"); assert_eq!( parse_snapshot_value_from_change(&changes[0]), Value::String("x".to_string()) ); assert_eq!(changes[1].entity_id, "/list/2"); assert_eq!(changes[1].snapshot_content, None); } #[test] fn detects_container_replacement() { let before = file_from_json("f1", "/x.json", r#"{"a":{"x":1}}"#); let after = file_from_json("f1", "/x.json", r#"{"a":2}"#); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); assert_eq!(changes.len(), 2); assert_eq!(changes[0].entity_id, "/a/x"); assert_eq!(changes[0].snapshot_content, None); assert_eq!(changes[1].entity_id, "/a"); assert_eq!( parse_snapshot_value_from_change(&changes[1]), Value::Number(2.into()) ); } #[test] fn handles_file_creation_without_synthetic_root_deletion() { let after = file_from_json("f1", "/x.json", r#"{"Name":"Anna"}"#); let changes = detect_changes(None, after).expect("detect_changes should succeed"); assert_eq!(changes.len(), 2); assert_eq!(changes[0].entity_id, ""); assert_eq!( parse_snapshot_value_from_change(&changes[0]), Value::Object(serde_json::Map::new()) ); assert_eq!(changes[1].entity_id, "/Name"); assert_eq!( parse_snapshot_value_from_change(&changes[1]), Value::String("Anna".to_string()) ); } #[test] fn detects_multi_delete_array_in_descending_order() { let before = file_from_json("f1", "/x.json", r#"{"list":["a","b","c","d"]}"#); let after = file_from_json("f1", "/x.json", r#"{"list":["a"]}"#); let changes = 
detect_changes(Some(before), after).expect("detect_changes should succeed"); assert_eq!(changes.len(), 3); assert_eq!(changes[0].entity_id, "/list/3"); assert_eq!(changes[0].snapshot_content, None); assert_eq!(changes[1].entity_id, "/list/2"); assert_eq!(changes[1].snapshot_content, None); assert_eq!(changes[2].entity_id, "/list/1"); assert_eq!(changes[2].snapshot_content, None); } #[test] fn deleting_non_empty_container_emits_subtree_tombstones() { let before = file_from_json("f1", "/x.json", r#"{"a":{"b":1}}"#); let after = file_from_json("f1", "/x.json", r#"{}"#); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); assert_eq!(changes.len(), 2); assert_eq!(changes[0].entity_id, "/a"); assert_eq!(changes[0].snapshot_content, None); assert_eq!(changes[1].entity_id, "/a/b"); assert_eq!(changes[1].snapshot_content, None); } #[test] fn replacing_non_empty_container_with_scalar_tombstones_subtree() { let before = file_from_json("f1", "/x.json", r#"{"a":{"b":1}}"#); let after = file_from_json("f1", "/x.json", r#"2"#); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); assert_eq!(changes.len(), 3); assert_eq!(changes[0].entity_id, "/a"); assert_eq!(changes[0].snapshot_content, None); assert_eq!(changes[1].entity_id, "/a/b"); assert_eq!(changes[1].snapshot_content, None); assert_eq!(changes[2].entity_id, ""); assert_eq!( parse_snapshot_value_from_change(&changes[2]), Value::Number(2.into()) ); } #[test] fn deleting_whole_object_property_emits_subtree_tombstones() { let before = file_from_json( "f1", "/x.json", r#"{"keep":1,"obj":{"k":1,"nested":{"z":2}}}"#, ); let after = file_from_json("f1", "/x.json", r#"{"keep":1}"#); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); let mut entity_ids = changes .iter() .map(|change| change.entity_id.as_str()) .collect::>(); entity_ids.sort_unstable(); assert_eq!( entity_ids, vec!["/obj", "/obj/k", "/obj/nested", "/obj/nested/z"] ); assert!(changes .iter() .all(|change| change.snapshot_content.is_none())); } #[test] fn deleting_whole_array_property_emits_subtree_tombstones() { let before = file_from_json("f1", "/x.json", r#"{"keep":1,"arr":[{"x":1},2,3]}"#); let after = file_from_json("f1", "/x.json", r#"{"keep":1}"#); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); let mut entity_ids = changes .iter() .map(|change| change.entity_id.as_str()) .collect::>(); entity_ids.sort_unstable(); assert_eq!( entity_ids, vec!["/arr", "/arr/0", "/arr/0/x", "/arr/1", "/arr/2"] ); assert!(changes .iter() .all(|change| change.snapshot_content.is_none())); } #[test] fn deleting_nested_subtree_emits_all_descendant_tombstones() { let before = file_from_json("f1", "/x.json", r#"{"a":{"b":{"c":1,"d":2},"e":3},"x":0}"#); let after = file_from_json("f1", "/x.json", r#"{"a":{"e":3},"x":0}"#); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); let mut entity_ids = changes .iter() .map(|change| change.entity_id.as_str()) .collect::>(); entity_ids.sort_unstable(); assert_eq!(entity_ids, vec!["/a/b", "/a/b/c", "/a/b/d"]); assert!(changes .iter() .all(|change| change.snapshot_content.is_none())); } ================================================ FILE: packages/plugin-json-v2/tests/roundtrip.rs ================================================ mod common; use std::collections::BTreeMap; use common::file_from_json; use plugin_json_v2::{apply_changes, detect_changes, PluginEntityChange, 
SCHEMA_KEY}; use serde_json::Value; fn merge_latest_state_rows(changesets: Vec>) -> Vec { let mut latest = BTreeMap::new(); for changes in changesets { for change in changes { if change.schema_key != SCHEMA_KEY { continue; } latest.insert( (change.schema_key.clone(), change.entity_id.clone()), change, ); } } latest.into_values().collect() } fn projected_changes_for_transition( before_json: &str, after_json: &str, ) -> Vec { let baseline = detect_changes(None, file_from_json("f1", "/x.json", before_json)) .expect("baseline detect_changes should succeed"); let delta = detect_changes( Some(file_from_json("f1", "/x.json", before_json)), file_from_json("f1", "/x.json", after_json), ) .expect("delta detect_changes should succeed"); merge_latest_state_rows(vec![baseline, delta]) } fn apply_projection(changes: Vec) -> Value { let seed = file_from_json("f1", "/x.json", r#"{"stale":"cache"}"#); let reconstructed = apply_changes(seed, changes).expect("apply_changes should succeed"); serde_json::from_slice(&reconstructed).expect("reconstructed bytes should parse") } fn assert_projection_roundtrip(before_json: &str, after_json: &str) { let reconstructed_json = apply_projection(projected_changes_for_transition(before_json, after_json)); let expected_json: Value = serde_json::from_str(after_json).expect("expected JSON should parse"); assert_eq!(reconstructed_json, expected_json); } #[test] fn roundtrip_reconstructs_after_document() { assert_projection_roundtrip( r#"{"Name":"Samuel","address":{"city":"Berlin","zip":"10115"},"tags":["a","b","c"]}"#, r#"{"Name":"Sam","address":{"city":"Berlin"},"tags":["a","x"],"active":true}"#, ); } #[test] fn roundtrip_file_creation_from_empty_seed() { assert_projection_roundtrip( r#"{}"#, r#"{"profile":{"name":"Anna"},"roles":["admin","editor"]}"#, ); } #[test] fn roundtrip_handles_numeric_object_keys() { assert_projection_roundtrip(r#"{}"#, r#"{"foo":{"0":"x","1":"y"}}"#); } #[test] fn roundtrip_handles_multi_delete_arrays() { assert_projection_roundtrip(r#"{"list":["a","b","c","d"]}"#, r#"{"list":["a"]}"#); } #[test] fn roundtrip_preserves_pointer_escaped_keys() { assert_projection_roundtrip( r#"{"a/b":"old","tilde~key":"x"}"#, r#"{"a/b":"new","tilde~key":"y"}"#, ); } #[test] fn roundtrip_replacing_empty_object_in_array_index_keeps_neighbors() { assert_projection_roundtrip(r#"{"arr":[{}, "x"]}"#, r#"{"arr":[1, "x"]}"#); } #[test] fn roundtrip_replacing_empty_array_with_empty_object_in_array_index_keeps_neighbors() { assert_projection_roundtrip(r#"{"arr":[[], "x"]}"#, r#"{"arr":[{}, "x"]}"#); } #[test] fn roundtrip_deleting_non_empty_container_removes_descendants() { assert_projection_roundtrip(r#"{"a":{"b":1}}"#, r#"{}"#); } #[test] fn roundtrip_replacing_non_empty_container_with_scalar_removes_descendants() { assert_projection_roundtrip(r#"{"a":{"b":1}}"#, r#"2"#); } #[test] fn roundtrip_deleting_whole_object_property_removes_subtree_rows() { assert_projection_roundtrip( r#"{"keep":1,"obj":{"k":1,"nested":{"z":2}}}"#, r#"{"keep":1}"#, ); } #[test] fn roundtrip_deleting_whole_array_property_removes_subtree_rows() { assert_projection_roundtrip(r#"{"keep":1,"arr":[{"x":1},2,3]}"#, r#"{"keep":1}"#); } #[test] fn roundtrip_deleting_nested_subtree_removes_descendants() { assert_projection_roundtrip( r#"{"a":{"b":{"c":1,"d":2},"e":3},"x":0}"#, r#"{"a":{"e":3},"x":0}"#, ); } #[test] fn roundtrip_replacing_root_array_with_scalar_removes_descendants() { assert_projection_roundtrip(r#"[{"a":1},{"b":2},3]"#, r#"7"#); } #[test] fn roundtrip_with_proto_like_keys_is_supported() { 
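// serde_json treats "__proto__", "prototype", and "constructor" as ordinary object keys,
// so they must survive the projection roundtrip like any other property.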
assert_projection_roundtrip( r#"{"__proto__":{"ok":true},"prototype":[1],"constructor":{"x":1}}"#, r#"{"__proto__":{"ok":false},"prototype":[1,2],"constructor":{"x":2}}"#, ); } #[test] fn roundtrip_handles_object_key_dash() { assert_projection_roundtrip(r#"{}"#, r#"{"obj":{"-":{"x":1}}}"#); } #[test] fn roundtrip_handles_pointer_escape_edge_keys() { assert_projection_roundtrip(r#"{}"#, r#"{"":{"/":1,"~":2,"~1":3,"~0":4}}"#); } #[test] fn roundtrip_replacing_root_object_with_array_allows_non_numeric_old_keys() { assert_projection_roundtrip(r#"{"~1":"x"}"#, r#"[]"#); } #[test] fn roundtrip_replacing_nested_object_with_array_allows_non_numeric_old_keys() { assert_projection_roundtrip(r#"{"x":{"~1":"v"}}"#, r#"{"x":[]}"#); } #[test] fn roundtrip_replacing_object_with_array_allows_dash_and_leading_zero_keys() { assert_projection_roundtrip(r#"{"-":"dash","01":"lead","foo":"bar"}"#, r#"[]"#); } #[derive(Clone)] struct Lcg { state: u64, } impl Lcg { fn new(seed: u64) -> Self { Self { state: seed } } fn next_u32(&mut self) -> u32 { self.state = self.state.wrapping_mul(6364136223846793005).wrapping_add(1); (self.state >> 32) as u32 } fn next_usize(&mut self, max_exclusive: usize) -> usize { if max_exclusive == 0 { return 0; } (self.next_u32() as usize) % max_exclusive } fn next_bool(&mut self) -> bool { (self.next_u32() & 1) == 0 } } fn random_scalar(rng: &mut Lcg) -> Value { match rng.next_usize(5) { 0 => Value::Null, 1 => Value::Bool(rng.next_bool()), 2 => Value::Number(((rng.next_u32() % 100) as i64).into()), 3 => Value::String(format!("s{}", rng.next_u32() % 10)), _ => Value::String(String::new()), } } fn random_json(rng: &mut Lcg, depth: usize) -> Value { if depth == 0 { return random_scalar(rng); } match rng.next_usize(5) { 0 => random_scalar(rng), 1 => { let len = rng.next_usize(3); let mut values = Vec::new(); for _ in 0..len { values.push(random_json(rng, depth - 1)); } Value::Array(values) } _ => { let candidate_keys = ["", "a", "b", "x", "~", "~0", "~1", "/", "a/b"]; let count = rng.next_usize(4); let mut object = serde_json::Map::new(); for _ in 0..count { let key = candidate_keys[rng.next_usize(candidate_keys.len())].to_string(); object .entry(key) .or_insert_with(|| random_json(rng, depth - 1)); } Value::Object(object) } } } #[test] fn roundtrip_randomized_transition_invariant() { let mut rng = Lcg::new(0xA11CE5EEDu64); for _ in 0..300 { let before = random_json(&mut rng, 3); let after = random_json(&mut rng, 3); let before_json = serde_json::to_string(&before).expect("before should serialize"); let after_json = serde_json::to_string(&after).expect("after should serialize"); assert_projection_roundtrip(&before_json, &after_json); } } #[test] fn roundtrip_is_invariant_to_change_order_permutations() { let before_json = r#"{"list":["a","b","c","d"],"flags":{"active":false},"old":"x"}"#; let after_json = r#"{"list":["a"],"flags":{"active":true},"team":[{"name":"Ada"}]}"#; let projected = projected_changes_for_transition(before_json, after_json); let expected: Value = serde_json::from_str(after_json).expect("expected JSON should parse"); let mut permutations = Vec::new(); permutations.push(projected.clone()); let mut reversed = projected.clone(); reversed.reverse(); permutations.push(reversed); let mut rotated = projected.clone(); if !rotated.is_empty() { rotated.rotate_left(1); } permutations.push(rotated); let mut lexicographic = projected.clone(); lexicographic.sort_by(|a, b| a.entity_id.cmp(&b.entity_id)); permutations.push(lexicographic); let mut reverse_lexicographic = 
projected.clone(); reverse_lexicographic.sort_by(|a, b| b.entity_id.cmp(&a.entity_id)); permutations.push(reverse_lexicographic); for changes in permutations { let reconstructed = apply_projection(changes); assert_eq!(reconstructed, expected); } } #[test] fn roundtrip_reconstructs_with_lexicographic_entity_id_order() { let before_json = r#"{"list":["a","b","c","d"]}"#; let after_json = r#"{"list":["a"]}"#; let mut projected = projected_changes_for_transition(before_json, after_json); projected.sort_by(|a, b| a.entity_id.cmp(&b.entity_id)); let reconstructed = apply_projection(projected); let expected: Value = serde_json::from_str(after_json).expect("expected JSON should parse"); assert_eq!(reconstructed, expected); } ================================================ FILE: packages/plugin-json-v2/tests/schema.rs ================================================ use plugin_json_v2::{schema_definition, schema_json, SCHEMA_KEY}; #[test] fn schema_json_is_valid_and_matches_constants() { let schema = schema_definition(); let key = schema .get("x-lix-key") .and_then(serde_json::Value::as_str) .expect("schema must define string x-lix-key"); assert_eq!(key, SCHEMA_KEY); let primary_key = schema .get("x-lix-primary-key") .and_then(serde_json::Value::as_array) .expect("schema must define x-lix-primary-key array"); assert_eq!(primary_key.len(), 1); assert_eq!(primary_key[0].as_str(), Some("/path")); } #[test] fn schema_json_accessor_returns_expected_text() { let raw = schema_json(); assert!(raw.contains("\"x-lix-key\": \"json_pointer\"")); } ================================================ FILE: packages/plugin-md-v2/.gitignore ================================================ /target ================================================ FILE: packages/plugin-md-v2/Cargo.toml ================================================ [package] name = "plugin_md_v2" version = "0.1.0" edition = "2021" publish = false [lib] crate-type = ["cdylib", "rlib"] [dependencies] markdown = { version = "1", features = ["serde"] } # Temporary workspace unblock: local dependency is missing in this checkout. # markdown_wc = { path = "../../../markdown-wc" } serde = { version = "1", features = ["derive"] } serde_json = "1" strsim = "0.11" unicode-normalization = "0.1" wit-bindgen = "0.40" [dev-dependencies] criterion = "0.5" [[bench]] name = "detect_changes" harness = false ================================================ FILE: packages/plugin-md-v2/README.md ================================================ # plugin-md-v2 Rust/WASM component Markdown plugin for the Lix engine. ## Current scope - `detect-changes` parses markdown with `markdown-rs` using GFM + MDX + math + frontmatter options. - Emits block-level rows (`markdown_v2_block`) plus a document order row (`markdown_v2_document`). - `apply-changes` materializes markdown from the latest block snapshots and document order. This establishes a deterministic block-level projection baseline with unit tests and benchmarks. ## Identity Model (v2) `plugin-md-v2` detect expects active state context for top-level block IDs: - With `detect_changes.state_context.include_active_state: true`: existing IDs are reused from active state rows whenever blocks can be matched (exact + fuzzy matching). - Fingerprint normalization includes: - line ending normalization (`CRLF`/`CR` -> `LF`) - Unicode NFC normalization for all string fields Practical behavior (with active state context): - Pure reorder of unchanged blocks keeps IDs stable and only updates the document `order`. 
- With active state context, content edits can keep existing IDs and emit only an upsert. - Cross-type edits (e.g. paragraph -> heading) also produce tombstone + upsert + document update. ## Expected Change Shapes Common detect scenarios: - No-op: `[]` - New file with `N` top-level blocks: `N` block upserts + `1` document row - Pure reorder: `1` document row only - Insert one block: `1` block upsert + `1` document row - Delete one block: `1` block tombstone + `1` document row - Edit one block: `1` block tombstone + `1` block upsert + `1` document row This is intentionally different from v1 nested-node identity. v2 tracks identity at top-level block granularity. ================================================ FILE: packages/plugin-md-v2/benches/common/mod.rs ================================================ use plugin_md_v2::PluginFile; pub fn file_from_markdown(id: &str, path: &str, markdown: &str) -> PluginFile { PluginFile { id: id.to_string(), path: path.to_string(), data: markdown.as_bytes().to_vec(), } } pub fn dataset_small() -> (String, String) { let before = "# Title\n\nA short paragraph.\n".to_string(); let after = "# Title\n\nA short paragraph with update.\n".to_string(); (before, after) } pub fn dataset_medium() -> (String, String) { let mut before = String::new(); let mut after = String::new(); before.push_str("---\ntitle: Medium\n---\n\n"); after.push_str("---\ntitle: Medium\n---\n\n"); for idx in 0..120 { before.push_str(&format!("## Section {idx}\n\nParagraph {idx}.\n\n")); after.push_str(&format!( "## Section {idx}\n\nParagraph {idx} changed with value {}.\n\n", idx * 3 )); } (before, after) } pub fn dataset_large() -> (String, String) { let mut before = String::new(); let mut after = String::new(); before.push_str("---\ntitle: Large\n---\n\n"); after.push_str("---\ntitle: Large\n---\n\n"); for idx in 0..450 { before.push_str(&format!( "### Item {idx}\n\n- [x] done\n- [ ] pending\n\nInline math $a_{} + b_{}$\n\n\n\n", idx, idx, idx )); after.push_str(&format!( "### Item {idx}\n\n- [x] done\n- [x] pending\n\nInline math $a_{} + b_{} + c_{}$\n\n\n\n", idx, idx, idx, idx )); } (before, after) } ================================================ FILE: packages/plugin-md-v2/benches/detect_changes.rs ================================================ mod common; use criterion::{criterion_group, criterion_main, BatchSize, Criterion}; use plugin_md_v2::{ detect_changes, detect_changes_with_state_context, PluginActiveStateRow, PluginDetectStateContext, PluginEntityChange, }; fn to_state_context(rows: &[PluginEntityChange]) -> PluginDetectStateContext { PluginDetectStateContext { active_state: Some( rows.iter() .map(|row| PluginActiveStateRow { entity_id: row.entity_id.clone(), schema_key: Some(row.schema_key.clone()), snapshot_content: row.snapshot_content.clone(), file_id: None, plugin_key: None, version_id: None, change_id: None, metadata: None, created_at: None, updated_at: None, }) .collect::>(), ), } } fn bench_detect_changes(c: &mut Criterion) { let mut group = c.benchmark_group("detect_changes"); group.sample_size(20); for (name, (before, after)) in [ ("small", common::dataset_small()), ("medium", common::dataset_medium()), ("large", common::dataset_large()), ] { group.bench_function(name, |b| { b.iter_batched( || { ( common::file_from_markdown("f1", "/doc.mdx", &before), common::file_from_markdown("f1", "/doc.mdx", &after), ) }, |(before_file, after_file)| { detect_changes(Some(before_file), after_file) .expect("detect_changes benchmark should succeed") }, BatchSize::SmallInput, 
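// The setup closure above rebuilds the PluginFile pair outside the timed closure,
// so only detect_changes itself is measured.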
); }); } group.finish(); } fn bench_detect_changes_with_state_context(c: &mut Criterion) { let mut group = c.benchmark_group("detect_changes_with_state_context"); group.sample_size(20); for (name, (before, after)) in [ ("medium", common::dataset_medium()), ("large", common::dataset_large()), ] { let before_file = common::file_from_markdown("f1", "/doc.mdx", &before); let after_file = common::file_from_markdown("f1", "/doc.mdx", &after); let bootstrap = detect_changes(None, before_file.clone()) .expect("bootstrap detect_changes benchmark should succeed"); let state_context = to_state_context(&bootstrap); group.bench_function(name, |b| { b.iter_batched( || { ( before_file.clone(), after_file.clone(), state_context.clone(), ) }, |(before_file, after_file, state_context)| { detect_changes_with_state_context( Some(before_file), after_file, Some(state_context), ) .expect("detect_changes_with_state_context benchmark should succeed") }, BatchSize::SmallInput, ); }); } group.finish(); } criterion_group!( benches, bench_detect_changes, bench_detect_changes_with_state_context ); criterion_main!(benches); ================================================ FILE: packages/plugin-md-v2/manifest.json ================================================ { "key": "plugin_md_v2", "runtime": "wasm-component-v1", "api_version": "0.1.0", "match": { "path_glob": "*.{md,mdx}", "content_type": "text" }, "detect_changes": { "state_context": { "include_active_state": true, "columns": [ "entity_id", "schema_key", "snapshot_content" ] } }, "entry": "plugin.wasm", "schemas": [ "schema/markdown_document.json", "schema/markdown_block.json" ] } ================================================ FILE: packages/plugin-md-v2/schema/markdown_block.json ================================================ { "x-lix-key": "markdown_v2_block", "x-lix-primary-key": [ "/id" ], "type": "object", "properties": { "id": { "type": "string", "minLength": 1 }, "type": { "type": "string", "minLength": 1 }, "node": { "type": "object" }, "markdown": { "type": "string" } }, "required": [ "id", "type", "node", "markdown" ], "additionalProperties": false } ================================================ FILE: packages/plugin-md-v2/schema/markdown_document.json ================================================ { "x-lix-key": "markdown_v2_document", "x-lix-primary-key": [ "/id" ], "type": "object", "properties": { "id": { "type": "string", "const": "root" }, "order": { "type": "array", "items": { "type": "string", "minLength": 1 } } }, "required": [ "id", "order" ], "additionalProperties": false } ================================================ FILE: packages/plugin-md-v2/src/apply_changes.rs ================================================ use crate::common::{BlockSnapshotContent, DocumentSnapshotContent}; use crate::exports::lix::plugin::api::{EntityChange, File, PluginError}; use crate::schemas::{BLOCK_SCHEMA_KEY, DOCUMENT_SCHEMA_KEY}; use crate::ROOT_ENTITY_ID; use std::collections::{BTreeMap, BTreeSet}; pub(crate) fn apply_changes( file: File, changes: Vec, ) -> Result, PluginError> { let mut document: Option = None; let mut blocks_by_id: BTreeMap = BTreeMap::new(); let mut seen_block_ids = BTreeSet::new(); for change in changes { if change.schema_key != DOCUMENT_SCHEMA_KEY && change.schema_key != BLOCK_SCHEMA_KEY { continue; } if change.schema_key == DOCUMENT_SCHEMA_KEY { if change.entity_id != ROOT_ENTITY_ID { return Err(PluginError::InvalidInput(format!( "unsupported entity_id '{}' for schema_key '{}', expected '{}'", change.entity_id, 
DOCUMENT_SCHEMA_KEY, ROOT_ENTITY_ID ))); } if document.is_some() { return Err(PluginError::InvalidInput(format!( "duplicate entity_id '{}' for schema_key '{}'", ROOT_ENTITY_ID, DOCUMENT_SCHEMA_KEY ))); } let snapshot = match change.snapshot_content { Some(raw) => { let parsed: DocumentSnapshotContent = serde_json::from_str(&raw).map_err(|error| { PluginError::InvalidInput(format!( "invalid snapshot_content for entity_id '{}': {error}", ROOT_ENTITY_ID )) })?; if parsed.id != ROOT_ENTITY_ID { return Err(PluginError::InvalidInput(format!( "document snapshot id '{}' does not match expected '{}'", parsed.id, ROOT_ENTITY_ID ))); } parsed } None => DocumentSnapshotContent { id: ROOT_ENTITY_ID.to_string(), order: Vec::new(), }, }; document = Some(snapshot); continue; } // BLOCK_SCHEMA_KEY if !seen_block_ids.insert(change.entity_id.clone()) { return Err(PluginError::InvalidInput(format!( "duplicate entity_id '{}' for schema_key '{}'", change.entity_id, BLOCK_SCHEMA_KEY ))); } let Some(snapshot_content) = change.snapshot_content else { continue; }; let snapshot: BlockSnapshotContent = serde_json::from_str(&snapshot_content).map_err(|error| { PluginError::InvalidInput(format!( "invalid snapshot_content for entity_id '{}': {error}", change.entity_id )) })?; if snapshot.id != change.entity_id { return Err(PluginError::InvalidInput(format!( "block snapshot id '{}' does not match entity_id '{}'", snapshot.id, change.entity_id ))); } blocks_by_id.insert(change.entity_id, snapshot); } if document.is_none() && blocks_by_id.is_empty() { return Ok(file.data); } let mut ordered_ids = document .as_ref() .map(|doc| doc.order.clone()) .unwrap_or_else(|| blocks_by_id.keys().cloned().collect()); // Include orphaned blocks not referenced by document order to avoid data loss. 
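// Orphans are appended after the ids listed in the document order; BTreeMap iteration
// keeps the appended order deterministic (lexicographic by block id).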
for id in blocks_by_id.keys() { if !ordered_ids.contains(id) { ordered_ids.push(id.clone()); } } let mut parts = Vec::new(); for id in ordered_ids { let Some(block) = blocks_by_id.get(&id) else { continue; }; let normalized = block.markdown.trim_matches('\n').to_string(); if normalized.is_empty() { continue; } parts.push(normalized); } let mut markdown = parts.join("\n\n"); if !markdown.is_empty() { markdown.push('\n'); } Ok(markdown.into_bytes()) } ================================================ FILE: packages/plugin-md-v2/src/common.rs ================================================ #[derive(Debug, serde::Serialize, serde::Deserialize, PartialEq, Eq)] #[serde(deny_unknown_fields)] pub(crate) struct DocumentSnapshotContent { pub(crate) id: String, pub(crate) order: Vec, } #[derive(Debug, serde::Serialize, serde::Deserialize, PartialEq)] #[serde(deny_unknown_fields)] pub(crate) struct BlockSnapshotContent { pub(crate) id: String, #[serde(rename = "type")] pub(crate) node_type: String, pub(crate) node: serde_json::Value, pub(crate) markdown: String, } ================================================ FILE: packages/plugin-md-v2/src/detect_changes.rs ================================================ use crate::common::{BlockSnapshotContent, DocumentSnapshotContent}; use crate::exports::lix::plugin::api::{DetectStateContext, EntityChange, File, PluginError}; use crate::schemas::{BLOCK_SCHEMA_KEY, DOCUMENT_SCHEMA_KEY}; use crate::ROOT_ENTITY_ID; use markdown::mdast::{Node, Root}; use markdown::{to_mdast, ParseOptions}; use serde_json::Value; use std::collections::{BTreeMap, HashMap, HashSet}; use strsim::normalized_levenshtein; use unicode_normalization::{is_nfc, UnicodeNormalization}; #[derive(Debug, Clone)] struct ParsedBlock { id: String, schema_key: String, node_type: String, node_json: Value, markdown: String, fingerprint: String, } #[derive(Debug, Clone)] struct ParsedBlockCandidate { node_type: String, node_json: Value, markdown: String, fingerprint: String, } #[derive(Debug, Clone)] struct BeforeProjection { order: Vec, blocks_by_id: BTreeMap, } pub(crate) fn detect_changes( _before: Option, after: File, state_context: Option, ) -> Result, PluginError> { if !is_markdown_path(&after.path) { return Ok(Vec::new()); } let after_markdown = decode_markdown_bytes(&after.data)?; let after_candidates = parse_top_level_block_candidates(&after_markdown)?; let before_projection = parse_state_context_projection(state_context.as_ref())?; let BeforeProjection { order: before_order, blocks_by_id: before_by_id, } = before_projection; let after_blocks = assign_ids_with_existing_state(after_candidates, &before_order, &before_by_id); let after_order = after_blocks .iter() .map(|block| block.id.clone()) .collect::>(); let after_by_id = to_block_map(after_blocks)?; let mut changes = Vec::new(); for id in before_by_id.keys() { if !after_by_id.contains_key(id) { let before_block = before_by_id .get(id) .expect("key came from before_by_id.keys() iterator"); changes.push(EntityChange { entity_id: id.clone(), schema_key: before_block.schema_key.clone(), snapshot_content: None, }); } } for (id, after_block) in &after_by_id { match before_by_id.get(id) { Some(before_block) if blocks_equal_for_change_detection(before_block, after_block)? 
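// Equal blocks (matching fingerprint, or paragraph/code with identical normalized AST)
// produce no change row.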
=> { } _ => changes.push(block_upsert_change(after_block)?), } } if before_order != after_order { let snapshot_content = serde_json::to_string(&DocumentSnapshotContent { id: ROOT_ENTITY_ID.to_string(), order: after_order, }) .map_err(|error| { PluginError::Internal(format!( "failed to serialize markdown document snapshot: {error}" )) })?; changes.push(EntityChange { entity_id: ROOT_ENTITY_ID.to_string(), schema_key: DOCUMENT_SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot_content), }); } Ok(changes) } fn parse_state_context_projection( state_context: Option<&DetectStateContext>, ) -> Result { let Some(state_context) = state_context else { return Ok(BeforeProjection { order: Vec::new(), blocks_by_id: BTreeMap::new(), }); }; let Some(rows) = state_context.active_state.as_ref() else { return Ok(BeforeProjection { order: Vec::new(), blocks_by_id: BTreeMap::new(), }); }; let mut document_order = None::>; let mut blocks_by_id = BTreeMap::::new(); for row in rows { let Some(schema_key) = row.schema_key.as_deref() else { continue; }; let Some(snapshot_content) = row.snapshot_content.as_deref() else { continue; }; if schema_key == DOCUMENT_SCHEMA_KEY { let snapshot: DocumentSnapshotContent = serde_json::from_str(snapshot_content) .map_err(|error| { PluginError::Internal(format!( "invalid markdown document row in detect state context: {error}" )) })?; document_order = Some(snapshot.order); continue; } if schema_key != BLOCK_SCHEMA_KEY { continue; } let snapshot: BlockSnapshotContent = serde_json::from_str(snapshot_content).map_err(|error| { PluginError::Internal(format!( "invalid markdown block row in detect state context: {error}" )) })?; let fingerprint = normalize_text_for_fingerprint(&snapshot.markdown); let block = ParsedBlock { id: row.entity_id.clone(), schema_key: BLOCK_SCHEMA_KEY.to_string(), node_type: snapshot.node_type, node_json: snapshot.node, markdown: snapshot.markdown, fingerprint, }; blocks_by_id.insert(block.id.clone(), block); } let mut order = document_order.unwrap_or_default(); order.retain(|id| blocks_by_id.contains_key(id)); if order.len() != blocks_by_id.len() { let order_set = order.iter().cloned().collect::>(); let remaining = blocks_by_id .keys() .filter(|id| !order_set.contains(*id)) .cloned() .collect::>(); order.extend(remaining); } Ok(BeforeProjection { order, blocks_by_id, }) } fn assign_ids_with_existing_state( candidates: Vec, before_order: &[String], before_by_id: &BTreeMap, ) -> Vec { if candidates.is_empty() { return Vec::new(); } let mut ordered_before_ids = before_order .iter() .filter(|id| before_by_id.contains_key(*id)) .cloned() .collect::>(); let mut ordered_before_id_set = ordered_before_ids.iter().cloned().collect::>(); for id in before_by_id.keys() { if !ordered_before_id_set.contains(id) { ordered_before_ids.push(id.clone()); ordered_before_id_set.insert(id.clone()); } } let mut assigned_ids = vec![None::; candidates.len()]; let mut matched_before_ids = HashSet::::new(); let mut before_exact = BTreeMap::<(String, String), Vec>::new(); for id in &ordered_before_ids { let before = before_by_id .get(id) .expect("ordered_before_ids are sourced from before_by_id"); before_exact .entry((before.node_type.clone(), before.fingerprint.clone())) .or_default() .push(id.clone()); } let mut after_exact = BTreeMap::<(String, String), Vec>::new(); for (idx, after) in candidates.iter().enumerate() { after_exact .entry((after.node_type.clone(), after.fingerprint.clone())) .or_default() .push(idx); } for (key, after_indexes) in after_exact { let Some(before_ids) 
= before_exact.get(&key) else { continue; }; let pair_count = before_ids.len().min(after_indexes.len()); let before_positions = if before_ids.len() > after_indexes.len() { sampled_positions(before_ids.len(), pair_count) } else { (0..pair_count).collect::>() }; let after_positions = if after_indexes.len() > before_ids.len() { sampled_positions(after_indexes.len(), pair_count) } else { (0..pair_count).collect::>() }; for offset in 0..pair_count { let before_id = before_ids[before_positions[offset]].clone(); let after_idx = after_indexes[after_positions[offset]]; if assigned_ids[after_idx].is_none() { assigned_ids[after_idx] = Some(before_id.clone()); matched_before_ids.insert(before_id); } } } // Fast-path: if lengths are equal, reuse same-index IDs for unmatched candidates // when node types align. This avoids O(n^2) fuzzy scoring for in-place edits. if candidates.len() == ordered_before_ids.len() { for (after_idx, after) in candidates.iter().enumerate() { if assigned_ids[after_idx].is_some() { continue; } let Some(before_id) = ordered_before_ids.get(after_idx) else { continue; }; if matched_before_ids.contains(before_id) { continue; } let Some(before_block) = before_by_id.get(before_id) else { continue; }; if before_block.node_type == after.node_type { assigned_ids[after_idx] = Some(before_id.clone()); matched_before_ids.insert(before_id.clone()); } } } let before_positions = ordered_before_ids .iter() .enumerate() .map(|(idx, id)| (id.clone(), idx)) .collect::>(); let before_normalized_text = ordered_before_ids .iter() .filter_map(|id| { before_by_id .get(id) .map(|before| (id.clone(), normalize_text_for_fingerprint(&before.markdown))) }) .collect::>(); let after_normalized_text = candidates .iter() .map(|after| normalize_text_for_fingerprint(&after.markdown)) .collect::>(); let mut before_ids_by_type = HashMap::>::new(); for id in &ordered_before_ids { let before = before_by_id .get(id) .expect("ordered_before_ids are sourced from before_by_id"); before_ids_by_type .entry(before.node_type.clone()) .or_default() .push(id.clone()); } for (after_idx, after) in candidates.iter().enumerate() { if assigned_ids[after_idx].is_some() { continue; } let mut pool = before_ids_by_type .get(&after.node_type) .into_iter() .flat_map(|ids| ids.iter()) .filter_map(|id| { if matched_before_ids.contains(id) { return None; } let before = before_by_id.get(id)?; let before_idx = *before_positions.get(id).unwrap_or(&0); Some((id.clone(), before, before_idx)) }) .collect::>(); if pool.is_empty() { continue; } let chosen = if pool.len() == 1 { Some(pool.swap_remove(0).0) } else { let after_text = &after_normalized_text[after_idx]; let total = candidates.len().max(ordered_before_ids.len()).max(1) as f64; let mut scored = pool .iter() .map(|(id, before, before_idx)| { let before_text = before_normalized_text .get(id) .map(String::as_str) .unwrap_or(&before.markdown); let similarity = normalized_levenshtein(&before_text, &after_text); let position = 1.0 - ((after_idx as f64 - *before_idx as f64).abs() / total); let score = similarity * 0.75 + position * 0.25; (id.clone(), similarity, score) }) .collect::>(); scored.sort_by(|a, b| b.2.total_cmp(&a.2).then_with(|| b.1.total_cmp(&a.1))); let top = scored[0].clone(); let second = scored.get(1).cloned(); let accept = match second { None => true, Some((_, second_similarity, second_score)) => { top.1 >= 0.55 || top.2 >= 0.60 || (top.1 >= 0.35 && (top.1 - second_similarity) >= 0.15 && (top.2 - second_score) >= 0.08) } }; if accept { Some(top.0) } else { None } }; if let 
Some(id) = chosen { matched_before_ids.insert(id.clone()); assigned_ids[after_idx] = Some(id); } } assign_missing_ids(candidates, assigned_ids) } fn sampled_positions(total: usize, picks: usize) -> Vec { if picks == 0 || total == 0 { return Vec::new(); } if picks == 1 { return vec![0]; } let mut positions = Vec::with_capacity(picks); for index in 0..picks { let ratio = index as f64 / (picks - 1) as f64; let target = (ratio * (total - 1) as f64).round() as usize; let min_allowed = positions.last().copied().unwrap_or(0); let max_allowed = total - (picks - index); positions.push(target.clamp(min_allowed, max_allowed)); } positions } fn assign_missing_ids( candidates: Vec, assigned_ids: Vec>, ) -> Vec { let mut occurrence_counter: HashMap<(String, String), u32> = HashMap::new(); let mut used_ids = assigned_ids .iter() .filter_map(|id| id.clone()) .collect::>(); candidates .into_iter() .enumerate() .map(|(idx, candidate)| { let occurrence_key = (candidate.node_type.clone(), candidate.fingerprint.clone()); let occurrence = occurrence_counter .entry(occurrence_key) .and_modify(|count| *count += 1) .or_insert(1); let id = if let Some(existing) = assigned_ids[idx].clone() { existing } else { let base = block_id(&candidate.node_type, &candidate.fingerprint, *occurrence); if !used_ids.contains(&base) { base } else { let mut suffix = 2u32; let mut candidate_id = format!("{base}_{suffix}"); while used_ids.contains(&candidate_id) { suffix += 1; candidate_id = format!("{base}_{suffix}"); } candidate_id } }; used_ids.insert(id.clone()); ParsedBlock { id, schema_key: BLOCK_SCHEMA_KEY.to_string(), node_type: candidate.node_type, node_json: candidate.node_json, markdown: candidate.markdown, fingerprint: candidate.fingerprint, } }) .collect() } fn block_upsert_change(block: &ParsedBlock) -> Result { let snapshot_content = serde_json::to_string(&BlockSnapshotContent { id: block.id.clone(), node_type: block.node_type.clone(), node: block.node_json.clone(), markdown: block.markdown.clone(), }) .map_err(|error| { PluginError::Internal(format!( "failed to serialize markdown block snapshot: {error}" )) })?; Ok(EntityChange { entity_id: block.id.clone(), schema_key: block.schema_key.clone(), snapshot_content: Some(snapshot_content), }) } fn blocks_equal_for_change_detection( before: &ParsedBlock, after: &ParsedBlock, ) -> Result { if before.schema_key != after.schema_key || before.node_type != after.node_type { return Ok(false); } if before.fingerprint == after.fingerprint { return Ok(true); } if !needs_semantic_ast_compare(&before.node_type) { return Ok(false); } Ok(stable_json_string(&before.node_json)? == stable_json_string(&after.node_json)?) 
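// Paragraph and code blocks fall back to a position-stripped, text-normalized AST comparison,
// so formatting-only differences in the raw markdown do not trigger spurious upserts.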
} fn needs_semantic_ast_compare(node_type: &str) -> bool { matches!(node_type, "paragraph" | "code") } fn to_block_map(blocks: Vec) -> Result, PluginError> { let mut map = BTreeMap::new(); for block in blocks { if map.insert(block.id.clone(), block).is_some() { return Err(PluginError::Internal( "generated duplicate markdown block id".to_string(), )); } } Ok(map) } fn parse_top_level_block_candidates( markdown: &str, ) -> Result, PluginError> { let root = parse_markdown_to_root(markdown)?; let mut blocks = Vec::new(); for node in root.children { let node_type = node_type_name(&node).to_string(); let node_json = node_json_without_position(&node)?; let markdown_fragment = extract_block_markdown(markdown, &node)?; let fingerprint = normalize_text_for_fingerprint(&markdown_fragment); blocks.push(ParsedBlockCandidate { node_type, node_json, markdown: markdown_fragment, fingerprint, }); } Ok(blocks) } fn parse_markdown_to_root(markdown: &str) -> Result { let tree = to_mdast(markdown, &parse_options_all_extensions()).map_err(|error| { PluginError::InvalidInput(format!( "markdown parse failed with configured extensions: {}", error )) })?; match tree { Node::Root(root) => Ok(root), _ => Err(PluginError::Internal( "markdown parser returned non-root AST node".to_string(), )), } } fn node_json_without_position(node: &Node) -> Result { let mut value = serde_json::to_value(node).map_err(|error| { PluginError::Internal(format!("failed to serialize mdast node: {error}")) })?; strip_position_recursively(&mut value); Ok(value) } fn strip_position_recursively(value: &mut Value) { match value { Value::Object(object) => { object.remove("position"); for child in object.values_mut() { strip_position_recursively(child); } } Value::Array(items) => { for item in items { strip_position_recursively(item); } } _ => {} } } fn stable_json_string(value: &Value) -> Result { let mut normalized = value.clone(); normalize_json_for_fingerprint(&mut normalized); serde_json::to_string(&normalized).map_err(|error| { PluginError::Internal(format!("failed to serialize node fingerprint: {error}")) }) } fn normalize_json_for_fingerprint(value: &mut Value) { match value { Value::Object(object) => { for child in object.values_mut() { normalize_json_for_fingerprint(child); } } Value::Array(items) => { for item in items { normalize_json_for_fingerprint(item); } } Value::String(text) => { *text = normalize_text_for_fingerprint(text); } _ => {} } } fn normalize_text_for_fingerprint(input: &str) -> String { let has_carriage_return = input.as_bytes().contains(&b'\r'); if !has_carriage_return { if input.is_ascii() || is_nfc(input) { return input.to_string(); } return input.nfc().collect(); } let normalized_newlines = input.replace("\r\n", "\n").replace('\r', "\n"); if normalized_newlines.is_ascii() || is_nfc(&normalized_newlines) { return normalized_newlines; } normalized_newlines.nfc().collect() } fn extract_block_markdown(markdown: &str, node: &Node) -> Result { let Some(position) = node.position() else { return Err(PluginError::Internal( "top-level markdown node is missing position metadata".to_string(), )); }; let start = position.start.offset; let end = position.end.offset; if start > end || end > markdown.len() { return Err(PluginError::Internal( "markdown node position offsets are out of bounds".to_string(), )); } if !markdown.is_char_boundary(start) || !markdown.is_char_boundary(end) { return Err(PluginError::Internal( "markdown node position offsets are not valid UTF-8 boundaries".to_string(), )); } 
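// Offsets were validated above (ordered, in bounds, on char boundaries), so the slice below cannot panic.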
Ok(markdown[start..end].to_string()) } fn block_id(node_type: &str, fingerprint: &str, occurrence: u32) -> String { let node_type_sanitized = node_type .chars() .map(|ch| if ch.is_ascii_alphanumeric() { ch } else { '_' }) .collect::() .to_ascii_lowercase(); let hash = fnv1a64(fingerprint.as_bytes()); format!("b_{node_type_sanitized}_{hash:016x}_{occurrence}") } fn fnv1a64(input: &[u8]) -> u64 { let mut hash = 0xcbf29ce484222325u64; for byte in input { hash ^= *byte as u64; hash = hash.wrapping_mul(0x100000001b3); } hash } fn decode_markdown_bytes(bytes: &[u8]) -> Result { std::str::from_utf8(bytes) .map(|markdown| markdown.to_owned()) .map_err(|error| { PluginError::InvalidInput(format!( "file.data must be valid UTF-8 markdown bytes: {error}" )) }) } fn is_markdown_path(path: &str) -> bool { let path = path.to_ascii_lowercase(); path.ends_with(".md") || path.ends_with(".mdx") } fn parse_options_all_extensions() -> ParseOptions { let mut options = ParseOptions::gfm(); let constructs = &mut options.constructs; constructs.frontmatter = true; constructs.gfm_autolink_literal = true; constructs.gfm_footnote_definition = true; constructs.gfm_label_start_footnote = true; constructs.gfm_strikethrough = true; constructs.gfm_table = true; constructs.gfm_task_list_item = true; constructs.math_flow = true; constructs.math_text = true; options } fn node_type_name(node: &Node) -> &'static str { match node { Node::Root(_) => "root", Node::Blockquote(_) => "blockquote", Node::FootnoteDefinition(_) => "footnoteDefinition", Node::MdxJsxFlowElement(_) => "mdxJsxFlowElement", Node::List(_) => "list", Node::MdxjsEsm(_) => "mdxjsEsm", Node::Toml(_) => "toml", Node::Yaml(_) => "yaml", Node::Break(_) => "break", Node::InlineCode(_) => "inlineCode", Node::InlineMath(_) => "inlineMath", Node::Delete(_) => "delete", Node::Emphasis(_) => "emphasis", Node::MdxTextExpression(_) => "mdxTextExpression", Node::FootnoteReference(_) => "footnoteReference", Node::Html(_) => "html", Node::Image(_) => "image", Node::ImageReference(_) => "imageReference", Node::MdxJsxTextElement(_) => "mdxJsxTextElement", Node::Link(_) => "link", Node::LinkReference(_) => "linkReference", Node::Strong(_) => "strong", Node::Text(_) => "text", Node::Code(_) => "code", Node::Math(_) => "math", Node::MdxFlowExpression(_) => "mdxFlowExpression", Node::Heading(_) => "heading", Node::Table(_) => "table", Node::ThematicBreak(_) => "thematicBreak", Node::TableRow(_) => "tableRow", Node::TableCell(_) => "tableCell", Node::ListItem(_) => "listItem", Node::Definition(_) => "definition", Node::Paragraph(_) => "paragraph", } } ================================================ FILE: packages/plugin-md-v2/src/lib.rs ================================================ use crate::exports::lix::plugin::api::{EntityChange, File, Guest, PluginError}; wit_bindgen::generate!({ path: "../engine/wit", world: "plugin", }); mod apply_changes; mod common; mod detect_changes; pub mod schemas; pub const ROOT_ENTITY_ID: &str = "root"; pub const DOCUMENT_SCHEMA_KEY: &str = schemas::DOCUMENT_SCHEMA_KEY; pub const BLOCK_SCHEMA_KEY: &str = schemas::BLOCK_SCHEMA_KEY; pub use crate::exports::lix::plugin::api::{ ActiveStateRow as PluginActiveStateRow, DetectStateContext as PluginDetectStateContext, EntityChange as PluginEntityChange, File as PluginFile, PluginError as PluginApiError, }; struct MarkdownPlugin; impl Guest for MarkdownPlugin { fn detect_changes( before: Option, after: File, state_context: Option, ) -> Result, PluginError> { detect_changes::detect_changes(before, after, 
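// `before` is forwarded for interface compatibility; the internal implementation ignores it
// and derives block identity from `state_context`.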
state_context) } fn apply_changes(file: File, changes: Vec) -> Result, PluginError> { apply_changes::apply_changes(file, changes) } } pub fn detect_changes(before: Option, after: File) -> Result, PluginError> { let state_context = project_state_context_from_before(before)?; ::detect_changes(None, after, Some(state_context)) } pub fn detect_changes_with_state_context( before: Option, after: File, state_context: Option, ) -> Result, PluginError> { ::detect_changes(before, after, state_context) } pub fn apply_changes(file: File, changes: Vec) -> Result, PluginError> { ::apply_changes(file, changes) } fn empty_state_context() -> PluginDetectStateContext { PluginDetectStateContext { active_state: Some(Vec::new()), } } fn project_state_context_from_before( before: Option, ) -> Result { let Some(before_file) = before else { return Ok(empty_state_context()); }; // Compatibility helper for tests/callers using detect_changes(before, after): // bootstrap a projected active-state from `before`. let bootstrap = ::detect_changes(None, before_file, Some(empty_state_context()))?; Ok(PluginDetectStateContext { active_state: Some( bootstrap .into_iter() .map(|row| PluginActiveStateRow { entity_id: row.entity_id, schema_key: Some(row.schema_key), snapshot_content: row.snapshot_content, file_id: None, plugin_key: None, version_id: None, change_id: None, metadata: None, created_at: None, updated_at: None, }) .collect(), ), }) } export!(MarkdownPlugin); ================================================ FILE: packages/plugin-md-v2/src/schemas.rs ================================================ use serde_json::Value; use std::sync::OnceLock; pub const DOCUMENT_SCHEMA_KEY: &str = "markdown_v2_document"; pub const BLOCK_SCHEMA_KEY: &str = "markdown_v2_block"; const DOCUMENT_SCHEMA_JSON: &str = include_str!("../schema/markdown_document.json"); const BLOCK_SCHEMA_JSON: &str = include_str!("../schema/markdown_block.json"); const SCHEMA_JSONS: [&str; 2] = [DOCUMENT_SCHEMA_JSON, BLOCK_SCHEMA_JSON]; static SCHEMA_DEFINITIONS: OnceLock> = OnceLock::new(); pub fn schema_jsons() -> &'static [&'static str] { &SCHEMA_JSONS } pub fn schema_definitions() -> &'static Vec { SCHEMA_DEFINITIONS.get_or_init(|| { SCHEMA_JSONS .iter() .map(|raw| serde_json::from_str(raw).expect("markdown schema JSON must be valid")) .collect() }) } ================================================ FILE: packages/plugin-md-v2/tests/apply_changes.rs ================================================ mod common; use common::{ assert_invalid_input, block_change, decode_utf8, document_change, empty_file, file_from_markdown, }; use plugin_md_v2::{apply_changes, BLOCK_SCHEMA_KEY, DOCUMENT_SCHEMA_KEY}; #[test] fn materializes_markdown_from_document_order_and_blocks() { let file = empty_file("f1", "/notes.md"); let changes = vec![ block_change("b2", "paragraph", "Second paragraph."), document_change(vec!["b1".to_string(), "b2".to_string()]), block_change("b1", "heading", "# Title"), ]; let data = apply_changes(file, changes).expect("apply_changes should succeed"); assert_eq!(decode_utf8(data), "# Title\n\nSecond paragraph.\n"); } #[test] fn document_tombstone_results_in_empty_file() { let file = file_from_markdown("f1", "/notes.md", "before"); let changes = vec![plugin_md_v2::PluginEntityChange { entity_id: plugin_md_v2::ROOT_ENTITY_ID.to_string(), schema_key: DOCUMENT_SCHEMA_KEY.to_string(), snapshot_content: None, }]; let data = apply_changes(file, changes).expect("apply_changes should succeed"); assert!(data.is_empty()); } #[test] fn 
passes_through_when_no_markdown_rows_are_present() { let file = file_from_markdown("f1", "/notes.md", "keep me"); let data = apply_changes(file, Vec::new()).expect("apply_changes should succeed"); assert_eq!(decode_utf8(data), "keep me"); } #[test] fn rejects_duplicate_document_rows() { let file = empty_file("f1", "/notes.md"); let changes = vec![ document_change(vec!["b1".to_string()]), document_change(vec!["b2".to_string()]), ]; let error = apply_changes(file, changes).expect_err("apply_changes should fail"); assert_invalid_input(error); } #[test] fn rejects_duplicate_block_rows() { let file = empty_file("f1", "/notes.md"); let changes = vec![ block_change("b1", "paragraph", "a"), block_change("b1", "paragraph", "b"), ]; let error = apply_changes(file, changes).expect_err("apply_changes should fail"); assert_invalid_input(error); } #[test] fn rejects_unknown_document_entity_id() { let file = empty_file("f1", "/notes.md"); let changes = vec![plugin_md_v2::PluginEntityChange { entity_id: "other".to_string(), schema_key: DOCUMENT_SCHEMA_KEY.to_string(), snapshot_content: Some( serde_json::json!({ "id": "other", "order": ["b1"], }) .to_string(), ), }]; let error = apply_changes(file, changes).expect_err("apply_changes should fail"); assert_invalid_input(error); } #[test] fn rejects_invalid_block_snapshot_json() { let file = empty_file("f1", "/notes.md"); let changes = vec![plugin_md_v2::PluginEntityChange { entity_id: "b1".to_string(), schema_key: BLOCK_SCHEMA_KEY.to_string(), snapshot_content: Some("{".to_string()), }]; let error = apply_changes(file, changes).expect_err("apply_changes should fail"); assert_invalid_input(error); } #[test] fn rejects_invalid_document_snapshot_json() { let file = empty_file("f1", "/notes.md"); let changes = vec![plugin_md_v2::PluginEntityChange { entity_id: plugin_md_v2::ROOT_ENTITY_ID.to_string(), schema_key: DOCUMENT_SCHEMA_KEY.to_string(), snapshot_content: Some("{".to_string()), }]; let error = apply_changes(file, changes).expect_err("apply_changes should fail"); assert_invalid_input(error); } #[test] fn rejects_block_snapshot_id_mismatch_with_entity_id() { let file = empty_file("f1", "/notes.md"); let changes = vec![plugin_md_v2::PluginEntityChange { entity_id: "b1".to_string(), schema_key: BLOCK_SCHEMA_KEY.to_string(), snapshot_content: Some( serde_json::json!({ "id": "b2", "type": "paragraph", "node": {}, "markdown": "hello", }) .to_string(), ), }]; let error = apply_changes(file, changes).expect_err("apply_changes should fail"); assert_invalid_input(error); } #[test] fn rejects_document_snapshot_id_mismatch_with_root() { let file = empty_file("f1", "/notes.md"); let changes = vec![plugin_md_v2::PluginEntityChange { entity_id: plugin_md_v2::ROOT_ENTITY_ID.to_string(), schema_key: DOCUMENT_SCHEMA_KEY.to_string(), snapshot_content: Some( serde_json::json!({ "id": "other", "order": ["b1"], }) .to_string(), ), }]; let error = apply_changes(file, changes).expect_err("apply_changes should fail"); assert_invalid_input(error); } #[test] fn ignores_unknown_schema_rows() { let file = file_from_markdown("f1", "/notes.md", "keep me"); let changes = vec![ plugin_md_v2::PluginEntityChange { entity_id: "unknown1".to_string(), schema_key: "other_schema".to_string(), snapshot_content: Some("{\"x\":1}".to_string()), }, plugin_md_v2::PluginEntityChange { entity_id: "unknown2".to_string(), schema_key: "other_schema".to_string(), snapshot_content: None, }, ]; let data = apply_changes(file, changes).expect("apply_changes should succeed"); assert_eq!(decode_utf8(data), "keep 
me"); } #[test] fn skips_missing_block_ids_referenced_in_document_order() { let file = empty_file("f1", "/notes.md"); let changes = vec![ document_change(vec!["b1".to_string(), "b2".to_string()]), block_change("b1", "paragraph", "Only this exists."), ]; let data = apply_changes(file, changes).expect("apply_changes should succeed"); assert_eq!(decode_utf8(data), "Only this exists.\n"); } #[test] fn appends_orphan_blocks_not_in_document_order() { let file = empty_file("f1", "/notes.md"); let changes = vec![ document_change(vec!["b1".to_string()]), block_change("b2", "paragraph", "Second"), block_change("b1", "paragraph", "First"), ]; let data = apply_changes(file, changes).expect("apply_changes should succeed"); assert_eq!(decode_utf8(data), "First\n\nSecond\n"); } #[test] fn materializes_deterministically_without_document_row() { let file = empty_file("f1", "/notes.md"); let changes = vec![ block_change("b2", "paragraph", "Second"), block_change("b1", "paragraph", "First"), ]; let data = apply_changes(file, changes).expect("apply_changes should succeed"); // BTreeMap key ordering makes this deterministic. assert_eq!(decode_utf8(data), "First\n\nSecond\n"); } #[test] fn normalizes_block_markdown_whitespace_and_trailing_newline() { let file = empty_file("f1", "/notes.md"); let changes = vec![ document_change(vec!["b1".to_string(), "b2".to_string()]), block_change("b1", "heading", "\n# Title\n"), block_change("b2", "paragraph", "\n\nParagraph\n\n"), ]; let data = apply_changes(file, changes).expect("apply_changes should succeed"); assert_eq!(decode_utf8(data), "# Title\n\nParagraph\n"); } #[test] fn tombstoned_block_is_not_rendered_even_if_order_mentions_it() { let file = empty_file("f1", "/notes.md"); let changes = vec![ document_change(vec!["b1".to_string(), "b2".to_string()]), block_change("b1", "paragraph", "Alive"), plugin_md_v2::PluginEntityChange { entity_id: "b2".to_string(), schema_key: BLOCK_SCHEMA_KEY.to_string(), snapshot_content: None, }, ]; let data = apply_changes(file, changes).expect("apply_changes should succeed"); assert_eq!(decode_utf8(data), "Alive\n"); } ================================================ FILE: packages/plugin-md-v2/tests/common/mod.rs ================================================ #![allow(dead_code)] use plugin_md_v2::{ PluginApiError, PluginEntityChange, PluginFile, BLOCK_SCHEMA_KEY, DOCUMENT_SCHEMA_KEY, ROOT_ENTITY_ID, }; use std::collections::BTreeMap; pub type StateKey = (String, String); pub type StateRows = BTreeMap; pub fn file_from_markdown(id: &str, path: &str, markdown: &str) -> PluginFile { PluginFile { id: id.to_string(), path: path.to_string(), data: markdown.as_bytes().to_vec(), } } pub fn empty_file(id: &str, path: &str) -> PluginFile { PluginFile { id: id.to_string(), path: path.to_string(), data: Vec::new(), } } pub fn decode_utf8(bytes: Vec) -> String { String::from_utf8(bytes).expect("materialized markdown should be valid UTF-8") } pub fn is_document_change(change: &PluginEntityChange) -> bool { change.schema_key == DOCUMENT_SCHEMA_KEY } pub fn is_block_change(change: &PluginEntityChange) -> bool { change.schema_key == BLOCK_SCHEMA_KEY } pub fn parse_document_order(change: &PluginEntityChange) -> Vec { assert!(is_document_change(change)); let raw = change .snapshot_content .as_ref() .expect("document snapshot should be present"); let parsed: serde_json::Value = serde_json::from_str(raw).expect("document snapshot should be valid JSON"); assert_eq!( parsed.get("id").and_then(serde_json::Value::as_str), Some(ROOT_ENTITY_ID) ); parsed 
.get("order") .and_then(serde_json::Value::as_array) .expect("document snapshot should contain order array") .iter() .map(|entry| { entry .as_str() .expect("order entries should be strings") .to_string() }) .collect() } pub fn parse_block_markdown(change: &PluginEntityChange) -> String { assert!(is_block_change(change)); let raw = change .snapshot_content .as_ref() .expect("block snapshot should be present"); let parsed: serde_json::Value = serde_json::from_str(raw).expect("block snapshot should be valid JSON"); parsed .get("markdown") .and_then(serde_json::Value::as_str) .expect("block snapshot should contain markdown") .to_string() } pub fn assert_invalid_input(error: PluginApiError) { match error { PluginApiError::InvalidInput(_) => {} PluginApiError::Internal(message) => { panic!("expected invalid-input error, got internal error: {message}") } } } pub fn apply_delta(state: &mut StateRows, delta: Vec) { for change in delta { let key = (change.schema_key.clone(), change.entity_id.clone()); if change.snapshot_content.is_some() { state.insert(key, change); } else { state.remove(&key); } } } pub fn collect_state_rows(state: &StateRows) -> Vec { state.values().cloned().collect() } pub fn document_change(order: Vec) -> PluginEntityChange { PluginEntityChange { entity_id: ROOT_ENTITY_ID.to_string(), schema_key: DOCUMENT_SCHEMA_KEY.to_string(), snapshot_content: Some( serde_json::json!({ "id": ROOT_ENTITY_ID, "order": order, }) .to_string(), ), } } pub fn block_change(id: &str, node_type: &str, markdown: &str) -> PluginEntityChange { PluginEntityChange { entity_id: id.to_string(), schema_key: BLOCK_SCHEMA_KEY.to_string(), snapshot_content: Some( serde_json::json!({ "id": id, "type": node_type, "node": {}, "markdown": markdown, }) .to_string(), ), } } ================================================ FILE: packages/plugin-md-v2/tests/detect_changes.rs ================================================ mod common; use common::{ assert_invalid_input, file_from_markdown, is_block_change, is_document_change, parse_document_order, }; use plugin_md_v2::{ detect_changes, detect_changes_with_state_context, PluginDetectStateContext, BLOCK_SCHEMA_KEY, DOCUMENT_SCHEMA_KEY, }; use std::collections::BTreeSet; fn count_tombstones(changes: &[plugin_md_v2::PluginEntityChange]) -> usize { changes .iter() .filter(|change| change.schema_key == BLOCK_SCHEMA_KEY && change.snapshot_content.is_none()) .count() } fn count_upserts(changes: &[plugin_md_v2::PluginEntityChange]) -> usize { changes .iter() .filter(|change| change.schema_key == BLOCK_SCHEMA_KEY && change.snapshot_content.is_some()) .count() } fn count_document_rows(changes: &[plugin_md_v2::PluginEntityChange]) -> usize { changes .iter() .filter(|change| change.schema_key == DOCUMENT_SCHEMA_KEY) .count() } fn upsert_types(changes: &[plugin_md_v2::PluginEntityChange]) -> Vec { changes .iter() .filter(|change| change.schema_key == BLOCK_SCHEMA_KEY) .filter_map(|change| change.snapshot_content.as_ref()) .map(|raw| { let parsed: serde_json::Value = serde_json::from_str(raw).expect("block snapshot should be valid JSON"); parsed .get("type") .and_then(serde_json::Value::as_str) .expect("block snapshot should contain type") .to_string() }) .collect() } fn upsert_markdowns(changes: &[plugin_md_v2::PluginEntityChange]) -> Vec { changes .iter() .filter(|change| change.schema_key == BLOCK_SCHEMA_KEY) .filter_map(|change| change.snapshot_content.as_ref()) .map(|raw| { let parsed: serde_json::Value = serde_json::from_str(raw).expect("block snapshot should be valid JSON"); 
parsed .get("markdown") .and_then(serde_json::Value::as_str) .expect("block snapshot should contain markdown") .to_string() }) .collect() } fn bootstrap_order(markdown: &str) -> Vec { let bootstrap = detect_changes(None, file_from_markdown("bootstrap", "/notes.md", markdown)) .expect("bootstrap detect_changes should succeed"); let document = bootstrap .iter() .find(|change| is_document_change(change)) .expect("bootstrap should include document row"); parse_document_order(document) } fn document_order_from_changes( changes: &[plugin_md_v2::PluginEntityChange], ) -> Option> { changes .iter() .find(|change| is_document_change(change)) .map(parse_document_order) } fn tombstone_ids(changes: &[plugin_md_v2::PluginEntityChange]) -> Vec { changes .iter() .filter(|change| change.schema_key == BLOCK_SCHEMA_KEY && change.snapshot_content.is_none()) .map(|change| change.entity_id.clone()) .collect() } fn upsert_ids(changes: &[plugin_md_v2::PluginEntityChange]) -> Vec { changes .iter() .filter(|change| change.schema_key == BLOCK_SCHEMA_KEY && change.snapshot_content.is_some()) .map(|change| change.entity_id.clone()) .collect() } fn state_context_from_rows(rows: &[plugin_md_v2::PluginEntityChange]) -> PluginDetectStateContext { PluginDetectStateContext { active_state: Some( rows.iter() .map(|row| plugin_md_v2::PluginActiveStateRow { entity_id: row.entity_id.clone(), schema_key: Some(row.schema_key.clone()), snapshot_content: row.snapshot_content.clone(), file_id: None, plugin_key: None, version_id: None, change_id: None, metadata: None, created_at: None, updated_at: None, }) .collect::>(), ), } } fn bootstrap_state( markdown: &str, ) -> ( plugin_md_v2::PluginFile, Vec, PluginDetectStateContext, ) { let before = file_from_markdown("f1", "/notes.md", markdown); let bootstrap = detect_changes(None, before.clone()).expect("bootstrap detect_changes should succeed"); let before_order = document_order_from_changes(&bootstrap).expect("bootstrap should include document row"); let state_context = state_context_from_rows(&bootstrap); (before, before_order, state_context) } fn make_large_markdown_paragraphs(count: usize) -> Vec { (1..=count).map(|idx| format!("P{idx}")).collect::>() } #[test] fn no_changes_when_documents_are_equal() { let before = file_from_markdown("f1", "/notes.md", "# Title\n\nSame paragraph.\n"); let after = file_from_markdown("f1", "/notes.md", "# Title\n\nSame paragraph.\n"); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); assert!(changes.is_empty()); } #[test] fn emits_document_and_block_rows_for_new_file() { let after = file_from_markdown("f1", "/notes.md", "# Title\n\nParagraph.\n"); let changes = detect_changes(None, after).expect("detect_changes should succeed"); let document_rows = changes .iter() .filter(|change| is_document_change(change)) .collect::>(); let block_rows = changes .iter() .filter(|change| is_block_change(change)) .collect::>(); assert_eq!(document_rows.len(), 1); assert_eq!(block_rows.len(), 2); for row in block_rows { assert_eq!(row.schema_key, BLOCK_SCHEMA_KEY); assert!(row.snapshot_content.is_some()); } } #[test] fn handles_empty_documents() { let before = file_from_markdown("f1", "/notes.md", ""); let after = file_from_markdown("f1", "/notes.md", ""); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); assert!(changes.is_empty()); } #[test] fn rejects_non_utf8_input() { let after = plugin_md_v2::PluginFile { id: "f1".to_string(), path: "/notes.md".to_string(), data: vec![0xFF, 0xFE, 
0xFD], }; let error = detect_changes(None, after).expect_err("detect_changes should fail"); assert_invalid_input(error); } #[test] fn inline_html_br_does_not_drop_changes() { let after = file_from_markdown( "f1", "/notes.md", "SSH auth: `git clone git@github.com:microsoft/vscode-docs.git`
<br>HTTPS auth: `git clone https://github.com/microsoft/vscode-docs.git`\n", ); let changes = detect_changes(None, after).expect("detect_changes should succeed"); assert!( !changes.is_empty(), "inline html <br>
in .md should not produce an empty change set" ); assert!(changes.iter().any(is_document_change)); assert!(changes.iter().any(is_block_change)); } #[test] fn move_only_emits_document_row() { let before = file_from_markdown("f1", "/notes.md", "First paragraph.\n\nSecond paragraph.\n"); let after = file_from_markdown("f1", "/notes.md", "Second paragraph.\n\nFirst paragraph.\n"); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); assert_eq!(changes.len(), 1); assert!(changes.iter().all(is_document_change)); } #[test] fn move_section_emits_document_row_only() { let before = file_from_markdown("f1", "/notes.md", "# A\n\npara a\n\n# B\n\npara b\n"); let after = file_from_markdown("f1", "/notes.md", "# B\n\npara b\n\n# A\n\npara a\n"); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); assert_eq!(changes.len(), 1); assert_eq!(changes[0].schema_key, DOCUMENT_SCHEMA_KEY); } #[test] fn cross_type_paragraph_to_heading_emits_delete_add_and_document_update() { let before = file_from_markdown("f1", "/notes.md", "Hello\n"); let after = file_from_markdown("f1", "/notes.md", "# Hello\n"); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); let tombstones = changes .iter() .filter(|change| change.schema_key == BLOCK_SCHEMA_KEY && change.snapshot_content.is_none()) .count(); let upserts = changes .iter() .filter(|change| change.schema_key == BLOCK_SCHEMA_KEY && change.snapshot_content.is_some()) .count(); let document_rows = changes .iter() .filter(|change| change.schema_key == DOCUMENT_SCHEMA_KEY) .count(); assert_eq!(tombstones, 1); assert_eq!(upserts, 1); assert_eq!(document_rows, 1); } #[test] fn cross_type_code_to_paragraph_emits_delete_add_and_document_update() { let before = file_from_markdown("f1", "/notes.md", "```js\nconsole.log(1)\n```\n"); let after = file_from_markdown("f1", "/notes.md", "console.log(1)\n"); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); let tombstones = changes .iter() .filter(|change| change.schema_key == BLOCK_SCHEMA_KEY && change.snapshot_content.is_none()) .count(); let upserts = changes .iter() .filter(|change| change.schema_key == BLOCK_SCHEMA_KEY && change.snapshot_content.is_some()) .count(); let document_rows = changes .iter() .filter(|change| change.schema_key == DOCUMENT_SCHEMA_KEY) .count(); assert_eq!(tombstones, 1); assert_eq!(upserts, 1); assert_eq!(document_rows, 1); } #[test] fn duplicate_paragraphs_with_no_text_change_emit_no_changes() { let before = file_from_markdown("f1", "/notes.md", "Same\n\nSame\n"); let after = file_from_markdown("f1", "/notes.md", "Same\n\nSame\n"); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); assert!(changes.is_empty()); } #[test] fn insert_duplicate_paragraph_emits_new_block_and_document_update() { let before = file_from_markdown("f1", "/notes.md", "Same\n\nOther\n"); let after = file_from_markdown("f1", "/notes.md", "Same\n\nSame\n\nOther\n"); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); let tombstones = changes .iter() .filter(|change| change.schema_key == BLOCK_SCHEMA_KEY && change.snapshot_content.is_none()) .count(); let upserts = changes .iter() .filter(|change| change.schema_key == BLOCK_SCHEMA_KEY && change.snapshot_content.is_some()) .count(); let document_rows = changes .iter() .filter(|change| change.schema_key == DOCUMENT_SCHEMA_KEY) .count(); assert_eq!(tombstones, 0); 
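// Inserting a duplicate paragraph should mint exactly one new block upsert plus one document-order update, with no tombstone for the original block.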
assert_eq!(upserts, 1); assert_eq!(document_rows, 1); } #[test] fn crlf_vs_lf_normalization_emits_no_changes() { let before = file_from_markdown("f1", "/notes.md", "Line A\r\n\r\nLine B\r\n"); let after = file_from_markdown("f1", "/notes.md", "Line A\n\nLine B\n"); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); assert!(changes.is_empty()); } #[test] fn unicode_nfc_vs_nfd_emits_no_changes() { let before = file_from_markdown("f1", "/notes.md", "caf\u{00E9}\n"); let after = file_from_markdown("f1", "/notes.md", "caf\u{0065}\u{0301}\n"); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); assert!(changes.is_empty()); } #[test] fn large_doc_pure_shuffle_emits_document_row_only() { let paragraphs = (0..140).map(|idx| format!("P{idx}")).collect::>(); let before_markdown = paragraphs.join("\n\n") + "\n"; let mut after = paragraphs.clone(); after.rotate_left(37); let after_markdown = after.join("\n\n") + "\n"; let changes = detect_changes( Some(file_from_markdown("f1", "/notes.md", &before_markdown)), file_from_markdown("f1", "/notes.md", &after_markdown), ) .expect("detect_changes should succeed"); assert_eq!(changes.len(), 1); assert_eq!(changes[0].schema_key, DOCUMENT_SCHEMA_KEY); } #[test] fn paragraph_to_blockquote_emits_delete_add_and_document_update() { let before = file_from_markdown("f1", "/notes.md", "Hello\n"); let after = file_from_markdown("f1", "/notes.md", "> Hello\n"); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); assert_eq!(count_tombstones(&changes), 1); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 1); assert_eq!(upsert_types(&changes), vec!["blockquote".to_string()]); } #[test] fn hard_break_variant_does_not_introduce_extra_blocks() { let before = file_from_markdown("f1", "/notes.md", "line \r\nbreak\r\n"); let after = file_from_markdown("f1", "/notes.md", "line\\\r\nbreak\r\n"); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); if changes.is_empty() { return; } assert_eq!(count_tombstones(&changes), 1); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 1); assert_eq!(upsert_types(&changes), vec!["paragraph".to_string()]); } #[test] fn code_fence_length_variation_does_not_introduce_new_id() { let before = file_from_markdown("f1", "/notes.md", "```js\nconsole.log(1)\n```\n"); let after = file_from_markdown("f1", "/notes.md", "````js\nconsole.log(1)\n````\n"); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); if changes.is_empty() { return; } assert_eq!(count_tombstones(&changes), 1); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 1); assert_eq!(upsert_types(&changes), vec!["code".to_string()]); } #[test] fn id_stability_pure_reorder_preserves_existing_ids() { let before_markdown = "First\n\nSecond\n"; let before_order = bootstrap_order(before_markdown); assert_eq!(before_order.len(), 2); let changes = detect_changes( Some(file_from_markdown("f1", "/notes.md", before_markdown)), file_from_markdown("f1", "/notes.md", "Second\n\nFirst\n"), ) .expect("detect_changes should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 0); assert_eq!(count_document_rows(&changes), 1); let after_order = document_order_from_changes(&changes).expect("reorder should include document row"); assert_eq!( after_order, vec![before_order[1].clone(), 
before_order[0].clone()] ); } #[test] fn id_stability_insert_between_keeps_neighbors_and_mints_new_id() { let before_markdown = "A\n\nC\n"; let before_order = bootstrap_order(before_markdown); assert_eq!(before_order.len(), 2); let changes = detect_changes( Some(file_from_markdown("f1", "/notes.md", before_markdown)), file_from_markdown("f1", "/notes.md", "A\n\nB\n\nC\n"), ) .expect("detect_changes should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 1); let after_order = document_order_from_changes(&changes).expect("insert should include document row"); assert_eq!(after_order[0], before_order[0]); assert_eq!(after_order[2], before_order[1]); assert_ne!(after_order[1], before_order[0]); assert_ne!(after_order[1], before_order[1]); assert_eq!(upsert_ids(&changes), vec![after_order[1].clone()]); } #[test] fn id_stability_delete_keeps_survivor_id_and_tombstones_deleted() { let before_markdown = "Keep me\n\nDelete me\n"; let before_order = bootstrap_order(before_markdown); assert_eq!(before_order.len(), 2); let changes = detect_changes( Some(file_from_markdown("f1", "/notes.md", before_markdown)), file_from_markdown("f1", "/notes.md", "Keep me\n"), ) .expect("detect_changes should succeed"); assert_eq!(count_tombstones(&changes), 1); assert_eq!(count_upserts(&changes), 0); assert_eq!(count_document_rows(&changes), 1); assert_eq!(tombstone_ids(&changes), vec![before_order[1].clone()]); assert_eq!( document_order_from_changes(&changes).expect("delete should include document row"), vec![before_order[0].clone()] ); } #[test] fn id_stability_cross_type_does_not_reuse_old_id() { let before_markdown = "Hello\n"; let before_order = bootstrap_order(before_markdown); assert_eq!(before_order.len(), 1); let changes = detect_changes( Some(file_from_markdown("f1", "/notes.md", before_markdown)), file_from_markdown("f1", "/notes.md", "# Hello\n"), ) .expect("detect_changes should succeed"); assert_eq!(count_tombstones(&changes), 1); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 1); assert_eq!(tombstone_ids(&changes), vec![before_order[0].clone()]); let upserts = upsert_ids(&changes); assert_eq!(upserts.len(), 1); assert_ne!(upserts[0], before_order[0]); let after_order = document_order_from_changes(&changes).expect("should include doc row"); assert_eq!(after_order, upserts); } #[test] fn id_stability_large_pure_shuffle_preserves_id_set() { let paragraphs = (1..=500).map(|idx| format!("P{idx}")).collect::>(); let before_markdown = paragraphs.join("\n\n") + "\n"; let before_order = bootstrap_order(&before_markdown); assert_eq!(before_order.len(), 500); let mut after = paragraphs.clone(); after.rotate_left(123); let after_markdown = after.join("\n\n") + "\n"; let changes = detect_changes( Some(file_from_markdown("f1", "/notes.md", &before_markdown)), file_from_markdown("f1", "/notes.md", &after_markdown), ) .expect("detect_changes should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 0); assert_eq!(count_document_rows(&changes), 1); let after_order = document_order_from_changes(&changes).expect("shuffle should include doc"); let before_set = before_order.into_iter().collect::>(); let after_set = after_order.into_iter().collect::>(); assert_eq!(before_set, after_set); } #[test] fn with_state_context_paragraph_edit_reuses_existing_id_without_tombstone() { let before = file_from_markdown("f1", "/notes.md", "Hello\n\nWorld\n"); let bootstrap = 
detect_changes(None, before.clone()).expect("bootstrap detect_changes should succeed"); let before_order = bootstrap_order("Hello\n\nWorld\n"); let state_context = state_context_from_rows(&bootstrap); let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", "Hello updated\n\nWorld\n"), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 0); assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]); } #[test] fn with_state_context_move_and_edit_reuses_existing_id_and_updates_order() { let before_markdown = "Alpha\n\nBeta\n"; let before = file_from_markdown("f1", "/notes.md", before_markdown); let bootstrap = detect_changes(None, before.clone()).expect("bootstrap detect_changes should succeed"); let before_order = bootstrap_order(before_markdown); let state_context = state_context_from_rows(&bootstrap); let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", "Beta plus\n\nAlpha\n"), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 1); assert_eq!(upsert_ids(&changes), vec![before_order[1].clone()]); assert_eq!( document_order_from_changes(&changes).expect("document row should be present"), vec![before_order[1].clone(), before_order[0].clone()] ); } #[test] fn with_state_context_insert_between_preserves_neighbor_ids_and_mints_new_id() { let before_markdown = "A\n\nC\n"; let before = file_from_markdown("f1", "/notes.md", before_markdown); let bootstrap = detect_changes(None, before.clone()).expect("bootstrap detect_changes should succeed"); let before_order = bootstrap_order(before_markdown); let state_context = state_context_from_rows(&bootstrap); let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", "A\n\nB\n\nC\n"), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 1); let order = document_order_from_changes(&changes).expect("document row should be present"); assert_eq!(order[0], before_order[0]); assert_eq!(order[2], before_order[1]); assert_ne!(order[1], before_order[0]); assert_ne!(order[1], before_order[1]); assert_eq!(upsert_ids(&changes), vec![order[1].clone()]); } #[test] fn with_state_context_pure_reorder_emits_only_document_row() { let (before, before_order, state_context) = bootstrap_state("First\n\nSecond\n"); assert_eq!(before_order.len(), 2); let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", "Second\n\nFirst\n"), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 0); assert_eq!(count_document_rows(&changes), 1); assert_eq!( document_order_from_changes(&changes).expect("document row should be present"), vec![before_order[1].clone(), before_order[0].clone()] ); } #[test] fn with_state_context_move_section_emits_only_document_row() { let (before, before_order, state_context) = bootstrap_state("# A\n\nPara A\n\n# B\n\nPara B\n"); assert_eq!(before_order.len(), 4); let changes = detect_changes_with_state_context( 
Some(before), file_from_markdown("f1", "/notes.md", "# B\n\nPara B\n\n# A\n\nPara A\n"), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 0); assert_eq!(count_document_rows(&changes), 1); assert_eq!( document_order_from_changes(&changes).expect("document row should be present"), vec![ before_order[2].clone(), before_order[3].clone(), before_order[0].clone(), before_order[1].clone(), ] ); } #[test] fn with_state_context_large_shuffle_500_emits_only_document_row() { let paragraphs = (1..=500).map(|idx| format!("P{idx}")).collect::>(); let before_markdown = paragraphs.join("\n\n") + "\n"; let (before, before_order, state_context) = bootstrap_state(&before_markdown); assert_eq!(before_order.len(), 500); let mut after = paragraphs; after.rotate_left(123); let after_markdown = after.join("\n\n") + "\n"; let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", &after_markdown), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 0); assert_eq!(count_document_rows(&changes), 1); let after_order = document_order_from_changes(&changes).expect("document row should be present"); let before_set = before_order.into_iter().collect::>(); let after_set = after_order.into_iter().collect::>(); assert_eq!(before_set, after_set); } #[test] fn with_state_context_duplicate_edit_second_preserves_first_id_without_document_noise() { let (before, before_order, state_context) = bootstrap_state("Same\n\nSame\n"); assert_eq!(before_order.len(), 2); let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", "Same\n\nSame updated\n"), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 0); assert_eq!(upsert_ids(&changes), vec![before_order[1].clone()]); } #[test] fn with_state_context_duplicate_middle_edit_targets_only_middle_entity() { let (before, before_order, state_context) = bootstrap_state("Same\n\nSame\n\nSame\n"); assert_eq!(before_order.len(), 3); let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", "Same\n\nSame updated\n\nSame\n"), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 0); assert_eq!(upsert_ids(&changes), vec![before_order[1].clone()]); } #[test] fn with_state_context_list_reorder_emits_single_list_upsert_without_document_row() { let (before, before_order, state_context) = bootstrap_state("- one\n- two\n- three\n"); assert_eq!(before_order.len(), 1); let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", "- three\n- one\n- two\n"), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 0); assert_eq!(upsert_types(&changes), vec!["list".to_string()]); assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]); } #[test] fn with_state_context_list_add_item_emits_single_list_upsert_without_document_row() { let (before, before_order, 
state_context) = bootstrap_state("- one\n- two\n"); assert_eq!(before_order.len(), 1); let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", "- one\n- two\n- three\n"), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 0); assert_eq!(upsert_types(&changes), vec!["list".to_string()]); assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]); } #[test] fn with_state_context_list_remove_item_emits_single_list_upsert_without_document_row() { let (before, before_order, state_context) = bootstrap_state("- one\n- two\n- three\n"); assert_eq!(before_order.len(), 1); let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", "- one\n- three\n"), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 0); assert_eq!(upsert_types(&changes), vec!["list".to_string()]); assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]); } #[test] fn with_state_context_table_reorder_rows_emits_single_table_upsert_without_document_row() { let (before, before_order, state_context) = bootstrap_state("| a | b |\n| - | - |\n| 1 | 2 |\n| 3 | 4 |\n| 5 | 6 |\n"); assert_eq!(before_order.len(), 1); let changes = detect_changes_with_state_context( Some(before), file_from_markdown( "f1", "/notes.md", "| a | b |\n| - | - |\n| 3 | 4 |\n| 5 | 6 |\n| 1 | 2 |\n", ), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 0); assert_eq!(upsert_types(&changes), vec!["table".to_string()]); assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]); } #[test] fn with_state_context_table_add_row_emits_single_table_upsert_without_document_row() { let (before, before_order, state_context) = bootstrap_state("| a | b |\n| - | - |\n| 1 | 2 |\n"); assert_eq!(before_order.len(), 1); let changes = detect_changes_with_state_context( Some(before), file_from_markdown( "f1", "/notes.md", "| a | b |\n| - | - |\n| 1 | 2 |\n| 3 | 4 |\n", ), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 0); assert_eq!(upsert_types(&changes), vec!["table".to_string()]); assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]); } #[test] fn with_state_context_table_remove_row_emits_single_table_upsert_without_document_row() { let (before, before_order, state_context) = bootstrap_state("| a | b |\n| - | - |\n| 1 | 2 |\n| 3 | 4 |\n"); assert_eq!(before_order.len(), 1); let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", "| a | b |\n| - | - |\n| 1 | 2 |\n"), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 0); assert_eq!(upsert_types(&changes), vec!["table".to_string()]); assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]); } #[test] fn with_state_context_heading_edit_reuses_existing_id_without_document_row() { let (before, 
before_order, state_context) = bootstrap_state("# Hello\n\nBody\n"); assert_eq!(before_order.len(), 2); let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", "# Hello World\n\nBody\n"), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 0); assert_eq!(upsert_types(&changes), vec!["heading".to_string()]); assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]); } #[test] fn with_state_context_code_edit_reuses_existing_id_without_document_row() { let (before, before_order, state_context) = bootstrap_state("```js\nconsole.log(1)\n```\n"); assert_eq!(before_order.len(), 1); let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", "```js\nconsole.log(2)\n```\n"), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 0); assert_eq!(upsert_types(&changes), vec!["code".to_string()]); assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]); } #[test] fn with_state_context_link_text_edit_reuses_existing_id_without_document_row() { let (before, before_order, state_context) = bootstrap_state("[text](https://example.com)\n"); assert_eq!(before_order.len(), 1); let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", "[new](https://example.com)\n"), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 0); assert_eq!(upsert_types(&changes), vec!["paragraph".to_string()]); assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]); } #[test] fn with_state_context_link_url_edit_reuses_existing_id_without_document_row() { let (before, before_order, state_context) = bootstrap_state("[text](https://example.com)\n"); assert_eq!(before_order.len(), 1); let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", "[text](https://example.org)\n"), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 0); assert_eq!(upsert_types(&changes), vec!["paragraph".to_string()]); assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]); } #[test] fn with_state_context_paragraph_split_reuses_first_id_and_mints_one_new() { let (before, before_order, state_context) = bootstrap_state("AB\n"); assert_eq!(before_order.len(), 1); let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", "A\n\nB\n"), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 2); assert_eq!(count_document_rows(&changes), 1); let upserts = upsert_ids(&changes); assert!(upserts.contains(&before_order[0])); assert_eq!( upserts.iter().filter(|id| **id != before_order[0]).count(), 1 ); let order = document_order_from_changes(&changes).expect("document row should be present"); assert_eq!(order.len(), 2); assert_eq!(order[0], before_order[0]); assert_ne!(order[1], before_order[0]); } #[test] fn 
with_state_context_paragraph_merge_reuses_first_id_and_tombstones_second() { let (before, before_order, state_context) = bootstrap_state("A\n\nB\n"); assert_eq!(before_order.len(), 2); let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", "AB\n"), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 1); assert_eq!(count_upserts(&changes), 1); assert_eq!(count_document_rows(&changes), 1); assert_eq!(tombstone_ids(&changes), vec![before_order[1].clone()]); assert_eq!(upsert_ids(&changes), vec![before_order[0].clone()]); assert_eq!( document_order_from_changes(&changes).expect("document row should be present"), vec![before_order[0].clone()] ); } #[test] fn with_state_context_large_500_tiny_edits_emit_only_targeted_upserts() { let paragraphs = make_large_markdown_paragraphs(500); let before_markdown = paragraphs.join("\n\n") + "\n"; let (before, before_order, state_context) = bootstrap_state(&before_markdown); assert_eq!(before_order.len(), 500); let mut after = paragraphs; let edited_indexes = [10usize, 111, 222, 333, 444]; for index in edited_indexes { after[index] = format!("{} x", after[index]); } let after_markdown = after.join("\n\n") + "\n"; let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", &after_markdown), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 5); assert_eq!(count_document_rows(&changes), 0); let expected_ids = edited_indexes .iter() .map(|idx| before_order[*idx].clone()) .collect::>(); let actual_ids = upsert_ids(&changes).into_iter().collect::>(); assert_eq!(actual_ids, expected_ids); } #[test] fn with_state_context_large_500_delete_insert_move_emits_minimal_noise() { let paragraphs = make_large_markdown_paragraphs(500); let before_markdown = paragraphs.join("\n\n") + "\n"; let (before, before_order, state_context) = bootstrap_state(&before_markdown); assert_eq!(before_order.len(), 500); let moved = paragraphs[450..460].to_vec(); let mut remaining = paragraphs[..450].to_vec(); remaining.extend_from_slice(¶graphs[460..]); remaining.retain(|entry| entry != "P500"); let idx_p300 = remaining .iter() .position(|entry| entry == "P300") .expect("P300 should exist"); let mut after = Vec::new(); after.extend(moved); after.extend_from_slice(&remaining[..=idx_p300]); after.push("PX".to_string()); after.extend_from_slice(&remaining[idx_p300 + 1..]); let after_markdown = after.join("\n\n") + "\n"; let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", &after_markdown), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); let tombstones = count_tombstones(&changes); let upserts = count_upserts(&changes); assert!(tombstones <= 1); assert_eq!(upserts, 1); assert_eq!(count_document_rows(&changes), 1); assert!(tombstones + upserts <= 2); let deleted_id = before_order[499].clone(); if tombstones == 1 { assert_eq!(tombstone_ids(&changes), vec![deleted_id.clone()]); } let inserted_id = upsert_ids(&changes) .into_iter() .next() .expect("insert should create one upsert"); if tombstones == 1 { assert!(!before_order.contains(&inserted_id)); } else { assert_eq!(inserted_id, deleted_id); } assert!(upsert_markdowns(&changes) .iter() .any(|markdown| markdown.contains("PX"))); let order = document_order_from_changes(&changes).expect("document row 
should be present"); assert_eq!(order.len(), 500); assert_eq!(order[0..10], before_order[450..460]); if tombstones == 1 { assert!(!order.contains(&deleted_id)); } else { assert!(order.contains(&deleted_id)); } let idx_300_in_after = order .iter() .position(|id| id == &before_order[299]) .expect("P300 id should remain in order"); assert_eq!(order[idx_300_in_after + 1], inserted_id); } #[test] fn with_state_context_large_duplicates_edit_350_targets_only_matching_id() { let before_paragraphs = (0..500).map(|_| "Same".to_string()).collect::>(); let before_markdown = before_paragraphs.join("\n\n") + "\n"; let (before, before_order, state_context) = bootstrap_state(&before_markdown); assert_eq!(before_order.len(), 500); let mut after = before_paragraphs; after[349] = "Same updated".to_string(); let after_markdown = after.join("\n\n") + "\n"; let changes = detect_changes_with_state_context( Some(before), file_from_markdown("f1", "/notes.md", &after_markdown), Some(state_context), ) .expect("detect_changes_with_state_context should succeed"); assert_eq!(count_tombstones(&changes), 0); assert_eq!(count_upserts(&changes), 1); assert!(count_document_rows(&changes) <= 1); assert!(before_order.contains(&upsert_ids(&changes)[0])); if let Some(order) = document_order_from_changes(&changes) { let before_set = before_order.into_iter().collect::>(); let after_set = order.into_iter().collect::>(); assert_eq!(before_set, after_set); } } ================================================ FILE: packages/plugin-md-v2/tests/roundtrip.rs ================================================ mod common; use common::{ apply_delta, collect_state_rows, decode_utf8, empty_file, file_from_markdown, is_document_change, StateRows, }; use plugin_md_v2::{ apply_changes, detect_changes, detect_changes_with_state_context, PluginActiveStateRow, PluginDetectStateContext, PluginEntityChange, BLOCK_SCHEMA_KEY, DOCUMENT_SCHEMA_KEY, }; fn to_state_context(rows: &[PluginEntityChange]) -> PluginDetectStateContext { PluginDetectStateContext { active_state: Some( rows.iter() .map(|row| PluginActiveStateRow { entity_id: row.entity_id.clone(), schema_key: Some(row.schema_key.clone()), snapshot_content: row.snapshot_content.clone(), file_id: None, plugin_key: None, version_id: None, change_id: None, metadata: None, created_at: None, updated_at: None, }) .collect::>(), ), } } fn detect_with_state_context( state: &StateRows, before: plugin_md_v2::PluginFile, after: plugin_md_v2::PluginFile, ) -> Vec { let rows = collect_state_rows(state); let ctx = to_state_context(&rows); detect_changes_with_state_context(Some(before), after, Some(ctx)) .expect("detect_changes_with_state_context should succeed") } fn count_tombstones(changes: &[PluginEntityChange]) -> usize { changes .iter() .filter(|c| c.schema_key == BLOCK_SCHEMA_KEY && c.snapshot_content.is_none()) .count() } fn count_upserts(changes: &[PluginEntityChange]) -> usize { changes .iter() .filter(|c| c.schema_key == BLOCK_SCHEMA_KEY && c.snapshot_content.is_some()) .count() } fn count_document_rows(changes: &[PluginEntityChange]) -> usize { changes .iter() .filter(|c| c.schema_key == DOCUMENT_SCHEMA_KEY) .count() } fn upsert_block_types(changes: &[PluginEntityChange]) -> Vec { changes .iter() .filter(|c| c.schema_key == BLOCK_SCHEMA_KEY && c.snapshot_content.is_some()) .map(|c| { let raw = c .snapshot_content .as_ref() .expect("upsert should have snapshot"); let parsed: serde_json::Value = serde_json::from_str(raw).expect("block snapshot should be valid JSON"); parsed .get("type") 
.and_then(serde_json::Value::as_str) .expect("block snapshot should contain type") .to_string() }) .collect() } #[test] fn roundtrip_file_detect_state_apply_markdown() { let markdown = "# Title\n\nParagraph one.\n\nParagraph two.\n"; let file = file_from_markdown("f1", "/notes.md", markdown); let delta = detect_changes(None, file).expect("detect_changes should succeed"); let mut state = StateRows::new(); apply_delta(&mut state, delta); let materialized = apply_changes(empty_file("f1", "/notes.md"), collect_state_rows(&state)) .expect("apply_changes should succeed"); assert_eq!(decode_utf8(materialized), markdown); } #[test] fn roundtrip_edit_move_delete_across_block_rows() { let before_markdown = "Alpha.\n\nBravo.\n\nCharlie.\n"; let after_markdown = "Charlie.\n\nAlpha updated.\n"; let before_file = file_from_markdown("f1", "/notes.md", before_markdown); let mut state = StateRows::new(); let bootstrap = detect_changes(None, before_file.clone()).expect("bootstrap detect should succeed"); apply_delta(&mut state, bootstrap); let delta = detect_changes( Some(before_file), file_from_markdown("f1", "/notes.md", after_markdown), ) .expect("delta detect should succeed"); assert!(delta .iter() .any(|change| change.schema_key == DOCUMENT_SCHEMA_KEY)); assert!(delta.iter().any(|change| { change.schema_key == BLOCK_SCHEMA_KEY && change.snapshot_content.is_none() })); assert!(delta.iter().any(|change| { change.schema_key == BLOCK_SCHEMA_KEY && change.snapshot_content.is_some() })); apply_delta(&mut state, delta); let materialized = apply_changes(empty_file("f1", "/notes.md"), collect_state_rows(&state)) .expect("apply_changes should succeed"); assert_eq!(decode_utf8(materialized), after_markdown); } #[test] fn roundtrip_move_only_updates_document_order() { let before_markdown = "First block.\n\nSecond block.\n"; let after_markdown = "Second block.\n\nFirst block.\n"; let delta = detect_changes( Some(file_from_markdown("f1", "/notes.md", before_markdown)), file_from_markdown("f1", "/notes.md", after_markdown), ) .expect("detect_changes should succeed"); assert_eq!(delta.len(), 1); assert!(delta.iter().all(is_document_change)); } #[test] fn roundtrip_multi_step_evolution() { let a = "# Title\n\nOne.\n"; let b = "# Title v2\n\nOne.\n\nTwo.\n"; let c = "Two.\n\n# Title v3\n"; let a_file = file_from_markdown("f1", "/notes.md", a); let b_file = file_from_markdown("f1", "/notes.md", b); let c_file = file_from_markdown("f1", "/notes.md", c); let mut state = StateRows::new(); let delta_a = detect_changes(None, a_file.clone()).expect("detect_changes should succeed"); apply_delta(&mut state, delta_a); let delta_b = detect_with_state_context(&state, a_file, b_file.clone()); apply_delta(&mut state, delta_b); let delta_c = detect_with_state_context(&state, b_file, c_file); apply_delta(&mut state, delta_c); let materialized = apply_changes(empty_file("f1", "/notes.md"), collect_state_rows(&state)) .expect("apply_changes should succeed"); assert_eq!(decode_utf8(materialized), c); } #[test] fn roundtrip_delete_all_blocks_to_empty_document() { let before = "A\n\nB\n"; let before_file = file_from_markdown("f1", "/notes.md", before); let mut state = StateRows::new(); let bootstrap = detect_changes(None, before_file.clone()).expect("bootstrap detect should succeed"); apply_delta(&mut state, bootstrap); let delta = detect_changes(Some(before_file), file_from_markdown("f1", "/notes.md", "")) .expect("detect_changes should succeed"); apply_delta(&mut state, delta); let materialized = apply_changes(empty_file("f1", 
"/notes.md"), collect_state_rows(&state)) .expect("apply_changes should succeed"); assert_eq!(decode_utf8(materialized), ""); } #[test] fn roundtrip_list_internal_edit_keeps_top_level_block_model() { let before = "- one\n- two\n"; let after = "- one\n- two changed\n"; let before_file = file_from_markdown("f1", "/notes.md", before); let mut state = StateRows::new(); let bootstrap = detect_changes(None, before_file.clone()).expect("bootstrap detect should succeed"); apply_delta(&mut state, bootstrap); let delta = detect_with_state_context( &state, before_file, file_from_markdown("f1", "/notes.md", after), ); assert_eq!(count_tombstones(&delta), 0); assert_eq!(count_upserts(&delta), 1); assert_eq!(count_document_rows(&delta), 0); assert_eq!(upsert_block_types(&delta), vec!["list".to_string()]); apply_delta(&mut state, delta); let materialized = apply_changes(empty_file("f1", "/notes.md"), collect_state_rows(&state)) .expect("apply_changes should succeed"); assert_eq!(decode_utf8(materialized), after); } #[test] fn roundtrip_table_row_add_remove_reorder() { let initial = "| a | b |\n| - | - |\n| 1 | 2 |\n"; let add = "| a | b |\n| - | - |\n| 1 | 2 |\n| 3 | 4 |\n"; let reorder = "| a | b |\n| - | - |\n| 3 | 4 |\n| 1 | 2 |\n"; let remove = "| a | b |\n| - | - |\n| 3 | 4 |\n"; let mut state = StateRows::new(); let initial_file = file_from_markdown("f1", "/notes.md", initial); let bootstrap = detect_changes(None, initial_file.clone()).expect("bootstrap detect should succeed"); apply_delta(&mut state, bootstrap); let delta_add = detect_with_state_context( &state, initial_file, file_from_markdown("f1", "/notes.md", add), ); assert_eq!(count_tombstones(&delta_add), 0); assert_eq!(count_upserts(&delta_add), 1); assert_eq!(count_document_rows(&delta_add), 0); assert_eq!(upsert_block_types(&delta_add), vec!["table".to_string()]); apply_delta(&mut state, delta_add); let delta_reorder = detect_with_state_context( &state, file_from_markdown("f1", "/notes.md", add), file_from_markdown("f1", "/notes.md", reorder), ); assert_eq!(count_tombstones(&delta_reorder), 0); assert_eq!(count_upserts(&delta_reorder), 1); assert_eq!(count_document_rows(&delta_reorder), 0); assert_eq!( upsert_block_types(&delta_reorder), vec!["table".to_string()] ); apply_delta(&mut state, delta_reorder); let delta_remove = detect_with_state_context( &state, file_from_markdown("f1", "/notes.md", reorder), file_from_markdown("f1", "/notes.md", remove), ); assert_eq!(count_tombstones(&delta_remove), 0); assert_eq!(count_upserts(&delta_remove), 1); assert_eq!(count_document_rows(&delta_remove), 0); assert_eq!(upsert_block_types(&delta_remove), vec!["table".to_string()]); apply_delta(&mut state, delta_remove); let materialized = apply_changes(empty_file("f1", "/notes.md"), collect_state_rows(&state)) .expect("apply_changes should succeed"); assert_eq!(decode_utf8(materialized), remove); } #[test] fn roundtrip_large_shuffle_500_with_state_context_low_noise() { let paragraphs = (1..=500).map(|idx| format!("P{idx}")).collect::>(); let before_markdown = paragraphs.join("\n\n") + "\n"; let before_file = file_from_markdown("f1", "/notes.md", &before_markdown); let mut state = StateRows::new(); let bootstrap = detect_changes(None, before_file.clone()).expect("bootstrap detect should succeed"); apply_delta(&mut state, bootstrap); let mut after = paragraphs; after.rotate_left(123); let after_markdown = after.join("\n\n") + "\n"; let delta = detect_with_state_context( &state, before_file, file_from_markdown("f1", "/notes.md", &after_markdown), ); 
assert_eq!(count_tombstones(&delta), 0); assert_eq!(count_upserts(&delta), 0); assert_eq!(count_document_rows(&delta), 1); apply_delta(&mut state, delta); let materialized = apply_changes(empty_file("f1", "/notes.md"), collect_state_rows(&state)) .expect("apply_changes should succeed"); assert_eq!(decode_utf8(materialized), after_markdown); } #[test] fn roundtrip_large_tiny_edits_500_with_state_context_low_noise() { let paragraphs = (1..=500).map(|idx| format!("P{idx}")).collect::>(); let before_markdown = paragraphs.join("\n\n") + "\n"; let before_file = file_from_markdown("f1", "/notes.md", &before_markdown); let mut state = StateRows::new(); let bootstrap = detect_changes(None, before_file.clone()).expect("bootstrap detect should succeed"); apply_delta(&mut state, bootstrap); let mut after = paragraphs; for idx in [10usize, 111, 222, 333, 444] { after[idx] = format!("{} x", after[idx]); } let after_markdown = after.join("\n\n") + "\n"; let delta = detect_with_state_context( &state, before_file, file_from_markdown("f1", "/notes.md", &after_markdown), ); assert_eq!(count_tombstones(&delta), 0); assert_eq!(count_upserts(&delta), 5); assert_eq!(count_document_rows(&delta), 0); apply_delta(&mut state, delta); let materialized = apply_changes(empty_file("f1", "/notes.md"), collect_state_rows(&state)) .expect("apply_changes should succeed"); assert_eq!(decode_utf8(materialized), after_markdown); } #[test] fn roundtrip_large_duplicate_edit_with_state_context_low_noise() { let before_paragraphs = (0..500).map(|_| "Same".to_string()).collect::>(); let before_markdown = before_paragraphs.join("\n\n") + "\n"; let before_file = file_from_markdown("f1", "/notes.md", &before_markdown); let mut state = StateRows::new(); let bootstrap = detect_changes(None, before_file.clone()).expect("bootstrap detect should succeed"); apply_delta(&mut state, bootstrap); let mut after = before_paragraphs; after[349] = "Same updated".to_string(); let after_markdown = after.join("\n\n") + "\n"; let delta = detect_with_state_context( &state, before_file, file_from_markdown("f1", "/notes.md", &after_markdown), ); assert_eq!(count_tombstones(&delta), 0); assert_eq!(count_upserts(&delta), 1); assert!(count_document_rows(&delta) <= 1); apply_delta(&mut state, delta); let materialized = apply_changes(empty_file("f1", "/notes.md"), collect_state_rows(&state)) .expect("apply_changes should succeed"); assert_eq!(decode_utf8(materialized), after_markdown); } #[test] fn roundtrip_move_insert_delete_large_with_state_context_low_noise() { let paragraphs = (1..=500).map(|idx| format!("P{idx}")).collect::>(); let before_markdown = paragraphs.join("\n\n") + "\n"; let before_file = file_from_markdown("f1", "/notes.md", &before_markdown); let mut state = StateRows::new(); let bootstrap = detect_changes(None, before_file.clone()).expect("bootstrap detect should succeed"); apply_delta(&mut state, bootstrap); let moved = paragraphs[450..460].to_vec(); let mut remaining = paragraphs[..450].to_vec(); remaining.extend_from_slice(¶graphs[460..]); remaining.retain(|entry| entry != "P500"); let idx_p300 = remaining .iter() .position(|entry| entry == "P300") .expect("P300 should exist"); let mut after = Vec::new(); after.extend(moved); after.extend_from_slice(&remaining[..=idx_p300]); after.push("PX".to_string()); after.extend_from_slice(&remaining[idx_p300 + 1..]); let after_markdown = after.join("\n\n") + "\n"; let delta = detect_with_state_context( &state, before_file, file_from_markdown("f1", "/notes.md", &after_markdown), ); let tombstones = 
count_tombstones(&delta); let upserts = count_upserts(&delta); let docs = count_document_rows(&delta); assert!(tombstones <= 1); assert_eq!(upserts, 1); assert_eq!(docs, 1); assert!(tombstones + upserts <= 2); apply_delta(&mut state, delta); let materialized = apply_changes(empty_file("f1", "/notes.md"), collect_state_rows(&state)) .expect("apply_changes should succeed"); assert_eq!(decode_utf8(materialized), after_markdown); } ================================================ FILE: packages/plugin-md-v2/tests/schema.rs ================================================ use plugin_md_v2::schemas::{ schema_definitions, schema_jsons, BLOCK_SCHEMA_KEY, DOCUMENT_SCHEMA_KEY, }; use std::collections::BTreeSet; #[test] fn schema_definitions_have_expected_keys() { let schemas = schema_definitions(); assert_eq!(schemas.len(), 2); let expected_keys = BTreeSet::from([DOCUMENT_SCHEMA_KEY, BLOCK_SCHEMA_KEY]); let mut actual_keys = BTreeSet::new(); for schema in schemas { let key = schema .get("x-lix-key") .and_then(serde_json::Value::as_str) .expect("schema must define string x-lix-key"); let primary_key = schema .get("x-lix-primary-key") .and_then(serde_json::Value::as_array) .expect("schema must define x-lix-primary-key array"); actual_keys.insert(key); assert_eq!(primary_key.len(), 1); assert_eq!(primary_key[0].as_str(), Some("/id")); } assert_eq!(actual_keys, expected_keys); } #[test] fn schema_json_accessors_return_expected_text() { let raw = schema_jsons().join("\n"); assert!(raw.contains("\"x-lix-key\": \"markdown_v2_document\"")); assert!(raw.contains("\"x-lix-key\": \"markdown_v2_block\"")); } ================================================ FILE: packages/react-utils/.oxlintrc.json ================================================ { "plugins": ["typescript"], "categories": { "correctness": "error", "suspicious": "warn" }, "env": { "es2022": true, "node": true }, "ignorePatterns": ["dist", "coverage", "**/*.d.ts"], "rules": { "typescript/no-explicit-any": "off" } } ================================================ FILE: packages/react-utils/.prettierrc.json ================================================ { "useTabs": true } ================================================ FILE: packages/react-utils/LICENSE ================================================ MIT License Copyright (c) 2025 Opral US Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
================================================ FILE: packages/react-utils/README.md ================================================ # @lix-js/react-utils React 19 hooks and helpers for building reactive UIs on top of the Lix SDK. These utilities wire Kysely queries to React Suspense and subscribe to live database updates. - React 19 Suspense-first data fetching - Live updates via Lix.observe(query) - Minimal API surface: `LixProvider`, `useLix`, `useQuery`, `useQueryTakeFirst`, `useQueryTakeFirstOrThrow` ## Installation ```bash npm i @lix-js/react-utils ``` ## Requirements - React 19 (these hooks use `use()` and Suspense) - Lix SDK instance provided via context ## Quick start Wrap your app with `LixProvider` and pass a Lix instance. ```tsx import { createRoot } from "react-dom/client"; import { LixProvider } from "@lix-js/react-utils"; import { openLix } from "@lix-js/sdk"; async function bootstrap() { const lix = await openLix({}); const root = createRoot(document.getElementById("root")!); root.render( , ); } bootstrap(); ``` ## useQuery Subscribe to a live query using React Suspense. The callback receives `lix` and must return a compilable/executable query (for example `qb(lix).selectFrom(...)`). ```tsx import { Suspense } from "react"; import { ErrorBoundary } from "react-error-boundary"; import { useQuery } from "@lix-js/react-utils"; import { qb } from "@lix-js/kysely"; function KeyValueList() { const rows = useQuery((lix) => qb(lix).selectFrom("key_value").where("key", "like", "demo_%").selectAll(), ); return (
    <ul>
      {rows.map((r) => (
        <li key={r.key}>
          {r.key}: {r.value}
        </li>
      ))}
    </ul>
  );
}

export function Page() {
  return (
    <Suspense fallback={<div>Loading…</div>}>
      <ErrorBoundary fallback={<div>Failed to load.</div>}>
        <KeyValueList />
      </ErrorBoundary>
    </Suspense>
); } ``` Options ```tsx // One-time execution (no live updates) const rows = useQuery((lix) => qb(lix).selectFrom("config").selectAll(), { subscribe: false, }); ``` ### Behavior - Suspends on first render until the underlying query resolves. - Re-suspends if the compiled SQL or params of the query change. - Subscribes to live updates when `subscribe !== false` and updates state on emissions. - On subscription error, clears the cached promise and throws to the nearest ErrorBoundary. ## Single-row helpers When you want just one row: ```tsx import { useQueryTakeFirst, useQueryTakeFirstOrThrow, } from "@lix-js/react-utils"; import { qb } from "@lix-js/kysely"; // First row or undefined const file = useQueryTakeFirst((lix) => qb(lix).selectFrom("file").select(["id", "path"]).where("id", "=", fileId), ); // First row or throw (suspends, then throws to ErrorBoundary if not found) const activeVersion = useQueryTakeFirstOrThrow((lix) => qb(lix) .selectFrom("active_version") .innerJoin("version", "version.id", "active_version.version_id") .selectAll("version"), ); ``` ## Query Builder Integration `react-utils` does not construct query builders for you. Pass any query object that implements `compile()` and `execute()`. In practice, most apps use `qb(lix)` from `@lix-js/kysely`. ## Synchronizing external state updates (rich text editors, etc.) When building experiences like rich text editors, dashboards, or collaborative views, you often need to synchronize external changes while avoiding feedback loops from your own writes. Lix provides a simple pattern for this using a “writer key” and commit events. See the guide for the pattern, pitfalls, and a decision matrix: - https://lix.dev/guide/writer-key ## Provider and context ```tsx import { LixProvider, useLix } from "@lix-js/react-utils"; function NeedsLix() { const lix = useLix(); // same instance passed to LixProvider // … } ``` ## FAQ - Why does the callback receive `lix` directly? - The hook is query-builder agnostic. You can wrap `lix` however you want (for example `qb(lix)`), and react-utils only needs the compiled SQL + execute behavior. - Can I do imperative fetching? - Yes, you can call `qb(lix)` directly in event handlers. `useQuery` is for declarative, Suspense-friendly reads. ## TypeScript tips - `useQuery(...)` infers the row shape from your Kysely selection. You can also provide an explicit generic to guide inference if needed. 
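A minimal sketch of guiding inference with an explicit generic, assuming a hand-written `KeyValueRow` shape (the shape is illustrative, not a generated type):

```tsx
import { useQuery } from "@lix-js/react-utils";
import { qb } from "@lix-js/kysely";

// Illustrative row shape; in a real app this would come from your schema types.
type KeyValueRow = { key: string; value: unknown };

function DemoKeys() {
  // Explicit generic pins the element type, so `rows` is typed as KeyValueRow[].
  const rows = useQuery<KeyValueRow>((lix) =>
    qb(lix).selectFrom("key_value").where("key", "like", "demo_%").selectAll(),
  );
  return <pre>{JSON.stringify(rows, null, 2)}</pre>;
}
```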
## License Apache-2.0 ================================================ FILE: packages/react-utils/package.json ================================================ { "name": "@lix-js/react-utils", "type": "module", "publishConfig": { "access": "public" }, "version": "0.1.0", "license": "Apache-2.0", "types": "./dist/index.d.ts", "exports": { ".": "./dist/index.js" }, "scripts": { "build": "tsc --build", "test": "tsc --noEmit && vitest run", "test:watch": "vitest", "lint": "oxlint --config .oxlintrc.json --tsconfig ./tsconfig.json --format stylish src", "dev": "tsc --watch", "format": "prettier ./src --write" }, "_comment": "Required for tree-shaking https://webpack.js.org/guides/tree-shaking/#mark-the-file-as-side-effect-free", "sideEffects": false, "peerDependencies": { "@lix-js/sdk": "*", "react": ">=19.0.0" }, "devDependencies": { "@lix-js/kysely": "workspace:*", "@lix-js/sdk": "workspace:*", "@testing-library/react": "^16.3.0", "@types/react": "^19.1.8", "@vitest/coverage-v8": "^3.2.4", "https-proxy-agent": "7.0.2", "jsdom": "^26.1.0", "oxlint": "^1.14.0", "prettier": "^3.3.3", "react": "19.2.0", "react-dom": "19.2.0", "typescript": "^5.5.4", "vitest": "^3.2.4" } } ================================================ FILE: packages/react-utils/src/hooks/use-lix.test.tsx ================================================ import { test, expect } from "vitest"; import { renderHook } from "@testing-library/react"; import React from "react"; import { useLix } from "./use-lix.js"; import { LixProvider } from "../provider.js"; import { openLix } from "@lix-js/sdk"; test("useLix throws error when used outside LixProvider", () => { expect(() => { renderHook(() => useLix()); }).toThrow("useLix must be used inside ."); }); test("useLix returns the Lix instance when used inside LixProvider", async () => { const lix = await openLix({}); const wrapper = ({ children }: { children: React.ReactNode }) => ( {children} ); const { result } = renderHook(() => useLix(), { wrapper }); expect(result.current).toBe(lix); expect(result.current.execute).toBeDefined(); expect(result.current.observe).toBeDefined(); expect(result.current.close).toBeDefined(); await lix.close(); }); ================================================ FILE: packages/react-utils/src/hooks/use-lix.ts ================================================ import { useContext } from "react"; import { LixContext } from "../provider.js"; /** * Hook to access the Lix instance from the context. * Must be used within a LixProvider. 
* * @example * ```tsx * function CreateAccountButton() { * const lix = useLix(); * * const handleClick = async () => { * await qb(lix) * .insertInto('account') * .values({ * name: 'John Doe', * }) * .execute(); * }; * * return ( * * ); * } * ``` */ export function useLix() { const lix = useContext(LixContext); if (!lix) { throw new Error("useLix must be used inside ."); } return lix; } ================================================ FILE: packages/react-utils/src/hooks/use-query.test.tsx ================================================ import { test, expect } from "vitest"; import { renderHook, waitFor, act } from "@testing-library/react"; import React, { Suspense } from "react"; import { useQuery, useQueryTakeFirst, useQueryTakeFirstOrThrow, } from "./use-query.js"; import { LixProvider } from "../provider.js"; import { openLix } from "@lix-js/sdk"; import { qb, sql } from "@lix-js/kysely"; type KeyValueRow = { key: string; value: unknown; readonly [key: string]: unknown; }; // React Error Boundaries require class components - no functional equivalent exists class MockErrorBoundary extends React.Component< { children: React.ReactNode; onError?: (error: Error) => void }, { hasError: boolean; error?: Error } > { override state = { hasError: false, error: undefined }; // @ts-expect-error - type error static override getDerivedStateFromError(error: Error) { return { hasError: true, error }; } override componentDidCatch(error: Error) { this.props.onError?.(error); } override render() { return this.state.hasError ? (
<div>Error occurred</div>
) : ( this.props.children ); } } test("useQuery throws error when used outside LixProvider", () => { // We need to catch the error since it's thrown during render expect(() => { renderHook(() => useQuery((lix) => qb(lix).selectFrom("lix_key_value").selectAll()), ); }).toThrow("useQuery must be used inside ."); }); test("returns array with data using new API", async () => { const lix = await openLix({}); const wrapper = ({ children }: { children: React.ReactNode }) => ( Loading...}> {children} ); let hookResult: { current: KeyValueRow[] }; await act(async () => { const { result } = renderHook( () => { const data = useQuery((lix) => qb(lix) .selectFrom("lix_key_value") .selectAll() .where("key", "like", "test_%"), ); return data; }, { wrapper }, ); hookResult = result; }); // Wait for suspense to resolve and data to be available await waitFor(() => { expect(Array.isArray(hookResult.current)).toBe(true); expect(hookResult.current).toEqual([]); // No test keys initially }); await lix.close(); }); test("updates when data changes", async () => { const lix = await openLix({}); const wrapper = ({ children }: { children: React.ReactNode }) => ( Loading...}> {children} ); let hookResult: { current: KeyValueRow[] }; await act(async () => { const { result } = renderHook( () => { const data = useQuery((lix) => qb(lix) .selectFrom("lix_key_value") .selectAll() .where("key", "like", "react_test_%"), ); return data; }, { wrapper }, ); hookResult = result; }); // Wait for initial empty data await waitFor(() => { expect(hookResult.current).toEqual([]); }); // Insert a test key-value pair await act(async () => { await qb(lix) .insertInto("lix_key_value") .values({ key: "react_test_key", value: "test_value" }) .execute(); }); // Check updated data await waitFor(() => { expect(hookResult.current).toHaveLength(1); expect(hookResult.current[0]).toMatchObject({ key: "react_test_key", value: "test_value", }); }); await lix.close(); }); test("akeFirst returns array with single item or undefined", async () => { const lix = await openLix({}); // Insert test data await qb(lix) .insertInto("lix_key_value") .values([ { key: "first_test_1", value: "first" }, { key: "first_test_2", value: "second" }, ]) .execute(); const wrapper = ({ children }: { children: React.ReactNode }) => ( Loading...}> {children} ); let hookResult: { current: KeyValueRow | undefined } | undefined; await act(async () => { const { result } = renderHook( () => { const data = useQueryTakeFirst((lix) => qb(lix) .selectFrom("lix_key_value") .selectAll() .where("key", "like", "first_test_%") .orderBy("key", "asc"), ); return data; }, { wrapper }, ); hookResult = result; }); await waitFor(() => { expect(hookResult!.current).toMatchObject({ key: "first_test_1", value: "first", }); }); await lix.close(); }); test("akeFirst returns undefined for empty results", async () => { const lix = await openLix({}); const wrapper = ({ children }: { children: React.ReactNode }) => ( Loading...}> {children} ); let hookResult: { current: KeyValueRow | undefined }; await act(async () => { const { result } = renderHook( () => { const data = useQueryTakeFirst((lix) => qb(lix) .selectFrom("lix_key_value") .selectAll() .where("key", "=", "non_existent"), ); return data; }, { wrapper }, ); hookResult = result; }); await waitFor(() => { expect(hookResult.current).toBeUndefined(); }); await lix.close(); }); test("useQueryTakeFirst (subscribe:false) returns fresh data on rerender", async () => { const lix = await openLix({}); await qb(lix) .insertInto("lix_key_value") .values([ { 
key: "memo_a", value: "value_a" }, { key: "memo_b", value: "value_b" }, ]) .execute(); const wrapper = ({ children }: { children: React.ReactNode }) => ( Loading…}> {children} ); const seenKeys: Array = []; const hook = await act(async () => renderHook( ({ lookup = "memo_a" }: { lookup?: string } = {}) => { const row = useQueryTakeFirst( (lix) => qb(lix) .selectFrom("lix_key_value") .selectAll() .where("key", "=", lookup), { subscribe: false }, ); if (row?.key) seenKeys.push(row.key); return row; }, { wrapper }, ), ); const { rerender, unmount } = hook; await waitFor(() => { expect(seenKeys.length).toBeGreaterThan(0); }); seenKeys.length = 0; await act(async () => { rerender({ lookup: "memo_b" }); }); await waitFor(() => { expect(seenKeys.length).toBeGreaterThan(0); }); expect(seenKeys[0]).toBe("memo_b"); unmount(); await lix.close(); }); test("useQueryTakeFirst (subscribe:false) does not reuse previous rows", async () => { const lix = await openLix({}); await qb(lix) .insertInto("lix_key_value") .values([ { key: "no_subscribe_a", value: "value_a" }, { key: "no_subscribe_b", value: "value_b" }, ]) .execute(); const wrapper = ({ children }: { children: React.ReactNode }) => ( Loading...}> {children} ); const emissions: Array = []; let rerender: (props?: { key: string }) => void; await act(async () => { const { rerender: rerenderFn } = renderHook( ({ key = "no_subscribe_a" }: { key?: string } = {}) => { const row = useQueryTakeFirst( (lix) => qb(lix) .selectFrom("lix_key_value") .selectAll() .where("key", "=", key), { subscribe: false }, ); emissions.push(row?.key); return row; }, { wrapper }, ); rerender = rerenderFn; }); await waitFor(() => { expect(emissions).toContain("no_subscribe_a"); }); emissions.length = 0; await act(async () => { rerender({ key: "no_subscribe_b" }); }); await waitFor(() => { expect(emissions).toContain("no_subscribe_b"); }); // The first emission after switching should be the new key, not the previous one. 
expect(emissions[0]).toBe("no_subscribe_b"); await lix.close(); }); test("useQueryTakeFirst (subscribe:false) returns fresh data on rerender", async () => { const lix = await openLix({}); await qb(lix) .insertInto("lix_key_value") .values([ { key: "memo_a", value: "value_a" }, { key: "memo_b", value: "value_b" }, ]) .execute(); const wrapper = ({ children }: { children: React.ReactNode }) => ( Loading…}> {children} ); const seenKeys: Array = []; const hook = await act(async () => renderHook( ({ lookup = "memo_a" }: { lookup?: string } = {}) => { const row = useQueryTakeFirst( (lix) => qb(lix) .selectFrom("lix_key_value") .selectAll() .where("key", "=", lookup), { subscribe: false }, ); if (row?.key) seenKeys.push(row.key); return row; }, { wrapper }, ), ); const { rerender, unmount } = hook; await waitFor(() => { expect(seenKeys.length).toBeGreaterThan(0); }); seenKeys.length = 0; await act(async () => { rerender({ lookup: "memo_b" }); }); await waitFor(() => { expect(seenKeys.length).toBeGreaterThan(0); }); expect(seenKeys[0]).toBe("memo_b"); unmount(); await lix.close(); }); test("akeFirst updates reference when underlying row changes", async () => { const lix = await openLix({}); const rowKey = "react_first_ref"; await qb(lix) .insertInto("lix_key_value") .values({ key: rowKey, value: "initial" }) .execute(); const wrapper = ({ children }: { children: React.ReactNode }) => ( Loading...}> {children} ); let hookResult: { current: KeyValueRow | undefined }; await act(async () => { const { result } = renderHook( () => useQueryTakeFirst((lix) => qb(lix) .selectFrom("lix_key_value") .selectAll() .where("key", "=", rowKey), ), { wrapper }, ); hookResult = result; }); await waitFor(() => { expect(hookResult!.current?.value).toBe("initial"); }); const initialRef = hookResult!.current; await act(async () => { await lix.execute("DELETE FROM lix_key_value WHERE key = ?1", [rowKey]); await qb(lix) .insertInto("lix_key_value") .values({ key: rowKey, value: "updated" }) .execute(); }); await waitFor(() => { expect(hookResult!.current?.value).toBe("updated"); expect(hookResult!.current).not.toBe(initialRef); }); await lix.close(); }); test("useQuery key includes lix instance (no cross-instance reuse)", async () => { const lix1 = await openLix({}); const lix2 = await openLix({}); await qb(lix1) .insertInto("lix_key_value") .values({ key: "shared_key", value: "instance_one" }) .execute(); await qb(lix2) .insertInto("lix_key_value") .values({ key: "shared_key", value: "instance_two" }) .execute(); let current = lix1; const wrapper = ({ children }: { children: React.ReactNode }) => ( Loading…}> {children} ); let hookResult: { current: KeyValueRow[] }; let rerender: () => void; await act(async () => { const { result, rerender: rerenderFn } = renderHook( () => useQuery((lix) => qb(lix) .selectFrom("lix_key_value") .selectAll() .where("key", "=", "shared_key"), ), { wrapper }, ); hookResult = result; rerender = rerenderFn; }); await waitFor(() => { expect(hookResult.current[0]?.value).toBe("instance_one"); }); await act(async () => { current = lix2; rerender(); }); await waitFor(() => { expect(hookResult.current[0]?.value).toBe("instance_two"); }); await lix1.close(); await lix2.close(); }); test("akeFirst re-emits when aggregate result returns to the initial value", async () => { const lix = await openLix({}); const key = "agg_count_test"; const wrapper = ({ children }: { children: React.ReactNode }) => ( Loading...}> {children} ); let hookResult: { current: KeyValueRow[] } | undefined; await act(async () => 
{ const { result } = renderHook( () => useQuery((lix) => qb(lix) .selectFrom("lix_key_value") .selectAll() .where("key", "=", key), ), { wrapper }, ); hookResult = result; }); await waitFor(() => { expect(hookResult!.current).toHaveLength(0); }); await act(async () => { await qb(lix) .insertInto("lix_key_value") .values({ key, value: "v1" }) .execute(); }); await waitFor(() => { expect(hookResult!.current).toHaveLength(1); }); await act(async () => { await lix.execute("DELETE FROM lix_key_value WHERE key = ?1", [key]); }); await waitFor(() => { expect(hookResult!.current).toHaveLength(0); }); await lix.close(); }); test("return type is properly typed", async () => { const lix = await openLix({}); const wrapper = ({ children }: { children: React.ReactNode }) => ( Loading...}> {children} ); let hookResult: { current: KeyValueRow[] } | undefined; await act(async () => { const { result } = renderHook( () => { const data = useQuery((lix) => qb(lix).selectFrom("lix_key_value").selectAll(), ); return data; }, { wrapper }, ); hookResult = result; }); // Wait for data to be available await waitFor(() => { expect(hookResult).toBeDefined(); expect(Array.isArray(hookResult!.current)).toBe(true); }); // Type test: data should be properly typed as an array of KeyValue // This should pass without any type errors if the types are working correctly hookResult!.current satisfies KeyValueRow[]; await lix.close(); }); test("error handling with ErrorBoundary", async () => { const lix = await openLix({}); let caught: Error | undefined; // Suppress console errors for this test const originalError = console.error; console.error = () => {}; const wrapper = ({ children }: { children: React.ReactNode }) => ( Loading…}> (caught = e)}> {children} ); await act(async () => { renderHook( () => useQuery((lix) => // invalid table: will reject then throw qb(lix) .selectFrom("non_existent_table" as never) .selectAll(), ), { wrapper }, ); }); await waitFor(() => { expect(caught).toBeDefined(); }); const caughtMessage = caught instanceof Error ? caught.message : caught ? 
String(caught) : ""; expect(caughtMessage).toMatch(/no such table|non_existent_table/i); // Restore console.error console.error = originalError; await lix.close(); }); test("akeFirstOrThrow returns data when result exists", async () => { const lix = await openLix({}); // Insert test data await qb(lix) .insertInto("lix_key_value") .values({ key: "throw_test", value: "exists" }) .execute(); const wrapper = ({ children }: { children: React.ReactNode }) => ( Loading...}> {children} ); let hookResult: { current: KeyValueRow }; await act(async () => { const { result } = renderHook( () => { const data = useQueryTakeFirstOrThrow((lix) => qb(lix) .selectFrom("lix_key_value") .selectAll() .where("key", "=", "throw_test"), ); return data; }, { wrapper }, ); hookResult = result; }); await waitFor(() => { expect(hookResult.current).toMatchObject({ key: "throw_test", value: "exists", }); }); await lix.close(); }); test("akeFirstOrThrow throws when no result found", async () => { const lix = await openLix({}); let caught: Error | undefined; // Suppress console errors for this test const originalError = console.error; console.error = () => {}; const wrapper = ({ children }: { children: React.ReactNode }) => ( Loading...}> (caught = e)}> {children} ); await act(async () => { renderHook( () => { const data = useQueryTakeFirstOrThrow((lix) => qb(lix) .selectFrom("lix_key_value") .selectAll() .where("key", "=", "does_not_exist"), ); return data; }, { wrapper }, ); }); await waitFor(() => { expect(caught).toBeDefined(); expect(caught!.message).toBe("No result found"); }); // Restore console.error console.error = originalError; await lix.close(); }); test("re-executes when query function changes (dependency array fix)", async () => { const lix = await openLix({}); // Insert test data with different prefixes await qb(lix) .insertInto("lix_key_value") .values([ { key: "prefix_a_1", value: "value_a_1" }, { key: "prefix_a_2", value: "value_a_2" }, { key: "prefix_b_1", value: "value_b_1" }, { key: "prefix_b_2", value: "value_b_2" }, ]) .execute(); const wrapper = ({ children }: { children: React.ReactNode }) => ( Loading...}> {children} ); // State to control which prefix to query let hookResult: { current: KeyValueRow[] }; let rerender: (props?: { prefix: string }) => void; await act(async () => { const { result, rerender: rerenderFn } = renderHook( ({ prefix = "prefix_a" }: { prefix?: string } = {}) => { // Create a new query function each time prefix changes const data = useQuery((lix) => qb(lix) .selectFrom("lix_key_value") .selectAll() .where("key", "like", `${prefix}_%`) .orderBy("key", "asc"), ); return data; }, { wrapper, initialProps: { prefix: "prefix_a" }, }, ); hookResult = result; rerender = rerenderFn; }); // Wait for initial data (prefix_a results) await waitFor(() => { expect(hookResult.current).toHaveLength(2); expect(hookResult.current[0]).toMatchObject({ key: "prefix_a_1", value: "value_a_1", }); expect(hookResult.current[1]).toMatchObject({ key: "prefix_a_2", value: "value_a_2", }); }); // Change the query prefix - this should trigger a re-execution await act(async () => { rerender({ prefix: "prefix_b" }); }); // Wait for the query to re-execute with new prefix await waitFor(() => { expect(hookResult.current).toHaveLength(2); expect(hookResult.current[0]).toMatchObject({ key: "prefix_b_1", value: "value_b_1", }); expect(hookResult.current[1]).toMatchObject({ key: "prefix_b_2", value: "value_b_2", }); }); await lix.close(); }); test("useQuery with subscribe: false executes once without live 
updates", async () => { const lix = await openLix({}); // Insert initial test data await qb(lix) .insertInto("lix_key_value") .values({ key: "once_test", value: "initial" }) .execute(); const wrapper = ({ children }: { children: React.ReactNode }) => ( Loading...}> {children} ); let hookResult: { current: KeyValueRow[] }; await act(async () => { const { result } = renderHook( () => { const data = useQuery( (lix) => qb(lix) .selectFrom("lix_key_value") .selectAll() .where("key", "=", "once_test"), { subscribe: false }, ); return data; }, { wrapper }, ); hookResult = result; }); // Wait for initial data await waitFor(() => { expect(hookResult.current).toHaveLength(1); expect(hookResult.current[0]).toMatchObject({ key: "once_test", value: "initial", }); }); // Update the data in the database await act(async () => { await qb(lix) .updateTable("lix_key_value") .set({ value: "updated" }) .where("key", "=", "once_test") .execute(); }); // Give some time for potential updates (there shouldn't be any) await new Promise((resolve) => setTimeout(resolve, 100)); // Data should NOT have updated because subscribe: false expect(hookResult!.current).toHaveLength(1); expect(hookResult!.current[0]).toMatchObject({ key: "once_test", value: "initial", // Still the initial value }); await lix.close(); }); test("useQuery subscription updates when query dependencies change", async () => { const lix = await openLix({}); // Insert initial test data await qb(lix) .insertInto("lix_key_value") .values([ { key: "sub_test_a_1", value: "initial_a" }, { key: "sub_test_b_1", value: "initial_b" }, ]) .execute(); const wrapper = ({ children }: { children: React.ReactNode }) => ( Loading...}> {children} ); let hookResult: { current: KeyValueRow[] }; let rerender: (props?: { filter: string }) => void; await act(async () => { const { result, rerender: rerenderFn } = renderHook( ({ filter = "sub_test_a" }: { filter?: string } = {}) => { const data = useQuery((lix) => qb(lix) .selectFrom("lix_key_value") .selectAll() .where("key", "like", `${filter}_%`), ); return data; }, { wrapper, initialProps: { filter: "sub_test_a" }, }, ); hookResult = result; rerender = rerenderFn; }); // Verify initial subscription works await waitFor(() => { expect(hookResult.current).toHaveLength(1); expect(hookResult.current[0]?.key).toBe("sub_test_a_1"); }); // Switch to different filter - new subscription should be created await act(async () => { rerender({ filter: "sub_test_b" }); }); // Verify new subscription works await waitFor(() => { expect(hookResult.current).toHaveLength(1); expect(hookResult.current[0]?.key).toBe("sub_test_b_1"); }); // Insert new data that matches the current filter await act(async () => { await qb(lix) .insertInto("lix_key_value") .values({ key: "sub_test_b_2", value: "new_b" }) .execute(); }); // The subscription should pick up the new data await waitFor(() => { expect(hookResult.current).toHaveLength(2); expect(hookResult.current.some((item) => item.key === "sub_test_b_2")).toBe( true, ); }); await lix.close(); }); test("identical useQuery subscriptions share observe event payloads", async () => { const lix = await openLix({}); const key = "shared_subscriptions_engine_key"; await qb(lix) .insertInto("lix_key_value") .values({ key, value: "before" }) .execute(); const wrapper = ({ children }: { children: React.ReactNode }) => ( Loading...}> {children} ); let hookResult: | { current: { left: KeyValueRow[]; right: KeyValueRow[]; }; } | undefined; await act(async () => { const { result } = renderHook( () => { const left = 
useQuery((lix) => qb(lix) .selectFrom("lix_key_value") .select([ "key", "value", sql`CAST(random() AS TEXT)`.as("nonce"), ]) .where("key", "=", key), ); const right = useQuery((lix) => qb(lix) .selectFrom("lix_key_value") .select([ "key", "value", sql`CAST(random() AS TEXT)`.as("nonce"), ]) .where("key", "=", key), ); return { left, right }; }, { wrapper }, ); hookResult = result; }); await waitFor(() => { expect(hookResult!.current.left).toHaveLength(1); expect(hookResult!.current.right).toHaveLength(1); }); await act(async () => { await qb(lix) .updateTable("lix_key_value") .set({ value: "after" }) .where("key", "=", key) .execute(); }); await waitFor(() => { expect(hookResult!.current.left[0]?.value).toBe("after"); expect(hookResult!.current.right[0]?.value).toBe("after"); }); expect(String(hookResult!.current.left[0]?.nonce)).toBe( String(hookResult!.current.right[0]?.nonce), ); await lix.close(); }); test("useQuery refreshes when lix instance is switched", async () => { const switchKey = "switch_instance_value"; // Create two separate lix instances const lix1 = await openLix({}); const lix2 = await openLix({}); await qb(lix1) .insertInto("lix_key_value") .values({ key: switchKey, value: "instance_one" }) .execute(); await qb(lix2) .insertInto("lix_key_value") .values({ key: switchKey, value: "instance_two" }) .execute(); // Check that they have different values for the same key const lix1IdDirect = await qb(lix1) .selectFrom("lix_key_value") .selectAll() .where("key", "=", switchKey) .executeTakeFirst(); const lix2IdDirect = await qb(lix2) .selectFrom("lix_key_value") .selectAll() .where("key", "=", switchKey) .executeTakeFirst(); // Ensure the test is valid - the two instances should have different values expect(lix1IdDirect?.value).not.toBe(lix2IdDirect?.value); // Use a state variable to control which lix instance is used let currentLix = lix1; // Wrapper function that uses the current lix const TestComponent = () => { const data = useQuery((lix) => qb(lix) .selectFrom("lix_key_value") .selectAll() .where("key", "=", switchKey), ); return data; }; const wrapper = ({ children }: { children: React.ReactNode }) => ( Loading...}> {children} ); let hookResult: { current: KeyValueRow[] }; let rerender: () => void; await act(async () => { const { result, rerender: rerenderFn } = renderHook(() => TestComponent(), { wrapper, }); hookResult = result; rerender = rerenderFn; }); // Verify we get data from lix1 await waitFor(() => { expect(hookResult.current).toHaveLength(1); expect(hookResult.current[0]?.key).toBe(switchKey); }); // Store the initial lix_id value const lix1Id = hookResult!.current[0]?.value; // Switch to lix2 by changing the current lix and rerendering await act(async () => { currentLix = lix2; rerender(); }); // Verify the query refreshes and we now get data from lix2 await waitFor(() => { expect(hookResult.current).toHaveLength(1); expect(hookResult.current[0]?.key).toBe(switchKey); // The lix_id value should be different from lix1 expect(hookResult.current[0]?.value).not.toBe(lix1Id); }); await lix1.close(); await lix2.close(); }); ================================================ FILE: packages/react-utils/src/hooks/use-query.ts ================================================ import { useContext, useEffect, useState, use } from "react"; import type { Lix } from "@lix-js/sdk"; import { LixContext } from "../provider.js"; // Map to cache promises by query key const queryPromiseCache = new Map>(); const lixInstanceIds = new WeakMap(); let nextLixInstanceId = 1; interface 
UseQueryOptions { subscribe?: boolean; } // Query factory receives a lix instance and returns a compilable+executable query. interface QueryLike { compile(): { sql: string; parameters: ReadonlyArray; }; execute(): Promise; } type QueryFactory = (lix: Lix) => QueryLike; /** * Subscribe to a live query using React 19 Suspense. * * The hook suspends on first render and re-suspends whenever its SQL changes, * so wrap consuming components with React Suspense and an ErrorBoundary. * * @param query - Factory function that creates a compiled+executable query object. Preferred shape: `(lix) => qb(lix).selectFrom(...)`. * @param options - Optional configuration * @param options.subscribe - Whether to subscribe to live updates (default: true) * * @example * // Basic list * function KeyValueList() { * const keyValues = useQuery((lix) => * qb(lix).selectFrom('lix_key_value') * .where('key', 'like', 'example_%') * .selectAll() * ); * return ( *
  <ul>
 *     {keyValues.map(item => (
 *       <li key={item.key}>{item.key}: {item.value}</li>
 *     ))}
 *   </ul>
* ); * } * * @example * // With Suspense + ErrorBoundary * import { Suspense } from 'react'; * import { ErrorBoundary } from 'react-error-boundary'; * * function App() { * return (
 *     <Suspense fallback={<div>Loading…</div>}>
 *       <ErrorBoundary fallback={<div>Failed to load.</div>}>
 *         <KeyValueList />
 *       </ErrorBoundary>
 *     </Suspense>
* ); * } * * @example * // One-time query without live updates * const config = useQuery( * (lix) => qb(lix).selectFrom('config').selectAll(), * { subscribe: false } * ); */ export function useQuery( query: QueryFactory, options: UseQueryOptions = {}, ): TRow[] { const lix = useContext(LixContext); if (!lix) throw new Error("useQuery must be used inside ."); const { subscribe = true } = options; const builder = query(lix); const compiled = builder.compile(); const observeQuery = { sql: compiled.sql, params: [...compiled.parameters] as any, }; const cacheKey = `${getLixInstanceId(lix)}:${subscribe ? "sub" : "once"}:` + `${compiled.sql}:${JSON.stringify(compiled.parameters)}`; // Get or create promise. Cache key includes parameters so different queries // resolve independently while reuse avoids duplicating in-flight requests. const cached = queryPromiseCache.get(cacheKey) as Promise | undefined; const promise: Promise = cached ?? (() => { const p = builder.execute() as Promise; queryPromiseCache.set(cacheKey, p); return p; })(); // Use the promise (suspends on first render) const initialRows = use(promise); // Local state for updates const [rows, setRows] = useState(initialRows); useEffect(() => { setRows(initialRows); }, [cacheKey]); // Subscribe for ongoing updates (only if subscribe is true) useEffect(() => { if (!subscribe) return; let closed = false; const events = lix.observe(observeQuery); void (async () => { try { while (!closed) { const event = await events.next(); if (closed || event === undefined) { break; } const nextRows = queryResultToRows(event.rows); setRows(nextRows); } } catch (err) { if (closed) { return; } // Clear promise to allow retry queryPromiseCache.delete(cacheKey); // Surface error to ErrorBoundary setRows(() => { throw err instanceof Error ? err : new Error(String(err)); }); } })(); return () => { closed = true; events.close(); }; }, [cacheKey, subscribe, lix]); if (!subscribe) { return initialRows; } return rows; } function queryResultToRows(result: { rows?: ReadonlyArray>; columns?: ReadonlyArray; }): TRow[] { const columns = Array.isArray(result?.columns) ? result.columns : []; const rows = Array.isArray(result?.rows) ? result.rows : []; return rows.map((row) => { const output: Record = {}; for (let index = 0; index < columns.length; index += 1) { const column = columns[index]; if (typeof column !== "string") { continue; } output[column] = row[index]; } return output as TRow; }); } function getLixInstanceId(lix: Lix): number { const asObject = lix as object; const cached = lixInstanceIds.get(asObject); if (cached !== undefined) { return cached; } const next = nextLixInstanceId++; lixInstanceIds.set(asObject, next); return next; } /* ------------------------------------------------------------------------- */ /* Optional single-row helper */ /* ------------------------------------------------------------------------- */ /** * Subscribe to a live query and return only the first result inside React. * Equivalent to calling `.executeTakeFirst()` on a Kysely query. * * @example * ```tsx * function ExampleComponent({ itemId }: { itemId: string }) { * const item = useQueryTakeFirst((lix) => * qb(lix).selectFrom('lix_key_value') * .where('key', '=', `example_${itemId}`) * .selectAll() * ); * * // No loading/error states needed - Suspense and ErrorBoundary handle them * if (!item) { * return
<div>Item not found</div>
; * } * * return
<div>Value: {item.value}</div>
; * } * * // Wrap with Suspense and ErrorBoundary: * Loading...}> * Error occurred}> * * * * ``` */ export const useQueryTakeFirst = ( query: QueryFactory, options: UseQueryOptions = {}, ): TResult | undefined => { const rows = useQuery(query, options); return rows[0] as TResult | undefined; }; /** * Subscribe to a live query and return only the first result inside React. * Throws an error if no result is found. * * @param query - Factory function that creates a Kysely SelectQueryBuilder * * @throws Error if no result is found * * @example * ```tsx * function ExampleDetail({ itemId }: { itemId: string }) { * const item = useQueryTakeFirstOrThrow((lix) => * qb(lix).selectFrom('lix_key_value') * .where('key', '=', `example_${itemId}`) * .selectAll() * ); * * // No need to check for undefined - will throw to ErrorBoundary if not found * return
<div>Value: {item.value}</div>
; * } * * // Wrap with Suspense and ErrorBoundary: * Loading...}> * Item not found}> * * * * ``` */ export const useQueryTakeFirstOrThrow = ( query: QueryFactory, options: UseQueryOptions = {}, ): TResult => { const data = useQueryTakeFirst(query, options); if (data === undefined) throw new Error("No result found"); return data; }; ================================================ FILE: packages/react-utils/src/index.ts ================================================ export { LixProvider, LixContext } from "./provider.js"; export { useQuery, useQueryTakeFirst, useQueryTakeFirstOrThrow, } from "./hooks/use-query.js"; export { useLix } from "./hooks/use-lix.js"; ================================================ FILE: packages/react-utils/src/provider.tsx ================================================ import { createContext, type ReactNode } from "react"; import type { Lix } from "@lix-js/sdk"; export const LixContext = createContext(null); export function LixProvider(props: { lix: Lix; children: ReactNode }) { return ( {props.children} ); } ================================================ FILE: packages/react-utils/test-setup.ts ================================================ import { Blob as BlobPolyfill } from "node:buffer"; // https://github.com/jsdom/jsdom/issues/2555#issuecomment-1864762292 global.Blob = BlobPolyfill as any; ================================================ FILE: packages/react-utils/tsconfig.json ================================================ { "include": [ "src/**/*" ], "compilerOptions": { "skipDefaultLibCheck": true, "emitDeclarationOnly": false, "experimentalDecorators": true, "emitDecoratorMetadata": true, "useDefineForClassFields": false, "lib": [ "ESNext", "DOM" ], "outDir": "./dist", "rootDir": "./src", "esModuleInterop": true, "skipLibCheck": true, "forceConsistentCasingInFileNames": true, "jsx": "react-jsx", "sourceMap": true, "module": "Node16", "moduleResolution": "Node16", "target": "ES2022", "allowSyntheticDefaultImports": true, "resolveJsonModule": false, "declaration": true, "strict": true, "checkJs": true, "verbatimModuleSyntax": true, "noUncheckedIndexedAccess": true, "declarationMap": true, "noImplicitAny": true, "noImplicitReturns": true, "noFallthroughCasesInSwitch": true, "noImplicitOverride": true, "allowUnreachableCode": false } } ================================================ FILE: packages/react-utils/vitest.config.ts ================================================ import { defineConfig } from 'vitest/config'; export default defineConfig({ test: { environment: 'jsdom', setupFiles: ['./test-setup.ts'], }, }); ================================================ FILE: packages/rs-sdk/Cargo.toml ================================================ [package] name = "lix_rs_sdk" version = "0.1.0" edition = "2021" [dependencies] lix_engine = { path = "../engine" } async-trait = "0.1" [dev-dependencies] tokio = { version = "1", features = ["rt", "macros"] } ================================================ FILE: packages/rs-sdk/src/in_memory_backend.rs ================================================ use std::collections::BTreeMap; use std::sync::{Arc, Mutex}; use async_trait::async_trait; use lix_engine::{ Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction, BytePageBuilder, LixError, }; type KvKey = 
(String, Vec); type KvMap = BTreeMap>; #[derive(Debug, Clone, Default)] pub(crate) struct InMemoryBackend { kv: Arc>, } impl InMemoryBackend { pub(crate) fn new() -> Self { Self::default() } } #[async_trait] impl Backend for InMemoryBackend { async fn begin_read_transaction( &self, ) -> Result, LixError> { let snapshot = self .kv .lock() .map_err(|_| lock_error("rs-sdk in-memory backend kv"))? .clone(); Ok(Box::new(InMemoryReadTransaction { kv: snapshot })) } async fn begin_write_transaction( &self, ) -> Result, LixError> { let snapshot = self .kv .lock() .map_err(|_| lock_error("rs-sdk in-memory backend kv"))? .clone(); Ok(Box::new(InMemoryWriteTransaction { parent: Arc::clone(&self.kv), kv: snapshot, })) } } struct InMemoryReadTransaction { kv: KvMap, } #[async_trait] impl BackendReadTransaction for InMemoryReadTransaction { async fn get_values( &mut self, request: BackendKvGetRequest, ) -> Result { Ok(get_values_from_map(&self.kv, request)) } async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result { Ok(exists_many_from_map(&self.kv, request)) } async fn scan_keys( &mut self, request: BackendKvScanRequest, ) -> Result { Ok(scan_map_keys(&self.kv, request)) } async fn scan_values( &mut self, request: BackendKvScanRequest, ) -> Result { Ok(scan_map_values(&self.kv, request)) } async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> Result { Ok(scan_map_entries(&self.kv, request)) } async fn rollback(self: Box) -> Result<(), LixError> { Ok(()) } } struct InMemoryWriteTransaction { parent: Arc>, kv: KvMap, } #[async_trait] impl BackendReadTransaction for InMemoryWriteTransaction { async fn get_values( &mut self, request: BackendKvGetRequest, ) -> Result { Ok(get_values_from_map(&self.kv, request)) } async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result { Ok(exists_many_from_map(&self.kv, request)) } async fn scan_keys( &mut self, request: BackendKvScanRequest, ) -> Result { Ok(scan_map_keys(&self.kv, request)) } async fn scan_values( &mut self, request: BackendKvScanRequest, ) -> Result { Ok(scan_map_values(&self.kv, request)) } async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> Result { Ok(scan_map_entries(&self.kv, request)) } async fn rollback(self: Box) -> Result<(), LixError> { Ok(()) } } #[async_trait] impl BackendWriteTransaction for InMemoryWriteTransaction { async fn write_kv_batch( &mut self, batch: BackendKvWriteBatch, ) -> Result { let mut stats = BackendKvWriteStats::default(); for group in batch.groups { let namespace = group.namespace().to_string(); for index in 0..group.put_count() { let key = group.put_key(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put key") })?; let value = group.put_value(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put value") })?; stats.puts += 1; stats.bytes_written += key.len() + value.len(); self.kv .insert((namespace.clone(), key.to_vec()), value.to_vec()); } for index in 0..group.delete_count() { let key = group.delete_key(index).ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "backend write batch missing delete key", ) })?; stats.deletes += 1; stats.bytes_written += key.len(); self.kv.remove(&(namespace.clone(), key.to_vec())); } } Ok(stats) } async fn commit(self: Box) -> Result<(), LixError> { *self .parent .lock() .map_err(|_| lock_error("rs-sdk in-memory backend kv"))? 
= self.kv; Ok(()) } } fn get_values_from_map(kv: &KvMap, request: BackendKvGetRequest) -> BackendKvValueBatch { let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let namespace = group.namespace.clone(); let mut values = BytePageBuilder::with_capacity(group.keys.len(), 0); let mut present = Vec::with_capacity(group.keys.len()); for key in group.keys { if let Some(value) = kv.get(&(namespace.clone(), key)) { values.push(value); present.push(true); } else { values.push([]); present.push(false); } } groups.push(BackendKvValueGroup::new( namespace, values.finish(), present, )); } BackendKvValueBatch { groups } } fn exists_many_from_map(kv: &KvMap, request: BackendKvGetRequest) -> BackendKvExistsBatch { let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let namespace = group.namespace.clone(); let exists = group .keys .into_iter() .map(|key| kv.contains_key(&(namespace.clone(), key))) .collect(); groups.push(BackendKvExistsGroup { namespace, exists }); } BackendKvExistsBatch { groups } } fn scan_map_keys(kv: &KvMap, request: BackendKvScanRequest) -> BackendKvKeyPage { let pairs = scan_filtered_pairs(kv, &request); let has_more = pairs.len() > request.limit; let mut keys = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0); let mut resume_after = None; for (index, (key, _)) in pairs.into_iter().enumerate() { if index >= request.limit { break; } resume_after = Some(key.clone()); keys.push(key); } let resume_after = has_more.then_some(resume_after).flatten(); BackendKvKeyPage { keys: keys.finish(), resume_after, } } fn scan_map_values(kv: &KvMap, request: BackendKvScanRequest) -> BackendKvValuePage { let pairs = scan_filtered_pairs(kv, &request); let has_more = pairs.len() > request.limit; let mut values = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0); let mut resume_after = None; for (index, (key, value)) in pairs.into_iter().enumerate() { if index >= request.limit { break; } resume_after = Some(key.clone()); values.push(value); } let resume_after = has_more.then_some(resume_after).flatten(); BackendKvValuePage { values: values.finish(), resume_after, } } fn scan_map_entries(kv: &KvMap, request: BackendKvScanRequest) -> BackendKvEntryPage { let pairs = scan_filtered_pairs(kv, &request); let has_more = pairs.len() > request.limit; let mut keys = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0); let mut values = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0); let mut resume_after = None; for (index, (key, value)) in pairs.into_iter().enumerate() { if index >= request.limit { break; } resume_after = Some(key.clone()); keys.push(key); values.push(value); } let resume_after = has_more.then_some(resume_after).flatten(); BackendKvEntryPage { keys: keys.finish(), values: values.finish(), resume_after, } } fn scan_filtered_pairs<'a>( kv: &'a KvMap, request: &BackendKvScanRequest, ) -> Vec<(&'a Vec, &'a Vec)> { let scan_limit = request .limit .checked_add(1 + usize::from(request.after.is_some())) .unwrap_or(request.limit); let mut pairs = kv .iter() .filter(|((candidate_namespace, key), _)| { candidate_namespace == &request.namespace && key_matches_range(key, &request.range) }) .filter(|((_, key), _)| { request .after .as_deref() .is_none_or(|after| key.as_slice() > after) }) .collect::>(); pairs.sort_by(|left, right| left.0 .1.cmp(&right.0 .1)); pairs.truncate(scan_limit); pairs .into_iter() .filter(|((_, key), _)| { request .after .as_deref() 
.is_none_or(|after| key.as_slice() > after) }) .map(|((_, key), value)| (key, value)) .collect() } fn key_matches_range(key: &[u8], range: &BackendKvScanRange) -> bool { match range { BackendKvScanRange::Prefix(prefix) => key.starts_with(prefix), BackendKvScanRange::Range { start, end } => start.as_slice() <= key && key < end.as_slice(), } } fn lock_error(name: &str) -> LixError { LixError::new("LIX_ERROR_UNKNOWN", format!("{name} mutex was poisoned")) } ================================================ FILE: packages/rs-sdk/src/lib.rs ================================================ //! Rust SDK for Lix. //! //! The public API mirrors `@lix-js/sdk`: `open_lix()` opens the workspace //! session, and the returned [`Lix`] handle owns the small application-facing //! surface. mod in_memory_backend; mod lix; pub use lix::{open_lix, Lix, OpenLixOptions}; pub use lix_engine::{ Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetGroup, BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteGroup, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction, BytePage, BytePageBuilder, CreateVersionOptions, CreateVersionReceipt as CreateVersionResult, ExecuteResult, LixError, LixNotice, MergeChangeStats, MergeConflict, MergeConflictChangeKind, MergeConflictKind, MergeConflictSide, MergeVersionOptions, MergeVersionOutcome, MergeVersionPreview, MergeVersionPreviewOptions, MergeVersionReceipt as MergeVersionResult, Row, SqlQueryResult, SwitchVersionOptions, SwitchVersionReceipt as SwitchVersionResult, TryFromValue, Value, }; ================================================ FILE: packages/rs-sdk/src/lix.rs ================================================ use std::sync::atomic::{AtomicBool, Ordering}; use std::sync::Arc; use async_trait::async_trait; use lix_engine::{ Backend, BackendReadTransaction, BackendWriteTransaction, CreateVersionOptions, CreateVersionReceipt as CreateVersionResult, Engine, ExecuteResult, LixError, MergeVersionOptions, MergeVersionPreview, MergeVersionPreviewOptions, MergeVersionReceipt as MergeVersionResult, SessionContext, SwitchVersionOptions, SwitchVersionReceipt as SwitchVersionResult, Value, }; use crate::in_memory_backend::InMemoryBackend; /// Options for opening a Lix workspace session. #[derive(Default)] pub struct OpenLixOptions { pub backend: Option>, } /// Workspace-session handle for a Lix repository. pub struct Lix { _engine: Engine, session: SessionContext, backend: SharedBackend, backend_closed: AtomicBool, } /// Opens a Lix workspace session. /// /// If `options.backend` is omitted, a fresh in-memory backend is used. If a /// backend is supplied, it is opened when already initialized and initialized /// first when empty. pub async fn open_lix(options: OpenLixOptions) -> Result { let backend: Box = options .backend .unwrap_or_else(|| Box::new(InMemoryBackend::new())); let backend = SharedBackend::new(backend); let engine = open_or_initialize_engine(&backend).await?; let session = engine.open_workspace_session().await?; Ok(Lix { _engine: engine, session, backend, backend_closed: AtomicBool::new(false), }) } impl Lix { /// Executes one DataFusion SQL statement against this Lix session. /// /// The SQL dialect is DataFusion SQL, not SQLite SQL. Positional /// placeholders use `$1`, `$2`, and so on. 
SQLite-specific catalog tables /// and transaction statements such as `sqlite_master`, `BEGIN`, and /// `COMMIT` are not part of this contract; use `information_schema` for /// catalog inspection. Lix owns transaction boundaries for each statement. pub async fn execute(&self, sql: &str, params: &[Value]) -> Result { self.session.execute(sql, params).await } pub async fn active_version_id(&self) -> Result { self.session.active_version_id().await } pub async fn create_version( &self, options: CreateVersionOptions, ) -> Result { self.session.create_version(options).await } pub async fn switch_version( &self, options: SwitchVersionOptions, ) -> Result { let (_session, receipt) = self.session.switch_version(options).await?; Ok(receipt) } pub async fn merge_version( &self, options: MergeVersionOptions, ) -> Result { self.session.merge_version(options).await } pub async fn merge_version_preview( &self, options: MergeVersionPreviewOptions, ) -> Result { self.session.merge_version_preview(options).await } pub async fn close(&self) -> Result<(), LixError> { self.session.close().await?; if !self.backend_closed.swap(true, Ordering::SeqCst) { self.backend.close().await?; } Ok(()) } } async fn open_or_initialize_engine(backend: &SharedBackend) -> Result { match Engine::new(Box::new(backend.clone())).await { Ok(engine) => Ok(engine), Err(error) if error.code == "LIX_ERROR_NOT_INITIALIZED" => { Engine::initialize(Box::new(backend.clone())).await?; Engine::new(Box::new(backend.clone())).await } Err(error) => Err(error), } } #[derive(Clone)] struct SharedBackend { inner: Arc, } impl SharedBackend { fn new(backend: Box) -> Self { Self { inner: Arc::from(backend), } } } #[async_trait] impl Backend for SharedBackend { async fn begin_read_transaction( &self, ) -> Result, LixError> { self.inner.begin_read_transaction().await } async fn begin_write_transaction( &self, ) -> Result, LixError> { self.inner.begin_write_transaction().await } async fn destroy(&self) -> Result<(), LixError> { self.inner.destroy().await } async fn close(&self) -> Result<(), LixError> { self.inner.close().await } } ================================================ FILE: packages/rs-sdk/tests/e2e.rs ================================================ use std::collections::BTreeMap; use std::sync::{Arc, Mutex}; use async_trait::async_trait; use lix_rs_sdk::{ open_lix, Backend, BackendKvEntryPage, BackendKvExistsBatch, BackendKvExistsGroup, BackendKvGetRequest, BackendKvKeyPage, BackendKvScanRange, BackendKvScanRequest, BackendKvValueBatch, BackendKvValueGroup, BackendKvValuePage, BackendKvWriteBatch, BackendKvWriteStats, BackendReadTransaction, BackendWriteTransaction, BytePageBuilder, CreateVersionOptions, LixError, MergeVersionOptions, MergeVersionOutcome, OpenLixOptions, SwitchVersionOptions, Value, }; #[tokio::test] async fn rs_sdk_open_register_write_query_version_and_merge_flow() { let lix = open_lix(OpenLixOptions::default()).await.unwrap(); let main_version_id = lix.active_version_id().await.unwrap(); register_crm_task_schema(&lix).await; lix.execute( "INSERT INTO crm_task (id, title, done, meta) VALUES ($1, $2, $3, lix_json($4))", &[ Value::Text("task-1".to_string()), Value::Text("Draft RS SDK flow".to_string()), Value::Boolean(false), Value::Text(r#"{"priority":"high","tags":["sdk","json"]}"#.to_string()), ], ) .await .unwrap(); let projected = lix .execute( "SELECT title, done, meta, lixcol_snapshot_content FROM crm_task WHERE id = $1", &[Value::Text("task-1".to_string())], ) .await .unwrap(); assert_crm_task_projection(&projected); 
assert_eq!(task_done(&lix, "task-1").await, false); let draft = lix .create_version(CreateVersionOptions { id: Some("draft-version".to_string()), name: "Draft".to_string(), from_commit_id: None, }) .await .unwrap(); assert_eq!(draft.id, "draft-version"); assert_eq!(draft.name, "Draft"); assert!(!draft.hidden); lix.switch_version(SwitchVersionOptions { version_id: draft.id.clone(), }) .await .unwrap(); lix.execute( "UPDATE crm_task SET done = $1 WHERE id = $2", &[Value::Boolean(true), Value::Text("task-1".to_string())], ) .await .unwrap(); assert_eq!(task_done(&lix, "task-1").await, true); lix.switch_version(SwitchVersionOptions { version_id: main_version_id.clone(), }) .await .unwrap(); assert_eq!(task_done(&lix, "task-1").await, false); let merge = lix .merge_version(MergeVersionOptions { source_version_id: draft.id, }) .await .unwrap(); assert_eq!(merge.outcome, MergeVersionOutcome::FastForward); assert_eq!(merge.target_version_id, main_version_id); assert_eq!(merge.change_stats.total, 1); assert_eq!(merge.change_stats.modified, 1); assert_eq!(merge.created_merge_commit_id, None); assert_eq!(task_done(&lix, "task-1").await, true); lix.close().await.unwrap(); } #[tokio::test] async fn rs_sdk_close_is_idempotent_and_rejects_later_operations() { let backend = SharedTestBackend::new(); let close_count = backend.close_count(); let lix = open_lix(OpenLixOptions { backend: Some(Box::new(backend)), }) .await .unwrap(); lix.close().await.unwrap(); lix.close().await.unwrap(); assert_eq!( close_count .lock() .map(|count| *count) .expect("close count lock should be available"), 1 ); let error = lix .execute("SELECT value FROM lix_key_value WHERE key = 'lix_id'", &[]) .await .expect_err("execute after close should fail"); assert_closed(error); let error = lix .active_version_id() .await .expect_err("active_version_id after close should fail"); assert_closed(error); } #[tokio::test] async fn rs_sdk_close_does_not_destroy_committed_data() { let backend = SharedTestBackend::new(); let first = open_lix(OpenLixOptions { backend: Some(Box::new(backend.clone())), }) .await .unwrap(); first .execute( "INSERT INTO lix_key_value (key, value) VALUES ('close-key', 'close-value')", &[], ) .await .unwrap(); first.close().await.unwrap(); let error = first .execute( "SELECT value FROM lix_key_value WHERE key = 'close-key'", &[], ) .await .expect_err("closed handle should not be usable"); assert_closed(error); let second = open_lix(OpenLixOptions { backend: Some(Box::new(backend)), }) .await .unwrap(); let result = second .execute( "SELECT key FROM lix_key_value WHERE key = 'close-key' AND value = lix_json('\"close-value\"')", &[], ) .await .unwrap(); assert_eq!(result.len(), 1); assert_eq!( result.rows()[0].values(), &[Value::Text("close-key".to_string())] ); second.close().await.unwrap(); } #[tokio::test] async fn failed_write_validation_does_not_poison_backend_transaction() { let backend = SharedTestBackend::rejecting_nested_transactions(); let rollback_count = backend.rollback_count(); let lix = open_lix(OpenLixOptions { backend: Some(Box::new(backend)), }) .await .unwrap(); register_poison_task_schema(&lix).await; let error = lix .execute( "INSERT INTO poison_task (id, title) VALUES ($1, $2)", &[ Value::Text("bad-task".to_string()), Value::Text("missing meta".to_string()), ], ) .await .expect_err("schema validation should reject missing required field"); assert_eq!(error.code, "LIX_ERROR_SCHEMA_VALIDATION"); let result = lix.execute("SELECT 1 AS ok", &[]).await.unwrap(); assert_eq!(result.len(), 1); 
assert_eq!(result.rows()[0].values(), &[Value::Integer(1)]); assert!( *rollback_count .lock() .expect("rollback count lock should be available") > 0, "failed commit validation should rollback the backend transaction" ); lix.execute( "INSERT INTO poison_task (id, title, meta) VALUES ($1, $2, lix_json($3))", &[ Value::Text("good-task".to_string()), Value::Text("valid".to_string()), Value::Text(r#"{"priority":"high"}"#.to_string()), ], ) .await .expect("valid write after failed write should succeed"); lix.close().await.unwrap(); } async fn register_crm_task_schema(lix: &lix_rs_sdk::Lix) { let schema = r#"{ "$schema": "https://json-schema.org/draft/2020-12/schema", "x-lix-key": "crm_task", "x-lix-primary-key": ["/id"], "type": "object", "required": ["id", "title", "done", "meta"], "properties": { "id": { "type": "string" }, "title": { "type": "string" }, "done": { "type": "boolean" }, "meta": { "type": "object" } }, "additionalProperties": false }"#; lix.execute( "INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))", &[Value::Text(schema.to_string())], ) .await .unwrap(); } fn assert_crm_task_projection(result: &lix_rs_sdk::ExecuteResult) { assert_eq!(result.len(), 1); let row = &result.rows()[0]; assert_eq!( row.get::("title").unwrap(), "Draft RS SDK flow".to_string() ); assert_eq!(row.get::("done").unwrap(), false); let meta = row.get::("meta").unwrap(); let Value::Json(meta) = meta else { panic!("expected meta JSON value, got {meta:?}"); }; assert_eq!( meta.get("priority").and_then(|value| value.as_str()), Some("high") ); assert_eq!( meta.get("tags") .and_then(|value| value.as_array()) .map(|tags| tags.len()), Some(2) ); let snapshot = row.get::("lixcol_snapshot_content").unwrap(); let Value::Json(snapshot) = snapshot else { panic!("expected snapshot JSON value, got {snapshot:?}"); }; assert_eq!( snapshot.get("id").and_then(|value| value.as_str()), Some("task-1") ); assert_eq!( snapshot .get("meta") .and_then(|value| value.get("priority")) .and_then(|value| value.as_str()), Some("high") ); let missing = row .value("missing") .expect_err("missing column should return a structured error"); assert_eq!(missing.code, "LIX_COLUMN_NOT_FOUND"); } async fn register_poison_task_schema(lix: &lix_rs_sdk::Lix) { let schema = r#"{ "$schema": "https://json-schema.org/draft/2020-12/schema", "x-lix-key": "poison_task", "x-lix-primary-key": ["/id"], "type": "object", "required": ["id", "title", "meta"], "properties": { "id": { "type": "string" }, "title": { "type": "string" }, "meta": { "type": "object" } }, "additionalProperties": false }"#; lix.execute( "INSERT INTO lix_registered_schema (value) VALUES (lix_json($1))", &[Value::Text(schema.to_string())], ) .await .unwrap(); } async fn task_done(lix: &lix_rs_sdk::Lix, task_id: &str) -> bool { let result = lix .execute( "SELECT done FROM crm_task WHERE id = $1", &[Value::Text(task_id.to_string())], ) .await .unwrap(); let rows = result; assert_eq!(rows.len(), 1); match rows.rows()[0].values().first() { Some(Value::Boolean(done)) => *done, value => panic!("expected boolean done value, got {value:?}"), } } fn assert_closed(error: LixError) { assert_eq!(error.code, LixError::CODE_CLOSED); } type KvMap = BTreeMap<(String, Vec), Vec>; #[derive(Clone, Default)] struct SharedTestBackend { kv: Arc>, close_count: Arc>, rollback_count: Arc>, active_transaction: Arc>, reject_nested_transactions: bool, } impl SharedTestBackend { fn new() -> Self { Self::default() } fn rejecting_nested_transactions() -> Self { Self { reject_nested_transactions: true, 
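            // A test backend configured this way refuses to open a second transaction
            // while one is already active (loosely modeling a single-connection backend);
            // the poison-transaction test in this file relies on that to show a failed
            // write validation still rolls the backend transaction back.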
..Self::default() } } fn close_count(&self) -> Arc> { Arc::clone(&self.close_count) } fn rollback_count(&self) -> Arc> { Arc::clone(&self.rollback_count) } fn begin_test_transaction(&self) -> Result { let mut active_transaction = self .active_transaction .lock() .map_err(|_| LixError::unknown("test backend active transaction lock poisoned"))?; if *active_transaction && self.reject_nested_transactions { return Err(LixError::unknown( "cannot open nested Lix backend transaction", )); } *active_transaction = true; drop(active_transaction); let snapshot = self .kv .lock() .map_err(|_| LixError::unknown("test backend lock poisoned"))? .clone(); Ok(SharedTestTransaction { parent: Arc::clone(&self.kv), kv: snapshot, active_transaction: Arc::clone(&self.active_transaction), rollback_count: Arc::clone(&self.rollback_count), }) } } #[async_trait] impl Backend for SharedTestBackend { async fn begin_read_transaction( &self, ) -> Result, LixError> { Ok(Box::new(self.begin_test_transaction()?)) } async fn begin_write_transaction( &self, ) -> Result, LixError> { Ok(Box::new(self.begin_test_transaction()?)) } async fn close(&self) -> Result<(), LixError> { *self .close_count .lock() .map_err(|_| LixError::unknown("test backend close count lock poisoned"))? += 1; Ok(()) } } struct SharedTestTransaction { parent: Arc>, kv: KvMap, active_transaction: Arc>, rollback_count: Arc>, } #[async_trait] impl BackendReadTransaction for SharedTestTransaction { async fn get_values( &mut self, request: BackendKvGetRequest, ) -> Result { Ok(get_values_from_map(&self.kv, request)) } async fn exists_many( &mut self, request: BackendKvGetRequest, ) -> Result { Ok(exists_many_from_map(&self.kv, request)) } async fn scan_keys( &mut self, request: BackendKvScanRequest, ) -> Result { Ok(scan_map_keys(&self.kv, request)) } async fn scan_values( &mut self, request: BackendKvScanRequest, ) -> Result { Ok(scan_map_values(&self.kv, request)) } async fn scan_entries( &mut self, request: BackendKvScanRequest, ) -> Result { Ok(scan_map_entries(&self.kv, request)) } async fn rollback(self: Box) -> Result<(), LixError> { *self .rollback_count .lock() .map_err(|_| LixError::unknown("test backend rollback count lock poisoned"))? += 1; *self .active_transaction .lock() .map_err(|_| LixError::unknown("test backend active transaction lock poisoned"))? = false; Ok(()) } } #[async_trait] impl BackendWriteTransaction for SharedTestTransaction { async fn write_kv_batch( &mut self, batch: BackendKvWriteBatch, ) -> Result { let mut stats = BackendKvWriteStats::default(); for group in batch.groups { let namespace = group.namespace().to_string(); for index in 0..group.put_count() { let key = group.put_key(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put key") })?; let value = group.put_value(index).ok_or_else(|| { LixError::new("LIX_ERROR_UNKNOWN", "backend write batch missing put value") })?; stats.puts += 1; stats.bytes_written += key.len() + value.len(); self.kv .insert((namespace.clone(), key.to_vec()), value.to_vec()); } for index in 0..group.delete_count() { let key = group.delete_key(index).ok_or_else(|| { LixError::new( "LIX_ERROR_UNKNOWN", "backend write batch missing delete key", ) })?; stats.deletes += 1; stats.bytes_written += key.len(); self.kv.remove(&(namespace.clone(), key.to_vec())); } } Ok(stats) } async fn commit(self: Box) -> Result<(), LixError> { *self .parent .lock() .map_err(|_| LixError::unknown("test backend lock poisoned"))? 
= self.kv; *self .active_transaction .lock() .map_err(|_| LixError::unknown("test backend active transaction lock poisoned"))? = false; Ok(()) } } fn get_values_from_map(kv: &KvMap, request: BackendKvGetRequest) -> BackendKvValueBatch { let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let namespace = group.namespace.clone(); let mut values = BytePageBuilder::with_capacity(group.keys.len(), 0); let mut present = Vec::with_capacity(group.keys.len()); for key in group.keys { if let Some(value) = kv.get(&(namespace.clone(), key)) { values.push(value); present.push(true); } else { values.push([]); present.push(false); } } groups.push(BackendKvValueGroup::new( namespace, values.finish(), present, )); } BackendKvValueBatch { groups } } fn exists_many_from_map(kv: &KvMap, request: BackendKvGetRequest) -> BackendKvExistsBatch { let mut groups = Vec::with_capacity(request.groups.len()); for group in request.groups { let namespace = group.namespace.clone(); let exists = group .keys .into_iter() .map(|key| kv.contains_key(&(namespace.clone(), key))) .collect(); groups.push(BackendKvExistsGroup { namespace, exists }); } BackendKvExistsBatch { groups } } fn scan_map_keys(kv: &KvMap, request: BackendKvScanRequest) -> BackendKvKeyPage { let pairs = scan_filtered_pairs(kv, &request); let has_more = pairs.len() > request.limit; let mut keys = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0); let mut resume_after = None; for (index, (key, _)) in pairs.into_iter().enumerate() { if index >= request.limit { break; } resume_after = Some(key.clone()); keys.push(key); } let resume_after = has_more.then_some(resume_after).flatten(); BackendKvKeyPage { keys: keys.finish(), resume_after, } } fn scan_map_values(kv: &KvMap, request: BackendKvScanRequest) -> BackendKvValuePage { let pairs = scan_filtered_pairs(kv, &request); let has_more = pairs.len() > request.limit; let mut values = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0); let mut resume_after = None; for (index, (key, value)) in pairs.into_iter().enumerate() { if index >= request.limit { break; } resume_after = Some(key.clone()); values.push(value); } let resume_after = has_more.then_some(resume_after).flatten(); BackendKvValuePage { values: values.finish(), resume_after, } } fn scan_map_entries(kv: &KvMap, request: BackendKvScanRequest) -> BackendKvEntryPage { let pairs = scan_filtered_pairs(kv, &request); let has_more = pairs.len() > request.limit; let mut keys = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0); let mut values = BytePageBuilder::with_capacity(request.limit.min(pairs.len()), 0); let mut resume_after = None; for (index, (key, value)) in pairs.into_iter().enumerate() { if index >= request.limit { break; } resume_after = Some(key.clone()); keys.push(key); values.push(value); } let resume_after = has_more.then_some(resume_after).flatten(); BackendKvEntryPage { keys: keys.finish(), values: values.finish(), resume_after, } } fn scan_filtered_pairs<'a>( kv: &'a KvMap, request: &BackendKvScanRequest, ) -> Vec<(&'a Vec, &'a Vec)> { let scan_limit = request .limit .checked_add(1 + usize::from(request.after.is_some())) .unwrap_or(request.limit); let mut pairs = kv .iter() .filter(|((candidate_namespace, key), _)| { candidate_namespace == &request.namespace && key_matches_range(key, &request.range) }) .collect::>(); pairs.sort_by(|left, right| left.0 .1.cmp(&right.0 .1)); pairs.truncate(scan_limit); pairs .into_iter() .filter(|((_, key), _)| { request .after 
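                // The scan over-fetches (`limit + 1`, plus one more when a resume cursor
                // is present) so the page builders can detect `has_more`; this filter then
                // drops keys at or before `request.after` so the next page resumes
                // strictly after the last key already returned.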
.as_deref() .is_none_or(|after| key.as_slice() > after) }) .map(|((_, key), value)| (key, value)) .collect() } fn key_matches_range(key: &[u8], range: &BackendKvScanRange) -> bool { match range { BackendKvScanRange::Prefix(prefix) => key.starts_with(prefix), BackendKvScanRange::Range { start, end } => start.as_slice() <= key && key < end.as_slice(), } } ================================================ FILE: packages/text-plugin/Cargo.toml ================================================ [package] name = "text_plugin" version = "0.1.0" edition = "2021" publish = false [lib] crate-type = ["cdylib", "rlib"] [dependencies] base64 = "0.22" imara-diff = "0.2" serde = { version = "1", features = ["derive"] } serde_json = "1" sha1 = "0.10" wit-bindgen = "0.40" [dev-dependencies] criterion = "0.5" [[bench]] name = "detect_changes" harness = false [[bench]] name = "apply_changes" harness = false ================================================ FILE: packages/text-plugin/README.md ================================================ # text-plugin Rust/WASM component plugin that models files as line entities for the Lix engine. - Uses `packages/engine/wit/lix-plugin.wit`. - Provides `manifest.json` for install metadata (`text_plugin`). - Provides Lix schema docs: - `schema/text_line.json` - `schema/text_document.json` - `detect-changes` emits: - `text_line` rows for inserted/deleted lines (order-preserving line matching, Git-style) - one `text_document` row with ordered `line_ids` - `apply-changes` rebuilds exact bytes from the latest projection. This plugin is byte-safe (works with non-UTF-8 files) by storing line content as base64 in snapshot payloads. ## Benchmarks Run plugin micro-benchmarks: ```bash cargo bench -p text_plugin --bench detect_changes cargo bench -p text_plugin --bench apply_changes ``` ================================================ FILE: packages/text-plugin/benches/apply_changes.rs ================================================ mod common; use common::{apply_scenarios, file_from_bytes}; use criterion::{black_box, criterion_group, criterion_main, BatchSize, Criterion}; use std::time::Duration; use text_plugin::apply_changes; fn bench_apply_changes(c: &mut Criterion) { let scenarios = apply_scenarios(); let mut group = c.benchmark_group("apply_changes"); group.sample_size(20); group.measurement_time(Duration::from_secs(15)); for scenario in scenarios { group.bench_function(scenario.name, |b| { b.iter_batched( || { ( file_from_bytes("f1", "/yarn.lock", &scenario.base), scenario.changes.clone(), ) }, |(base, changes)| { let reconstructed = apply_changes(base, changes) .expect("apply_changes benchmark should succeed"); black_box(reconstructed); }, BatchSize::SmallInput, ); }); } group.finish(); } criterion_group!(benches, bench_apply_changes); criterion_main!(benches); ================================================ FILE: packages/text-plugin/benches/common/mod.rs ================================================ #![allow(dead_code)] use text_plugin::{detect_changes, PluginEntityChange, PluginFile}; pub struct DetectScenario { pub name: &'static str, pub before: Option>, pub after: Vec, } pub struct ApplyScenario { pub name: &'static str, pub base: Vec, pub changes: Vec, } pub fn file_from_bytes(id: &str, path: &str, data: &[u8]) -> PluginFile { PluginFile { id: id.to_string(), path: path.to_string(), data: data.to_vec(), } } pub fn detect_scenarios() -> Vec { vec![ DetectScenario { name: "small_single_line_edit", before: Some(build_small_before()), after: build_small_after(), }, 
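        // The plugin README describes the contract these scenarios benchmark:
        // `detect_changes` emits `text_line` rows plus one ordered `text_document` row,
        // and `apply_changes` rebuilds the exact bytes from that projection. A minimal
        // sketch using the helpers in this module (the file id and path are illustrative):
        //
        //     let after = file_from_bytes("f1", "/doc.txt", b"a\nb\n");
        //     let changes = detect_changes(None, after)?;
        //     let rebuilt = apply_changes(file_from_bytes("f1", "/doc.txt", b""), changes)?;
        //     assert_eq!(rebuilt, b"a\nb\n");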
DetectScenario { name: "lockfile_large_create", before: None, after: build_lockfile(1200), }, DetectScenario { name: "lockfile_large_patch", before: Some(build_lockfile(1800)), after: build_lockfile_with_patch(1800), }, DetectScenario { name: "lockfile_large_block_move_and_patch", before: Some(build_lockfile(2200)), after: build_lockfile_with_block_move_and_patch(2200), }, ] } pub fn apply_scenarios() -> Vec { let small_before = build_small_before(); let small_after = build_small_after(); let lockfile_base_1800 = build_lockfile(1800); let lockfile_patch_1800 = build_lockfile_with_patch(1800); let lockfile_base_2200 = build_lockfile(2200); let lockfile_move_patch_2200 = build_lockfile_with_block_move_and_patch(2200); vec![ ApplyScenario { name: "small_projection_from_empty", base: Vec::new(), changes: detect_changes(None, file_from_bytes("f1", "/doc.txt", &small_after)) .expect("small projection should be constructible for apply bench"), }, ApplyScenario { name: "small_delta_on_base", base: small_before.clone(), changes: detect_changes( Some(file_from_bytes("f1", "/doc.txt", &small_before)), file_from_bytes("f1", "/doc.txt", &small_after), ) .expect("small delta should be constructible for apply bench"), }, ApplyScenario { name: "lockfile_projection_from_empty", base: Vec::new(), changes: detect_changes( None, file_from_bytes("f1", "/yarn.lock", &lockfile_patch_1800), ) .expect("lockfile projection should be constructible for apply bench"), }, ApplyScenario { name: "lockfile_delta_patch_on_base", base: lockfile_base_1800.clone(), changes: detect_changes( Some(file_from_bytes("f1", "/yarn.lock", &lockfile_base_1800)), file_from_bytes("f1", "/yarn.lock", &lockfile_patch_1800), ) .expect("lockfile delta should be constructible for apply bench"), }, ApplyScenario { name: "lockfile_delta_move_patch_on_base", base: lockfile_base_2200.clone(), changes: detect_changes( Some(file_from_bytes("f1", "/yarn.lock", &lockfile_base_2200)), file_from_bytes("f1", "/yarn.lock", &lockfile_move_patch_2200), ) .expect("lockfile move+patch delta should be constructible for apply bench"), }, ] } fn build_small_before() -> Vec { b"const a = 1;\nconst b = 2;\nconst c = a + b;\n".to_vec() } fn build_small_after() -> Vec { b"const a = 1;\nconst b = 3;\nconst c = a + b;\n".to_vec() } fn build_lockfile(pkg_count: usize) -> Vec { let mut out = String::with_capacity(pkg_count * 170); for idx in 0..pkg_count { out.push_str(&package_block(idx)); } out.into_bytes() } fn build_lockfile_with_patch(pkg_count: usize) -> Vec { let mut blocks = (0..pkg_count).map(package_block).collect::>(); let patch_index = pkg_count / 2; blocks[patch_index] = patched_package_block(patch_index); let insert_at = pkg_count / 3; let inserted = (0..120) .map(|offset| package_block(pkg_count + offset + 10_000)) .collect::>(); blocks.splice(insert_at..insert_at, inserted); blocks.join("").into_bytes() } fn build_lockfile_with_block_move_and_patch(pkg_count: usize) -> Vec { let mut blocks = (0..pkg_count).map(package_block).collect::>(); let move_start = pkg_count / 5; let move_end = move_start + (pkg_count / 8); let moved = blocks.drain(move_start..move_end).collect::>(); let insert_at = pkg_count / 2; blocks.splice(insert_at..insert_at, moved); for idx in (pkg_count / 3)..(pkg_count / 3 + 64) { let clamped = idx.min(blocks.len().saturating_sub(1)); blocks[clamped] = patched_package_block(90_000 + idx); } blocks.join("").into_bytes() } fn package_block(idx: usize) -> String { let major = (idx % 9) + 1; let minor = (idx * 7) % 40; let patch = (idx * 
13) % 70; let integrity_a = idx.wrapping_mul(31).wrapping_add(17); let integrity_b = idx.wrapping_mul(53).wrapping_add(29); format!( "\"pkg-{idx}@^1.0.0\":\n version \"{major}.{minor}.{patch}\"\n resolved \"https://registry.yarnpkg.com/pkg-{idx}/-/pkg-{idx}-{major}.{minor}.{patch}.tgz\"\n integrity sha512-{integrity_a:016x}{integrity_b:016x}\n\n" ) } fn patched_package_block(idx: usize) -> String { let major = (idx % 9) + 2; let minor = (idx * 11) % 50; let patch = (idx * 17) % 80; let integrity_a = idx.wrapping_mul(67).wrapping_add(23); let integrity_b = idx.wrapping_mul(79).wrapping_add(31); format!( "\"pkg-{idx}@^1.0.0\":\n version \"{major}.{minor}.{patch}\"\n resolved \"https://registry.yarnpkg.com/pkg-{idx}/-/pkg-{idx}-{major}.{minor}.{patch}.tgz\"\n integrity sha512-{integrity_a:016x}{integrity_b:016x}\n\n" ) } ================================================ FILE: packages/text-plugin/benches/detect_changes.rs ================================================ mod common; use common::{detect_scenarios, file_from_bytes}; use criterion::{black_box, criterion_group, criterion_main, BatchSize, Criterion}; use std::time::Duration; use text_plugin::detect_changes; fn bench_detect_changes(c: &mut Criterion) { let scenarios = detect_scenarios(); let mut group = c.benchmark_group("detect_changes"); group.sample_size(20); group.measurement_time(Duration::from_secs(15)); for scenario in scenarios { group.bench_function(scenario.name, |b| { b.iter_batched( || { let before = scenario .before .as_ref() .map(|bytes| file_from_bytes("f1", "/yarn.lock", bytes)); let after = file_from_bytes("f1", "/yarn.lock", &scenario.after); (before, after) }, |(before, after)| { let changes = detect_changes(before, after) .expect("detect_changes benchmark should succeed"); black_box(changes); }, BatchSize::SmallInput, ); }); } group.finish(); } criterion_group!(benches, bench_detect_changes); criterion_main!(benches); ================================================ FILE: packages/text-plugin/manifest.json ================================================ { "key": "text_plugin", "runtime": "wasm-component-v1", "api_version": "0.1.0", "match": { "path_glob": "*", "content_type": "text" }, "entry": "plugin.wasm" } ================================================ FILE: packages/text-plugin/schema/text_document.json ================================================ { "x-lix-key": "text_document", "x-lix-override-lixcols": { "lixcol_plugin_key": "'text_plugin'" }, "type": "object", "properties": { "line_ids": { "type": "array", "items": { "type": "string", "minLength": 1 }, "uniqueItems": true, "description": "Ordered line entity ids for the projected document." } }, "required": [ "line_ids" ], "additionalProperties": false } ================================================ FILE: packages/text-plugin/schema/text_line.json ================================================ { "x-lix-key": "text_line", "x-lix-override-lixcols": { "lixcol_plugin_key": "'text_plugin'" }, "type": "object", "properties": { "content_base64": { "type": "string", "contentEncoding": "base64", "contentMediaType": "application/octet-stream", "description": "Base64-encoded line bytes. Empty string represents an empty line body." }, "ending": { "type": "string", "enum": [ "", "\n", "\r\n" ], "description": "Original line ending bytes." 
} }, "required": [ "content_base64", "ending" ], "additionalProperties": false } ================================================ FILE: packages/text-plugin/src/lib.rs ================================================ use crate::exports::lix::plugin::api::{EntityChange, File, Guest, PluginError}; use base64::engine::general_purpose::STANDARD as BASE64_STANDARD; use base64::Engine as _; use imara_diff::{Algorithm, Diff, InternedInput}; use serde::{Deserialize, Serialize}; use serde_json::Value; use sha1::{Digest, Sha1}; use std::collections::{HashMap, HashSet}; use std::sync::OnceLock; wit_bindgen::generate!({ path: "../engine/wit", world: "plugin", }); pub const LINE_SCHEMA_KEY: &str = "text_line"; pub const DOCUMENT_SCHEMA_KEY: &str = "text_document"; pub const DOCUMENT_ENTITY_ID: &str = "__document__"; const MANIFEST_JSON: &str = include_str!("../manifest.json"); const LINE_SCHEMA_JSON: &str = include_str!("../schema/text_line.json"); const DOCUMENT_SCHEMA_JSON: &str = include_str!("../schema/text_document.json"); static LINE_SCHEMA: OnceLock = OnceLock::new(); static DOCUMENT_SCHEMA: OnceLock = OnceLock::new(); pub use crate::exports::lix::plugin::api::{ EntityChange as PluginEntityChange, File as PluginFile, PluginError as PluginApiError, }; struct TextLinesPlugin; #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)] enum LineEnding { None, Lf, Crlf, } impl LineEnding { fn as_str(self) -> &'static str { match self { Self::None => "", Self::Lf => "\n", Self::Crlf => "\r\n", } } fn marker_byte(self) -> u8 { match self { Self::None => 0, Self::Lf => 1, Self::Crlf => 2, } } } #[derive(Debug, Clone, PartialEq, Eq)] struct ParsedLine { entity_id: String, content: Vec, ending: LineEnding, } #[derive(Debug, Serialize)] struct DocumentSnapshot<'a> { line_ids: &'a [String], } #[derive(Debug, Serialize, Deserialize)] #[serde(deny_unknown_fields)] struct DocumentSnapshotOwned { line_ids: Vec, } impl Guest for TextLinesPlugin { fn detect_changes( before: Option, after: File, _state_context: Option, ) -> Result, PluginError> { if let Some(previous) = before.as_ref() { if previous.data == after.data { return Ok(Vec::new()); } } let before_lines = before .as_ref() .map(|file| parse_lines_with_ids(&file.data)) .unwrap_or_default(); let after_lines = if let Some(before_file) = before.as_ref() { parse_after_lines_with_histogram_matching(&before_lines, &before_file.data, &after.data) } else { parse_lines_with_ids(&after.data) }; let before_ids = before_lines .iter() .map(|line| line.entity_id.clone()) .collect::>(); let after_ids = after_lines .iter() .map(|line| line.entity_id.clone()) .collect::>(); let before_id_set = before_ids.iter().cloned().collect::>(); let after_id_set = after_ids.iter().cloned().collect::>(); let mut changes = Vec::new(); if before.is_some() { let mut removed_ids = HashSet::::with_capacity(before_lines.len()); for line in &before_lines { if after_id_set.contains(&line.entity_id) { continue; } if removed_ids.insert(line.entity_id.clone()) { changes.push(EntityChange { entity_id: line.entity_id.clone(), schema_key: LINE_SCHEMA_KEY.to_string(), snapshot_content: None, }); } } } for line in &after_lines { if before_id_set.contains(&line.entity_id) { continue; } changes.push(EntityChange { entity_id: line.entity_id.clone(), schema_key: LINE_SCHEMA_KEY.to_string(), snapshot_content: Some(serialize_line_snapshot(line)?), }); } if before.is_none() || before_ids != after_ids { let snapshot = serde_json::to_string(&DocumentSnapshot { line_ids: &after_ids, }) .map_err(|error| { 
PluginError::Internal(format!("failed to encode document snapshot: {error}")) })?; changes.push(EntityChange { entity_id: DOCUMENT_ENTITY_ID.to_string(), schema_key: DOCUMENT_SCHEMA_KEY.to_string(), snapshot_content: Some(snapshot), }); } Ok(changes) } fn apply_changes(file: File, changes: Vec) -> Result, PluginError> { let expected_line_changes = changes .iter() .filter(|change| change.schema_key == LINE_SCHEMA_KEY) .count(); let mut document_snapshot: Option = None; let mut document_tombstoned = false; let mut line_by_id = parse_lines_with_ids(&file.data) .into_iter() .map(|line| (line.entity_id.clone(), line)) .collect::>(); line_by_id.reserve(expected_line_changes); let mut seen_line_change_ids = HashSet::::with_capacity(expected_line_changes); for change in changes { if change.schema_key == LINE_SCHEMA_KEY { if !seen_line_change_ids.insert(change.entity_id.clone()) { return Err(PluginError::InvalidInput( "duplicate text_line snapshot in apply_changes input".to_string(), )); } match change.snapshot_content { Some(snapshot_raw) => { let snapshot = parse_line_snapshot(&snapshot_raw, &change.entity_id)?; line_by_id.insert( change.entity_id.clone(), ParsedLine { entity_id: change.entity_id, content: snapshot.content, ending: snapshot.ending, }, ); } None => { line_by_id.remove(&change.entity_id); } } continue; } if change.schema_key == DOCUMENT_SCHEMA_KEY { if change.entity_id != DOCUMENT_ENTITY_ID { return Err(PluginError::InvalidInput(format!( "document snapshot entity_id must be '{DOCUMENT_ENTITY_ID}', got '{}'", change.entity_id ))); } match change.snapshot_content { Some(snapshot_raw) => { if document_snapshot.is_some() || document_tombstoned { return Err(PluginError::InvalidInput( "duplicate text_document snapshot in apply_changes input" .to_string(), )); } let parsed = parse_document_snapshot(&snapshot_raw)?; document_snapshot = Some(parsed); } None => { if document_snapshot.is_some() || document_tombstoned { return Err(PluginError::InvalidInput( "duplicate text_document snapshot in apply_changes input" .to_string(), )); } document_tombstoned = true; } } } } if document_tombstoned { return Ok(Vec::new()); } let document_snapshot = document_snapshot.ok_or_else(|| { PluginError::InvalidInput( "missing text_document snapshot; apply_changes requires full latest projection" .to_string(), ) })?; let mut output = Vec::new(); for line_id in document_snapshot.line_ids { let Some(line) = line_by_id.get(&line_id) else { return Err(PluginError::InvalidInput(format!( "document references missing text_line entity_id '{line_id}'" ))); }; output.extend_from_slice(&line.content); output.extend_from_slice(line.ending.as_str().as_bytes()); } Ok(output) } } fn parse_document_snapshot(raw: &str) -> Result { let parsed: DocumentSnapshotOwned = serde_json::from_str(raw).map_err(|error| { PluginError::InvalidInput(format!("invalid text_document snapshot_content: {error}")) })?; let mut seen = HashSet::new(); for line_id in &parsed.line_ids { if line_id.is_empty() { return Err(PluginError::InvalidInput( "text_document.line_ids must not contain empty ids".to_string(), )); } if !seen.insert(line_id.clone()) { return Err(PluginError::InvalidInput(format!( "text_document.line_ids contains duplicate id '{line_id}'" ))); } } Ok(parsed) } fn parse_line_snapshot(raw: &str, entity_id: &str) -> Result { let (content_base64, ending) = parse_line_snapshot_fields(raw).map_err(|error| { PluginError::InvalidInput(format!( "invalid text_line snapshot_content for entity_id '{entity_id}': {error}" )) })?; let content = 
base64_to_bytes(content_base64).map_err(|error| { PluginError::InvalidInput(format!( "invalid text_line.content_base64 for entity_id '{entity_id}': {error}" )) })?; let ending = parse_line_ending_literal(ending).map_err(|error| { PluginError::InvalidInput(format!( "invalid text_line.ending for entity_id '{entity_id}': {error}" )) })?; Ok(ParsedLine { entity_id: entity_id.to_string(), content, ending, }) } fn serialize_line_snapshot(line: &ParsedLine) -> Result { let content_base64 = bytes_to_base64(&line.content); let ending = line_ending_json_literal(line.ending); let mut encoded = String::with_capacity( LINE_SNAPSHOT_PREFIX.len() + content_base64.len() + LINE_SNAPSHOT_SEPARATOR.len() + ending.len() + LINE_SNAPSHOT_SUFFIX.len(), ); encoded.push_str(LINE_SNAPSHOT_PREFIX); encoded.push_str(&content_base64); encoded.push_str(LINE_SNAPSHOT_SEPARATOR); encoded.push_str(ending); encoded.push_str(LINE_SNAPSHOT_SUFFIX); Ok(encoded) } fn parse_lines_with_ids(data: &[u8]) -> Vec { parse_lines_with_ids_from_split(split_lines(data)) } fn parse_lines_with_ids_from_split(split: Vec<(Vec, LineEnding)>) -> Vec { let mut occurrence_by_key = HashMap::<[u8; 20], u32>::new(); let mut lines = Vec::with_capacity(split.len()); for (content, ending) in split { let fingerprint = line_fingerprint(&content, ending); let occurrence = occurrence_by_key.entry(fingerprint).or_insert(0); let entity_id = format!("line:{}:{}", bytes_to_hex(&fingerprint), occurrence); *occurrence += 1; lines.push(ParsedLine { entity_id, content, ending, }); } lines } fn parse_after_lines_with_histogram_matching( before_lines: &[ParsedLine], before_data: &[u8], after_data: &[u8], ) -> Vec { let after_split = split_lines(after_data); let matching_pairs = compute_histogram_line_matching_pairs(before_data, after_data); let mut matched_after_to_before = HashMap::::new(); for (before_index, after_index) in matching_pairs { matched_after_to_before.insert(after_index, before_index); } let mut used_ids = before_lines .iter() .map(|line| line.entity_id.clone()) .collect::>(); let mut occurrence_by_key = HashMap::<[u8; 20], u32>::new(); let mut after_lines = Vec::with_capacity(after_split.len()); for (after_index, (content, ending)) in after_split.into_iter().enumerate() { let fingerprint = line_fingerprint(&content, ending); let occurrence = occurrence_by_key.entry(fingerprint).or_insert(0); let canonical_occurrence = *occurrence; *occurrence += 1; let entity_id = if let Some(before_index) = matched_after_to_before.get(&after_index) { before_lines[*before_index].entity_id.clone() } else { let canonical_entity_id = format!( "line:{}:{}", bytes_to_hex(&fingerprint), canonical_occurrence ); allocate_inserted_line_id(&canonical_entity_id, &used_ids) }; used_ids.insert(entity_id.clone()); after_lines.push(ParsedLine { entity_id, content, ending, }); } after_lines } fn compute_histogram_line_matching_pairs( before_data: &[u8], after_data: &[u8], ) -> Vec<(usize, usize)> { let input = InternedInput::new(before_data, after_data); let mut diff = Diff::compute(Algorithm::Histogram, &input); diff.postprocess_lines(&input); let mut pairs = Vec::new(); let mut before_pos = 0usize; let mut after_pos = 0usize; for hunk in diff.hunks() { let hunk_before_start = hunk.before.start as usize; let hunk_after_start = hunk.after.start as usize; let unchanged_before_len = hunk_before_start.saturating_sub(before_pos); let unchanged_after_len = hunk_after_start.saturating_sub(after_pos); let unchanged_len = unchanged_before_len.min(unchanged_after_len); for offset in 
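        // The positions between the previous hunk and this one are unchanged on both
        // sides, so they are paired index-by-index; matched after-lines later reuse the
        // before-line's entity id, which keeps line identity stable across edits while
        // only genuinely inserted lines receive fresh ids.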
0..unchanged_len { pairs.push((before_pos + offset, after_pos + offset)); } before_pos = hunk.before.end as usize; after_pos = hunk.after.end as usize; } let before_tail = input.before.len().saturating_sub(before_pos); let after_tail = input.after.len().saturating_sub(after_pos); let tail_len = before_tail.min(after_tail); for offset in 0..tail_len { pairs.push((before_pos + offset, after_pos + offset)); } pairs } fn allocate_inserted_line_id(base: &str, used_ids: &HashSet) -> String { if !used_ids.contains(base) { return base.to_string(); } let mut suffix = 0u32; loop { let candidate = format!("{base}:ins:{suffix}"); if !used_ids.contains(&candidate) { return candidate; } suffix += 1; } } fn split_lines(data: &[u8]) -> Vec<(Vec, LineEnding)> { if data.is_empty() { return Vec::new(); } let mut lines = Vec::new(); let mut start = 0usize; for index in 0..data.len() { if data[index] != b'\n' { continue; } if index > start && data[index - 1] == b'\r' { lines.push((data[start..index - 1].to_vec(), LineEnding::Crlf)); } else { lines.push((data[start..index].to_vec(), LineEnding::Lf)); } start = index + 1; } if start < data.len() { lines.push((data[start..].to_vec(), LineEnding::None)); } lines } fn line_fingerprint(content: &[u8], ending: LineEnding) -> [u8; 20] { let mut hasher = Sha1::new(); hasher.update(content); hasher.update([0xff, ending.marker_byte()]); let digest = hasher.finalize(); let mut fingerprint = [0u8; 20]; fingerprint.copy_from_slice(&digest); fingerprint } const LINE_SNAPSHOT_PREFIX: &str = "{\"content_base64\":\""; const LINE_SNAPSHOT_SEPARATOR: &str = "\",\"ending\":\""; const LINE_SNAPSHOT_SUFFIX: &str = "\"}"; fn parse_line_snapshot_fields(raw: &str) -> Result<(&str, &str), String> { let inner = raw .strip_prefix(LINE_SNAPSHOT_PREFIX) .and_then(|value| value.strip_suffix(LINE_SNAPSHOT_SUFFIX)) .ok_or_else(|| "expected {\"content_base64\":\"...\",\"ending\":\"...\"}".to_string())?; inner .split_once(LINE_SNAPSHOT_SEPARATOR) .ok_or_else(|| "missing content_base64 or ending field".to_string()) } fn line_ending_json_literal(ending: LineEnding) -> &'static str { match ending { LineEnding::None => "", LineEnding::Lf => "\\n", LineEnding::Crlf => "\\r\\n", } } fn parse_line_ending_literal(value: &str) -> Result { match value { "" => Ok(LineEnding::None), "\\n" => Ok(LineEnding::Lf), "\\r\\n" => Ok(LineEnding::Crlf), _ => Err( "unsupported ending literal; expected \"\", \"\\\\n\", or \"\\\\r\\\\n\"".to_string(), ), } } fn bytes_to_hex(bytes: &[u8]) -> String { let mut output = String::with_capacity(bytes.len() * 2); for byte in bytes { output.push(hex_char(byte >> 4)); output.push(hex_char(byte & 0x0f)); } output } fn hex_char(value: u8) -> char { match value { 0..=9 => (b'0' + value) as char, 10..=15 => (b'a' + (value - 10)) as char, _ => '?', } } fn bytes_to_base64(bytes: &[u8]) -> String { BASE64_STANDARD.encode(bytes) } fn base64_to_bytes(raw: &str) -> Result, String> { BASE64_STANDARD .decode(raw) .map_err(|error| format!("invalid base64: {error}")) } pub fn detect_changes(before: Option, after: File) -> Result, PluginError> { ::detect_changes(before, after, None) } pub fn detect_changes_with_state_context( before: Option, after: File, state_context: Option, ) -> Result, PluginError> { ::detect_changes(before, after, state_context) } pub fn apply_changes(file: File, changes: Vec) -> Result, PluginError> { ::apply_changes(file, changes) } pub fn manifest_json() -> &'static str { MANIFEST_JSON } pub fn line_schema_json() -> &'static str { LINE_SCHEMA_JSON } pub fn 
line_schema_definition() -> &'static Value { LINE_SCHEMA.get_or_init(|| { serde_json::from_str(LINE_SCHEMA_JSON).expect("text line schema must parse") }) } pub fn document_schema_json() -> &'static str { DOCUMENT_SCHEMA_JSON } pub fn document_schema_definition() -> &'static Value { DOCUMENT_SCHEMA.get_or_init(|| { serde_json::from_str(DOCUMENT_SCHEMA_JSON).expect("text document schema must parse") }) } #[cfg(target_arch = "wasm32")] export!(TextLinesPlugin); ================================================ FILE: packages/text-plugin/tests/apply_changes.rs ================================================ mod common; use common::{file_from_bytes, parse_document_snapshot}; use text_plugin::{ apply_changes, detect_changes, PluginApiError, PluginEntityChange, DOCUMENT_SCHEMA_KEY, LINE_SCHEMA_KEY, }; #[test] fn applies_full_projection_and_reconstructs_bytes() { let expected = b"line 1\nline 2\r\nline 3"; let after = file_from_bytes("f1", "/doc.txt", expected); let changes = detect_changes(None, after).expect("detect_changes should succeed"); let output = apply_changes(file_from_bytes("f1", "/doc.txt", b""), changes) .expect("apply_changes should succeed"); assert_eq!(output, expected); } #[test] fn supports_binary_bytes() { let expected = vec![0xff, b'\n', 0x00, b'\r', b'\n', 0x7f]; let after = file_from_bytes("f1", "/bin.dat", &expected); let changes = detect_changes(None, after).expect("detect_changes should succeed"); let output = apply_changes(file_from_bytes("f1", "/bin.dat", b""), changes) .expect("apply_changes should succeed"); assert_eq!(output, expected); } #[test] fn rejects_missing_document_snapshot() { let changes = vec![PluginEntityChange { entity_id: "line:abc:0".to_string(), schema_key: LINE_SCHEMA_KEY.to_string(), snapshot_content: Some(r#"{"content_base64":"YQ==","ending":"\n"}"#.to_string()), }]; let error = apply_changes(file_from_bytes("f1", "/doc.txt", b""), changes) .expect_err("apply_changes should fail"); match error { PluginApiError::InvalidInput(message) => { assert!(message.contains("missing text_document snapshot")); } PluginApiError::Internal(message) => { panic!("expected InvalidInput, got Internal({message})"); } } } #[test] fn document_order_drives_output_order() { let after = file_from_bytes("f1", "/doc.txt", b"a\nb\n"); let mut changes = detect_changes(None, after).expect("detect_changes should succeed"); let document_index = changes .iter() .position(|change| change.schema_key == DOCUMENT_SCHEMA_KEY) .expect("document row should exist"); let mut doc = parse_document_snapshot(&changes[document_index]); doc.line_ids.reverse(); changes[document_index].snapshot_content = Some( serde_json::json!({ "line_ids": doc.line_ids, }) .to_string(), ); let output = apply_changes(file_from_bytes("f1", "/doc.txt", b""), changes) .expect("apply_changes should succeed"); assert_eq!(output, b"b\na\n"); } ================================================ FILE: packages/text-plugin/tests/common/mod.rs ================================================ #![allow(dead_code)] use serde::Deserialize; use text_plugin::{PluginEntityChange, PluginFile}; #[derive(Debug, Deserialize)] pub struct LineSnapshot { pub content_base64: String, pub ending: String, } #[derive(Debug, Deserialize)] pub struct DocumentSnapshot { pub line_ids: Vec, } pub fn file_from_bytes(id: &str, path: &str, data: &[u8]) -> PluginFile { PluginFile { id: id.to_string(), path: path.to_string(), data: data.to_vec(), } } pub fn parse_line_snapshot(change: &PluginEntityChange) -> LineSnapshot { let raw = change 
.snapshot_content .as_ref() .expect("line snapshot should exist"); serde_json::from_str(raw).expect("line snapshot should parse") } pub fn parse_document_snapshot(change: &PluginEntityChange) -> DocumentSnapshot { let raw = change .snapshot_content .as_ref() .expect("document snapshot should exist"); serde_json::from_str(raw).expect("document snapshot should parse") } ================================================ FILE: packages/text-plugin/tests/detect_changes.rs ================================================ mod common; use common::{file_from_bytes, parse_document_snapshot}; use text_plugin::{detect_changes, DOCUMENT_ENTITY_ID, DOCUMENT_SCHEMA_KEY, LINE_SCHEMA_KEY}; #[test] fn creation_returns_full_projection() { let after = file_from_bytes("f1", "/doc.txt", b"a\nb\n"); let changes = detect_changes(None, after).expect("detect_changes should succeed"); let line_changes = changes .iter() .filter(|change| change.schema_key == LINE_SCHEMA_KEY) .collect::>(); assert_eq!(line_changes.len(), 2); assert!(line_changes .iter() .all(|change| change.snapshot_content.is_some())); let document_change = changes .iter() .find(|change| change.schema_key == DOCUMENT_SCHEMA_KEY) .expect("document snapshot should exist"); assert_eq!(document_change.entity_id, DOCUMENT_ENTITY_ID); let doc = parse_document_snapshot(document_change); assert_eq!(doc.line_ids.len(), 2); } #[test] fn insertion_in_middle_emits_inserted_line_and_document_change() { let before = file_from_bytes("f1", "/doc.txt", b"a\nb\n"); let after = file_from_bytes("f1", "/doc.txt", b"a\nx\nb\n"); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); let line_inserts = changes .iter() .filter(|change| change.schema_key == LINE_SCHEMA_KEY) .filter(|change| change.snapshot_content.is_some()) .collect::>(); let line_tombstones = changes .iter() .filter(|change| change.schema_key == LINE_SCHEMA_KEY) .filter(|change| change.snapshot_content.is_none()) .collect::>(); assert_eq!(line_inserts.len(), 1); assert_eq!(line_tombstones.len(), 0); assert!(changes .iter() .any(|change| change.schema_key == DOCUMENT_SCHEMA_KEY)); } #[test] fn deletion_emits_line_tombstone_and_document_change() { let before = file_from_bytes("f1", "/doc.txt", b"a\nb\n"); let after = file_from_bytes("f1", "/doc.txt", b"a\n"); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); let line_tombstones = changes .iter() .filter(|change| change.schema_key == LINE_SCHEMA_KEY) .filter(|change| change.snapshot_content.is_none()) .collect::>(); assert_eq!(line_tombstones.len(), 1); assert!(changes .iter() .any(|change| change.schema_key == DOCUMENT_SCHEMA_KEY)); } #[test] fn unchanged_file_returns_no_changes() { let before = file_from_bytes("f1", "/doc.txt", b"unchanged\n"); let after = file_from_bytes("f1", "/doc.txt", b"unchanged\n"); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); assert!(changes.is_empty()); } #[test] fn line_reorder_emits_delete_and_insert() { let before = file_from_bytes("f1", "/doc.txt", b"a\nb\n"); let after = file_from_bytes("f1", "/doc.txt", b"b\na\n"); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); let line_inserts = changes .iter() .filter(|change| change.schema_key == LINE_SCHEMA_KEY) .filter(|change| change.snapshot_content.is_some()) .collect::>(); let line_tombstones = changes .iter() .filter(|change| change.schema_key == LINE_SCHEMA_KEY) .filter(|change| change.snapshot_content.is_none()) 
.collect::>(); assert_eq!(line_inserts.len(), 1); assert_eq!(line_tombstones.len(), 1); assert_ne!(line_inserts[0].entity_id, line_tombstones[0].entity_id); assert!(changes .iter() .any(|change| change.schema_key == DOCUMENT_SCHEMA_KEY)); } ================================================ FILE: packages/text-plugin/tests/roundtrip.rs ================================================ mod common; use common::file_from_bytes; use std::collections::BTreeMap; use text_plugin::{apply_changes, detect_changes, PluginEntityChange}; #[test] fn detect_then_apply_roundtrips_exact_bytes() { let payload = b"first line\nsecond line\r\nthird line\n"; let file = file_from_bytes("f1", "/doc.txt", payload); let changes = detect_changes(None, file).expect("detect_changes should succeed"); let reconstructed = apply_changes(file_from_bytes("f1", "/doc.txt", b""), changes) .expect("apply_changes should succeed"); assert_eq!(reconstructed, payload); } #[test] fn update_roundtrip_preserves_exact_target_bytes() { let before_payload = b"a\nb\nc\n"; let before = file_from_bytes("f1", "/doc.txt", before_payload); let after_payload = b"a\nx\nc\n"; let after = file_from_bytes("f1", "/doc.txt", after_payload); let changes = detect_changes(Some(before), after).expect("detect_changes should succeed"); let reconstructed = apply_changes(file_from_bytes("f1", "/doc.txt", before_payload), changes) .expect("apply_changes should succeed"); assert_eq!(reconstructed, after_payload); } #[test] fn projected_change_log_reconstructs_from_empty_base() { let before_payload = b"a\nb\nc\n"; let before_for_initial = file_from_bytes("f1", "/doc.txt", before_payload); let before_for_delta = file_from_bytes("f1", "/doc.txt", before_payload); let after_payload = b"a\nx\nc\n"; let after = file_from_bytes("f1", "/doc.txt", after_payload); let initial_changes = detect_changes(None, before_for_initial).expect("initial detect_changes should succeed"); let delta_changes = detect_changes(Some(before_for_delta), after).expect("delta detect_changes should succeed"); let projected_changes = collapse_to_latest_projection([initial_changes, delta_changes]); let reconstructed = apply_changes(file_from_bytes("f1", "/doc.txt", b""), projected_changes) .expect("apply_changes should succeed for projected changes"); assert_eq!(reconstructed, after_payload); } fn collapse_to_latest_projection(batches: [Vec; 2]) -> Vec { let mut latest = BTreeMap::<(String, String), PluginEntityChange>::new(); for batch in batches { for change in batch { latest.insert( (change.schema_key.clone(), change.entity_id.clone()), change, ); } } latest.into_values().collect() } ================================================ FILE: packages/text-plugin/tests/schema.rs ================================================ use text_plugin::{ document_schema_definition, document_schema_json, line_schema_definition, line_schema_json, manifest_json, DOCUMENT_SCHEMA_KEY, LINE_SCHEMA_KEY, }; #[test] fn line_schema_matches_constants() { let schema = line_schema_definition(); assert_eq!( schema .get("x-lix-key") .and_then(serde_json::Value::as_str) .expect("x-lix-key must be string"), LINE_SCHEMA_KEY ); } #[test] fn document_schema_matches_constants() { let schema = document_schema_definition(); assert_eq!( schema .get("x-lix-key") .and_then(serde_json::Value::as_str) .expect("x-lix-key must be string"), DOCUMENT_SCHEMA_KEY ); } #[test] fn schema_json_accessors_return_expected_text() { let line = line_schema_json(); let document = document_schema_json(); assert!(line.contains("\"x-lix-key\": 
\"text_line\"")); assert!(document.contains("\"x-lix-key\": \"text_document\"")); } #[test] fn manifest_json_has_expected_plugin_identity() { let manifest: serde_json::Value = serde_json::from_str(manifest_json()).expect("manifest must be valid JSON"); assert_eq!( manifest .get("key") .and_then(serde_json::Value::as_str) .expect("manifest.key must be string"), "text_plugin" ); assert_eq!( manifest .get("runtime") .and_then(serde_json::Value::as_str) .expect("manifest.runtime must be string"), "wasm-component-v1" ); } ================================================ FILE: packages/website/.gitignore ================================================ node_modules .DS_Store dist dist-ssr *.local src/routeTree.gen.ts count.txt .env .nitro .tanstack .wrangler .output .vinxi todos.json content/plugins/*.md !content/plugins/index.md *.gen.* ================================================ FILE: packages/website/.vscode/settings.json ================================================ { "files.watcherExclude": { "**/routeTree.gen.ts": true }, "search.exclude": { "**/routeTree.gen.ts": true }, "files.readonlyInclude": { "**/routeTree.gen.ts": true } } ================================================ FILE: packages/website/HTML_DIFF_LIX_DEV_SEO_FOLLOWUP.md ================================================ # html-diff.lix.dev SEO Follow-up This checklist is for the separate `html-diff.lix.dev` deployment and codebase. It is not implemented in this workspace. - Replace internal `.html` navigation links with canonical extensionless URLs. - Add canonical, Open Graph, and X/Twitter metadata to the home, guide, example, and playground/test pages. - Generate a sitemap that includes every indexable page and excludes redirects or test-only routes. - Update internal links so they point directly to the final destination URL instead of relying on redirects. ================================================ FILE: packages/website/README.md ================================================ # Lix Website Triggering a docs site rebuild. ================================================ FILE: packages/website/content/plugins/index.md ================================================ # Plugins Plugins are coming soon. We are rewriting this section as part of the website cleanup. ================================================ FILE: packages/website/package.json ================================================ { "name": "@lix-js/website", "private": true, "type": "module", "scripts": { "dev": "vite dev --port 3000", "build": "vite build", "postbuild": "node ./scripts/post-build-seo.js", "preview": "vite preview", "test": "vitest run && tsc --noEmit", "format": "prettier --write ." 
}, "dependencies": { "@cloudflare/vite-plugin": "^1.36.0", "@lix-js/plugin-json": "1.0.1", "@lix-js/sdk": "workspace:*", "@opral/markdown-wc": "0.9.0", "@tailwindcss/vite": "^4.2.4", "@tanstack/react-router": "^1.169.2", "@tanstack/react-start": "^1.167.64", "@tanstack/router-plugin": "^1.167.34", "lucide-react": "^0.544.0", "posthog-js": "^1.321.2", "react": "^19.2.0", "react-dom": "^19.2.0", "shiki": "^3.2.2", "tailwindcss": "^4.2.4" }, "devDependencies": { "@testing-library/dom": "^10.4.0", "@testing-library/react": "^16.2.0", "@types/node": "^22.10.2", "@types/react": "^19.2.0", "@types/react-dom": "^19.2.0", "@vitejs/plugin-react": "^6.0.1", "@vitest/browser": "^4.1.5", "@vitest/coverage-v8": "^4.1.5", "jsdom": "^27.0.0", "prettier": "^3.6.0", "typescript": "^5.7.2", "vite": "^8.0.10", "vite-plugin-static-copy": "^4.1.0", "vitest": "^4.1.5", "web-vitals": "^5.1.0", "wrangler": "^4.88.0" } } ================================================ FILE: packages/website/public/_redirects ================================================ /docs /docs/what-is-lix 308 /guide /docs/what-is-lix 308 /guide/* /docs/:splat 308 ================================================ FILE: packages/website/public/manifest.json ================================================ { "short_name": "Lix", "name": "Lix - Change Control System", "icons": [ { "src": "favicon.svg", "type": "image/svg+xml", "sizes": "any" } ], "start_url": ".", "display": "standalone", "theme_color": "#07B6D4", "background_color": "#ffffff" } ================================================ FILE: packages/website/public/robots.txt ================================================ # https://www.robotstxt.org/robotstxt.html User-agent: * Disallow: ================================================ FILE: packages/website/scripts/plugin-readme-sync.test.ts ================================================ import { describe, expect, test } from "vitest"; import { buildSeoFrontmatter } from "./plugin-readme-sync"; describe("buildSeoFrontmatter", () => { test("does not add a second period when the base description already ends with one", () => { const frontmatter = buildSeoFrontmatter({ key: "plugin_json", name: "JSON Plugin", description: "Tracks JSON changes.", readme: "https://example.com/README.md", }); expect(frontmatter).toContain( 'description: "Tracks JSON changes. Learn how to install it, supported file types, and how it fits into Lix workflows."', ); expect(frontmatter).not.toContain(".. Learn"); }); test("adds sentence punctuation when the base description is missing it", () => { const frontmatter = buildSeoFrontmatter({ key: "plugin_json", name: "JSON Plugin", description: "Tracks JSON changes", readme: "https://example.com/README.md", }); expect(frontmatter).toContain( 'description: "Tracks JSON changes. Learn how to install it, supported file types, and how it fits into Lix workflows."', ); }); }); ================================================ FILE: packages/website/scripts/plugin-readme-sync.ts ================================================ import { mkdir, readFile, writeFile } from "node:fs/promises"; import { existsSync } from "node:fs"; import path from "node:path"; import type { Plugin } from "vite"; type PluginRegistry = { plugins?: Array<{ key: string; name?: string; description?: string; readme?: string; }>; }; /** * Rewrites relative image links to absolute GitHub raw URLs. 
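 * Targets that already use an absolute `http(s)` URL are left untouched; relative
 * paths are resolved against the directory of the plugin's README URL.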
* * @example * rewriteRelativeImages("![Alt](./assets/img.png)", "https://raw.githubusercontent.com/opral/lix/main/packages/plugin-md/README.md") */ function rewriteRelativeImages(markdown: string, readmeUrl: string) { const base = readmeUrl.replace(/\/README\.md$/, "/"); return markdown.replace( /!\[([^\]]*)\]\((?!https?:\/\/)([^)]+)\)/g, (match, alt, url) => { void match; const normalized = url.replace(/^\.?\//, ""); return `![${alt}](${base}${normalized})`; }, ); } /** * Rewrites relative links to GitHub tree URLs. * * @example * rewriteRelativeLinks("[Example](./example)", "https://raw.githubusercontent.com/opral/lix/main/packages/plugin-md/README.md") */ function rewriteRelativeLinks(markdown: string, readmeUrl: string) { const repoBase = readmeUrl .replace("https://raw.githubusercontent.com/", "https://github.com/") .replace(/\/README\.md$/, ""); return markdown.replace( /\[([^\]]+)\]\((?!https?:\/\/)([^)]+)\)/g, (match, text, url) => { if (url.startsWith("#")) { return match; } const normalized = url.replace(/^\.?\//, ""); return `[${text}](${repoBase}/${normalized})`; }, ); } /** * Loads the plugin registry from disk. * * @example * const registry = await loadRegistry("/path/to/plugin.registry.json"); */ async function loadRegistry(registryPath: string): Promise { const raw = await readFile(registryPath, "utf8"); return JSON.parse(raw) as PluginRegistry; } function ensureTrailingSentence(value: string) { return /[.!?]$/.test(value) ? value : `${value}.`; } export function buildSeoFrontmatter( plugin: NonNullable[number], ) { const title = plugin.name ?? plugin.key; const description = plugin.description ? `${ensureTrailingSentence(plugin.description.trim())} Learn how to install it, supported file types, and how it fits into Lix workflows.` : `Learn how to install ${title}, supported file types, and how it fits into Lix workflows.`; return [ "---", `title: ${JSON.stringify(title)}`, `description: ${JSON.stringify(description)}`, "---", "", ].join("\n"); } /** * Downloads plugin readmes and writes them to the content directory. * * @example * await syncPluginReadmes(registry, "/content/plugins"); */ async function syncPluginReadmes(registry: PluginRegistry, contentDir: string) { const plugins = Array.isArray(registry.plugins) ? registry.plugins : []; await mkdir(contentDir, { recursive: true }); await Promise.all( plugins.map(async (plugin) => { if (!plugin?.key || !plugin?.readme) { throw new Error(`Missing readme entry for plugin ${plugin?.key ?? ""}`); } const destination = path.join(contentDir, `${plugin.key}.md`); let response: Response; try { response = await fetch(plugin.readme); } catch (error) { if (existsSync(destination)) { console.warn( `Failed to fetch ${plugin.readme}; using cached ${destination}`, ); return; } throw error; } if (!response.ok) { if (existsSync(destination)) { console.warn( `Failed to fetch ${plugin.readme} (${response.status} ${response.statusText}); using cached ${destination}`, ); return; } throw new Error( `Failed to fetch ${plugin.readme} (${response.status} ${response.statusText})`, ); } const markdown = rewriteRelativeLinks( rewriteRelativeImages(await response.text(), plugin.readme), plugin.readme, ); const content = `${buildSeoFrontmatter(plugin)}${markdown}`; await writeFile(destination, content); }), ); } /** * Vite plugin that syncs plugin READMEs into local content. 
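 * On `buildStart` it reads `plugin.registry.json`, fetches each plugin's README,
 * rewrites relative links and images to absolute GitHub URLs, prepends the SEO
 * frontmatter from `buildSeoFrontmatter`, and falls back to a cached copy when a
 * fetch fails.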
* * @example * pluginReadmeSync() */ export function pluginReadmeSync(): Plugin { return { name: "plugin-readme-sync", async buildStart() { const root = process.cwd(); const registryPath = path.join( root, "src/routes/plugins/plugin.registry.json", ); const contentDir = path.join(root, "content/plugins"); const registry = await loadRegistry(registryPath); await syncPluginReadmes(registry, contentDir); console.log("copied plugin readmes"); }, }; } ================================================ FILE: packages/website/scripts/post-build-seo.js ================================================ import fs from "node:fs"; import path from "node:path"; const SITE_URL = "https://lix.dev"; const SITEMAP_PATH = path.resolve("dist/client/sitemap.xml"); const ALIAS_URLS = new Set([`${SITE_URL}/docs`, `${SITE_URL}/guide`]); function isAliasUrl(url) { return ALIAS_URLS.has(url) || url.startsWith(`${SITE_URL}/guide/`); } if (fs.existsSync(SITEMAP_PATH)) { const sitemap = fs.readFileSync(SITEMAP_PATH, "utf8"); const filtered = sitemap.replace( /\s*([^<]+)<\/loc>[\s\S]*?<\/url>/g, (match, loc) => (isAliasUrl(loc) ? "" : match), ); fs.writeFileSync(SITEMAP_PATH, filtered.trimEnd().concat("\n")); } ================================================ FILE: packages/website/src/blog/blogMetadata.ts ================================================ import { getMarkdownDescription, getMarkdownTitle } from "../lib/seo"; type BlogMetadataInput = { rawMarkdown: string; frontmatter?: Record; }; export function getBlogTitle({ rawMarkdown, frontmatter }: BlogMetadataInput) { return getMarkdownTitle({ rawMarkdown, frontmatter }); } export function getBlogDescription({ rawMarkdown, frontmatter, }: BlogMetadataInput) { return getMarkdownDescription({ rawMarkdown, frontmatter }); } ================================================ FILE: packages/website/src/blog/og-image.ts ================================================ import { buildCanonicalUrl } from "../lib/seo"; export function resolveOgImageUrl(value: string, folderName: string): string { if (isAbsoluteUrl(value)) return value; const base = `${buildCanonicalUrl(`/blog/${folderName}`)}/`; return new URL(value, base).toString(); } export function resolveBlogAssetPath( value: string, folderName: string, ): string { if (isAbsoluteUrl(value)) return value; if (value.startsWith("/")) return value; const normalized = value.replace(/^\.\//, ""); return `/blog/${folderName}/${normalized}`; } function isAbsoluteUrl(value: string): boolean { return /^[a-z][a-z0-9+.-]*:/.test(value); } ================================================ FILE: packages/website/src/components/code-snippet.tsx ================================================ import { useEffect, useState } from "react"; import { bundledLanguages, createHighlighter, type Highlighter } from "shiki"; // Global highlighter instance. 
let highlighterPromise: Promise | null = null; async function getHighlighter(): Promise { if (!highlighterPromise) { highlighterPromise = createHighlighter({ themes: ["github-light", "github-dark"], langs: Object.keys(bundledLanguages), }); } return highlighterPromise; } interface CodeBlockProps { code: string; language?: string; showLineNumbers?: boolean; } function CodeBlock({ code, language = "typescript", showLineNumbers = false, }: CodeBlockProps) { const [isCopied, setIsCopied] = useState(false); const [highlightedHtml, setHighlightedHtml] = useState(""); useEffect(() => { let cancelled = false; const highlight = async () => { try { const highlighter = await getHighlighter(); if (cancelled) return; const html = highlighter.codeToHtml(code, { lang: language, theme: "github-light", transformers: showLineNumbers ? [ { line(node: any, line: number) { if (node.properties) { node.properties["data-line"] = String(line); } return node; }, }, ] : [], }); setHighlightedHtml(html); } catch (error) { console.error("Failed to highlight code:", error); setHighlightedHtml(`
${escapeHtml(code)}
`); } }; highlight(); return () => { cancelled = true; }; }, [code, language, showLineNumbers]); const displayHtml = highlightedHtml || `
${escapeHtml(code)}
`; const handleCopy = async () => { try { await navigator.clipboard.writeText(code); setIsCopied(true); setTimeout(() => setIsCopied(false), 2000); } catch (err) { console.error("Failed to copy:", err); } }; return (
); } function escapeHtml(code: string): string { return code .replace(/&/g, "&") .replace(//g, ">") .replace(/"/g, """) .replace(/'/g, "'"); } function formatConsoleOutput( outputs: Array<{ level: string; args: Array<{ type: string; content: string }>; timestamp: string; section?: string; }>, ): string { return outputs .map((entry) => { const prefix = entry.level !== "log" ? `// ${entry.level.toUpperCase()}: ` : ""; const content = entry.args.map((arg) => arg.content).join(" "); return prefix + content; }) .join("\n\n"); } interface CodeSnippetProps { module: any; srcCode: string; sections?: string[]; } function dedentCode(code: string): string { const lines = code.split("\n"); let startIndex = 0; let endIndex = lines.length - 1; while (startIndex < lines.length && lines[startIndex].trim() === "") { startIndex++; } while (endIndex > startIndex && lines[endIndex].trim() === "") { endIndex--; } const trimmedLines = lines.slice(startIndex, endIndex + 1); const minIndent = trimmedLines .filter((line) => line.trim().length > 0) .reduce((min, line) => { const match = line.match(/^(\s*)/); const indent = match ? match[1].length : 0; return Math.min(min, indent); }, Infinity); if (minIndent === Infinity || minIndent === 0) return trimmedLines.join("\n"); return trimmedLines.map((line) => line.slice(minIndent)).join("\n"); } function parseSections(code: string): { sections: Record; imports: string; fullCode: string; sectionRanges: Record; } { const sections: Record = {}; const sectionRanges: Record = {}; const lines = code.split("\n"); const importLines: string[] = []; let currentSection: string | null = null; let sectionContent: string[] = []; let sectionStartLine = 0; let inSection = false; for (let i = 0; i < lines.length; i++) { const line = lines[i]; const sectionStartMatch = line.match(/SECTION\s+START\s+['"]([^'"]+)['"]/); if (sectionStartMatch) { currentSection = sectionStartMatch[1]; sectionContent = []; sectionStartLine = i + 1; inSection = true; continue; } const sectionEndMatch = line.match(/SECTION\s+END\s+['"]([^'"]+)['"]/); if (sectionEndMatch) { if (currentSection && sectionContent.length > 0) { sections[currentSection] = dedentCode(sectionContent.join("\n")); sectionRanges[currentSection] = { start: sectionStartLine, end: i - 1 }; } currentSection = null; inSection = false; continue; } if (!inSection && line.match(/^import\s+/)) { importLines.push(line); } if ( inSection && currentSection && !line.includes("export default async function") && !line.match(/^}$/) ) { sectionContent.push(line); } } const fullCode = lines .filter( (line) => !line.match(/SECTION\s+(START|END)\s+['"]([^'"]+)['"]/) && !line.includes("SECTION"), ) .filter((line) => !line.includes("export default async function")) .filter((line) => !line.match(/^}$/)) .join("\n") .trim(); return { sections, imports: importLines.join("\n"), fullCode, sectionRanges }; } function transformDynamicImports(code: string): string { let transformedCode = code; transformedCode = transformedCode.replace( /const\s*{\s*(\w+)\s*:\s*(\w+)\s*}\s*=\s*await\s+import\s*\(\s*["']([^"']+)["']\s*\)\s*;/g, 'import { $1 as $2 } from "$3";', ); transformedCode = transformedCode.replace( /const\s*{\s*([^}]+)\s*}\s*=\s*await\s+import\s*\(\s*["']([^"']+)["']\s*\)\s*;/g, 'import { $1 } from "$2";', ); transformedCode = transformedCode.replace( /const\s+(\w+)\s*=\s*await\s+import\s*\(\s*["']([^"']+)["']\s*\)\s*;/g, 'import $1 from "$2";', ); return transformedCode; } function combineSections( allSections: Record, selectedSections?: string[], ): string { 
if (!selectedSections || selectedSections.length === 0) { return Object.values(allSections).join("\n\n"); } return selectedSections .map((sectionName) => allSections[sectionName]) .filter(Boolean) .join("\n\n") .replace(/console1\.log/g, "console.log"); } function getPrerequisiteCode( allSections: Record, selectedSections: string[], imports: string, ): string { const sectionNames = Object.keys(allSections); const firstSelectedIndex = Math.min( ...selectedSections.map((s) => sectionNames.indexOf(s)), ); const prerequisiteSections = sectionNames .slice(0, firstSelectedIndex) .map((name) => allSections[name]) .filter(Boolean) .join("\n\n"); return [imports, prerequisiteSections].filter(Boolean).join("\n\n"); } /** * Normalizes raw source input from bundlers into usable code text. * * @example * decodeSource('import { foo } from "./bar";'); */ function decodeSource(source: string): string { const trimmed = source.trim().replace(/;$/, ""); if (isJsonStringLiteral(trimmed)) { return JSON.parse(trimmed) as string; } return trimmed; } /** * Checks if a string is a valid JSON string literal. * * @example * isJsonStringLiteral('"hello"'); */ function isJsonStringLiteral(value: string): boolean { if (value.length < 2 || value[0] !== '"' || value[value.length - 1] !== '"') { return false; } return /^"(?:\\["\\/bfnrt]|\\u[0-9a-fA-F]{4}|[^"\\])*"$/.test(value); } /** * Interactive code example for docs, showing selected SECTIONs and output. * * The `module` must default-export an async function that accepts a mock console. * * @example * */ export default function CodeSnippet({ module, srcCode, sections, }: CodeSnippetProps) { const [setupExpanded, setSetupExpanded] = useState(false); const [outputExpanded, setOutputExpanded] = useState(false); const [hasExecuted, setHasExecuted] = useState(false); const [isExecuting, setIsExecuting] = useState(false); const [consoleOutput, setConsoleOutput] = useState< Array<{ level: string; args: Array<{ type: string; content: string }>; timestamp: string; section?: string; }> >([]); const decodedSrcCode = decodeSource(srcCode); const { sections: allSections, imports } = parseSections( decodedSrcCode.replace(/console1\.log/g, "console.log"), ); const currentCode = transformDynamicImports( combineSections(allSections, sections), ); const prerequisiteCode = sections ? 
transformDynamicImports( getPrerequisiteCode(allSections, sections, imports), ) : ""; const executeCode = async () => { if (isExecuting) return; setIsExecuting(true); setConsoleOutput([]); setHasExecuted(true); try { const outputs: Array<{ level: string; args: Array<{ type: string; content: string }>; timestamp: string; section?: string; }> = []; const logOutput = ( level: string, section: string | undefined, ...args: any[] ) => { const formattedArgs = args.map((arg: any) => { if (typeof arg === "object" && arg !== null) { return { type: "object", content: JSON.stringify(arg, null, 2), }; } return { type: "primitive", content: String(arg), }; }); outputs.push({ level, args: formattedArgs, timestamp: new Date().toLocaleTimeString(), section, }); }; try { if (module.default && typeof module.default === "function") { let currentSection: string | undefined = undefined; const mockConsole = { log: (...args: any[]) => { const firstArg = String(args[0]); const sectionStartMatch = firstArg.match( /SECTION\s+START\s+['"]([^'"]+)['"]/, ); if (sectionStartMatch) { currentSection = sectionStartMatch[1]; return; } const sectionEndMatch = firstArg.match( /SECTION\s+END\s+['"]([^'"]+)['"]/, ); if (sectionEndMatch) { return; } logOutput("log", currentSection, ...args); }, warn: (...args: any[]) => { logOutput("warn", currentSection, ...args); }, error: (...args: any[]) => { logOutput("error", currentSection, ...args); }, info: (...args: any[]) => { logOutput("info", currentSection, ...args); }, }; await module.default(mockConsole); } else { logOutput( "error", undefined, "Module doesn't export default function", ); } } catch (error) { console.error("Error executing code:", error); logOutput("error", undefined, "Error executing code:", error); } setConsoleOutput(outputs); } finally { setIsExecuting(false); } }; return (
{prerequisiteCode && (
)}
{hasExecuted && ( { const filteredOutput = sections ? consoleOutput.filter( (output) => !output.section || sections.includes(output.section), ) : consoleOutput; if (filteredOutput.length === 0) { return "// No output"; } return formatConsoleOutput(filteredOutput); })()} language="javascript" showLineNumbers={false} /> )}
); } ================================================ FILE: packages/website/src/components/doc-code-snippet-element.tsx ================================================ import { createRoot, type Root } from "react-dom/client"; import CodeSnippet from "./code-snippet"; const exampleModules = import.meta.glob("../docs-examples/*.ts"); const exampleSources = import.meta.glob("../docs-examples/*.ts", { eager: true, import: "default", query: "?raw", }); function fileBasename(path: string): string { const last = path.split("/").pop() ?? path; return last.replace(/\.ts$/, ""); } const modulesByName = new Map(); const sourcesByName = new Map(); for (const [path, loader] of Object.entries(exampleModules)) { modulesByName.set(fileBasename(path), loader); } for (const [path, src] of Object.entries(exampleSources)) { sourcesByName.set(fileBasename(path), src); } function parseSectionsAttribute(value: string | null): string[] | undefined { if (!value) return undefined; const trimmed = value.trim(); if (!trimmed) return undefined; if (trimmed.startsWith("[")) { return JSON.parse(trimmed) as string[]; } return trimmed .split(",") .map((s) => s.trim()) .filter(Boolean); } class DocCodeSnippetElement extends HTMLElement { private reactRoot: Root | null = null; private mountEl: HTMLDivElement | null = null; private renderSeq = 0; static get observedAttributes() { return ["example", "sections"]; } connectedCallback() { if (!this.mountEl) { this.mountEl = document.createElement("div"); this.appendChild(this.mountEl); } this.renderReact(); } attributeChangedCallback() { this.renderReact(); } private async renderReact() { if (!this.mountEl) return; const exampleName = this.getAttribute("example")?.trim(); if (!exampleName) { throw new Error(" requires an example attribute."); } const loader = modulesByName.get(exampleName) as | (() => Promise) | undefined; const src = sourcesByName.get(exampleName); if (!loader || !src) { this.replaceChildren(); return; } const seq = ++this.renderSeq; const mod = await loader(); if (seq !== this.renderSeq) return; const sections = parseSectionsAttribute(this.getAttribute("sections")); if (!this.reactRoot) { this.reactRoot = createRoot(this.mountEl); } this.reactRoot.render( , ); } } if (typeof window !== "undefined" && !customElements.get("doc-code-snippet")) { customElements.define("doc-code-snippet", DocCodeSnippetElement); } ================================================ FILE: packages/website/src/components/docs-layout.tsx ================================================ import { Link } from "@tanstack/react-router"; import { useEffect, useState } from "react"; import { Footer } from "./footer"; import { Header, MenuIcon } from "./header"; export type SidebarSection = { label: string; items: Array<{ label: string; href: string; relativePath: string; }>; }; export type PageTocItem = { id: string; label: string; level: number; }; /** * VitePress-style documentation shell with header, left sidebar, and main content. * * The sidebar is driven from the docs table of contents and highlights the * active entry based on the current doc relative path. 
* * @example * * * */ export function DocsLayout({ sidebarSections, activeRelativePath, pageToc, children, }: { sidebarSections: SidebarSection[]; activeRelativePath?: string; pageToc?: PageTocItem[]; children: React.ReactNode; }) { const [isMobileMenuOpen, setIsMobileMenuOpen] = useState(false); const hasPageToc = Boolean(pageToc && pageToc.length > 0); const [activeTocId, setActiveTocId] = useState(null); useEffect(() => { if (!pageToc || pageToc.length === 0) return; const headings = pageToc .map((item) => document.getElementById(item.id)) .filter((node): node is HTMLElement => Boolean(node)); if (headings.length === 0) return; const updateActiveHeading = () => { const activationOffset = 96; let activeHeading = headings[0]; for (const heading of headings) { if (heading.getBoundingClientRect().top <= activationOffset) { activeHeading = heading; } else { break; } } setActiveTocId((current) => current === activeHeading.id ? current : activeHeading.id, ); }; updateActiveHeading(); window.addEventListener("scroll", updateActiveHeading, { passive: true }); window.addEventListener("resize", updateActiveHeading); return () => { window.removeEventListener("scroll", updateActiveHeading); window.removeEventListener("resize", updateActiveHeading); }; }, [pageToc]); const SidebarContent = () => ( ); return (
{/* Mobile menu bar - below header, above content */}
{/* Mobile sidebar overlay */} {isMobileMenuOpen && ( <>
setIsMobileMenuOpen(false)} aria-hidden="true" /> )}
{children}
{hasPageToc && ( )}
); } ================================================ FILE: packages/website/src/components/docs-prev-next.tsx ================================================ import { PrevNextNav } from "./prev-next-nav"; type DocRoute = { slug: string; title?: string; }; const navTitleOverrides: Record = { "next-js": "Next.js", "api-reference": "API Reference", }; function formatNavTitle(input: string) { const normalized = input.toLowerCase(); if (normalized in navTitleOverrides) { return navTitleOverrides[normalized]; } return normalized .split("-") .filter(Boolean) .map((word) => word[0]?.toUpperCase() + word.slice(1)) .join(" "); } export function DocsPrevNext({ currentSlug, routes, }: { currentSlug: string; routes: DocRoute[]; }) { const currentIndex = routes.findIndex((item) => item.slug === currentSlug); if (currentIndex === -1 || routes.length <= 1) return null; const prevRoute = currentIndex > 0 ? routes[currentIndex - 1] : null; const nextRoute = currentIndex < routes.length - 1 ? routes[currentIndex + 1] : null; const prev = prevRoute ? { slug: prevRoute.slug, title: prevRoute.title ?? formatNavTitle(prevRoute.slug), } : null; const next = nextRoute ? { slug: nextRoute.slug, title: nextRoute.title ?? formatNavTitle(nextRoute.slug), } : null; return ( ); } ================================================ FILE: packages/website/src/components/footer.tsx ================================================ import { getGithubStars } from "../github-stars-cache"; const footerLinks = [ { href: "/docs", label: "Docs", emoji: "📘" }, { href: "/blog", label: "Blog", emoji: "📝" }, { href: "/rfc", label: "RFCs", emoji: "📄" }, ]; export function Footer() { const githubStars = getGithubStars("opral/lix"); const formatStars = (count: number) => { if (count >= 1000) { return `${(count / 1000).toFixed(1).replace(/\.0$/, "")}k`; } return count.toString(); }; return ( ); } ================================================ FILE: packages/website/src/components/header.tsx ================================================ import { Link, useRouterState } from "@tanstack/react-router"; import { getGithubStars } from "../github-stars-cache"; /** * Lix logo used across the site. * * @example * */ export const LixLogo = ({ className = "" }) => ( ); /** * GitHub mark icon used in the site header. * * @example * */ export const GitHubIcon = ({ className = "" }) => ( ); /** * Discord icon used in the site header. * * @example * */ export const DiscordIcon = ({ className = "" }) => ( ); /** * X (formerly Twitter) icon used in the site header. * * @example * */ export const XIcon = ({ className = "" }) => ( ); /** * Hamburger menu icon for mobile navigation. * * @example * */ export const MenuIcon = ({ className = "" }) => ( ); const navLinks = [ { href: "/docs/what-is-lix", label: "Docs", activePrefix: "/docs" }, { href: "/plugins", label: "Plugins", activePrefix: "/plugins" }, { href: "/blog", label: "Blog", activePrefix: "/blog" }, ]; const socialLinks = [ { href: "https://discord.gg/gdMPPWy57R", label: "Discord", Icon: DiscordIcon, sizeClass: "h-5 w-5", }, { href: "https://x.com/lixCCS", label: "X", Icon: XIcon, sizeClass: "h-4 w-4", }, ]; /** * Site header with logo, navigation, and social links. * * @example *
*/ export function Header() { const pathname = useRouterState({ select: (state) => state.location.pathname, }); const githubStars = getGithubStars("opral/lix"); const formatStars = (count: number) => { if (count >= 1000) { return `${(count / 1000).toFixed(1).replace(/\.0$/, "")}k`; } return count.toString(); }; const isActive = (href: string, activePrefix?: string) => { const candidate = activePrefix ?? href; const normalized = candidate === "/" ? "/" : candidate.replace(/\/$/, ""); if (normalized === "/") return pathname === "/"; return pathname === normalized || pathname.startsWith(`${normalized}/`); }; return (
lix
); } ================================================ FILE: packages/website/src/components/landing-page.tsx ================================================ import { useRouterState } from "@tanstack/react-router"; import { getGithubStars } from "../github-stars-cache"; import { Footer } from "./footer"; /** * Lix logo used across the landing page. * * @example * */ const LixLogo = ({ className = "" }) => ( ); /** * GitHub mark icon used in the site header. * * @example * */ const GitHubIcon = ({ className = "" }) => ( ); /** * Discord icon used in the site header. * * @example * */ const DiscordIcon = ({ className = "" }) => ( ); /** * X (formerly Twitter) icon used in the site header. * * @example * */ const XIcon = ({ className = "" }) => ( ); /** * JavaScript icon for code tabs. */ const JsIcon = ({ className = "" }) => ( JS ); /** * Python icon for code tabs. */ const PythonIcon = ({ className = "" }) => ( ); /** * Rust icon for code tabs. */ const RustIcon = ({ className = "" }) => ( ); /** * Go icon for code tabs. */ const GoIcon = ({ className = "" }) => ( ); /** * Landing page for the Lix documentation site. * * @example * */ function LandingPage({ readmeHtml }: { readmeHtml?: string }) { const docsPath = "/docs/what-is-lix"; const pathname = useRouterState({ select: (state) => state.location.pathname, }); const githubStars = getGithubStars("opral/lix"); const formatStars = (count: number) => { if (count >= 1000) { return `${(count / 1000).toFixed(1).replace(/\.0$/, "")}k`; } return count.toString(); }; const navLinks = [ { href: docsPath, label: "Docs", activePrefix: "/docs" }, { href: "/plugins", label: "Plugins", activePrefix: "/plugins" }, { href: "/blog", label: "Blog", activePrefix: "/blog" }, ]; const isActive = (href: string, activePrefix?: string) => { const candidate = activePrefix ?? href; const normalized = candidate === "/" ? "/" : candidate.replace(/\/$/, ""); if (normalized === "/") return pathname === "/"; return pathname === normalized || pathname.startsWith(`${normalized}/`); }; const socialLinks = [ { href: "https://discord.gg/gdMPPWy57R", label: "Discord", Icon: DiscordIcon, sizeClass: "h-5 w-5", }, { href: "https://x.com/lixCCS", label: "X", Icon: XIcon, sizeClass: "h-4 w-4", }, ]; return (
lix
{/* Main content */}
{/* Hero Section - Simplified */}
{/* Alpha chip */} Lix is in alpha · Follow progress to v1.0

Embeddable version control system for AI agents

Lix is a version control system that can be imported as a library. Use it, for example, to enable human-in-the-loop workflows for AI agents, such as diffs and reviews.

{/* Trust signals */}
90k+
Weekly downloads
MIT
Open Source
{/* Hero code snippet with language tabs */}
import{" "} {"{ openLix }"}{" "} from{" "} "@lix-js/sdk" ;
const{" "} lix{" "} = await{" "} openLix {"()"}
{/* Value Props - Lightweight */}
{/* Library/dependency illustration */}
dependencies
http
2.1
db
3.0
lix
1.0

Fits into your tech stack

Import Lix and get branching, diff, and rollback without changing your architecture.

{/* Diff illustration - semantic/field-level */}
config.json
title
Draft Final
price
10 12

Tracks semantic changes

Lix stores semantic changes via plugins. Diffs, blame, and history are queryable via SQL.

{/* Trace illustration */}
12:03 edit config.json
12:04 update data.xlsx
12:05
approved
12:06 edit report.pdf

Human-in-the-loop for agents

Agents propose changes in isolated versions. Humans review, approve, and merge.

{/* README Content */} {readmeHtml && (
{/* GitHub README banner */}
README.md from opral/lix
View on GitHub
)}
); } export default LandingPage; ================================================ FILE: packages/website/src/components/markdown-page.interactive.js ================================================ import "./doc-code-snippet-element"; const COPY_BUTTON_ATTR = "data-mwc-copy-button"; function ensureCopyButtons(root = document) { const blocks = root.querySelectorAll("pre[data-mwc-codeblock]"); for (const pre of blocks) { if (pre.querySelector(`[${COPY_BUTTON_ATTR}]`)) continue; const button = document.createElement("button"); button.type = "button"; button.setAttribute(COPY_BUTTON_ATTR, ""); button.className = "mwc-copy-button"; button.textContent = "Copy"; pre.appendChild(button); } } function handleCopyClick(event) { const target = event.target; if (!(target instanceof HTMLElement)) return; const button = target.closest(`[${COPY_BUTTON_ATTR}]`); if (!button) return; const pre = button.closest("pre[data-mwc-codeblock]"); const code = pre?.querySelector("code")?.textContent ?? ""; navigator.clipboard.writeText(code); const previous = button.textContent; button.textContent = "Copied"; window.setTimeout(() => { button.textContent = previous || "Copy"; }, 1500); } function initCopyButtons() { if (window.__lixDocsCopyButtonsInitialized) return; window.__lixDocsCopyButtonsInitialized = true; ensureCopyButtons(); document.addEventListener("click", handleCopyClick); const observer = new MutationObserver(() => ensureCopyButtons()); observer.observe(document.body, { childList: true, subtree: true }); } if (typeof window !== "undefined") { initCopyButtons(); } ================================================ FILE: packages/website/src/components/markdown-page.style.css ================================================ /** * VitePress-style markdown content styles * Based on VitePress's vp-doc styling */ /* CSS Custom Properties matching VitePress */ :root { --vp-c-brand-1: #3451b2; --vp-c-brand-2: #3a5ccc; --vp-c-brand-soft: rgba(100, 108, 255, 0.14); --vp-c-text-1: rgba(60, 60, 67); --vp-c-text-2: rgba(60, 60, 67, 0.78); --vp-c-text-3: rgba(60, 60, 67, 0.56); --vp-c-divider: rgba(60, 60, 67, 0.12); --vp-c-bg: #ffffff; --vp-c-bg-soft: #f6f6f7; --vp-c-bg-alt: #f6f6f7; --vp-c-border: rgba(60, 60, 67, 0.12); --vp-c-gutter: rgba(60, 60, 67, 0.05); /* Tip colors - Green */ --vp-c-tip-1: #059669; --vp-c-tip-2: rgba(5, 150, 105, 0.16); --vp-c-tip-3: rgba(5, 150, 105, 0.08); /* Warning colors - Orange */ --vp-c-warning-1: #ea580c; --vp-c-warning-2: rgba(234, 88, 12, 0.16); --vp-c-warning-3: rgba(234, 88, 12, 0.08); /* Danger colors */ --vp-c-danger-1: #b8272c; --vp-c-danger-2: rgba(244, 63, 94, 0.16); --vp-c-danger-3: rgba(244, 63, 94, 0.08); /* Note/Info colors */ --vp-c-note-1: #3451b2; --vp-c-note-2: rgba(100, 108, 255, 0.16); --vp-c-note-3: rgba(100, 108, 255, 0.08); /* Important colors */ --vp-c-important-1: #8250df; --vp-c-important-2: rgba(130, 80, 223, 0.16); --vp-c-important-3: rgba(130, 80, 223, 0.08); /* Caution colors */ --vp-c-caution-1: #b8272c; --vp-c-caution-2: rgba(244, 63, 94, 0.16); --vp-c-caution-3: rgba(244, 63, 94, 0.08); --vp-code-font-size: 0.9375em; --vp-code-line-height: 1.6; } /* Base markdown body styles */ .markdown-wc-body { color: var(--vp-c-text-1); line-height: 1.7; font-size: 16px; } /* Headings */ .markdown-wc-body h1, .markdown-wc-body h2, .markdown-wc-body h3, .markdown-wc-body h4, .markdown-wc-body h5, .markdown-wc-body h6 { position: relative; font-weight: 600; outline: none; color: var(--vp-c-text-1); } .markdown-wc-body h1 { letter-spacing: -0.02em; line-height: 40px; 
font-size: 28px; margin: 0 0 16px 0; } .markdown-wc-body h2 { margin: 48px 0 16px; border-top: 1px solid var(--vp-c-divider); padding-top: 24px; letter-spacing: -0.02em; line-height: 32px; font-size: 24px; } .markdown-wc-body h3 { margin: 32px 0 0; letter-spacing: -0.01em; line-height: 28px; font-size: 20px; } .markdown-wc-body h4 { margin: 24px 0 0; letter-spacing: -0.01em; line-height: 24px; font-size: 18px; } /* Anchor scroll offset so hash links don't stick to viewport top */ .markdown-wc-body h2[id], .markdown-wc-body h3[id], .markdown-wc-body h4[id], .markdown-wc-body h5[id], .markdown-wc-body h6[id] { scroll-margin-top: 64px; } /* Heading anchor hash on hover (VitePress-style) */ .markdown-wc-body h2:has(> a), .markdown-wc-body h3:has(> a), .markdown-wc-body h4:has(> a), .markdown-wc-body h5:has(> a), .markdown-wc-body h6:has(> a) { position: relative; } .markdown-wc-body h2:has(> a)::before, .markdown-wc-body h3:has(> a)::before, .markdown-wc-body h4:has(> a)::before, .markdown-wc-body h5:has(> a)::before, .markdown-wc-body h6:has(> a)::before { content: "#"; position: absolute; left: 0; top: 0; transform: translateX(-1.1em); color: var(--vp-c-brand-1); opacity: 0; font-weight: 600; line-height: inherit; transition: opacity 0.25s; } /* h2 has extra padding-top for the divider; align hash with text */ .markdown-wc-body h2:has(> a)::before { top: 24px; } /* First h2 after h1 has no divider/padding */ .markdown-wc-body h1 + h2:has(> a)::before { top: 0; } .markdown-wc-body h2:has(> a):hover::before, .markdown-wc-body h3:has(> a):hover::before, .markdown-wc-body h4:has(> a):hover::before, .markdown-wc-body h5:has(> a):hover::before, .markdown-wc-body h6:has(> a):hover::before, .markdown-wc-body h2:has(> a):focus-within::before, .markdown-wc-body h3:has(> a):focus-within::before, .markdown-wc-body h4:has(> a):focus-within::before, .markdown-wc-body h5:has(> a):focus-within::before, .markdown-wc-body h6:has(> a):focus-within::before { opacity: 1; } /* First h2 should not have border-top */ .markdown-wc-body h1 + h2 { margin-top: 24px; border-top: none; padding-top: 0; } /* Paragraphs */ .markdown-wc-body p { margin: 16px 0; line-height: 28px; } /* First paragraph after heading */ .markdown-wc-body h1 + p, .markdown-wc-body h2 + p, .markdown-wc-body h3 + p, .markdown-wc-body h4 + p { margin-top: 8px; } /* Links */ .markdown-wc-body a { font-weight: 500; color: var(--vp-c-brand-1); text-decoration: underline; text-underline-offset: 2px; transition: color 0.25s, opacity 0.25s; } .markdown-wc-body a:hover { color: var(--vp-c-brand-2); } /* Links inside headings should inherit heading styles */ .markdown-wc-body h1 a, .markdown-wc-body h2 a, .markdown-wc-body h3 a, .markdown-wc-body h4 a, .markdown-wc-body h5 a, .markdown-wc-body h6 a { color: inherit; font-weight: inherit; text-decoration: none; } .markdown-wc-body h1 a:hover, .markdown-wc-body h2 a:hover, .markdown-wc-body h3 a:hover, .markdown-wc-body h4 a:hover, .markdown-wc-body h5 a:hover, .markdown-wc-body h6 a:hover { color: inherit; } /* Lists */ .markdown-wc-body ul, .markdown-wc-body ol { padding-left: 1.25rem; margin: 16px 0; } .markdown-wc-body ul { list-style: disc; } .markdown-wc-body ol { list-style: decimal; } .markdown-wc-body li + li { margin-top: 8px; } .markdown-wc-body li > ol, .markdown-wc-body li > ul { margin: 8px 0 0; } .markdown-wc-body ul.contains-task-list { padding-left: 0; list-style: none; } .markdown-wc-body li.task-list-item { display: flex; align-items: flex-start; gap: 8px; } .markdown-wc-body 
li.task-list-item + li.task-list-item { margin-top: 8px; } .markdown-wc-body li.task-list-item > input[type="checkbox"] { appearance: none; display: inline-grid; place-content: center; flex: 0 0 auto; width: 14px; height: 14px; margin: 5px 0 0; border: 1px solid var(--vp-c-divider); border-radius: 3px; background: var(--vp-c-bg); opacity: 1; } .markdown-wc-body li.task-list-item > input[type="checkbox"]:checked { border-color: var(--vp-c-brand-1); background: var(--vp-c-brand-1); } .markdown-wc-body li.task-list-item > input[type="checkbox"]:checked::before { content: ""; width: 7px; height: 4px; border: solid white; border-width: 0 0 2px 2px; transform: translateY(-1px) rotate(-45deg); } /* Inline code */ .markdown-wc-body :not(pre) > code { font-size: var(--vp-code-font-size); color: var(--vp-c-text-1); background-color: var(--vp-c-bg-soft); border-radius: 4px; padding: 2px 6px; font-weight: 500; transition: color 0.25s, background-color 0.5s; } /* Ensure code inside pre doesn't have nested background */ .markdown-wc-body pre code { background: transparent !important; } /* Syntax highlighting color overrides for better contrast */ .markdown-wc-body pre .hljs { color: var(--vp-c-text-1); } /* Keywords (export, default, const, etc.) */ .markdown-wc-body pre .hljs-keyword, .markdown-wc-body pre .hljs-selector-tag, .markdown-wc-body pre .hljs-built_in { color: #d73a49; } /* Strings */ .markdown-wc-body pre .hljs-string, .markdown-wc-body pre .hljs-attr { color: #032f62; } /* Object property names / attributes */ .markdown-wc-body pre .hljs-attr { color: #005cc5; } /* Comments */ .markdown-wc-body pre .hljs-comment, .markdown-wc-body pre .hljs-quote { color: #6a737d; font-style: italic; } /* Numbers */ .markdown-wc-body pre .hljs-number { color: #005cc5; } /* Functions */ .markdown-wc-body pre .hljs-title, .markdown-wc-body pre .hljs-title.function_ { color: #6f42c1; } /* Types */ .markdown-wc-body pre .hljs-type, .markdown-wc-body pre .hljs-class { color: #d73a49; } /* Diff styling for code blocks */ .markdown-wc-body pre code .hljs-deletion { background-color: #ffebee; color: #b71c1c; padding: 0 2px; } .markdown-wc-body pre code .hljs-addition { background-color: #e8f5e9; color: #1b5e20; padding: 0 2px; } .markdown-wc-body pre code .hljs-deletion * { color: #b71c1c !important; } .markdown-wc-body pre code .hljs-addition * { color: #1b5e20 !important; } .markdown-wc-body pre code .hljs-meta { color: inherit; font-weight: 600; } .markdown-wc-body a > code { color: var(--vp-c-brand-1); } /* Code blocks */ .markdown-wc-body pre { margin: 16px 0; padding: 8px 12px; background-color: var(--vp-c-bg-soft); border-radius: 8px; overflow-x: auto; position: relative; } /* Copy button for code blocks */ .markdown-wc-body pre { position: relative; } .markdown-wc-body pre > button.mwc-copy-button { position: absolute; top: 8px; right: 8px; padding: 4px 8px; font-size: 12px; font-weight: 500; border-radius: 6px; background: var(--vp-c-bg); color: var(--vp-c-text-2); border: 1px solid var(--vp-c-divider); opacity: 0; cursor: pointer; transition: opacity 0.2s, color 0.2s, border-color 0.2s, background-color 0.2s; } .markdown-wc-body pre:hover > button.mwc-copy-button { opacity: 1; } .markdown-wc-body pre > button.mwc-copy-button:hover { color: var(--vp-c-text-1); border-color: var(--vp-c-border); background: var(--vp-c-bg-soft); } .markdown-wc-body pre code { display: block; padding: 0; width: fit-content; min-width: 100%; line-height: var(--vp-code-line-height); font-size: var(--vp-code-font-size); color: 
var(--vp-c-text-1); background: transparent; border-radius: 0; font-weight: 400; transition: color 0.25s; } /* Blockquotes */ .markdown-wc-body blockquote { margin: 16px 0; border-left: 2px solid var(--vp-c-divider); padding-left: 16px; color: var(--vp-c-text-2); transition: border-color 0.5s, color 0.5s; } .markdown-wc-body blockquote > p { margin: 0; font-size: 16px; line-height: 28px; } .markdown-wc-body blockquote[data-mwc-alert] > p { font-size: 0.875rem; line-height: 1.5rem; } /* GitHub-style alerts/callouts */ .markdown-wc-body blockquote[data-mwc-alert] { border-left: none; border-radius: 8px; padding: 16px; margin: 16px 0; color: var(--vp-c-text-1); font-size: 0.875rem; line-height: 1.5rem; } .markdown-wc-body blockquote[data-mwc-alert] [data-mwc-alert-marker] { display: none; } .markdown-wc-body blockquote[data-mwc-alert]::before { display: block; font-weight: 600; margin-bottom: 8px; } .markdown-wc-body blockquote[data-mwc-alert="note"] { background-color: var(--vp-c-note-3); border: 1px solid var(--vp-c-note-2); } .markdown-wc-body blockquote[data-mwc-alert="note"]::before { content: "Note"; color: var(--vp-c-note-1); } .markdown-wc-body blockquote[data-mwc-alert="tip"] { background-color: var(--vp-c-tip-3); border: 1px solid var(--vp-c-tip-2); } .markdown-wc-body blockquote[data-mwc-alert="tip"]::before { content: "Tip"; color: var(--vp-c-tip-1); } .markdown-wc-body blockquote[data-mwc-alert="important"] { background-color: var(--vp-c-important-3); border: 1px solid var(--vp-c-important-2); } .markdown-wc-body blockquote[data-mwc-alert="important"]::before { content: "Important"; color: var(--vp-c-important-1); } .markdown-wc-body blockquote[data-mwc-alert="warning"] { background-color: var(--vp-c-warning-3); border: 1px solid var(--vp-c-warning-2); } .markdown-wc-body blockquote[data-mwc-alert="warning"]::before { content: "Warning"; color: var(--vp-c-warning-1); } .markdown-wc-body blockquote[data-mwc-alert="caution"] { background-color: var(--vp-c-caution-3); border: 1px solid var(--vp-c-caution-2); } .markdown-wc-body blockquote[data-mwc-alert="caution"]::before { content: "Caution"; color: var(--vp-c-caution-1); } /* Custom callout styling */ .markdown-wc-body .callout, .markdown-wc-body .custom-block { margin: 16px 0; border-radius: 8px; padding: 16px 16px 8px; line-height: 24px; font-size: 14px; color: var(--vp-c-text-1); } .markdown-wc-body .callout p, .markdown-wc-body .custom-block p { margin: 8px 0; line-height: 24px; } .markdown-wc-body .callout-title, .markdown-wc-body .custom-block-title { display: flex; align-items: center; gap: 8px; font-weight: 600; margin-bottom: 8px; } /* TIP callout */ .markdown-wc-body .callout.tip, .markdown-wc-body .custom-block.tip { background-color: var(--vp-c-tip-3); border: 1px solid var(--vp-c-tip-2); } .markdown-wc-body .callout.tip .callout-title, .markdown-wc-body .custom-block.tip .custom-block-title { color: var(--vp-c-tip-1); } /* NOTE/INFO callout */ .markdown-wc-body .callout.note, .markdown-wc-body .callout.info, .markdown-wc-body .custom-block.info { background-color: var(--vp-c-note-3); border: 1px solid var(--vp-c-note-2); } .markdown-wc-body .callout.note .callout-title, .markdown-wc-body .callout.info .callout-title, .markdown-wc-body .custom-block.info .custom-block-title { color: var(--vp-c-note-1); } /* WARNING callout */ .markdown-wc-body .callout.warning, .markdown-wc-body .custom-block.warning { background-color: var(--vp-c-warning-3); border: 1px solid var(--vp-c-warning-2); } .markdown-wc-body .callout.warning 
.callout-title, .markdown-wc-body .custom-block.warning .custom-block-title { color: var(--vp-c-warning-1); } /* DANGER/CAUTION callout */ .markdown-wc-body .callout.danger, .markdown-wc-body .callout.caution, .markdown-wc-body .custom-block.danger { background-color: var(--vp-c-danger-3); border: 1px solid var(--vp-c-danger-2); } .markdown-wc-body .callout.danger .callout-title, .markdown-wc-body .callout.caution .callout-title, .markdown-wc-body .custom-block.danger .custom-block-title { color: var(--vp-c-danger-1); } /* IMPORTANT callout */ .markdown-wc-body .callout.important { background-color: var(--vp-c-important-3); border: 1px solid var(--vp-c-important-2); } .markdown-wc-body .callout.important .callout-title { color: var(--vp-c-important-1); } /* Tables */ .markdown-wc-body table { display: block; border-collapse: collapse; margin: 20px 0; overflow-x: auto; } .markdown-wc-body tr { background-color: var(--vp-c-bg); border-top: 1px solid var(--vp-c-divider); transition: background-color 0.5s; } .markdown-wc-body tr:nth-child(2n) { background-color: var(--vp-c-bg-soft); } .markdown-wc-body th, .markdown-wc-body td { border: 1px solid var(--vp-c-divider); padding: 8px 16px; } .markdown-wc-body th { font-size: 14px; font-weight: 600; color: var(--vp-c-text-1); background-color: var(--vp-c-bg-soft); } .markdown-wc-body td { font-size: 14px; } /* Horizontal rule */ .markdown-wc-body hr { margin: 16px 0; border: none; border-top: 1px solid var(--vp-c-divider); } /* Images - General */ .markdown-wc-body img { max-width: 100%; border-radius: 8px; } /* Respect explicit height/width attributes on images */ .markdown-wc-body img[height] { height: attr(height px); } .markdown-wc-body img[width] { width: attr(width px); } /* Fallback for browsers that don't support attr() for height/width */ .markdown-wc-body img[height="14"] { height: 14px; } .markdown-wc-body img[height="18"] { height: 18px; } .markdown-wc-body img[height="20"] { height: 20px; } .markdown-wc-body img[height="24"] { height: 24px; } .markdown-wc-body img[height="32"] { height: 32px; } .markdown-wc-body img[width="18"] { width: 18px; } .markdown-wc-body img[width="20"] { width: 20px; } .markdown-wc-body img[width="24"] { width: 24px; } .markdown-wc-body img[width="32"] { width: 32px; } /* Images - Standalone content images */ .markdown-wc-body p > img:only-child { display: block; margin: 16px auto; } /* Images inside links - Always inline (badges, logos, icons) */ .markdown-wc-body a > img { display: inline-block; vertical-align: middle; margin: 0; border: none; border-radius: 3px; background-color: transparent; } /* Links containing images - inline with spacing */ .markdown-wc-body a:has(> img) { display: inline-block; text-decoration: none; } /* Badge-only paragraphs (no br, sub, or other block content) - flex layout */ .markdown-wc-body p:has(a > img):not(:has(br)):not(:has(sub)):not(:has(sup)) { display: flex; flex-wrap: wrap; gap: 4px; align-items: center; } /* Paragraphs with mixed content (br, sub, etc.) 
- keep normal flow */ .markdown-wc-body p:has(a > img):has(br), .markdown-wc-body p:has(a > img):has(sub) { display: block; } .markdown-wc-body p:has(a > img):has(br) a:has(> img), .markdown-wc-body p:has(a > img):has(sub) a:has(> img) { margin-right: 8px; } /* Centered content via align attribute */ .markdown-wc-body [align="center"], .markdown-wc-body p[align="center"], .markdown-wc-body div[align="center"] { text-align: center; } /* Badge-only centered paragraphs need justify-content */ .markdown-wc-body p[align="center"]:has(a > img):not(:has(br)):not(:has(sub)):not(:has(sup)) { justify-content: center; } /* Strong/Bold */ .markdown-wc-body strong { font-weight: 600; } /* Definition lists */ .markdown-wc-body dt { font-weight: 600; margin-top: 16px; } .markdown-wc-body dd { margin-left: 1.25rem; margin-top: 4px; } /* Keyboard shortcuts */ .markdown-wc-body kbd { display: inline-block; padding: 0 6px; font-size: 12px; font-weight: 500; line-height: 20px; color: var(--vp-c-text-1); background-color: var(--vp-c-bg-soft); border: 1px solid var(--vp-c-border); border-radius: 4px; box-shadow: 0 1px 1px rgba(0, 0, 0, 0.04); } ================================================ FILE: packages/website/src/components/markdown-page.tsx ================================================ import { useEffect, useState } from "react"; import { splitTitleFromHtml } from "../lib/seo"; type CopyStatus = "idle" | "copied"; /** * Copy icon used for the markdown copy button. * * @example * */ const CopyMarkdownIcon = ({ className = "" }: { className?: string }) => ( ); /** * Check icon shown on copy success. * * @example * */ const CopyCheckIcon = ({ className = "" }: { className?: string }) => ( ); /** * Renders pre-parsed markdown HTML inside the docs layout. * * @example * */ export function MarkdownPage({ html, markdown, imports, }: { html: string; markdown?: string; imports?: string[]; }) { const [copyStatus, setCopyStatus] = useState("idle"); useEffect(() => { // @ts-expect-error - JS-only module import("./markdown-page.interactive.js"); }, [html]); useEffect(() => { if (!imports || imports.length === 0) return; for (const url of imports) { if (!url) continue; const existing = document.querySelector( `script[data-mdwc-import="${url}"]`, ); if (existing) continue; const script = document.createElement("script"); script.type = "module"; script.src = url; script.setAttribute("data-mdwc-import", url); document.head.appendChild(script); } }, [imports]); const { title, body } = splitTitleFromHtml(html); const handleCopy = () => { if (!markdown) return; const clipboard = navigator?.clipboard; if (!clipboard?.writeText) return; clipboard.writeText(markdown).then(() => { setCopyStatus("copied"); window.setTimeout(() => setCopyStatus("idle"), 2000); }); }; return (
{title && (

{title}

)}
); } ================================================ FILE: packages/website/src/components/prev-next-nav.tsx ================================================ import { Link } from "@tanstack/react-router"; type PrevNextItem = { slug: string; title: string; } | null; type PrevNextNavProps = { prev: PrevNextItem; next: PrevNextItem; basePath: string; paramName?: string; prevLabel?: string; nextLabel?: string; className?: string; }; /** * Reusable previous/next navigation component for docs, blog, and RFCs. * * @example * */ export function PrevNextNav({ prev, next, basePath, paramName = "slug", prevLabel = "Previous", nextLabel = "Next", className = "", }: PrevNextNavProps) { if (!prev && !next) return null; return ( ); } ================================================ FILE: packages/website/src/github-stars-cache.ts ================================================ import githubCache from "./github_repo_data.gen.json"; export type GithubRepoMetrics = { stars: number; forks: number; openIssues: number; closedIssues: number; contributorCount: number; }; type GithubCache = { generatedAt: string; data: Record; }; const cache = githubCache as GithubCache; function normalizeRepo(input: string): string | null { const trimmed = input.trim(); if (!trimmed) return null; const urlMatch = trimmed.match(/github\.com\/([^/]+\/[^/]+)/i); const repo = urlMatch ? urlMatch[1] : trimmed; const normalized = repo.replace(/\.git$/i, ""); return /^[^/]+\/[^/]+$/.test(normalized) ? normalized : null; } export function getGithubRepoMetrics(repo: string): GithubRepoMetrics | null { const normalized = normalizeRepo(repo); if (!normalized) return null; const key = normalized.toLowerCase(); if (!(key in cache.data)) { return null; } return cache.data[key] ?? null; } export function getGithubStars(repo: string): number | null { const metrics = getGithubRepoMetrics(repo); return metrics?.stars ?? null; } ================================================ FILE: packages/website/src/lib/build-doc-map.test.ts ================================================ import { describe, expect, test } from "vitest"; import { buildDocMaps, buildTocMap, normalizeRelativePath, resolveDocsMarkdownHref, slugifyFileName, slugifyRelativePath, } from "./build-doc-map"; describe("buildDocMaps", () => { test("creates slug records from markdown frontmatter", () => { const { bySlug } = buildDocMaps({ "/docs/guide/hello-world.md": `--- slug: hello-doc title: Hello World description: Sample doc --- # Hello world`, "/docs/reference/api.md": `--- title: API --- API contents`, }); expect(bySlug["hello-doc"].relativePath).toBe("./guide/hello-world.md"); expect(bySlug["api"].relativePath).toBe("./reference/api.md"); }); }); describe("buildTocMap", () => { test("normalizes relative file paths", () => { const tocMap = buildTocMap({ Overview: [ { path: "./what-is-lix.md", label: "What is Lix?" 
}, { path: "/docs/guide/setup.md", label: "Setup" }, ], }); expect(tocMap.get("./what-is-lix.md")?.label).toBe("What is Lix?"); expect(tocMap.get("./guide/setup.md")?.label).toBe("Setup"); }); }); describe("path helpers", () => { test("normalizeRelativePath removes docs prefix", () => { expect(normalizeRelativePath("/docs/guide/setup.md")).toBe( "./guide/setup.md", ); }); test("normalizeRelativePath handles website-local legacy docs paths", () => { expect(normalizeRelativePath("/content/docs/guide/setup.md")).toBe( "./guide/setup.md", ); }); test("slugifyRelativePath flattens path into url safe slug", () => { expect(slugifyRelativePath("./guide/hello-world.md")).toBe( "guide-hello-world", ); }); test("slugifyFileName uses the filename without extension", () => { expect(slugifyFileName("./guide/hello-world.md")).toBe("hello-world"); }); }); describe("resolveDocsMarkdownHref", () => { const currentDoc = { slug: "persistence", content: "", relativePath: "./persistence.md", }; const docsByRelativePath = { "./backend.md": { slug: "backend", content: "", relativePath: "./backend.md", }, "./versions.md": { slug: "versions", content: "", relativePath: "./versions.md", }, }; test("resolves portable markdown links to clean docs routes", () => { expect( resolveDocsMarkdownHref("./backend.md", currentDoc, docsByRelativePath), ).toBe("/docs/backend"); }); test("resolves page-url based markdown links to clean docs routes", () => { expect( resolveDocsMarkdownHref( "/docs/persistence/backend.md", currentDoc, docsByRelativePath, ), ).toBe("/docs/backend"); }); test("preserves heading hashes", () => { expect( resolveDocsMarkdownHref( "./versions.md#merge", currentDoc, docsByRelativePath, ), ).toBe("/docs/versions#merge"); }); }); ================================================ FILE: packages/website/src/lib/build-doc-map.ts ================================================ export type TocItem = { path: string; label: string; }; export type Toc = Record; export type DocRecord = { slug: string; /** * Raw markdown including frontmatter. */ content: string; relativePath: string; }; export type DocsByRelativePath = Record; /** * Converts file path entries in the table of contents into a quick lookup map. * * @example * buildTocMap({ Overview: [{ path: "./what-is-lix.md", label: "What is Lix?" }] }); */ export function buildTocMap(toc: Toc): Map { const map = new Map(); for (const items of Object.values(toc)) { for (const item of items) { const normalized = normalizeRelativePath(item.path); map.set(normalized, item); } } return map; } /** * Builds doc lookup maps keyed by slug. * * @example * buildDocMaps({ "/docs/what-is-lix.md": rawMarkdown }); */ export function buildDocMaps(entries: Record) { return Object.entries(entries).reduce( (acc, [filePath, raw]) => { const relativePath = normalizeRelativePath(filePath); const frontmatter = extractFrontmatter(raw); const frontmatterSlug = frontmatter?.slug?.trim() ?? ""; const normalizedSlug = frontmatterSlug ? slugifyValue(frontmatterSlug) : ""; const slug = normalizedSlug || slugifyFileName(relativePath); const record: DocRecord = { slug, content: raw, relativePath, }; acc.bySlug[slug] = record; return acc; }, { bySlug: {} as Record, }, ); } /** * Resolves portable markdown file links to clean docs routes. * * Markdown files stay portable with links like `./backend.md`, while the site * renders them as `/docs/backend`. 
* * @example * resolveDocsMarkdownHref("./backend.md", { slug: "persistence", content: "", relativePath: "./persistence.md" }, { "./backend.md": { slug: "backend", content: "", relativePath: "./backend.md" } }) */ export function resolveDocsMarkdownHref( href: string, currentDoc: DocRecord, docsByRelativePath: DocsByRelativePath, ) { if ( href.startsWith("#") || /^[a-z][a-z0-9+.-]*:/i.test(href) || !href.replace(/[?#].*$/, "").endsWith(".md") ) { return undefined; } const hashIndex = href.indexOf("#"); const hash = hashIndex === -1 ? "" : href.slice(hashIndex); const withoutHash = hashIndex === -1 ? href : href.slice(0, hashIndex); const queryIndex = withoutHash.indexOf("?"); const query = queryIndex === -1 ? "" : withoutHash.slice(queryIndex); const pathOnly = queryIndex === -1 ? withoutHash : withoutHash.slice(0, queryIndex); const candidates = buildDocsLinkCandidates(pathOnly, currentDoc); for (const candidate of candidates) { const doc = docsByRelativePath[candidate]; if (doc) { return `/docs/${doc.slug}${query}${hash}`; } } return undefined; } function buildDocsLinkCandidates(pathOnly: string, currentDoc: DocRecord) { const candidates = new Set(); const currentSlugPrefix = `/docs/${currentDoc.slug}/`; if (pathOnly.startsWith(currentSlugPrefix)) { candidates.add( resolveRelativeDocPath( currentDoc.relativePath, pathOnly.slice(currentSlugPrefix.length), ), ); } if (pathOnly.startsWith("/docs/")) { candidates.add(normalizeRelativePath(pathOnly)); } else { candidates.add(resolveRelativeDocPath(currentDoc.relativePath, pathOnly)); } const fileName = pathOnly.split("/").pop(); if (fileName?.endsWith(".md")) { candidates.add(resolveRelativeDocPath(currentDoc.relativePath, fileName)); } return [...candidates]; } function resolveRelativeDocPath(currentRelativePath: string, hrefPath: string) { const currentPath = currentRelativePath.replace(/^\.\//, ""); const currentDirectory = currentPath.includes("/") ? currentPath.slice(0, currentPath.lastIndexOf("/")) : "."; const normalized = posixNormalize(`${currentDirectory}/${hrefPath}`); return normalized.startsWith(".") ? normalized : `./${normalized}`; } function posixNormalize(value: string) { const parts: string[] = []; for (const part of value.replace(/\\/g, "/").split("/")) { if (!part || part === ".") continue; if (part === "..") { parts.pop(); continue; } parts.push(part); } return parts.join("/"); } /** * Normalizes a doc file path to a relative form rooted at docs. * * @example * normalizeRelativePath("/docs/guide/setup.md") // "./guide/setup.md" */ export function normalizeRelativePath(filePath: string) { return filePath .replace(/\\/g, "/") .replace(/^.*\/docs\//, "./") .replace(/^docs\//, "./"); } /** * Produces a URL-safe slug base from a relative file path. * * @example * slugifyRelativePath("./guide/hello-world.md") // "guide-hello-world" */ export function slugifyRelativePath(relativePath: string) { const withoutExt = relativePath.replace(/\.md$/, ""); return withoutExt .replace(/^\.\//, "") .replace(/[\/\\]+/g, "-") .toLowerCase() .replace(/[^a-z0-9-]+/g, "-") .replace(/^-+|-+$/g, ""); } /** * Produces a URL-safe slug from a single filename. * * @example * slugifyFileName("./guide/hello-world.md") // "hello-world" */ export function slugifyFileName(relativePath: string) { const fileName = relativePath.split(/[\\/]/).pop() ?? relativePath; const withoutExt = fileName.replace(/\.md$/, ""); return slugifyValue(withoutExt); } /** * Produces a URL-safe slug from a string value. 
* * @example * slugifyValue("Hello World") // "hello-world" */ export function slugifyValue(value: string) { return value .toLowerCase() .replace(/[^a-z0-9-]+/g, "-") .replace(/^-+|-+$/g, ""); } /** * Extracts a minimal YAML frontmatter object from markdown. * * Only supports simple `key: value` pairs. * * @example * extractFrontmatter("---\\ntitle: Hello\\n---\\n# Title") // { title: "Hello" } */ function extractFrontmatter(markdown: string): Record | null { const match = markdown.match(/^---\s*\n([\s\S]*?)\n---\s*\n?/); if (!match) { return null; } const lines = match[1].split("\n"); const data: Record = {}; for (const line of lines) { const trimmed = line.trim(); if (!trimmed || trimmed.startsWith("#")) { continue; } const separatorIndex = trimmed.indexOf(":"); if (separatorIndex === -1) { continue; } const key = trimmed.slice(0, separatorIndex).trim(); const value = trimmed.slice(separatorIndex + 1).trim(); if (!key) { continue; } data[key] = value.replace(/^['"]|['"]$/g, ""); } return data; } ================================================ FILE: packages/website/src/lib/plugin-sidebar.ts ================================================ import type { SidebarSection } from "../components/docs-layout"; type PluginRegistry = { plugins?: Array<{ key: string; name?: string; }>; }; /** * Builds sidebar sections for plugin pages. * * @example * buildPluginSidebarSections(registry) */ export function buildPluginSidebarSections( registry: PluginRegistry, ): SidebarSection[] { const plugins = Array.isArray(registry.plugins) ? registry.plugins : []; const items = plugins.map((plugin) => ({ label: plugin.name ?? plugin.key, href: `/plugins/${plugin.key}`, relativePath: plugin.key, })); return items.length > 0 ? [ { label: "Plugins", items, }, ] : []; } ================================================ FILE: packages/website/src/lib/seo.test.ts ================================================ import { describe, expect, test } from "vitest"; import { resolveBlogAssetPath, resolveOgImageUrl } from "../blog/og-image"; import { buildCanonicalUrl, getMarkdownDescription, getMarkdownTitle, splitTitleFromHtml, } from "./seo"; describe("buildCanonicalUrl", () => { test("keeps the site root canonical without changing it", () => { expect(buildCanonicalUrl("/")).toBe("https://lix.dev"); }); test("normalizes route paths to no-trailing-slash canonicals", () => { expect(buildCanonicalUrl("/blog")).toBe("https://lix.dev/blog"); expect(buildCanonicalUrl("docs/what-is-lix")).toBe( "https://lix.dev/docs/what-is-lix", ); expect(buildCanonicalUrl("/rfc/002-rewrite-in-rust/")).toBe( "https://lix.dev/rfc/002-rewrite-in-rust", ); }); test("keeps file-like paths canonicalized without extra slash", () => { expect(buildCanonicalUrl("/lix-features.svg")).toBe( "https://lix.dev/lix-features.svg", ); }); }); describe("getMarkdownTitle", () => { test("prefers explicit frontmatter title over og title and markdown h1", () => { expect( getMarkdownTitle({ rawMarkdown: "# Markdown Title", frontmatter: { title: "Frontmatter Title", "og:title": "OG Title", }, }), ).toBe("Frontmatter Title"); }); test("falls back to og title when explicit title is missing", () => { expect( getMarkdownTitle({ rawMarkdown: "# Markdown Title", frontmatter: { "og:title": "OG Title", }, }), ).toBe("OG Title"); }); }); describe("getMarkdownDescription", () => { test("prefers explicit frontmatter description over og description and prose", () => { expect( getMarkdownDescription({ rawMarkdown: "# Title\n\nMarkdown description.", frontmatter: { description: 
"Frontmatter description.", "og:description": "OG description.", }, }), ).toBe("Frontmatter description."); }); test("falls back to og description when explicit description is missing", () => { expect( getMarkdownDescription({ rawMarkdown: "# Title\n\nMarkdown description.", frontmatter: { "og:description": "OG description.", }, }), ).toBe("OG description."); }); test("extracts clean prose and skips admonitions, code, lists, tables, and images", () => { const markdown = `# Validation Rules > [!NOTE] > Proposed feature. \`\`\`ts const nope = true; \`\`\` - list item | name | value | | --- | --- | ![Diagram](/example.png) Validation rules catch **mistakes** before [changes](/docs/change-proposals) ship and keep \`agents\` and humans aligned. ## Next More content here. `; expect(getMarkdownDescription({ rawMarkdown: markdown })).toBe( "Validation rules catch mistakes before changes ship and keep agents and humans aligned.", ); }); test("clamps long fallback descriptions at a safe boundary", () => { const markdown = `# Long Form Lix tracks semantic changes across structured files so teams can review AI-generated edits, audit what changed, and restore safe states without relying on brittle line-based diffs or app-specific APIs alone. `; const description = getMarkdownDescription({ rawMarkdown: markdown }); expect(description).toBe( "Lix tracks semantic changes across structured files so teams can review AI-generated edits, audit what changed, and restore safe states without relying on...", ); expect(description?.length).toBeLessThanOrEqual(160); }); }); describe("splitTitleFromHtml", () => { test("removes the first h1 from rendered html", () => { expect( splitTitleFromHtml("

<h1>RFC &amp; Notes</h1><p>Body copy</p>
"), ).toEqual({ title: "RFC & Notes", body: "

<p>Body copy</p>

", }); }); }); describe("resolveOgImageUrl", () => { test("resolves blog-local images within the post folder", () => { expect( resolveOgImageUrl( "./cover.jpg", "002-modeling-a-company-as-a-repository", ), ).toBe( "https://lix.dev/blog/002-modeling-a-company-as-a-repository/cover.jpg", ); }); }); describe("resolveBlogAssetPath", () => { test("keeps visible blog card images deployment-relative", () => { expect( resolveBlogAssetPath( "./cover.jpg", "002-modeling-a-company-as-a-repository", ), ).toBe("/blog/002-modeling-a-company-as-a-repository/cover.jpg"); }); }); ================================================ FILE: packages/website/src/lib/seo.ts ================================================ const SITE_URL = "https://lix.dev"; const DEFAULT_OG_IMAGE_PATH = "/lix-features.svg"; const DEFAULT_OG_IMAGE_ALT = "Lix"; const DESCRIPTION_MAX_LENGTH = 160; const DESCRIPTION_SENTENCE_MIN_LENGTH = 120; type MarkdownMetaInput = { rawMarkdown: string; frontmatter?: Record; }; type MetaEntry = | { name: string; content: string } | { property: string; content: string }; export function buildCanonicalUrl(pathname: string): string { if (!pathname || pathname === "/") return SITE_URL; const normalized = pathname.startsWith("/") ? pathname : `/${pathname}`; const withoutTrailingSlash = normalized.endsWith("/") && normalized.length > 1 ? normalized.slice(0, -1) : normalized; return `${SITE_URL}${withoutTrailingSlash}`; } export function resolveOgImage(frontmatter?: Record) { const ogImage = (typeof frontmatter?.["og:image"] === "string" ? frontmatter["og:image"] : undefined) ?? (typeof frontmatter?.["twitter:image"] === "string" ? frontmatter["twitter:image"] : undefined) ?? DEFAULT_OG_IMAGE_PATH; const ogImageAlt = (typeof frontmatter?.["og:image:alt"] === "string" ? frontmatter["og:image:alt"] : undefined) ?? (typeof frontmatter?.["twitter:image:alt"] === "string" ? frontmatter["twitter:image:alt"] : undefined) ?? DEFAULT_OG_IMAGE_ALT; const url = normalizeAssetUrl(ogImage); return { url, alt: ogImageAlt }; } export function getMarkdownTitle(input: MarkdownMetaInput) { const title = typeof input.frontmatter?.title === "string" ? input.frontmatter.title : undefined; if (title) { return title.trim() || undefined; } const ogTitle = typeof input.frontmatter?.["og:title"] === "string" ? input.frontmatter["og:title"] : undefined; if (ogTitle) { return ogTitle.trim() || undefined; } return extractMarkdownH1(input.rawMarkdown); } export function getMarkdownDescription(input: MarkdownMetaInput) { const description = typeof input.frontmatter?.description === "string" ? input.frontmatter.description : undefined; if (description) { return normalizeDescriptionText(description); } const ogDescription = typeof input.frontmatter?.["og:description"] === "string" ? 
input.frontmatter["og:description"] : undefined; if (ogDescription) { return normalizeDescriptionText(ogDescription); } return extractMarkdownDescription(input.rawMarkdown); } export function extractOgMeta( frontmatter?: Record, ): MetaEntry[] { if (!frontmatter) return []; return Object.entries(frontmatter) .filter( ([key, value]) => key.startsWith("og:") && typeof value === "string" && key !== "og:image" && key !== "og:image:alt", ) .map(([key, value]) => ({ property: key, content: value as string, })); } export function extractTwitterMeta( frontmatter?: Record, ): MetaEntry[] { if (!frontmatter) return []; return Object.entries(frontmatter) .filter( ([key, value]) => key.startsWith("twitter:") && typeof value === "string" && key !== "twitter:image" && key !== "twitter:image:alt", ) .map(([key, value]) => ({ name: key, content: value as string, })); } export function extractMarkdownH1(markdown: string) { if (!markdown) return undefined; const sanitized = stripFrontmatter(markdown); const lines = sanitized.split(/\r?\n/); for (const line of lines) { if (line.startsWith("# ")) { return line.slice(2).trim() || undefined; } } return undefined; } export function extractMarkdownDescription(markdown: string) { if (!markdown) return undefined; const sanitized = stripFrontmatter(markdown); const lines = sanitized.split(/\r?\n/); let inCodeFence = false; let collecting = false; const paragraph: string[] = []; for (const line of lines) { const trimmed = line.trim(); if (trimmed.startsWith("```") || trimmed.startsWith("~~~")) { inCodeFence = !inCodeFence; if (collecting) break; continue; } if (inCodeFence) continue; if (!trimmed) { if (collecting) break; continue; } if (trimmed.startsWith("#")) continue; if (trimmed.startsWith(">")) continue; if (trimmed.startsWith("![")) continue; if (trimmed.startsWith("<")) continue; if (isMarkdownTableLine(trimmed)) continue; if (isMarkdownListLine(trimmed)) { continue; } const normalized = normalizeDescriptionText(trimmed); if (!normalized) { if (collecting) break; continue; } collecting = true; paragraph.push(normalized); } if (!paragraph.length) return undefined; return clampDescription(paragraph.join(" ")); } export function splitTitleFromHtml(html: string): { title?: string; body: string; } { const match = html.match(/]*>([\s\S]*?)<\/h1>/i); if (!match) { return { body: html }; } const title = decodeHtmlEntities(stripHtml(match[1])).trim(); const body = html.replace(match[0], "").trimStart(); return { title: title || undefined, body }; } type WebPageJsonLdInput = { title: string; description?: string; canonicalUrl: string; image?: string; }; export function buildWebPageJsonLd(input: WebPageJsonLdInput) { return { "@context": "https://schema.org", "@type": "WebPage", name: input.title, description: input.description, url: input.canonicalUrl, ...(input.image ? 
{ image: input.image } : {}), }; } type WebSiteJsonLdInput = { title: string; description?: string; canonicalUrl: string; }; export function buildWebSiteJsonLd(input: WebSiteJsonLdInput) { return { "@context": "https://schema.org", "@type": "WebSite", name: input.title, description: input.description, url: input.canonicalUrl, }; } type BreadcrumbItem = { name: string; item: string; }; export function buildBreadcrumbJsonLd(items: BreadcrumbItem[]) { return { "@context": "https://schema.org", "@type": "BreadcrumbList", itemListElement: items.map((entry, index) => ({ "@type": "ListItem", position: index + 1, name: entry.name, item: entry.item, })), }; } function normalizeAssetUrl(value: string) { if (value.startsWith("http://") || value.startsWith("https://")) { return value; } if (value.startsWith("/")) { return `${SITE_URL}${value}`; } return `${SITE_URL}/${value}`; } function stripFrontmatter(markdown: string) { if (!markdown.startsWith("---")) return markdown; const end = markdown.indexOf("\n---", 3); if (end === -1) return markdown; return markdown.slice(end + 4).trimStart(); } function normalizeDescriptionText(value: string) { return value .replace(/!\[([^\]]*)\]\(([^)]+)\)/g, "$1") .replace(/\[([^\]]+)\]\(([^)]+)\)/g, "$1") .replace(/`([^`]+)`/g, "$1") .replace(/\*\*([^*]+)\*\*/g, "$1") .replace(/__([^_]+)__/g, "$1") .replace(/\*([^*]+)\*/g, "$1") .replace(/_([^_]+)_/g, "$1") .replace(/~~([^~]+)~~/g, "$1") .replace(/<[^>]+>/g, " ") .replace(/\s+/g, " ") .trim(); } function clampDescription(value: string) { if (value.length <= DESCRIPTION_MAX_LENGTH) { return value; } const withinLimit = value.slice(0, DESCRIPTION_MAX_LENGTH); const sentenceBoundary = Math.max( withinLimit.lastIndexOf(". "), withinLimit.lastIndexOf("! "), withinLimit.lastIndexOf("? 
"), ); if (sentenceBoundary >= DESCRIPTION_SENTENCE_MIN_LENGTH - 1) { return withinLimit.slice(0, sentenceBoundary + 1).trim(); } const wordBoundary = withinLimit.lastIndexOf(" "); if (wordBoundary > 0) { return `${withinLimit.slice(0, wordBoundary).trim()}...`; } return `${withinLimit.trim()}...`; } function isMarkdownListLine(value: string) { return ( value.startsWith("- ") || value.startsWith("* ") || value.startsWith("+ ") || /^\d+\.\s/.test(value) ); } function isMarkdownTableLine(value: string) { return ( value.startsWith("|") || /^\|?[\s:-]+\|[\s|:-]*$/.test(value) || value.includes("| ---") ); } function stripHtml(input: string): string { return input.replace(/<[^>]*>/g, ""); } function decodeHtmlEntities(input: string): string { return input .replace(/&/g, "&") .replace(/</g, "<") .replace(/>/g, ">") .replace(/"/g, '"') .replace(/'/g, "'"); } ================================================ FILE: packages/website/src/router.tsx ================================================ import { createRouter } from "@tanstack/react-router"; // Import the generated route tree import { routeTree } from "./routeTree.gen"; // Create a new router instance export const getRouter = () => { const router = createRouter({ routeTree, scrollRestoration: true, defaultPreloadStaleTime: 0, trailingSlash: "never", }); return router; }; ================================================ FILE: packages/website/src/routes/-seo-smoke.test.ts ================================================ import { readFileSync } from "node:fs"; import { parse } from "@opral/markdown-wc"; import { describe, expect, test } from "vitest"; import { getBlogDescription, getBlogTitle } from "../blog/blogMetadata"; import { resolveOgImageUrl } from "../blog/og-image"; import { getMarkdownDescription, getMarkdownTitle, splitTitleFromHtml, } from "../lib/seo"; import { buildBlogPostHead } from "./blog/$slug"; import { buildDocsPageHead } from "./docs/$slugId"; import { buildRfcHead } from "./rfc/$slug"; function findLink( links: Array<{ rel: string; href: string }> | undefined, rel: string, ) { return links?.find((entry) => entry.rel === rel)?.href; } function findMetaContent( meta: | Array< | { title: string } | { name: string; content: string } | { property: string; content: string } > | undefined, key: string, ) { const entry = meta?.find( (item) => ("name" in item && item.name === key) || ("property" in item && item.property === key), ); if (!entry || !("content" in entry)) { return undefined; } return entry.content; } describe("SEO route smoke tests", () => { test("docs head stays canonical and strips the rendered h1 once", async () => { const rawMarkdown = readFileSync( new URL("../../../../docs/comparison-to-git.md", import.meta.url), "utf8", ); const parsed = await parse(rawMarkdown, { externalLinks: true, assetBaseUrl: "/docs/comparison-to-git/", }); const rendered = splitTitleFromHtml(parsed.html); const head = buildDocsPageHead({ doc: { slug: "comparison-to-git", content: rawMarkdown, }, frontmatter: parsed.frontmatter, html: rendered.body, pageToc: [], sidebarSections: [], tocEntry: undefined, } as any); expect(findLink(head.links, "canonical")).toBe( "https://lix.dev/docs/comparison-to-git", ); expect(findMetaContent(head.meta, "og:title")).toBe( "Comparison to Git | Lix Documentation", ); expect(findMetaContent(head.meta, "twitter:description")).toBe( "Git versions text files line-by-line. Lix versions any file format (DOCX, XLSX, CAD, etc.) 
semantically per entity.", ); expect(rendered.title).toBe("Comparison to Git"); expect(rendered.body).not.toContain(" { const slug = "002-modeling-a-company-as-a-repository"; const rawMarkdown = readFileSync( new URL(`../../../../blog/${slug}/index.md`, import.meta.url), "utf8", ); const parsed = await parse(rawMarkdown, { assetBaseUrl: `/blog/${slug}/`, }); const rendered = splitTitleFromHtml(parsed.html); const title = getBlogTitle({ rawMarkdown, frontmatter: parsed.frontmatter, }); const description = getBlogDescription({ rawMarkdown, frontmatter: parsed.frontmatter, }); const ogImage = resolveOgImageUrl( parsed.frontmatter?.["og:image"] as string, slug, ); const head = buildBlogPostHead({ post: { slug, title, description, date: parsed.frontmatter?.date as string | undefined, authors: undefined, readingTime: 4, ogImage, ogImageAlt: parsed.frontmatter?.["og:image:alt"] as string | undefined, imports: undefined, }, html: rendered.body, rawMarkdown, prevPost: null, nextPost: null, }); expect(findLink(head.links, "canonical")).toBe( `https://lix.dev/blog/${slug}`, ); expect(findMetaContent(head.meta, "og:title")).toBe( "Your Company should be a Repository for AI agents | Lix Blog", ); expect(findMetaContent(head.meta, "twitter:image")).toBe( "https://lix.dev/blog/002-modeling-a-company-as-a-repository/cover.jpg", ); expect(rendered.title).toBe( "Your Company should be a Repository for AI agents", ); expect(rendered.body).not.toContain(" { const slug = "001-preprocess-writes"; const rawMarkdown = readFileSync( new URL(`../../../../rfcs/${slug}/index.md`, import.meta.url), "utf8", ); const parsed = await parse(rawMarkdown, { assetBaseUrl: `/rfc/${slug}/`, }); const rendered = splitTitleFromHtml(parsed.html); const title = getMarkdownTitle({ rawMarkdown, frontmatter: parsed.frontmatter, }); const description = getMarkdownDescription({ rawMarkdown, frontmatter: parsed.frontmatter, }); const head = buildRfcHead({ slug, title: title ?? slug, description: description ?? `Design proposal for ${title ?? 
slug}.`, date: parsed.frontmatter?.date as string | undefined, html: rendered.body, frontmatter: parsed.frontmatter, prevRfc: null, nextRfc: null, }); expect(findLink(head.links, "canonical")).toBe( `https://lix.dev/rfc/${slug}`, ); expect(findMetaContent(head.meta, "og:title")).toBe( "Preprocess writes to avoid vtable overhead | Lix RFCs", ); expect(findMetaContent(head.meta, "twitter:description")).toBe( "Write operations in Lix are slow due to the vtable mechanism crossing the JS ↔ SQLite WASM boundary multiple times per row.", ); expect(rendered.title).toBe("Preprocess writes to avoid vtable overhead"); expect(rendered.body).not.toContain(" ({ meta: [ { charSet: "utf-8", }, { name: "viewport", content: "width=device-width, initial-scale=1", }, { title: "Lix", }, { name: "theme-color", content: "#ffffff", }, { name: "robots", content: "index, follow", }, ], links: [ { rel: "stylesheet", href: appCss, }, { rel: "icon", type: "image/svg+xml", href: "/favicon.svg", }, { rel: "manifest", href: "/manifest.json", }, ], scripts: [ { type: "application/ld+json", children: JSON.stringify({ "@context": "https://schema.org", "@type": "Organization", name: "Lix", url: "https://lix.dev", logo: "https://lix.dev/icon.svg", sameAs: [ "https://github.com/opral/lix", "https://x.com/lixCCS", "https://discord.gg/gdMPPWy57R", ], }), }, ], }), notFoundComponent: NotFoundPage, shellComponent: RootDocument, }); function GoogleAnalytics() { const router = useRouter(); React.useEffect(() => { if (!import.meta.env.PROD) return; if ((window as any).__gaInitialized) return; (window as any).__gaInitialized = true; (window as any).dataLayer = (window as any).dataLayer || []; function gtag(...args: unknown[]) { (window as any).dataLayer.push(args); } (window as any).gtag = gtag; const script = document.createElement("script"); script.async = true; script.src = `https://www.googletagmanager.com/gtag/js?id=${GA_MEASUREMENT_ID}`; document.head.appendChild(script); gtag("js", new Date()); gtag("config", GA_MEASUREMENT_ID, { send_page_view: false }); const sendPageView = (location: { href: string; pathname: string; search: string; hash: string; }) => { gtag("event", "page_view", { page_location: location.href, page_path: `${location.pathname}${location.search}${location.hash}`, page_title: document.title, }); }; sendPageView(router.history.location); const unsubscribe = router.history.subscribe(({ location }) => { sendPageView(location); }); return () => { unsubscribe(); }; }, []); return null; } function RootDocument({ children }: { children: React.ReactNode }) { // Only render PostHogProvider on the client side to avoid hydration mismatches. // PostHog is a client-side only library and will cause React error #418 if // rendered during SSR. const [isMounted, setIsMounted] = React.useState(false); React.useEffect(() => { setIsMounted(true); }, []); const appContent = import.meta.env.PROD && isMounted && import.meta.env.VITE_PUBLIC_POSTHOG_KEY ? ( {children} ) : ( children ); return ( {appContent} ); } /** * Fallback UI for unmatched routes. * * @example * */ function NotFoundPage() { return (

404

Page not found

The page you are looking for does not exist.

); } ================================================ FILE: packages/website/src/routes/blog/$slug.tsx ================================================ import { createFileRoute, Link, redirect } from "@tanstack/react-router"; import { parse } from "@opral/markdown-wc"; import { useEffect, useState } from "react"; import markdownPageCss from "../../components/markdown-page.style.css?url"; import { getBlogDescription, getBlogTitle } from "../../blog/blogMetadata"; import { Footer } from "../../components/footer"; import { Header } from "../../components/header"; import { PrevNextNav } from "../../components/prev-next-nav"; import { resolveOgImageUrl } from "../../blog/og-image"; import { buildCanonicalUrl, buildWebPageJsonLd, resolveOgImage, splitTitleFromHtml, } from "../../lib/seo"; const blogMarkdownFiles = import.meta.glob( "../../../../../blog/**/*.md", { query: "?raw", import: "default", }, ); const blogJsonFiles = import.meta.glob("../../../../../blog/*.json", { query: "?raw", import: "default", }); const blogRootPrefix = "../../../../../blog/"; const ogImageWidth = 1200; const ogImageHeight = 630; type Author = { name: string; role?: string; avatar?: string | null; twitter?: string; github?: string; }; type BlogPrevNext = { slug: string; title: string; } | null; function calculateReadingTime(text: string): number { const wordsPerMinute = 200; const words = text.trim().split(/\s+/).length; return Math.max(1, Math.ceil(words / wordsPerMinute)); } async function loadBlogPost(slug: string) { if (!slug) { throw new Error("Missing blog slug"); } const authorsContent = await getBlogJson("authors.json"); const authorsMap = JSON.parse(authorsContent) as Record; const tocContent = await getBlogJson("table_of_contents.json"); const toc = JSON.parse(tocContent) as Array<{ path: string; slug: string; authors?: string[]; }>; // Load all posts to get dates from frontmatter for sorting const postsWithDates = await Promise.all( toc.map(async (item) => { const relPath = item.path.startsWith("./") ? item.path.slice(2) : item.path; const md = await getBlogMarkdown(relPath); const parsedMd = await parse(md); const date = parsedMd.frontmatter?.date as string | undefined; const title = getBlogTitle({ rawMarkdown: md, frontmatter: parsedMd.frontmatter }) ?? item.slug; return { ...item, date, title }; }), ); const sortedToc = [...postsWithDates].sort((a, b) => { if (!a.date && !b.date) return 0; if (!a.date) return 1; if (!b.date) return -1; return new Date(b.date).getTime() - new Date(a.date).getTime(); }); const currentIndex = sortedToc.findIndex((item) => item.slug === slug); const entry = sortedToc[currentIndex]; if (!entry) { throw new Error(`Blog post not found: ${slug}`); } const prevEntry = currentIndex > 0 ? sortedToc[currentIndex - 1] : null; const nextEntry = currentIndex < sortedToc.length - 1 ? sortedToc[currentIndex + 1] : null; const prevPost: BlogPrevNext = prevEntry ? { slug: prevEntry.slug, title: prevEntry.title } : null; const nextPost: BlogPrevNext = nextEntry ? { slug: nextEntry.slug, title: nextEntry.title } : null; const authors = entry.authors ?.map((authorId) => authorsMap[authorId]) .filter(Boolean); const relativePath = entry.path.startsWith("./") ? 
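// table_of_contents.json paths may be "./"-prefixed; strip the prefix so the remainder resolves against the blog glob and doubles as the post folder name.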
entry.path.slice(2) : entry.path; // Extract folder name from path (e.g., "001-introducing-lix" from "001-introducing-lix/index.md") const folderName = relativePath.replace(/\/index\.md$/, ""); const rawMarkdown = await getBlogMarkdown(relativePath); const parsed = await parse(rawMarkdown, { assetBaseUrl: `/blog/${folderName}/`, }); const rendered = splitTitleFromHtml(parsed.html); const title = getBlogTitle({ rawMarkdown, frontmatter: parsed.frontmatter, }) ?? rendered.title; const description = getBlogDescription({ rawMarkdown, frontmatter: parsed.frontmatter, }); // Get date from frontmatter const date = parsed.frontmatter?.date as string | undefined; const ogImageOverrideRaw = typeof parsed.frontmatter?.["og:image"] === "string" ? parsed.frontmatter["og:image"] : undefined; const ogImageOverride = ogImageOverrideRaw ? resolveOgImageUrl(ogImageOverrideRaw, folderName) : undefined; const ogImageAlt = typeof parsed.frontmatter?.["og:image:alt"] === "string" ? parsed.frontmatter["og:image:alt"] : undefined; const readingTime = calculateReadingTime(rawMarkdown); const imports = parsed.frontmatter?.imports as string[] | undefined; return { post: { slug: entry.slug, title, description, date, authors, readingTime, ogImage: ogImageOverride, ogImageAlt, imports, }, html: rendered.body, rawMarkdown, prevPost, nextPost, }; } type BlogPostLoaderData = Awaited>; export function buildBlogPostHead(loaderData?: BlogPostLoaderData) { const title = loaderData?.post.title; const description = loaderData?.post.description; const slug = loaderData?.post.slug; const defaultOg = resolveOgImage(); const ogImageUrl = loaderData?.post.ogImage ?? defaultOg.url; const ogImageAlt = loaderData?.post.ogImageAlt ?? (title ? `${title} cover` : "Lix blog post"); const canonicalUrl = slug ? buildCanonicalUrl(`/blog/${slug}`) : buildCanonicalUrl("/blog"); const meta: Array< | { title: string } | { name: string; content: string } | { property: string; content: string } > = [ { title: title ? 
`${title} | Lix Blog` : "Lix Blog" }, { property: "og:url", content: canonicalUrl }, { property: "og:type", content: "article" }, { property: "og:site_name", content: "Lix" }, { property: "og:locale", content: "en_US" }, { property: "og:image", content: ogImageUrl }, { property: "og:image:width", content: String(ogImageWidth) }, { property: "og:image:height", content: String(ogImageHeight) }, { property: "og:image:alt", content: ogImageAlt }, { name: "twitter:card", content: "summary_large_image" }, { name: "twitter:image", content: ogImageUrl }, { name: "twitter:image:alt", content: ogImageAlt }, ]; if (description) { meta.push( { name: "description", content: description }, { property: "og:description", content: description }, { name: "twitter:description", content: description }, ); } if (title) { const pageTitle = `${title} | Lix Blog`; meta.push( { property: "og:title", content: pageTitle }, { name: "twitter:title", content: pageTitle }, ); } if (loaderData?.post.date) { meta.push({ property: "article:published_time", content: loaderData.post.date, }); } if (loaderData?.post.authors) { loaderData.post.authors.forEach((author) => { meta.push({ property: "article:author", content: author.name, }); }); } const links = [ { rel: "stylesheet", href: markdownPageCss }, { rel: "canonical", href: canonicalUrl }, ]; if (loaderData?.prevPost?.slug) { links.push({ rel: "prev", href: buildCanonicalUrl(`/blog/${loaderData.prevPost.slug}`), }); } if (loaderData?.nextPost?.slug) { links.push({ rel: "next", href: buildCanonicalUrl(`/blog/${loaderData.nextPost.slug}`), }); } const pageTitle = title ? `${title} | Lix Blog` : "Lix Blog"; const jsonLd = buildWebPageJsonLd({ title: pageTitle, description, canonicalUrl, image: ogImageUrl, }); return { meta, links, scripts: slug ? [ { type: "application/ld+json", children: JSON.stringify({ "@context": "https://schema.org", "@type": "BlogPosting", headline: title ?? slug, description, url: canonicalUrl, image: ogImageUrl, ...(loaderData?.post.date ? { datePublished: loaderData.post.date } : {}), ...(loaderData?.post.authors ? { author: loaderData.post.authors.map((author) => ({ "@type": "Person", name: author.name, ...(author.avatar ? { image: author.avatar } : {}), ...(author.twitter || author.github ? { sameAs: [author.twitter, author.github].filter( (value): value is string => Boolean(value), ), } : {}), })), } : {}), }), }, { type: "application/ld+json", children: JSON.stringify(jsonLd), }, ...(loaderData?.post.authors ? loaderData.post.authors.map((author) => ({ type: "application/ld+json", children: JSON.stringify({ "@context": "https://schema.org", "@type": "Person", name: author.name, ...(author.avatar ? { image: author.avatar } : {}), ...(author.twitter || author.github ? 
{ sameAs: [author.twitter, author.github].filter( (value): value is string => Boolean(value), ), } : {}), }), })) : []), ] : [], }; } export const Route = createFileRoute("/blog/$slug")({ loader: async ({ params }) => { try { return await loadBlogPost(params.slug); } catch { throw redirect({ to: "/blog" }); } }, head: ({ loaderData }) => buildBlogPostHead(loaderData), component: BlogPostPage, }); function BlogPostPage() { const { post, html, prevPost, nextPost } = Route.useLoaderData(); const [copied, setCopied] = useState(false); useEffect(() => { if (!post.imports || post.imports.length === 0) return; post.imports.forEach((url) => { import(/* @vite-ignore */ url).catch((err) => { console.error(`Failed to load web component from ${url}:`, err); }); }); }, [post.imports]); useEffect(() => { // @ts-expect-error - JS-only module import("../../components/markdown-page.interactive.js"); }, [html]); const copyUrl = async () => { try { await navigator.clipboard.writeText(window.location.href); setCopied(true); setTimeout(() => setCopied(false), 2000); } catch (err) { console.error("Failed to copy URL:", err); } }; return (

{post.title}

{post.authors && post.authors.length > 0 && (
{post.authors.map((author, index) => (
{author.avatar ? ( {author.name} ) : (
{author.name.charAt(0)}
)} {author.name} {author.twitter && ( )} {author.github && ( )}
))}
)}
{post.readingTime} min read
{post.date && ( )}

Get notified about new blog posts

); } function getBlogJson(filename: string): Promise { const loader = blogJsonFiles[`${blogRootPrefix}${filename}`]; if (!loader) { throw new Error(`Missing blog file: ${filename}`); } return loader(); } function getBlogMarkdown(relativePath: string): Promise { const normalized = relativePath.replace(/^[./]+/, ""); const loader = blogMarkdownFiles[`${blogRootPrefix}${normalized}`]; if (!loader) { throw new Error(`Missing blog markdown: ${relativePath}`); } return loader(); } function formatDate(dateString: string): string { try { const date = new Date(dateString); return date.toLocaleDateString("en-US", { year: "numeric", month: "long", day: "numeric", }); } catch { return dateString; } } ================================================ FILE: packages/website/src/routes/blog/index.tsx ================================================ import { createFileRoute, Link } from "@tanstack/react-router"; import { parse } from "@opral/markdown-wc"; import { getBlogDescription, getBlogTitle } from "../../blog/blogMetadata"; import { resolveBlogAssetPath } from "../../blog/og-image"; import { Footer } from "../../components/footer"; import { Header } from "../../components/header"; import { buildCanonicalUrl, resolveOgImage } from "../../lib/seo"; type Author = { name: string; avatar?: string | null; }; const blogMarkdownFiles = import.meta.glob( "../../../../../blog/**/*.md", { query: "?raw", import: "default", }, ); const blogJsonFiles = import.meta.glob("../../../../../blog/*.json", { query: "?raw", import: "default", }); const blogRootPrefix = "../../../../../blog/"; async function loadBlogIndex() { const authorsContent = await getBlogJson("authors.json"); const authorsMap = JSON.parse(authorsContent) as Record< string, { name: string; avatar?: string | null } >; const tocContent = await getBlogJson("table_of_contents.json"); const toc = JSON.parse(tocContent) as Array<{ path: string; slug: string; authors?: string[]; }>; const posts = await Promise.all( toc.map(async (item) => { const relativePath = item.path.startsWith("./") ? item.path.slice(2) : item.path; const rawMarkdown = await getBlogMarkdown(relativePath); const parsed = await parse(rawMarkdown); const title = getBlogTitle({ rawMarkdown, frontmatter: parsed.frontmatter, }); const description = getBlogDescription({ rawMarkdown, frontmatter: parsed.frontmatter, }); const authors = item.authors ?.map((authorId) => authorsMap[authorId]) .filter(Boolean) as Author[] | undefined; // Extract folder name from path (e.g., "001-introducing-lix" from "001-introducing-lix/index.md") const folderName = relativePath.replace(/\/index\.md$/, ""); const ogImageRaw = typeof parsed.frontmatter?.["og:image"] === "string" ? parsed.frontmatter["og:image"] : undefined; const ogImage = ogImageRaw ? resolveBlogAssetPath(ogImageRaw, folderName) : undefined; const ogImageAlt = (typeof parsed.frontmatter?.["og:image:alt"] === "string" ? parsed.frontmatter["og:image:alt"] : undefined) ?? (typeof parsed.frontmatter?.["twitter:image:alt"] === "string" ? parsed.frontmatter["twitter:image:alt"] : undefined) ?? (title ? 
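// Last-resort alt text derived from the post title ("<title> cover image") when frontmatter provides no image alt.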
`${title} cover image` : undefined); // Get date from frontmatter const date = parsed.frontmatter?.date as string | undefined; return { slug: item.slug, title, description, date, authors, ogImage, ogImageAlt, }; }), ); posts.sort((a, b) => { if (!a.date && !b.date) return 0; if (!a.date) return 1; if (!b.date) return -1; return new Date(b.date).getTime() - new Date(a.date).getTime(); }); return { posts }; } export const Route = createFileRoute("/blog/")({ loader: async () => { return await loadBlogIndex(); }, head: () => { const canonicalUrl = buildCanonicalUrl("/blog"); const description = "Product updates, architecture notes, and experiments from building Lix for AI agents and structured file workflows."; const ogImage = resolveOgImage(); const title = "Lix Blog | Product updates, architecture notes, and AI workflow ideas"; return { links: [{ rel: "canonical", href: canonicalUrl }], scripts: [ { type: "application/ld+json", children: JSON.stringify({ "@context": "https://schema.org", "@type": "Blog", name: "Blog | Lix", description, url: canonicalUrl, }), }, ], meta: [ { title }, { name: "description", content: description }, { property: "og:title", content: title }, { property: "og:description", content: description }, { property: "og:url", content: canonicalUrl }, { property: "og:type", content: "website" }, { property: "og:site_name", content: "Lix" }, { property: "og:locale", content: "en_US" }, { property: "og:image", content: ogImage.url }, { property: "og:image:alt", content: ogImage.alt }, { name: "twitter:card", content: "summary_large_image" }, { name: "twitter:image", content: ogImage.url }, { name: "twitter:image:alt", content: ogImage.alt }, { name: "twitter:title", content: title }, { name: "twitter:description", content: description }, ], }; }, component: BlogIndexPage, }); function BlogIndexPage() { const { posts } = Route.useLoaderData(); return (

Blog

Get notified about new blog posts

{posts.map((post) => (
{post.ogImage && (
{
)}

{post.title ?? post.slug}

{post.description && (

{post.description}

)}
{post.authors && post.authors.length > 0 && ( <> {post.authors.map((author, index) => (
{author.avatar ? ( {author.name} ) : (
{author.name.charAt(0)}
)} {author.name}
))} {post.date && ( · )} )} {post.date && }
))}
); } function formatDate(dateString: string): string { try { const date = new Date(dateString); return date.toLocaleDateString("en-US", { year: "numeric", month: "long", day: "numeric", }); } catch { return dateString; } } function getBlogJson(filename: string): Promise { const loader = blogJsonFiles[`${blogRootPrefix}${filename}`]; if (!loader) { throw new Error(`Missing blog file: ${filename}`); } return loader(); } function getBlogMarkdown(relativePath: string): Promise { const normalized = relativePath.replace(/^[./]+/, ""); const loader = blogMarkdownFiles[`${blogRootPrefix}${normalized}`]; if (!loader) { throw new Error(`Missing blog markdown: ${relativePath}`); } return loader(); } ================================================ FILE: packages/website/src/routes/docs/$slugId.tsx ================================================ import { createFileRoute, notFound } from "@tanstack/react-router"; import { DocsLayout, type PageTocItem, type SidebarSection, } from "../../components/docs-layout"; import { MarkdownPage } from "../../components/markdown-page"; import tableOfContents from "../../../../../docs/table_of_contents.json"; import { DocsPrevNext } from "../../components/docs-prev-next"; import { buildDocMaps, buildTocMap, normalizeRelativePath, resolveDocsMarkdownHref, type Toc, type TocItem, } from "../../lib/build-doc-map"; import { buildCanonicalUrl, buildBreadcrumbJsonLd, buildWebPageJsonLd, extractOgMeta, extractTwitterMeta, getMarkdownDescription, getMarkdownTitle, resolveOgImage, } from "../../lib/seo"; import { parse } from "@opral/markdown-wc"; import markdownPageCss from "../../components/markdown-page.style.css?url"; const docs = import.meta.glob("../../../../../docs/**/*.md", { eager: true, import: "default", query: "?raw", }); const tocMap = buildTocMap(tableOfContents as Toc); const { bySlug: docsBySlug } = buildDocMaps(docs); const docsByRelativePath = Object.values(docsBySlug).reduce( (acc, doc) => { acc[doc.relativePath] = doc; return acc; }, {} as Record, ); /** * Builds a list of heading links from rendered HTML for the "On this page" TOC. * * @example * buildPageToc('

<h2 id="intro">Intro</h2>

') // [{ id: "intro", label: "Intro", level: 2 }] */ function buildPageToc(html: string): PageTocItem[] { const headings: PageTocItem[] = []; const regex = /]*id="([^"]+)"[^>]*>([\s\S]*?)<\/h2>/g; let match: RegExpExecArray | null; while ((match = regex.exec(html)) !== null) { const id = match[1]; const label = decodeHtmlEntities(stripHtml(match[2])).trim(); if (!id || !label) continue; headings.push({ id, label, level: 2 }); } return headings; } /** * Removes HTML tags from a string. * * @example * stripHtml("Title") // "Title" */ function stripHtml(input: string): string { return input.replace(/<[^>]*>/g, ""); } /** * Decodes a minimal set of HTML entities for heading labels. * * @example * decodeHtmlEntities("Foo & Bar") // "Foo & Bar" */ function decodeHtmlEntities(input: string): string { return input .replace(/&/g, "&") .replace(/</g, "<") .replace(/>/g, ">") .replace(/"/g, '"') .replace(/'/g, "'"); } function buildSidebarSections(toc: Toc): SidebarSection[] { return Object.entries(toc) .map(([label, sectionItems]) => { const items = sectionItems .map((item) => { const relativePath = normalizeRelativePath(item.path); const doc = docsByRelativePath[relativePath]; if (!doc) { return null; } return { label: item.label, href: `/docs/${doc.slug}`, relativePath, }; }) .filter((value): value is NonNullable => Boolean(value)); return { label, items }; }) .filter((section) => section.items.length > 0); } function buildDocsNavRoutes(toc: Toc) { return Object.values(toc) .flatMap((items) => items.map((item) => { const relativePath = normalizeRelativePath(item.path); const doc = docsByRelativePath[relativePath]; return { slug: doc?.slug ?? "", title: item.label, }; }), ) .filter((item) => item.slug); } type DocsLoaderData = { doc: (typeof docsBySlug)[string]; tocEntry: TocItem | undefined; sidebarSections: SidebarSection[]; html: string; frontmatter: Record & { imports?: string[] }; pageToc: PageTocItem[]; }; export function buildDocsPageHead(loaderData?: DocsLoaderData) { const data = loaderData as DocsLoaderData | undefined; const frontmatter = data?.frontmatter; const rawMarkdown = data?.doc?.content ?? ""; const title = getMarkdownTitle({ rawMarkdown, frontmatter }); const description = getMarkdownDescription({ rawMarkdown, frontmatter }); const canonicalUrl = data?.doc?.slug ? buildCanonicalUrl(`/docs/${data.doc.slug}`) : buildCanonicalUrl("/docs/what-is-lix"); const ogImage = resolveOgImage(frontmatter); const ogMeta = extractOgMeta(frontmatter); const twitterMeta = extractTwitterMeta(frontmatter); const pageTitle = title ? `${title} | Lix Documentation` : "Lix Documentation"; const jsonLd = buildWebPageJsonLd({ title: pageTitle, description, canonicalUrl, image: ogImage.url, }); const breadcrumbJsonLd = buildBreadcrumbJsonLd( [ { name: "Lix", item: buildCanonicalUrl("/") }, { name: "Documentation", item: buildCanonicalUrl("/docs/what-is-lix") }, title ? 
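// Breadcrumb trail: Lix → Documentation → the current page, with the last level dropped when the doc has no title.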
{ name: title, item: canonicalUrl } : undefined, ].filter(Boolean) as Array<{ name: string; item: string }>, ); const meta: Array< | { title: string } | { name: string; content: string } | { property: string; content: string } > = [ { title: pageTitle, }, { property: "og:url", content: canonicalUrl }, { property: "og:type", content: "article" }, { property: "og:site_name", content: "Lix" }, { property: "og:locale", content: "en_US" }, { property: "og:image", content: ogImage.url }, { property: "og:image:alt", content: ogImage.alt }, { name: "twitter:card", content: "summary_large_image" }, { name: "twitter:image", content: ogImage.url }, { name: "twitter:image:alt", content: ogImage.alt }, ]; if (description) { meta.push( { name: "description", content: description }, { property: "og:description", content: description }, { name: "twitter:description", content: description }, ); } if (title) { meta.push( { property: "og:title", content: pageTitle }, { name: "twitter:title", content: pageTitle }, ); } return { meta: [...meta, ...ogMeta, ...twitterMeta], links: [ { rel: "stylesheet", href: markdownPageCss, }, { rel: "canonical", href: canonicalUrl, }, ], scripts: [ { type: "application/ld+json", children: JSON.stringify(jsonLd), }, { type: "application/ld+json", children: JSON.stringify(breadcrumbJsonLd), }, ], }; } export const Route = createFileRoute("/docs/$slugId")({ head: ({ loaderData }) => buildDocsPageHead(loaderData), loader: (async ({ params }: { params: { slugId: string } }) => { const doc = docsBySlug[params.slugId]; if (!doc) { throw notFound(); } const tocEntry = tocMap.get(doc.relativePath); const parsedMarkdown = await parse(doc.content, { externalLinks: true, assetBaseUrl: `/docs/${doc.slug}/`, resolveHref: (href) => resolveDocsMarkdownHref(href, doc, docsByRelativePath), }); const html = parsedMarkdown.html; const pageToc = buildPageToc(html); return { doc, tocEntry, sidebarSections: buildSidebarSections(tableOfContents as Toc), html, frontmatter: parsedMarkdown.frontmatter, pageToc, }; }) as any, component: DocsPage, }); function DocsPage() { const { doc, sidebarSections, html, frontmatter, pageToc } = Route.useLoaderData() as DocsLoaderData; const navRoutes = buildDocsNavRoutes(tableOfContents as Toc); const editUrl = `https://github.com/opral/lix/blob/main/docs/${doc.relativePath.replace( /^\.\//, "", )}`; return ( ); } ================================================ FILE: packages/website/src/routes/docs/index.tsx ================================================ import { createFileRoute, notFound, redirect } from "@tanstack/react-router"; import tableOfContents from "../../../../../docs/table_of_contents.json"; import { buildDocMaps, normalizeRelativePath, type Toc, } from "../../lib/build-doc-map"; import redirects from "./redirects.json"; /** * Resolves a redirect destination from the docs redirect map. * * @example * resolveDocsRedirect("/docs") // "/docs/what-is-lix" */ function resolveDocsRedirect(pathname: string): string | undefined { const normalized = pathname.endsWith("/") ? pathname.slice(0, -1) : pathname; const redirectMap = redirects as Record; return redirectMap[normalized] ?? 
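// Check the trailing-slash-stripped path first, then fall back to the raw pathname.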
redirectMap[pathname]; } const docs = import.meta.glob("../../../../../docs/**/*.md", { eager: true, import: "default", query: "?raw", }); const { bySlug: docsBySlug } = buildDocMaps(docs); const docsByRelativePath = Object.values(docsBySlug).reduce( (acc, doc) => { acc[doc.relativePath] = doc; return acc; }, {} as Record, ); export const Route = createFileRoute("/docs/")({ loader: () => { const redirected = resolveDocsRedirect("/docs"); if (redirected) { throw redirect({ to: redirected, }); } const toc = tableOfContents as Toc; const firstPath = Object.values(toc)[0]?.[0]?.path; const firstRelative = firstPath ? normalizeRelativePath(firstPath) : undefined; const firstDoc = (firstRelative && docsByRelativePath[firstRelative]) || Object.values(docsBySlug)[0]; if (!firstDoc) { throw notFound(); } throw redirect({ // @ts-ignore to: `/docs/${firstDoc.slug}`, }); }, }); ================================================ FILE: packages/website/src/routes/docs/redirects.json ================================================ { "/docs": "/docs/what-is-lix" } ================================================ FILE: packages/website/src/routes/guide/$slugId.tsx ================================================ import { createFileRoute, redirect } from "@tanstack/react-router"; export const Route = createFileRoute("/guide/$slugId")({ loader: ({ params }) => { throw redirect({ to: "/docs/$slugId", params: { slugId: params.slugId }, }); }, }); ================================================ FILE: packages/website/src/routes/guide/index.tsx ================================================ import { createFileRoute, redirect } from "@tanstack/react-router"; export const Route = createFileRoute("/guide/")({ loader: () => { throw redirect({ to: "/docs/$slugId", params: { slugId: "what-is-lix" }, }); }, }); ================================================ FILE: packages/website/src/routes/index.tsx ================================================ import { createFileRoute } from "@tanstack/react-router"; import { parse } from "@opral/markdown-wc"; import LandingPage from "../components/landing-page"; import { buildCanonicalUrl, buildWebSiteJsonLd, resolveOgImage, } from "../lib/seo"; import markdownPageCss from "../components/markdown-page.style.css?url"; import readmeMarkdown from "../../../../README.md?raw"; async function loadReadmeContent() { const parsed = await parse(readmeMarkdown); return { html: parsed.html }; } export const Route = createFileRoute("/")({ loader: async () => { return await loadReadmeContent(); }, head: () => { const title = "Lix | Version control as a library for AI agents and structured data"; const description = "Lix gives AI agents and applications branchable, reviewable change control for structured files, binary formats, and SQL-backed workflows."; const canonicalUrl = buildCanonicalUrl("/"); const ogImage = resolveOgImage(); const jsonLd = buildWebSiteJsonLd({ title, description, canonicalUrl, }); return { meta: [ { title }, { name: "description", content: description }, { property: "og:title", content: title }, { property: "og:description", content: description }, { property: "og:url", content: canonicalUrl }, { property: "og:type", content: "website" }, { property: "og:site_name", content: "Lix" }, { property: "og:locale", content: "en_US" }, { property: "og:image", content: ogImage.url }, { property: "og:image:alt", content: ogImage.alt }, { name: "twitter:card", content: "summary_large_image" }, { name: "twitter:title", content: title }, { name: "twitter:description", content: 
description }, { name: "twitter:image", content: ogImage.url }, { name: "twitter:image:alt", content: ogImage.alt }, ], links: [ { rel: "canonical", href: canonicalUrl }, { rel: "stylesheet", href: markdownPageCss }, ], scripts: [ { type: "application/ld+json", children: JSON.stringify(jsonLd), }, ], }; }, component: LandingPageWrapper, }); function LandingPageWrapper() { const { html } = Route.useLoaderData(); return ; } ================================================ FILE: packages/website/src/routes/plugins/$pluginKey.tsx ================================================ import { createFileRoute } from "@tanstack/react-router"; import { Header } from "../../components/header"; import { Footer } from "../../components/footer"; import { buildBreadcrumbJsonLd, buildCanonicalUrl, buildWebPageJsonLd, } from "../../lib/seo"; const title = "Lix Plugins"; const description = "Plugins are coming soon."; export const Route = createFileRoute("/plugins/$pluginKey")({ head: () => { const canonicalUrl = buildCanonicalUrl("/plugins"); const jsonLd = buildWebPageJsonLd({ title, description, canonicalUrl, }); const breadcrumbJsonLd = buildBreadcrumbJsonLd([ { name: "Lix", item: buildCanonicalUrl("/") }, { name: "Plugins", item: canonicalUrl }, ]); return { meta: [ { title }, { name: "description", content: description }, { property: "og:title", content: title }, { property: "og:description", content: description }, { property: "og:url", content: canonicalUrl }, { property: "og:type", content: "website" }, { property: "og:site_name", content: "Lix" }, { name: "twitter:card", content: "summary" }, { name: "twitter:title", content: title }, { name: "twitter:description", content: description }, ], links: [{ rel: "canonical", href: canonicalUrl }], scripts: [ { type: "application/ld+json", children: JSON.stringify(jsonLd), }, { type: "application/ld+json", children: JSON.stringify(breadcrumbJsonLd), }, ], }; }, component: PluginsComingSoonPage, }); function PluginsComingSoonPage() { return (

Plugins

Plugins are coming soon

We are rewriting this section as part of the website cleanup.

); } ================================================ FILE: packages/website/src/routes/plugins/index.tsx ================================================ import { createFileRoute } from "@tanstack/react-router"; import { Header } from "../../components/header"; import { Footer } from "../../components/footer"; import { buildBreadcrumbJsonLd, buildCanonicalUrl, buildWebPageJsonLd, } from "../../lib/seo"; const title = "Lix Plugins"; const description = "Plugins are coming soon."; export const Route = createFileRoute("/plugins/")({ head: () => { const canonicalUrl = buildCanonicalUrl("/plugins"); const jsonLd = buildWebPageJsonLd({ title, description, canonicalUrl, }); const breadcrumbJsonLd = buildBreadcrumbJsonLd([ { name: "Lix", item: buildCanonicalUrl("/") }, { name: "Plugins", item: canonicalUrl }, ]); return { meta: [ { title }, { name: "description", content: description }, { property: "og:title", content: title }, { property: "og:description", content: description }, { property: "og:url", content: canonicalUrl }, { property: "og:type", content: "website" }, { property: "og:site_name", content: "Lix" }, { name: "twitter:card", content: "summary" }, { name: "twitter:title", content: title }, { name: "twitter:description", content: description }, ], links: [{ rel: "canonical", href: canonicalUrl }], scripts: [ { type: "application/ld+json", children: JSON.stringify(jsonLd), }, { type: "application/ld+json", children: JSON.stringify(breadcrumbJsonLd), }, ], }; }, component: PluginsComingSoonPage, }); function PluginsComingSoonPage() { return (

Plugins

Plugins are coming soon

We are rewriting this section as part of the website cleanup.

); } ================================================ FILE: packages/website/src/routes/plugins/plugin.registry.json ================================================ { "$schema": "https://json-schema.org/draft/2020-12/schema", "plugins": [ { "key": "plugin_json", "name": "JSON Plugin", "package": "@lix-js/plugin-json", "description": "Tracks JSON files with JSON Pointer entities", "file_types": ["*.json"], "readme": "https://raw.githubusercontent.com/opral/lix/main/packages/plugin-json/README.md", "links": { "npm": "https://www.npmjs.com/package/@lix-js/plugin-json", "github": "https://github.com/opral/lix/tree/main/packages/plugin-json", "docs": "/plugins/plugin_json" } }, { "key": "lix_plugin_csv", "name": "CSV Plugin", "package": "@lix-js/plugin-csv", "description": "Tracks CSV files with row-level entities", "file_types": ["*.csv"], "readme": "https://raw.githubusercontent.com/opral/lix/main/packages/plugin-csv/README.md", "links": { "npm": "https://www.npmjs.com/package/@lix-js/plugin-csv", "github": "https://github.com/opral/lix/tree/main/packages/plugin-csv", "docs": "/plugins/lix_plugin_csv" } }, { "key": "plugin_md", "name": "Markdown Plugin", "package": "@lix-js/plugin-md", "description": "Tracks Markdown files using markdown-wc AST", "file_types": ["*.md"], "readme": "https://raw.githubusercontent.com/opral/lix/main/packages/plugin-md/README.md", "links": { "npm": "https://www.npmjs.com/package/@lix-js/plugin-md", "github": "https://github.com/opral/lix/tree/main/packages/plugin-md", "docs": "/plugins/plugin_md" } }, { "key": "plugin_prosemirror", "name": "ProseMirror Plugin", "package": "@lix-js/plugin-prosemirror", "description": "Tracks rich text edits in ProseMirror documents", "file_types": ["/prosemirror.json"], "readme": "https://raw.githubusercontent.com/opral/lix/main/packages/plugin-prosemirror/README.md", "links": { "npm": "https://www.npmjs.com/package/@lix-js/plugin-prosemirror", "github": "https://github.com/opral/lix/tree/main/packages/plugin-prosemirror", "docs": "/plugins/plugin_prosemirror", "example": "https://prosemirror-example.onrender.com/" } } ] } ================================================ FILE: packages/website/src/routes/rfc/$slug.tsx ================================================ import { createFileRoute, Link, redirect } from "@tanstack/react-router"; import { parse } from "@opral/markdown-wc"; import { useEffect } from "react"; import markdownPageCss from "../../components/markdown-page.style.css?url"; import { Footer } from "../../components/footer"; import { Header } from "../../components/header"; import { PrevNextNav } from "../../components/prev-next-nav"; import { buildBreadcrumbJsonLd, buildCanonicalUrl, buildWebPageJsonLd, extractOgMeta, extractTwitterMeta, getMarkdownDescription, getMarkdownTitle, resolveOgImage, splitTitleFromHtml, } from "../../lib/seo"; const rfcMarkdownFiles = import.meta.glob( "../../../../../rfcs/**/index.md", { query: "?raw", import: "default", }, ); const rfcRootPrefix = "../../../../../rfcs/"; type RfcPrevNext = { slug: string; title: string; } | null; async function getTitleForSlug(slug: string): Promise { const path = `${rfcRootPrefix}${slug}/index.md`; const loader = rfcMarkdownFiles[path]; if (!loader) return slug; const rawMarkdown = await loader(); const parsed = await parse(rawMarkdown); return ( getMarkdownTitle({ rawMarkdown, frontmatter: parsed.frontmatter, }) ?? 
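// Fall back to the RFC folder slug when no title can be derived from the markdown or its frontmatter.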
slug ); } /** * Rewrite RFC links to remove index.md suffix * Handles both relative paths (../001-slug/index.md) and absolute paths (/rfc/001-slug/index.md) */ function rewriteRfcLinks(html: string): string { return ( html // Handle relative paths: ../001-slug/index.md or ./001-slug/index.md .replace(/href="\.\.?\/([\d]+-[^/]+)\/index\.md"/g, 'href="/rfc/$1"') // Handle absolute paths that were resolved by assetBaseUrl: /rfc/001-slug/index.md .replace(/href="\/rfc\/([\d]+-[^/]+)\/index\.md"/g, 'href="/rfc/$1"') ); } async function loadRfc(slug: string) { if (!slug) { throw new Error("Missing RFC slug"); } const path = `${rfcRootPrefix}${slug}/index.md`; const loader = rfcMarkdownFiles[path]; if (!loader) { throw new Error(`RFC not found: ${slug}`); } // Auto-discover all RFCs for prev/next navigation const rfcPaths = Object.keys(rfcMarkdownFiles); const allSlugs = rfcPaths .map((p) => p.replace(rfcRootPrefix, "").replace("/index.md", "")) .sort((a, b) => b.localeCompare(a)); // Sort Z-A const currentIndex = allSlugs.findIndex((s) => s === slug); const prevSlug = currentIndex > 0 ? allSlugs[currentIndex - 1] : null; const nextSlug = currentIndex < allSlugs.length - 1 ? allSlugs[currentIndex + 1] : null; const prevRfc: RfcPrevNext = prevSlug ? { slug: prevSlug, title: await getTitleForSlug(prevSlug) } : null; const nextRfc: RfcPrevNext = nextSlug ? { slug: nextSlug, title: await getTitleForSlug(nextSlug) } : null; const rawMarkdown = await loader(); const parsed = await parse(rawMarkdown, { assetBaseUrl: `/rfc/${slug}/`, }); const rendered = splitTitleFromHtml(rewriteRfcLinks(parsed.html)); const title = getMarkdownTitle({ rawMarkdown, frontmatter: parsed.frontmatter, }) ?? rendered.title ?? slug; const description = getMarkdownDescription({ rawMarkdown, frontmatter: parsed.frontmatter, }) ?? `Design proposal for ${title}.`; const date = parsed.frontmatter?.date as string | undefined; return { slug, title, description, date, html: rendered.body, frontmatter: parsed.frontmatter, prevRfc, nextRfc, }; } type RfcLoaderData = Awaited>; export function buildRfcHead(loaderData?: RfcLoaderData) { const title = loaderData?.title; const description = loaderData?.description; const slug = loaderData?.slug; const canonicalUrl = slug ? buildCanonicalUrl(`/rfc/${slug}`) : buildCanonicalUrl("/rfc"); const ogImage = resolveOgImage(loaderData?.frontmatter); const ogMeta = extractOgMeta(loaderData?.frontmatter); const twitterMeta = extractTwitterMeta(loaderData?.frontmatter); const pageTitle = title ? `${title} | Lix RFCs` : "Lix RFCs"; const links: Array<{ rel: string; href: string }> = [ { rel: "stylesheet", href: markdownPageCss }, { rel: "canonical", href: canonicalUrl }, ]; if (loaderData?.prevRfc?.slug) { links.push({ rel: "prev", href: buildCanonicalUrl(`/rfc/${loaderData.prevRfc.slug}`), }); } if (loaderData?.nextRfc?.slug) { links.push({ rel: "next", href: buildCanonicalUrl(`/rfc/${loaderData.nextRfc.slug}`), }); } const meta: Array< | { title: string } | { name: string; content: string } | { property: string; content: string } > = [ { title: pageTitle }, { property: "og:title", content: pageTitle }, { property: "og:description", content: description ?? 
"Lix RFC" }, { property: "og:url", content: canonicalUrl }, { property: "og:type", content: "article" }, { property: "og:site_name", content: "Lix" }, { property: "og:locale", content: "en_US" }, { property: "og:image", content: ogImage.url }, { property: "og:image:alt", content: ogImage.alt }, { name: "twitter:card", content: "summary_large_image" }, { name: "twitter:title", content: pageTitle }, { name: "twitter:description", content: description ?? "Lix RFC" }, { name: "twitter:image", content: ogImage.url }, { name: "twitter:image:alt", content: ogImage.alt }, ]; if (description) { meta.push({ name: "description", content: description }); } if (loaderData?.date) { meta.push({ property: "article:published_time", content: loaderData.date, }); } const webPageJsonLd = buildWebPageJsonLd({ title: pageTitle, description, canonicalUrl, image: ogImage.url, }); const breadcrumbJsonLd = buildBreadcrumbJsonLd([ { name: "Lix", item: buildCanonicalUrl("/") }, { name: "RFCs", item: buildCanonicalUrl("/rfc") }, ...(title ? [{ name: title, item: canonicalUrl }] : []), ]); const scripts = [ { type: "application/ld+json", children: JSON.stringify({ "@context": "https://schema.org", "@type": "TechArticle", headline: title ?? "Lix RFC", description, url: canonicalUrl, image: ogImage.url, ...(loaderData?.date ? { datePublished: loaderData.date } : {}), }), }, { type: "application/ld+json", children: JSON.stringify(webPageJsonLd), }, { type: "application/ld+json", children: JSON.stringify(breadcrumbJsonLd), }, ]; return { meta: [...meta, ...ogMeta, ...twitterMeta], links, scripts, }; } export const Route = createFileRoute("/rfc/$slug")({ loader: async ({ params }) => { try { return await loadRfc(params.slug); } catch { throw redirect({ to: "/rfc" }); } }, head: ({ loaderData }) => buildRfcHead(loaderData), component: RfcPage, }); function RfcPage() { const { title, html, prevRfc, nextRfc } = Route.useLoaderData(); useEffect(() => { // @ts-expect-error - JS-only module import("../../components/markdown-page.interactive.js"); }, [html]); return (

{title}

); } ================================================ FILE: packages/website/src/routes/rfc/index.tsx ================================================ import { createFileRoute, Link } from "@tanstack/react-router"; import { parse } from "@opral/markdown-wc"; import { Footer } from "../../components/footer"; import { Header } from "../../components/header"; import { buildCanonicalUrl, buildWebPageJsonLd, resolveOgImage, } from "../../lib/seo"; const rfcMarkdownFiles = import.meta.glob( "../../../../../rfcs/**/index.md", { query: "?raw", import: "default", }, ); const rfcRootPrefix = "../../../../../rfcs/"; type RfcEntry = { slug: string; title: string; date?: string; }; async function loadRfcIndex(): Promise<{ rfcs: RfcEntry[] }> { const rfcPaths = Object.keys(rfcMarkdownFiles); const rfcs = await Promise.all( rfcPaths.map(async (path) => { // Extract slug from path like "../../../../../rfcs/001-preprocess-writes/index.md" const slug = path.replace(rfcRootPrefix, "").replace("/index.md", ""); const rawMarkdown = await rfcMarkdownFiles[path](); const parsed = await parse(rawMarkdown); // Extract title from frontmatter or first h1 let title = slug; if (parsed.frontmatter?.title) { title = parsed.frontmatter.title as string; } else { const h1Match = rawMarkdown.match(/^#\s+(.+)$/m); if (h1Match) { title = h1Match[1]; } } // Extract date from frontmatter const date = parsed.frontmatter?.date as string | undefined; return { slug, title, date }; }), ); // Sort Z-A (descending by slug, so 002 comes before 001) rfcs.sort((a, b) => b.slug.localeCompare(a.slug)); return { rfcs }; } export function buildRfcIndexHead() { const title = "Lix RFCs | Design proposals, architecture decisions, and roadmap notes"; const description = "Read Lix RFCs covering architecture decisions, engine design, and upcoming changes before they land in the product."; const canonicalUrl = buildCanonicalUrl("/rfc"); const ogImage = resolveOgImage(); const jsonLd = buildWebPageJsonLd({ title, description, canonicalUrl, image: ogImage.url, }); return { links: [{ rel: "canonical", href: canonicalUrl }], scripts: [ { type: "application/ld+json", children: JSON.stringify(jsonLd), }, ], meta: [ { title }, { name: "description", content: description }, { property: "og:title", content: title }, { property: "og:description", content: description }, { property: "og:url", content: canonicalUrl }, { property: "og:type", content: "website" }, { property: "og:site_name", content: "Lix" }, { property: "og:locale", content: "en_US" }, { property: "og:image", content: ogImage.url }, { property: "og:image:alt", content: ogImage.alt }, { name: "twitter:card", content: "summary_large_image" }, { name: "twitter:title", content: title }, { name: "twitter:description", content: description }, { name: "twitter:image", content: ogImage.url }, { name: "twitter:image:alt", content: ogImage.alt }, ], }; } export const Route = createFileRoute("/rfc/")({ loader: async () => { return await loadRfcIndex(); }, head: () => buildRfcIndexHead(), component: RfcIndexPage, }); function formatDate(dateString: string): string { try { const date = new Date(dateString); return date.toLocaleDateString("en-US", { year: "numeric", month: "long", day: "numeric", }); } catch { return dateString; } } function RfcIndexPage() { const { rfcs } = Route.useLoaderData(); return (

RFCs

Requests for Comments capture the design proposals, architectural tradeoffs, and implementation plans behind major Lix changes.

{rfcs.map((rfc) => { const rfcNumber = rfc.slug.match(/^(\d+)/)?.[1] ?? ""; return (
RFC {rfcNumber} {rfc.date && ( {formatDate(rfc.date)} )}
{rfc.title} ); })}
); } ================================================ FILE: packages/website/src/ssg/github-stars-plugin.ts ================================================ import fs from "node:fs/promises"; import { fileURLToPath } from "node:url"; export type GithubRepoMetrics = { stars: number; forks: number; openIssues: number; closedIssues: number; contributorCount: number; }; type GithubCache = { generatedAt: string; data: Record; }; const GITHUB_CACHE_TTL_MINUTES = 60; const githubCachePath = fileURLToPath( new URL("../github_repo_data.gen.json", import.meta.url), ); let didLogGithubToken = false; export function githubStarsPlugin({ token }: { token?: string }) { return { name: "lix:github-data", async buildStart() { await ensureGithubCache(token); }, async configureServer() { await ensureGithubCache(token); }, }; } async function ensureGithubCache(token?: string) { if (token && !didLogGithubToken) { console.info("Using LIX_WEBSITE_GITHUB_TOKEN for GitHub API requests."); didLogGithubToken = true; } const cached = await readGithubCache(); if (cached && !isCacheExpired(cached)) return; const repos = new Set(["opral/lix"]); const data: Record = {}; for (const repo of repos) { const metrics = await fetchGithubRepoMetrics(repo, token); data[repo.toLowerCase()] = metrics; } const payload: GithubCache = { generatedAt: new Date().toISOString(), data, }; await fs.writeFile(githubCachePath, JSON.stringify(payload, null, 2) + "\n"); } async function readGithubCache(): Promise { try { const raw = await fs.readFile(githubCachePath, "utf8"); return JSON.parse(raw) as GithubCache; } catch { return null; } } function isCacheExpired(cache: GithubCache) { const generatedAt = Date.parse(cache.generatedAt); if (Number.isNaN(generatedAt)) return true; const ttlMs = GITHUB_CACHE_TTL_MINUTES * 60 * 1000; return Date.now() - generatedAt > ttlMs; } function getHeaders(token?: string) { return { Accept: "application/vnd.github+json", "User-Agent": "lix-website", ...(token ? { Authorization: `Bearer ${token}` } : {}), }; } async function fetchGithubRepoMetrics( repo: string, token?: string, ): Promise { try { const repoRes = await fetch(`https://api.github.com/repos/${repo}`, { headers: getHeaders(token), }); if (!repoRes.ok) { console.warn(`GitHub repo fetch failed for ${repo}: ${repoRes.status}`); return null; } const repoData = (await repoRes.json()) as { stargazers_count?: number; forks_count?: number; open_issues_count?: number; }; const openIssuesRes = await fetch( `https://api.github.com/search/issues?q=repo:${repo}+is:issue+is:open&per_page=1`, { headers: getHeaders(token) }, ); const closedIssuesRes = await fetch( `https://api.github.com/search/issues?q=repo:${repo}+is:issue+is:closed&per_page=1`, { headers: getHeaders(token) }, ); let openIssues = 0; if (openIssuesRes.ok) { const openData = (await openIssuesRes.json()) as { total_count?: number; }; openIssues = openData.total_count ?? 0; } let closedIssues = 0; if (closedIssuesRes.ok) { const closedData = (await closedIssuesRes.json()) as { total_count?: number; }; closedIssues = closedData.total_count ?? 
0; } const contributorsRes = await fetch( `https://api.github.com/repos/${repo}/contributors?per_page=1&anon=1`, { headers: getHeaders(token) }, ); let contributorCount = 0; if (contributorsRes.ok) { const linkHeader = contributorsRes.headers.get("Link"); if (linkHeader) { const lastMatch = linkHeader.match(/page=(\d+)>; rel="last"/); if (lastMatch) { contributorCount = parseInt(lastMatch[1], 10); } } else { const data = (await contributorsRes.json()) as unknown[]; contributorCount = data.length; } } return { stars: repoData.stargazers_count ?? 0, forks: repoData.forks_count ?? 0, openIssues, closedIssues, contributorCount, }; } catch (error) { console.warn(`GitHub fetch failed for ${repo}`, error); return null; } } ================================================ FILE: packages/website/src/styles.css ================================================ @import "tailwindcss"; body { @apply m-0; font-family: Inter, ui-sans-serif, system-ui, -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; color: #213547; background: #ffffff; } code { font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace; } ================================================ FILE: packages/website/src/types/lix-js-plugin-json.d.ts ================================================ declare module "@lix-js/plugin-json"; ================================================ FILE: packages/website/tsconfig.json ================================================ { "include": ["**/*.ts", "**/*.tsx", "**/*.d.ts"], "compilerOptions": { "target": "ES2022", "jsx": "react-jsx", "module": "ESNext", "lib": ["ES2022", "DOM", "DOM.Iterable"], "types": ["vite/client"], /* Bundler mode */ "moduleResolution": "bundler", "allowImportingTsExtensions": true, "verbatimModuleSyntax": false, "noEmit": true, "resolveJsonModule": true, /* Linting */ "skipLibCheck": true, "strict": true, "noUnusedLocals": true, "noUnusedParameters": true, "noFallthroughCasesInSwitch": true, "noUncheckedSideEffectImports": true, "baseUrl": ".", "paths": { "@/*": ["./src/*"] } } } ================================================ FILE: packages/website/vite.config.ts ================================================ import { defineConfig, loadEnv, type Plugin } from "vite"; import { tanstackStart } from "@tanstack/react-start/plugin/vite"; import viteReact from "@vitejs/plugin-react"; import tailwindcss from "@tailwindcss/vite"; import { pluginReadmeSync } from "./scripts/plugin-readme-sync"; import { githubStarsPlugin } from "./src/ssg/github-stars-plugin"; import { viteStaticCopy } from "vite-plugin-static-copy"; import path from "path"; import fs from "fs"; import type { ViteDevServer } from "vite"; const mimeTypes: Record = { ".svg": "image/svg+xml", ".png": "image/png", ".jpg": "image/jpeg", ".jpeg": "image/jpeg", ".gif": "image/gif", ".webp": "image/webp", ".ico": "image/x-icon", }; /** * Serves blog assets from the blog directory in dev mode. 
*/ function blogAssetsPlugin(): Plugin { return { name: "blog-assets", configureServer(server) { server.middlewares.use((req, res, next) => { if (req.url?.startsWith("/blog/") && !req.url.endsWith("/")) { const assetPath = req.url.replace("/blog/", ""); const filePath = path.resolve(__dirname, "../../blog", assetPath); if (fs.existsSync(filePath) && fs.statSync(filePath).isFile()) { const ext = path.extname(filePath).toLowerCase(); const contentType = mimeTypes[ext] || "application/octet-stream"; res.setHeader("Content-Type", contentType); return res.end(fs.readFileSync(filePath)); } } next(); }); }, }; } /** * Keeps the docs route module graph in sync when root docs files are added or * removed while the dev server is already running. */ function docsContentWatchPlugin(): Plugin { const docsDir = path.resolve(__dirname, "../../docs"); const docsRouteFiles = [ path.resolve(__dirname, "src/routes/docs/$slugId.tsx"), path.resolve(__dirname, "src/routes/docs/index.tsx"), ]; const invalidateDocsRoutes = (server: ViteDevServer) => { for (const routeFile of docsRouteFiles) { const modules = server.moduleGraph.getModulesByFile(routeFile); if (!modules) continue; for (const module of modules) { server.moduleGraph.invalidateModule(module); } } server.ws.send({ type: "full-reload" }); }; const isDocsFile = (file: string) => { const normalizedFile = path.normalize(file); return normalizedFile.startsWith(docsDir + path.sep); }; return { name: "docs-content-watch", configureServer(server) { server.watcher.add(docsDir); server.watcher.on("add", (file) => { if (isDocsFile(file)) invalidateDocsRoutes(server); }); server.watcher.on("unlink", (file) => { if (isDocsFile(file)) invalidateDocsRoutes(server); }); }, }; } const config = defineConfig(({ mode, command }) => { const isTest = process.env.VITEST === "true" || mode === "test"; const env = loadEnv(mode, process.cwd(), ""); const githubToken = process.env.LIX_WEBSITE_GITHUB_TOKEN ?? env.LIX_WEBSITE_GITHUB_TOKEN; return { server: { fs: { allow: ["../..", "."], }, }, resolve: { tsconfigPaths: true, }, plugins: [ command === "serve" && blogAssetsPlugin(), command === "serve" && docsContentWatchPlugin(), pluginReadmeSync(), githubStarsPlugin({ token: githubToken, }), tailwindcss(), !isTest && viteStaticCopy({ targets: [ { src: "../../blog/**", dest: "../client/blog", }, ], watch: command === "serve" ? 
{ reloadPageOnChange: true } : undefined, }), tanstackStart({ prerender: { enabled: true, autoSubfolderIndex: true, autoStaticPathsDiscovery: true, crawlLinks: true, concurrency: 8, retryCount: 2, retryDelay: 1000, maxRedirects: 5, failOnError: true, }, sitemap: { enabled: true, host: "https://lix.dev", }, }), viteReact(), ].filter(Boolean), }; }); export default config; ================================================ FILE: packages/website/wrangler.json ================================================ { "$schema": "https://unpkg.com/wrangler@latest/config-schema.json", "name": "lix-website", "compatibility_date": "2025-11-23", "assets": { "directory": "./dist/client", "html_handling": "drop-trailing-slash" } } ================================================ FILE: pnpm-workspace.yaml ================================================ packages: - packages/**/* - '!packages/**/dist' onlyBuiltDependencies: - '@tailwindcss/oxide' - 'better-sqlite3' ================================================ FILE: rfcs/001-preprocess-writes/index.md ================================================ --- date: "2025-11-24" --- # Preprocess writes to avoid vtable overhead ## Summary Write operations in Lix are slow due to the vtable mechanism crossing the JS ↔ SQLite WASM boundary multiple times per row. This RFC proposes extending the existing SQL preprocessor to handle writes, bypassing [SQLite's Vtable mechanism](https://www.sqlite.org/vtab.html) entirely. ## Background & Current Architecture ### How We Got Here Lix evolved organically from application requirements: 1. **Git era**: Initially built on git, which [proved unsuited despite the ecosystem appeal](https://opral.substack.com/p/building-on-git-was-our-failure-mode). 2. **SQLite migration**: Rewrote on top of SQLite to gain ACID guarantees, a storage format, and a query engine. 3. **DML triggers**: Early prototypes used triggers on regular tables to track changes. 4. **VTable adoption**: The requirement to control transaction and commit semantics led to [SQLite's vtable mechanism](https://www.sqlite.org/vtab.html) to intercept reads and writes. 5. **Read performance fix**: VTables can't be optimized by SQLite (no filter pushdown for `json_extract`, etc.). A preprocessor was built ([#3723](https://github.com/opral/monorepo/pull/3723)) that rewrites SELECT queries to target real tables, achieving native read performance. 6. **Current state**: Reads are fast. Writes remain slow because they still hit the vtable. ### Current Data Model Lix has a unified read/write interface via the virtual table `lix_internal_state_vtable`. Underneath the vtable, the state is spread across four groups of physical tables: 1. **Change History** – `lix_internal_change` - Stores the history of changes which are used to materialize the committed state. - The foundation of the system. 2. **Transaction state** – `lix_internal_transaction_state` - Uncommitted changes (“staging area”) visible via the vtable before commit. 3. **Untracked state** – `lix_internal_state_all_untracked` - Local-only changes; not synced; coexist with transaction/committed rows. 4. **Committed state** – `lix_internal_state_cache_v1_*` - Schema-partitioned cache tables representing immutable history, optimized for reads. Materialized from `lix_internal_change`. 
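All four groups store the same logical state rows; the rewritten queries later in this RFC read and write these columns. A minimal sketch of that shared row shape, assuming the column names used in the pseudocode below (the physical tables may carry additional bookkeeping columns):

```typescript
// Illustrative only: the logical row shape shared by transaction, untracked,
// and committed state. Column names mirror the read/write pseudocode in this
// RFC; the actual physical table definitions may differ.
type StateRow = {
  entity_id: string; // identity of the entity within its schema
  schema_key: string; // e.g. "lix_key_value"; also selects the cache partition
  file_id: string; // file the entity belongs to
  version_id: string; // version ("branch") the row is visible in
  snapshot_content: unknown; // JSON snapshot validated against the stored schema
};
```

The prioritized UNION shown in the next section deduplicates these rows per `entity_id`, preferring transaction state over untracked state over committed state.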
Conceptually: ``` ┌─────────────────────────────────────────────────────────────────┐ │ lix_internal_state_vtable │ │ (unified read/write interface) │ └─────────────────────────────────────────────────────────────────┘ │ ┌────────────┼────────────────────────┐ ▼ ▼ ▼ ┌───────────────┐ ┌───────────┐ ┌─────────────────────────────┐ │ Transaction │ │ Untracked │ │ Committed State │ │ State │ │ State │ │ (cache tables) │ │ (staging) │ │ (local) │ │ │ └───────────────┘ └───────────┘ └──────────────▲──────────────┘ │ │ │ │ │ ┌──────────┴──────────┐ │ │ │ lix_internal_change │ │ │ │ (change history) │ │ │ └─────────────────────┘ │ │ │ └────────────────┴───────────────────────┘ │ Prioritized UNION (transaction > untracked > committed) ``` ### Current Read Path (Fast) ``` App Query Preprocessor SQLite │ │ │ │ SELECT * FROM vtable │ │ │ ─────────────────────────► │ │ │ │ Rewrite to UNION of │ │ │ physical tables │ │ │ ──────────────────────────► │ │ │ │ │ ◄───────────────────────────────────────────────────── │ │ Results (native speed) │ ``` The preprocessor intercepts SELECT queries and rewrites them into a `UNION` query combining the three physical tables, using `ROW_NUMBER()` to prioritize uncommitted/untracked changes. _Example rewritten query:_ ```sql -- User writes: SELECT * FROM lix_internal_state_vtable WHERE schema_key = 'lix_key_value' -- Preprocessor rewrites to (pseudocode): SELECT * FROM ( SELECT *, ROW_NUMBER() OVER ( PARTITION BY entity_id ORDER BY priority ) AS rn FROM ( -- Priority 1: Uncommitted transaction state SELECT *, 1 AS priority FROM lix_internal_transaction_state WHERE schema_key = 'lix_key_value' UNION ALL -- Priority 2: Untracked state SELECT *, 2 AS priority FROM lix_internal_state_all_untracked WHERE schema_key = 'lix_key_value' UNION ALL -- Priority 3: Committed state from schema-specific cache table SELECT *, 3 AS priority FROM lix_internal_state_cache_v1_lix_key_value ) ) WHERE rn = 1 ``` ### Current Write Path (Slow) ``` App Query SQLite JavaScript │ │ │ │ INSERT INTO vtable │ │ │ ───────────────────────► │ │ │ │ xUpdate() callback │ │ │ ──────────────────────────► │ │ │ │ ┌─────────────────┐ │ │ SELECT (validation) │ │ Per-row loop: │ │ │ ◄────────────────────────── │ │ • 1 timestamp │ │ │ │ │ • 3-5 schema │ │ │ SELECT (FK check) │ │ • N FK checks │ │ │ ◄────────────────────────── │ │ • N unique │ │ │ │ │ • 1 insert │ │ │ INSERT (transaction state) │ └─────────────────┘ │ │ ◄────────────────────────── │ │ │ ...repeat... │ ``` Each write triggers `xUpdate` in JavaScript, which executes multiple synchronous queries back into SQLite for validation. ### Query Breakdown Per Write From [`validate-state-mutation.ts`](https://github.com/opral/monorepo/blob/bbcb3b551f4d5cbf47f52eb8bc2846c3a5c0c411/packages/lix/sdk/src/state/vtable/validate-state-mutation.ts) and [`vtable.ts`](https://github.com/opral/monorepo/blob/bbcb3b551f4d5cbf47f52eb8bc2846c3a5c0c411/packages/lix/sdk/src/state/vtable/vtable.ts): | Phase | Queries | | ------------------------ | ------------------------ | | Timestamp | 1 | | Version existence check | 1 | | Schema retrieval | 1-2 | | JSON Schema validation | (in-memory via AJV) | | Primary key uniqueness | 1 | | Unique constraints | 1 per constraint | | Foreign key constraints | 2-3 per FK | | Transaction state insert | 1 | | File cache update | 0-2 (if file_descriptor) | **Total: ~10-25 queries per row**, depending on schema complexity. ## Problem **Write performance is poor.** A single logical write from the application results in: 1. 
One JS ↔ WASM boundary crossing to enter `xUpdate` 2. 10-25 internal SQL queries inside `xUpdate` for validation and bookkeeping 3. Each internal query crosses the JS ↔ WASM boundary again For bulk operations, this scales linearly: inserting 1,000 rows triggers 1,000 `xUpdate` calls and 10,000-25,000 boundary crossings. ### Quantifying the Problem Based on the existing benchmark suite (`vtable.insert.bench.ts`, `commit.bench.ts`): | Operation | Current Behavior (bench.base.json) | | -------------------- | --------------------------------------- | | Single row insert | ~15.3ms (state_by_version insert) | | 10-row chunk insert | ~39ms (state_by_version 10-row chunk) | | 100-row chunk insert | ~344ms (state_by_version 100-row chunk) | **Target**: A single mutation that writes ~100 rows should complete in <50ms. Why this target: - 50ms leaves another ~50ms in the 100ms UI budget for other work (rendering, effects). - 100 rows matches typical document transactions and keeps bulk edits responsive. ## Proposal Extend the preprocessor to handle `INSERT`, `UPDATE`, and `DELETE` statements, bypassing the vtable entirely. ### Write Path ``` App Query Preprocessor SQLite │ │ │ │ INSERT INTO vtable │ │ │ ─────────────────────────► │ │ │ │ 1. Parse SQL │ │ │ 2. Extract mutation rows │ │ │ 3. JSON Schema validate │ │ │ (in-memory) │ │ │ 4. File change detection │ │ │ (plugin callbacks) │ │ │ 5. Build bulk SQL with │ │ │ constraint checks │ │ │ ──────────────────────────► │ │ │ Single optimized query │ │ ◄───────────────────────────────────────────────────── │ │ Done (single boundary crossing) │ ``` ### Pseudocode Flow ```typescript async function execute(sql: string): Promise { // 1. Parse the incoming SQL const ast = parse(sql); if (!isMutation(ast)) return sql; // Pass through // 2. Extract target table and mutation type const { table, operation, rows } = extractMutation(ast); // 3. Resolve values (handle subqueries, defaults, etc.) const resolvedRows = await resolveRowValues(ast, rows); // 4. In-memory JSON Schema validation for (const row of resolvedRows) { const schema = getStoredSchema(row.schema_key); validateJsonSchema(row.snapshot_content, schema); // throws on error } // 5. File change detection (for file mutations) const detectedChanges: MutationRow[] = []; for (const row of resolvedRows) { if (row.schema_key === "lix_file") { const plugin = getMatchingPlugin(row); const changes = plugin.detectChanges({ after: row.snapshot_content, // Provide a query function that reads from pending state querySync: createPendingStateQuery(resolvedRows), }); detectedChanges.push(...changes); } } const allRows = [...resolvedRows, ...detectedChanges]; // 6. Build optimized SQL with constraint validation const targetTable = determineTargetTable(allRows); // transaction_state or untracked const optimizedSql = ` -- Constraint validation (fails entire transaction on violation) SELECT CASE WHEN EXISTS (${buildForeignKeyValidation(allRows)}) THEN RAISE(ABORT, 'Foreign key constraint failed') END; SELECT CASE WHEN EXISTS (${buildUniqueConstraintValidation(allRows)}) THEN RAISE(ABORT, 'Unique constraint failed') END; -- Bulk insert into physical table INSERT INTO ${targetTable} (entity_id, schema_key, file_id, ...) VALUES ${allRows.map(formatRow).join(", ")} ON CONFLICT (entity_id, schema_key, file_id, version_id) DO UPDATE SET snapshot_content = excluded.snapshot_content, ...; `; const result = sqlite.exec(optimizedSql); emit("onStateCommit", allRows); return result; } ``` ### Benefits 1. 
**No VTable Overhead**: By bypassing `xUpdate` and `xCommit`, we eliminate the costly JS ↔ WASM boundary crossings for every row. 2. **Elimination of `lix_internal_transaction_state`**: Since we write directly to the physical tables within the user's transaction, we no longer need a separate table to stage uncommitted changes. The underlying SQL database handles the transaction isolation for us. 3. **Bulk Performance**: Batch inserts (e.g., `INSERT INTO ... VALUES (...), (...)`) are handled as a single efficient SQL operation. In the vtable approach, SQLite loops and calls `xUpdate` for _each row_ individually, preventing bulk optimizations. ### Downsides / Risks 1. **Complexity of SQL Rewriting**: The preprocessor must correctly parse and rewrite potentially complex SQL statements, including handling edge cases. ### Bonus: Using Postgres or any other SQL database as backend Relying purely on preprocessing rather than SQLite (WASM build) specific APIs, enables lix to use any SQL database as backend e.g. PostgreSQL, Turso, or MySQL. ```ts const lix = await openLix({ environment: new PostgreSQL({ ... }), }); ``` For NodeJS we wouldn't need to build a SQLite WASM <-> FS bridge. Instead, we can use `better-sqlite3` or `node-postgres` directly. Of which the performance is significantly better than the WASM build which, for example, lacks WAL mode. ```ts const lix = await openLix({ environment: new BetterSqlite3({ ... }), }); ``` ================================================ FILE: rfcs/002-rewrite-in-rust/index.md ================================================ --- date: "2025-11-30" --- # Implement the Lix Engine in Rust ## Summary [RFC 001](../001-preprocess-writes/index.md) proposes extending the SQL preprocessor to handle writes. This RFC proposes implementing the **Lix Engine** - the core layer responsible for SQL preprocessing, validation, and rewriting - in Rust. ## Goals 1. **Leverage existing Rust libraries** - Rust has production-grade SQL parsers (`sqlparser-rs`), CEL evaluators (`cel-rust`), and JSON Schema validators that don't exist in JS. Our custom JS SQL parser is fragile and limited. 2. **Portable engine for multi-language bindings** - A Rust engine can be exposed to JS (NAPI-RS, WASM), Python (PyO3), and other languages. Implementing in Rust now means the core is written once. ## Non-Goals - **Deviate from SQLite dialect** - SQLite is the target for now. While `sqlparser-rs` supports multiple dialects, the initial implementation strictly targets SQLite. ## Context RFC 001 establishes that: 1. Lix is moving to a preprocessor-driven architecture that rewrites SQL against virtual tables into SQL against physical tables. 2. The preprocessor must handle both reads and writes, including parsing SQL, extracting mutations, validating schemas/constraints, and emitting optimized SQL. 3. A custom JS SQL parser is a source of fragility. The question is: **implement this in JavaScript or Rust?** ## Proposal Implement the Lix Engine in Rust. ### Architecture ``` ┌─────────────────────────────────────────────────────────────┐ │ SDK (JS/TS, Python, etc.) │ │ - High-level API (openLix, lix.db.select, etc.) 
│ │ - Owns SQLite connection (WASM or native) │ │ - Provides execute callback to engine │ └─────────────────────────────┬───────────────────────────────┘ │ engine.execute(sql, params) ▼ ┌─────────────────────────────────────────────────────────────┐ │ Engine (Rust) │ │ - SQL parsing & rewriting (sqlparser-rs) │ │ - Schema validation (JSON Schema, CEL) │ │ - Calls host.execute(sql) via callback │ │ - Calls host.detectChanges() for plugins │ └─────────────────────────────┬───────────────────────────────┘ │ callback: host.execute(sql) ▼ ┌─────────────────────────────────────────────────────────────┐ │ SQL Database (SQLite) │ │ - Physical storage │ │ - Transaction management | - Index/query execution │ └─────────────────────────────────────────────────────────────┘ ``` **Key design decisions:** - SDK owns SQLite - no bundling concerns in the engine - Engine controls flow - can run multiple queries internally via host callback - Preprocessing is internal - SDK never sees intermediate SQL ### Engine API From the SDK's perspective: ```typescript const engine = createEngine({ execute: (sql: string, params: unknown[]) => sqlite.exec(sql, params), detectChanges: (pluginId: string, before: Uint8Array, after: Uint8Array) => plugin.detectChanges({ before, after }), }); const result = engine.execute("INSERT INTO messages ...", [params]); ``` The Rust engine exposes bindings via: - **NAPI-RS** for Node.js (native addon) - **WASM** for browser environments - **C FFI** for other languages (Python via PyO3, etc.) ### Implementation The engine uses these Rust libraries: #### 1. SQL Parsing - `sqlparser-rs` ```rust use sqlparser::dialect::SQLiteDialect; use sqlparser::parser::Parser; let dialect = SQLiteDialect {}; let statements = Parser::parse_sql(&dialect, sql)?; for statement in statements { match statement { Statement::Query(query) => { /* rewrite SELECT */ } Statement::Insert(_) => { /* rewrite INSERT */ } Statement::Update(_) => { /* rewrite UPDATE */ } Statement::Delete(_) => { /* rewrite DELETE */ } other => { /* passthrough PRAGMA, etc. */ } } } ``` #### 2. CEL Validation - `cel-rust` ```rust use cel_interpreter::{Context, Program}; let program = Program::compile("data.amount > 0 && data.amount < 1000")?; let mut context = Context::default(); context.add_variable("data", row_data); let result = program.execute(&context)?; ``` #### 3. JSON Schema Validation - `jsonschema` ```rust use jsonschema::JSONSchema; let schema = serde_json::from_str(schema_json)?; let compiled = JSONSchema::compile(&schema)?; compiled.validate(&row_data)?; ``` #### 4. Host Plugin Callbacks ```rust pub trait HostBindings { fn execute(&self, sql: &str, params: &[Value]) -> Result>; fn detect_changes(&self, plugin_id: &str, before: &[u8], after: &[u8]) -> Result>; } // During file mutations, call back to host for plugin logic fn detect_file_changes(rows: &[MutationRow], host: &impl HostBindings) -> Result> { for row in rows.iter().filter(|r| r.schema_key == "lix_file") { let changes = host.detect_changes(&row.plugin_id, &row.before, &row.after)?; // ... collect changes } } ``` ### Pseudocode: Full Pipeline ```rust pub fn execute(sql: &str, params: &[Value], host: &impl HostBindings) -> Result> { let statements = Parser::parse_sql(&SQLiteDialect {}, sql)?; for statement in statements { match statement { Statement::Insert(_) | Statement::Update(_) | Statement::Delete(_) => { // 1. Extract mutation details let mutation = extract_mutation(statement)?; // 2. 
Materialize rows (resolve subqueries via host.execute) let rows = materialize_rows(&mutation, host)?; // 3. Validate in-memory (JSON Schema + CEL) validate_rows(&rows, &schemas, &cel_env)?; // 4. Detect file changes via host plugin callback let plugin_changes = detect_file_changes(&rows, host)?; // 5. Rewrite to physical tables and execute let rewritten_sql = build_write_sql(&rows, &plugin_changes)?; host.execute(&rewritten_sql, &[])?; } Statement::Query(query) => { // Rewrite vtable references to physical tables let rewritten = rewrite_select(query)?; return host.execute(&rewritten.to_string(), params); } other => { // Passthrough (PRAGMA, etc.) return host.execute(&other.to_string(), params); } } } } ``` ================================================ FILE: rfcs/003-canonical-lix-value/index.md ================================================ # RFC 003: Two-Layer Value Model With Canonical Boundaries ## Status Accepted ## Context Value-shape drift across JS/wasm/backend/CLI boundaries introduced brittle decode logic and inconsistent handling of `lix_file.data`. Observed drift patterns included: - Mixed object wrappers and raw primitives. - Binary values represented as `Uint8Array` in some APIs and `0x...` hex strings in CLI JSON. - Adapter-level implicit unwrapping that masked inconsistencies. ## Decision Adopt a two-layer value model: 1. Runtime contract for app-facing SQL APIs (`LixRuntimeValue`). 2. Canonical contract for boundary/wire/JSON surfaces (`LixCanonicalValue`). Runtime contract: ```ts export type LixRuntimeValue = | null | boolean | number | string | Uint8Array; export type LixRuntimeQueryResult = { columns: string[]; rows: LixRuntimeValue[][]; }; ``` Canonical contract: ```ts export type LixCanonicalValue = | { kind: "null"; value: null } | { kind: "bool"; value: boolean } | { kind: "int"; value: number } | { kind: "float"; value: number } | { kind: "text"; value: string } | { kind: "blob"; base64: string }; export type LixCanonicalQueryResult = { columns: string[]; rows: LixCanonicalValue[][]; }; ``` ## Invariants - Runtime SQL APIs accept/return `LixRuntimeValue`. - Boundary/wire/JSON APIs accept/return `LixCanonicalValue`. - `LixCanonicalQueryResult.columns` is always present and always a string array. - `LixCanonicalQueryResult.rows` is always present and always a 2D array. - `int.value` must be a finite integer and fit in signed 64-bit range. - `float.value` must be a finite number. - `blob.base64` uses RFC 4648 standard base64. - `lix_file.data` is representation-stable: - runtime: `Uint8Array` - canonical: `{ kind: "blob", base64: string }` ## Consequences - Runtime SQL APIs stay ergonomic and efficient (raw values, no base64 overhead in hot paths). - Canonical boundary format remains deterministic for CLI/IPC/JSON transports. - Legacy mixed forms are rejected at strict boundaries. - Conversion is explicit at boundaries via dedicated runtime<->canonical codecs. ================================================ FILE: skills/cli/SKILL.md ================================================ --- name: lix description: Use this skill when working with .lix repositories via the lix CLI. --- # Lix Skill Use this skill when working with `.lix` repositories. ## Goal Read and write data in a Lix repo safely through the Lix CLI. ## Concepts - Files: - Files are exposed through `lix_file` (active version) and `lix_file_by_version` (explicit version). - `data` is bytes; use `lix_text_encode('...')` for text payloads. 
- Entities:
  - Entities are schema-scoped records in `lix_state` / `lix_state_by_version`.
  - They are keyed by `schema_key` + `entity_id` + `file_id`, with schemas discoverable via `lix_registered_schema`.
- Checkpoints:
  - A checkpoint is a committed history boundary (a saved change set) used to anchor history/diffs.
  - History views (`*_history`) are read-only projections over these committed changes.
- Working changes:
  - Uncheckpointed changes are exposed via `lix_working_changes`.
  - This is the primary surface for “what changed since last checkpoint”.
- Versions:
  - Versions are the name for "branches". The term is used because non-technical users don't know what a branch is.
  - `lix_active_version` selects the current one; `lix_version` lists available versions.

## Rules (non-negotiable)

1. Never use `sqlite3` (or any direct SQLite client) on `.lix` files.
2. Always use the `lix` CLI.
3. Always pass `--path` to avoid operating on the wrong repo.
4. For `lix_file.data`, write bytes only:
   - text: `lix_text_encode('...')`
   - hex blob: `X'...'`
   - blob parameter

## CLI quickstart

Build/run from source:

```sh
cd /Users/samuel/git-repos/flashtype/submodule/lix/packages/cli
cargo run --bin lix -- --help
```

## Canonical commands

Read:

```sh
lix --path /path/to/repo.lix sql execute "SELECT id, path FROM lix_file ORDER BY path;"
```

Write text file data:

```sh
lix --path /path/to/repo.lix sql execute \
  "INSERT INTO lix_file (path, data) VALUES ('/hello.md', lix_text_encode('hello'));"
```

Read query via stdin:

```sh
cat <<'SQL' | lix --path /path/to/repo.lix sql execute -
SELECT path, hidden FROM lix_file ORDER BY path;
SQL
```

## Common gotchas

- `.lix` is the repository. There is no checked-out working directory.
- `lix_file` uses `id` (not `file_id`).
- Some views are read-only (`*_history`).
- Unknown table/column errors should be fixed by checking `lix_*` table/column names first.
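## Example: inspecting entities and working changes

A sketch using the same `sql execute` surface as the canonical commands above. The views (`lix_registered_schema`, `lix_state`, `lix_working_changes`) are the ones named in the Concepts section; `SELECT *` is used where exact column lists are not documented here, and the filtered query only relies on the documented keys (`schema_key`, `entity_id`, `file_id`).

```sh
# List the schemas registered in this repo.
lix --path /path/to/repo.lix sql execute "SELECT * FROM lix_registered_schema;"

# Read entities of one schema in the active version
# (lix_state rows are keyed by schema_key + entity_id + file_id).
lix --path /path/to/repo.lix sql execute \
  "SELECT entity_id, file_id FROM lix_state WHERE schema_key = 'lix_key_value';"

# See what changed since the last checkpoint.
lix --path /path/to/repo.lix sql execute "SELECT * FROM lix_working_changes;"
```

If a column in these examples does not exist, apply the last gotcha above: check the actual `lix_*` table/column names first.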